What We're Reading

This week, we're highlighting several articles that shine a light on the incentives underlying tech development and the surrounding ecosystem – and challenge us to envision alternatives. 

Democracy Threatens Big Tech’s Business Model | TechPolicy.Press 

This piece is a clarion call to resist tech defeatism: "surveillance, tracking, and manipulation are not inherent to technology." It frames a choice between democracy and Big Tech's destructive business models and urges collective action across democracies, researchers, and civil society in pursuing tech policy based on core democratic principles. 

GPT-4 Developer Tool can Hack Websites Without Human Help | New Scientist

Last week, we highlighted David Hickton's op-ed addressing recent cyberattacks against government entities in Pennsylvania. This article showcases a development we knew was coming, but one that is ominous nonetheless: AI-powered hacking with minimal human involvement. 

New Technologies Require Thinking More Expansively About Protecting Workers | TechPolicy.Press  

Another great piece from Tech Policy Press, this article explores harms to workers powered by mass data collection. It urges expanded regulatory efforts, noting that "addressing algorithmic harms will require acknowledging the unprecedented scale and pace at which worker data extraction and tech-driven power asymmetries have become central to many business models." 

Top AI researchers say OpenAI, Meta and More Hinder Independent Evaluations | The Washington Post

Academic researchers published an open letter highlighting concerns that AI companies' safety guardrails and policies are having a "chilling effect" on independent research aimed at probing the safety of models. This dynamic isn't new (social media platforms have long navigated questions of API access for external researchers), but it drives home the importance of robust external oversight of increasingly powerful models. 

How We Can Control AI | Wall Street Journal

Eric Schmidt warns that "upcoming technologies could break our current approach to safety in AI." Schmidt proposes a market-based solution, with government-certified AI testing companies working to out-innovate one another. It's a compelling argument, although it leaves open the question of whether humans are capable of designing safety tests for systems that exceed human intelligence. 

Accenture CEO Julie Sweet Shares Why Her Firm is Acquiring Udacity to Launch an AI-powered Training Platform | Fortune

There's broad consensus that up-skilling and re-skilling for AI is critical, but it remains less clear what that training would actually look like (more on this to come from Pitt Cyber). The gap is striking: Accenture reports that "94% of workers say they want to learn new skills to work with generative AI, but only 5% of organizations provide gen AI training at scale," so we'll be watching this space. 
