What We're Reading

Lots of great content to highlight this week! In AI news, a new UNESCO report explores how generative AI can supercharge online gender-based violence, and an op-ed by Jennifer Pahlka reminds us that a government AI hiring surge must be accompanied by a change in organizational culture. In cybersecurity, a global market shift is underway following domestic restrictions placed on dominant Israeli firms. In social media regulation, the first round of DSA transparency reports is prompting thinking about the appropriate role of government in shaping online content. 

Artificial Intelligence: 

Technology-Facilitated Gender-Based Violence in an Era of Generative AI | UNESCO

Social media is known to facilitate gender-based violence. This UNESCO report anticipates that generative AI stands to "supercharge" that problem, citing its ability to automate "hate speech, cyber harassment, misinformation, and impersonation," as well as deepfake images and video, AI-generated synthetic histories, and embedded biases. The authors conduct a series of demo prompt injections to illustrate potential harms and offer recommendations for steps that content generators, distributors, users, policymakers, and civil society actors should take to mitigate risks. 

AI and Geopolitics - How Might AI Affect the Rise and Fall of Nations? | RAND

RAND envisions several different possible paths of AI's impacts on geopolitics and offers broad recommendations for how governments should engage to shape the trajectory of AI development. The recommendation that "governments [should] invest in establishing robust, publicly owned data sets for AI research or issue challenge grants that encourage socially beneficial uses for AI" is an interesting tack that deserves further consideration. 

Opinion - What causes such maddening bottlenecks in government? ‘Kludgeocracy.’ | The Washington Post

Amidst the recent Executive Order's call for the federal government to hire more AI expertise and the accompanying OMB guidance for federal agencies to proactively (and responsibly) integrate AI into their operations, this op-ed by the author of Recoding America is timely. An AI hiring push won't generate results unless the "red tape and misaligned gears [that] frequently stymie progress on even the most straightforward challenges" are overhauled. 

Cybersecurity: 

The Hacking Industry Faces the End of an Era | MIT Technology Review

In the wake of U.S. sanctions on the NSO Group, the Israeli Ministry of Defense imposed restrictions on Israeli companies' sales of digital surveillance products, initiating a market shift as former customers look elsewhere. The article also documents an increase in authoritarian countries building their own in-house hacking capacities to "insulate themselves from global variables like political strife and human rights criticism," as well as increased activity by Chinese commercial actors. 

Charting China’s Climb as a Leading Global Cyber Power | Recorded Future

Recorded Future issued a report on the maturation of Chinese state-sponsored cyber operations. "Chinese cyber-enabled economic espionage has shifted from broad intellectual property theft to a more targeted approach supporting specific strategic, economic, and geopolitical goals." The report also finds that in the past decade, China has placed "greater emphasis on hindering detection, attribution, and tracking efforts from governments, security companies, and targeted organizations." 

Social Media Regulation: 

Digital Services Act study: Risk management framework for online disinformation campaigns | The Directorate-General for Communications Networks, Content and Technology

The EU's Digital Services Act tasks 'very large' social media platforms with reporting on their efforts to address "systemic risks to civic discourse," including foreign disinformation (the first deadline for transparency reports was November 6). In that context, this report assesses the efforts of six platforms to address Kremlin-sponsored disinformation. It's a tough balancing act, however; this Tech Policy Press piece expresses concern that "the Commission’s report can lead to excessive and unintended restrictions on the dissemination of online content and, hence, excessively restrict freedom of expression." 

Perhaps YouTube Fixed Its Algorithm. It Did Not Fix its Extremism Problem | Tech Policy Press

Citing recent research finding that YouTube has changed "its recommendation algorithm [to reduce] the prominence of harmful content," this piece argues that the broader problem endures as long as YouTube hosts and profits from extremist content and misinformation.