What We're Reading

Lots happening in the cyber/tech policy space as well as here at Pitt Cyber! The U.S. and UK retaliated against Chinese malware attacks targeting critical infrastructure, NTIA released a detailed set of policy recommendations to promote AI accountability and safety, DeepMind released a paper probing models' dangerous capabilities, and the debate over open-source AI continues.

Attack from Within: How Disinformation Is Sabotaging America

This week, we hosted Barbara McQuade, former U.S. attorney for the Eastern District of Michigan, to discuss her new book "Attack from Within: How Disinformation Is Sabotaging America." It contains great insights on how our misinformation ecosystem undermines the basic tenets of law and order. 

U.S. and Britain Accuse China of Cyberespionage Campaign | The New York Times 

Big cyber news last week came in the form of coordinated sanctions from Washington and London, issued in response to Chinese-sponsored malware attacks against electoral systems and other critical infrastructure. National security agencies have been warning about this trend for a while, but the sanctions bring it further into public view. On the same topic, this recent episode of Politico Tech features an interview with CISA Director Jen Easterly – and provides a great overview of nation-state cyber threats against U.S. entities.

Evaluating Frontier Models for Dangerous Capabilities | Google DeepMind

This new paper from Google DeepMind undertakes to "lay the groundwork for a rigorous science of dangerous capability," considering four categories: persuasion and deception; cyber-security; self-proliferation; and self-reasoning. (It's worth noting that the testing was done on the models without safety filters, with the logic that the "goal is to evaluate the models’ underlying capabilities rather than product safety.")  

AI Accountability Policy Report | National Telecommunications and Information Administration

This report, just out from NTIA, makes a commendable effort toward laying out policies that would promote the overarching goal of AI accountability and safety. The recommendations touch on increasing access to information (thereby overcoming the black-box effect), establishing processes for model evaluation (including audits and red teaming), and incentivizing responsible behavior with clearly articulated consequences.

The End of Foreign-Language Education | The Atlantic

AI has shown remarkable abilities in translation. But as this article reflects, in a way that will resonate with anyone who has invested in learning a foreign language: "Something enormous will be lost in exchange for that convenience. Studies have suggested that language shapes the way people interpret reality. Learning a different way to speak, read, and write helps people discover new ways to see the world—experts I spoke with likened it to discovering a new way to think. No machine can replace such a profoundly human experience … As the technology becomes normalized, we may find that we’ve allowed deep human connections to be replaced by communication that’s technically proficient but ultimately hollow."

Civil Society Letter on Openness for NTIA Process | CDT

A coalition of internet-freedom-minded organizations published a letter sent to the Department of Commerce in support of open-source AI foundation models. They argue that open-source models will advance competition and innovation, increase transparency and auditability, and, in so doing, support AI safety. They stress the need to focus on marginal risk – with the majority of risk resulting from poorly understood, hard-to-test models, regardless of whether they are open or closed source.

Opinion | The Deepfake Porn of Kids and Celebrities That Gets Millions of Views | The New York Times 

This opinion piece in the NYT tells the story of some of the brave women fighting to create a legal framework that makes deepfake porn illegal. Nicholas Kristof isn't a tech writer, but his question gets to the heart of the matter: "We have a hard-fought consensus established today that unwanted kissing, groping and demeaning comments are unacceptable, so how is this other form of violation given a pass? How can we care so little about protecting women and girls from online degradation?"

Reddit’s I.P.O. Is a Content Moderation Success Story | The New York Times

On the occasion of Reddit's IPO, this New York Times article reflects on Reddit as an example of content moderation genuinely improving the user experience.