What We're Reading

Top reads this week include AI plagiarism, the growth of ransomware attacks in 2023, a report by the UN High-level Advisory Body on Artificial Intelligence, and contrasting examples of how AI might be used for good (advance forecasting of earthquakes) or ill (look no further than 4chan).

How ransomware could cripple countries, not just companies | The Economist

The Economist argues that 2023 was the worst year yet for ransomware attacks, measured by both volume and ransom payouts. (Just last week, the New York Times reported on a ransomware attack against a technology service provider used by many museums, the latest in a string of attacks targeting cultural groups.) Describing the problem as a "slow-burning but serious national-security crisis," the article offers a good overview of the state of play, including the evolving nature of the threat, deterrence, and geopolitics.

Generative AI Has a Visual Plagiarism Problem | IEEE Spectrum

Plagiarism is a hot topic of late. Gary Marcus and Reid Southen write in IEEE Spectrum that generative AI is an unabashed culprit, showing that image-generation models can reproduce near-identical versions of copyrighted characters and scenes.

Quantum computing is taking on its biggest challenge: noise | MIT Technology Review

AI has stolen the spotlight for the past year, but recent advances in quantum technology remind us of the next potential game changer in computing.

How machine learning might unlock earthquake prediction | MIT Technology Review

Early warning systems detect the beginning of an earthquake, providing people with seconds or minutes of warning. But machine learning holds out the prospect of earthquake prediction by "reveal[ing] hidden structures and causal links in what would otherwise look like a jumble of data."

Licensing Frontier AI Development: Legal Considerations and Best Practice | Lawfare

The Responsible AI community has tossed around the idea of an "FDA for AI"; Lawfare digs into how such a licensing regime could function. The article advocates breaking licensing requirements down across the stages of the AI lifecycle in order to ensure the cyber and information security of AI models and to standardize robust red-teaming and evaluation procedures.

Dark Corners of the Web Offer a Glimpse at A.I.’s Nefarious Future | The New York Times

Want to know how new technologies might be abused? Look no further than 4chan, home to AI-powered revenge porn, hate speech generated with voice clones, and LLMs stripped of their guardrails. (Last year, an LLM trained on the contents of the message board demoed the opposite of de-biasing: a model designed to echo conspiratorial, racist, antisemitic, and misogynist content.)

Governing AI for Humanity | AI Advisory Body

The UN High-level Advisory Body on Artificial Intelligence released its first report at the end of December in which it urged "global [AI] governance with equal participation of all member states ... to make resources accessible, make representation and oversight mechanisms broadly inclusive, ensure accountability for harms, and ensure that geopolitical competition does not drive irresponsible AI." 
