What We're Reading

This week we're diving into Jeff Horwitz's new book on Facebook's inner workings (building on his previous WSJ reporting informed by whistleblower Frances Haugen), weighing the tradeoffs of monopolistic conditions in the AI industry, and reflecting on the so-called liar's dividend – and what it means for accountability. 

Jeff Horwitz's 'Broken Code: Inside Facebook and the Fight to Expose Its Harmful Secrets' expands on the WSJ "Facebook Files." The book provides a powerful account of how Facebook's haphazard processes and poorly conceived metrics ended up fueling polarization, the promulgation of low-quality content, and distorted versions of reality. It's worth reading in full, but if you're looking for a recap, this review in Lawfare offers a good synopsis. 

Risky Analysis: Assessing and Improving AI Governance Tools | World Privacy Forum

An ambitious report from the World Privacy Forum inventories AI governance initiatives underway around the globe. With an eye towards evaluating the effectiveness of emergent AI governance tools, the report cautions that "when AI governance tools do not function well, they can exacerbate existing problems with AI systems." We're looking forward to digging into this further; Tech Policy Press' interview with co-author Kate Kaye is also worth a listen! 

AI Regulation May Make Monopolies Worse | Foreign Policy

This piece in Foreign Policy addresses concerns that AI regulations will favor large, well-established players and disadvantage new market entrants. Realistically, Big Tech has held a monopoly for some time – although extending those monopolistic traits to AI raises questions about whether promoting competition serves the broader public good. The article offers some interesting suggestions – including increased public sector investment in AI development – to address these concerns. But we're left wondering whether free market principles should be the driving force in steering AI. 

Deepfakes, Elections, and Shrinking the Liar’s Dividend | The Brennan Center

Thoughtful essay from The Brennan Center on an externality that comes with increasing public awareness of deepfakes: the liar's dividend, wherein bad faith actors take advantage of an increasingly skeptical public to call the veracity of real content into question. Such an outcome undermines accountability. The essay goes on to discuss strategies to change the liar's calculus, including provenance standards (technically challenging), efforts to bolster the public's trust discernment (as opposed to blanket skepticism), and setting expectations of backlash. 

Scammers Made Biden a Deepfake. Here’s Why It Wasn’t Any Good | POLITICO Tech

In response to the deepfake calls deployed in the New Hampshire primary last week, this episode of POLITICO Tech looks at the technology – and linguistic 'tells' – used to identify audio deepfakes. Acknowledging that this technology remains imperfect and that deepfakes continue to improve in quality, it's an interesting look at reverse engineering audio deepfakes. 