What We're Reading

In the wake of last week's preliminary injunction from a federal district court limiting government communication with social media platforms, we're looking back through some older content about efforts to rein in the spread of misinformation on those platforms – and what's at stake.

Why Deplatforming Might Be Useless — Or Worse — When It Comes to Preventing Right-Wing Violence | New York Magazine

Written in the aftermath of January 6, this piece bears repeating in light of the recent judicial decision: "It goes without saying that people often misunderstand the connection between private platforms and the First Amendment — in that there isn’t one. Private platforms can ban whom they want, for pretty much any reason they want." The Intelligencer goes on to warn about the unintended impact of deplatforming: it risks driving users to alternative platforms that are often encrypted, and therefore harder to monitor, where their sense of injustice mounts, fueling extremism.

Social Media Companies Should Self-Regulate. Now | Harvard Business Review

Also written in the wake of January 6, this article bears returning to now that we're looking at severe restrictions on the government's ability to coordinate with social media platforms to control the spread of misinformation. Unfortunately, the authors' prediction that "governments will inevitably get more engaged in oversight" hasn't borne out. But their historical look at self-regulation is interesting (and also relevant to the AI debate), although one wonders whether the prospect of government regulation of social media is by now an empty threat.

How Facebook and Google fund global misinformation | MIT Technology Review

This 2021 exposé looks at how Facebook's and Google's ad monetization policies fueled financially motivated clickbait farms in developing countries, to the point that the UN found "Facebook had played a 'determining role' in the atrocities" against the Rohingya in Myanmar. The upshot: monetization incentivizes content producers to post often misleading or inaccurate clickbait and puts unregulated social media companies in a reactive, whack-a-mole posture.

This paper attempts to assess the efficacy of misinformation interventions, including limiting the spread of content by adjusting algorithms, nudge-based prompts for users intending to share a post, and account banning.

Social Media Misinformation and the Prevention of Political Instability and Mass Atrocities | Stimson Center

This research discusses the conditions under which social media misinformation becomes resonant. The focus is on conditions that are ripe for mass atrocities, but it also offers a good summary of the socio-political and psychological dynamics that make people susceptible to misinformation.

As a final note, amid all the coverage of Threads' rapid uptake among users, we wanted to flag this nugget from a Reuters article: "the company will not extend its existing fact-checking program to Threads, spokesperson Christine Pai said in an emailed statement on Thursday. This eliminates a distinguishing feature of how Meta has managed misinformation on its other apps." This does not bode well for how social media companies are thinking about misinformation in the lead-up to 2024.

 
