ChatGPT and Generative AI Legal Research Guide

Societal Threats of Generative AI

The U.S. Isn't Ready for AI-Fueled Disinformation—But China Is, Time, Nathan Beauchamp-Mustafaga and Bill Marcellino, October 5, 2023

The article discusses Chinese military researcher Li Bicheng's vision of using AI to manipulate public opinion in favor of the Chinese Communist Party. With the advent of generative AI, creating a network of fake online personas to disseminate pro-Beijing content or misinformation becomes more scalable and cost-effective. The article cites instances of AI-assisted social media manipulation and emphasizes the threat such campaigns pose to democracies worldwide. It urges the U.S. government and social media platforms to acknowledge this growing threat, suggesting a crackdown on inauthentic accounts and a reconsideration of export controls on advanced hardware to mitigate the risks of AI-driven information warfare.

Deepfakes in Slovakia Preview How AI Will Change the Face of Elections, Daniel Zuidijk, October 4, 2023

The article discusses the rise of AI deepfakes in political disinformation, illustrated by recent events in Slovakia's elections, where a fabricated conversation involving Progressive Slovakia's leader, Michal Simecka, circulated. AFP fact-checkers revealed that this conversation, along with other deceptive recordings, was AI-generated. Despite the crude quality of the deepfake, it spread rapidly, highlighting the growing threat such technology poses for political manipulation. As AI becomes cheaper and more accessible, the potential for its misuse in disinformation campaigns escalates. The Slovak example serves as a forewarning for the global political landscape, emphasizing the need for stringent measures to protect democratic processes from such threats.

AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, Emily M. Bender and Alex Hanna, August 12, 2023

The article highlights the real and immediate harms posed by AI, including wrongful arrests, surveillance, and algorithmic discrimination, as opposed to hyped existential threats. It criticizes the industry's fear-mongering over a potential AI-led extinction, which diverts attention from current problems like wage theft and misinformation. With AI systems like ChatGPT, there is a risk that synthetic text will be mistaken for reliable information, exacerbating existing biases. The article argues for a policy focus on tangible AI harms, urging against the dystopian narrative and calling for an examination of AI's societal impacts grounded in rigorous, reproducible research rather than speculative threats.