How AI Puts Elections at Risk — And the Needed Safeguards, Mekela Panditharatne and Noah Giansiracusa, Brennan Center for Justice, June 13, 2023
The article highlights the risks AI poses to election integrity, including the spread of disinformation through deepfakes and social media bots. It discusses how generative AI can create misleading content that undermines trust in the electoral process. The authors suggest safeguards, such as improving AI detection tools, enhancing regulations, and promoting transparency from AI developers and social media platforms. They emphasize the need for coordinated efforts between the government, private sector, and civil society to protect democracy from AI-driven threats.
See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation, Jeremy White, The New York Times, May 19, 2024
Jeremy White's article illustrates how easily AI chatbots can be manipulated to spread disinformation. By customizing chatbots with social media posts from platforms like Reddit and Parler, the study revealed that these AI tools could produce convincing yet misleading responses on politically sensitive topics. The piece underscores the potential scale of AI-driven disinformation ahead of the U.S. presidential election, highlighting concerns from experts about the challenge of detecting AI-generated content and the need for robust safeguards to protect the integrity of online information.
How AI Will Transform the 2024 Elections, Darrell M. West, Brookings, May 3, 2023
Darrell M. West discusses how AI will reshape the 2024 elections, emphasizing AI-generated political messaging's impact on voters, politicians, and media. The article highlights AI's ability to provide instant responses, precisely target messages, and democratize disinformation, enabling anyone to create persuasive political content. With limited guardrails and disclosure requirements, the spread of AI-generated false information poses significant risks to election integrity. West urges measures to address these challenges, stressing the need for transparency and regulation to protect democratic processes.
How AI Bots Could Sabotage 2024 Elections Around the World, Charlotte Hu, Scientific American, March 3, 2024
The article discusses the increasing threat of AI-generated disinformation in the 2024 elections. AI bots, equipped with generative AI, are expected to produce highly convincing fake content that can manipulate voters and influence election outcomes in over 50 countries. The article highlights the difficulty in detecting AI-generated text and the potential for widespread misinformation. Experts emphasize the need for improved AI detection tools and proactive measures by social media companies to counteract these disinformation campaigns.
The Rise of AI Fake News Is Creating a ‘Misinformation Superspreader’, Pranshu Verma, The Washington Post, December 17, 2023
Pranshu Verma's article in The Washington Post discusses the alarming rise of AI-generated fake news, which has grown over 1,000% since May 2023. With more than 600 websites now hosting AI-created false articles, the spread of misinformation about elections, wars, and natural disasters has intensified. AI's ability to automate and produce content that closely mimics real news poses significant challenges for discerning truth, especially in the lead-up to the 2024 elections. The article highlights examples like a fabricated story about Benjamin Netanyahu's psychiatrist, which spread across various media platforms. Experts emphasize the need for increased media literacy and better regulatory measures to combat this growing threat.
The U.S. Isn't Ready for AI-Fueled Disinformation—But China Is, Nathan Beauchamp-Mustafaga and Bill Marcellino, Time, October 5, 2023
The article discusses Chinese military researcher Li Bicheng's vision of using AI for public opinion manipulation in favor of the Chinese Communist Party. With the advent of generative AI, creating a network of fake online personas to disseminate pro-Beijing content or misinformation becomes more scalable and cost-effective. The article cites instances of AI-assisted social media manipulation and emphasizes the threat such operations pose to global democracies. It urges the U.S. government and social media platforms to acknowledge this growing threat, suggesting a crackdown on inauthentic accounts and a reconsideration of export controls on advanced hardware to mitigate the risks of AI-driven information warfare.
Deepfakes in Slovakia Preview How AI Will Change the Face of Elections, Daniel Zuidijk, October 4, 2023
The article discusses the rise of AI deepfakes in political disinformation, illustrated by recent events in Slovakia's elections, where a fabricated conversation involving Progressive Slovakia's leader, Michal Simecka, circulated. AFP fact-checkers revealed that this conversation, along with other deceptive recordings, was AI-generated. Despite the deepfake's crude quality, it spread rapidly, highlighting the growing threat of such technology in political manipulation. With AI becoming cheaper and more accessible, the potential for misuse in disinformation campaigns escalates. The Slovak example serves as a forewarning for the global political landscape, emphasizing the need for stringent measures to mitigate such threats to democracy.
AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Emily M. Bender and Alex Hanna, Scientific American, August 12, 2023
The article highlights the real and immediate harms posed by AI, including wrongful arrests, surveillance, and algorithmic discrimination, in contrast to hyped existential threats. It criticizes the industry's fear-mongering over potential AI-led extinction, which diverts attention from current issues like wage theft and misinformation. With AI systems like ChatGPT, there is a risk of mistaking synthetic text for reliable information, exacerbating biases. The article argues for a policy focus on tangible AI harms, urging against the dystopian narrative and calling for examination of AI's societal impacts through rigorous, reproducible research rather than speculative threats.