ChatGPT and Generative AI Legal Research Guide

Defamation by AI-Generated Content

Talk Radio Host Challenges Bid to Erase ChatGPT Libel Suit, Law360, Matt Perez, September 11, 2023

Radio host Mark Walters opposes OpenAI's motion to dismiss his defamation suit over false information generated by ChatGPT, arguing that warnings about potential inaccuracies do not exempt OpenAI from defamation claims. Citing past cases, he contends that disclaimers do not negate libel. Walters, who had no involvement in the referenced Second Amendment Foundation case, was falsely described by ChatGPT as a party facing criminal accusations. OpenAI, in its motion, argued that ChatGPT does not "publish" responses and therefore cannot defame. The case underscores the legal complexities arising as AI technologies like ChatGPT are increasingly used in the legal sector, and the responsibilities of AI developers for the accuracy of generated content.

What Can You Do When A.I. Lies About You?, New York Times, Tiffany Hsu, August 8, 2023

Marietje Schaake, a Dutch former member of the European Parliament and technology policy expert, was falsely labeled a terrorist by Meta's BlenderBot 3, highlighting AI's misinformation problem. The incident raises concerns because AI misidentifications can severely damage reputations, with little recourse for the people affected. Despite updates and improvements in AI technology, fabricated information remains a significant problem. Some have sought legal action against AI companies for defamation, but legal frameworks addressing AI misinformation remain scarce. Efforts are underway to improve AI safety and accuracy and to develop legislation addressing AI-generated misinformation, reflecting growing awareness of the potential harms posed by AI inaccuracies.

Defamatory Bots and Section 230: Navigating Liability in the Age of Artificial Intelligence, JD Supra, Christopher MacColl, July 18, 2023

The advent of AI technologies like chatbots raises legal questions about tech companies' liability for misinformation, an area where they have traditionally been protected under Section 230. That law, however, may not shield AI developers from liability for their products' outputs. An ongoing case in which radio host Mark Walters sued OpenAI over defamation by ChatGPT highlights these issues. The case raises critical questions: will Section 230 cover AI tools when false information originates from user inputs; will the data used to train AI products be treated as "information provided by another"; and will Section 230 be amended in response to AI advancements? The case also touches on whether defamation law's "negligence" requirement will protect AI companies and how effectively companies' terms of use will mitigate litigation risks. The situation underscores the legal intricacies surrounding AI's role in propagating misinformation.

ChatGPT Faces World’s First Defamation Lawsuit in Australia, Independent, April 6, 2023

According to the article, OpenAI’s ChatGPT may face a defamation lawsuit in Australia after a regional mayor accused it of sharing false claims about him. Brian Hood, mayor of Hepburn Shire in Victoria, says the AI chatbot was telling users that he had served time in prison as a result of a foreign bribery scandal, when in fact he was the whistleblower who reported the conduct. If filed, this would be the first defamation lawsuit against an AI company.

Can AI Commit Libel? We're About to Find Out, TechCrunch, April 6, 2023

A threatened lawsuit in Australia against OpenAI alleges that the company can be held liable for defamatory content generated by ChatGPT. The claim is that ChatGPT produced defamatory statements about the plaintiff, and that OpenAI is responsible for that content because it created and marketed the product. The case would be the first of its kind to test whether an AI developer can be held liable for defamation. If the plaintiff succeeds, it could set a precedent for other lawsuits alleging that AI has defamed someone.

ChatGPT Falsely Accuses Law Prof of Sexual Harassment; Is Libel Suit Possible?, ABA Journal, April 6, 2023

According to the ABA Journal, ChatGPT falsely accused law professor Jonathan Turley of sexual harassment during a class trip to Alaska, even citing a fake Washington Post article as its source and misidentifying him as a Georgetown University Law Center faculty member (he teaches at George Washington University). The AI chatbot can misrepresent key facts with great flourish, and the professor was surprised to learn he had been accused of sexual harassment. The tendency of LLMs to generate fake facts, along with fake sources to back them up, can have devastating real-world consequences. It is unclear whether a libel suit is possible.

Ownership and Copyright Issues in Generative AI

The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work, New York Times, Michael M. Grynbaum and Ryan Mac, Dec. 27, 2023

The New York Times has filed a lawsuit against OpenAI and Microsoft, alleging that they used the newspaper's articles without permission to train ChatGPT and other AI models. The lawsuit focuses on verbatim reproduction of New York Times content by GPT-4 and Microsoft's Copilot. The New York Times seeks damages for the unauthorized use of its journalism and requests the destruction of AI models trained on its articles.

Opinion: The Copyright Office is Making a Mistake on AI-generated Art, Ars Technica, Timothy B. Lee, September 22, 2023

The U.S. Copyright Office has reiterated its stance against copyright for AI-generated art through recent rejections, sparking discussion of the evolving intersection of AI and copyright law. Its refusal to register a copyright for the AI-created image Théâtre D'opéra Spatial, which gained attention after winning an art contest, signals the continuing challenge of defining authorship in the AI domain. Drawing comparisons to past legal battles over copyright in photography, the article questions the "mechanical process" argument used to deny copyright to AI art. The author suggests a need for a nuanced approach to copyright law that accommodates AI's role in creative processes, as the legal community grapples with the intricacies of authorship and originality in the digital age.

Artificial Intelligence and Copyright, A Notice by the Copyright Office, Library of Congress, Federal Register, August 30, 2023

The U.S. Copyright Office is initiating a review of the implications of and concerns surrounding copyright law and policy in the realm of artificial intelligence (AI) systems. The Office invites feedback to guide its analysis and to evaluate whether legislative or regulatory action is needed in this domain. Areas of focus include the use of copyrighted materials to train AI models, the desired degree of transparency and disclosure regarding the use of such materials, and the legal status of works generated by AI. The deadline for written comments is 11:59 p.m. Eastern Time on October 18, 2023; the deadline for written reply comments is 11:59 p.m. Eastern Time on November 15, 2023.

As Fight Over A.I. Artwork Unfolds, Judge Rejects Copyright Claim, New York Times, Zachary Small, Aug. 21, 2023

A U.S. federal judge dismissed a bid to copyright AI-generated artwork, shedding light on the ongoing legal debate around authorship and intellectual property in the AI era. Stephen Thaler, the system's inventor, had listed his computer system as the creator of the artwork and asked that the copyright be issued to him as the machine's owner. Following multiple rejections by the U.S. Copyright Office, Thaler sued its director. The court held that copyright is reserved for works originating with humans, aligning with previous rulings and the Copyright Office's guidelines. The ruling, while significant, does not preclude other AI-assisted artworks from being registrable, marking a critical moment in the debate over generative art.

IP Lawyer vs. ChatGPT: Top 10 Legal Issues of Using Generative AI at Work, Foley Insights, March 27, 2023

This Foley & Lardner LLP article emphasizes that organizations employing generative AI should be cognizant of the top legal concerns surrounding its use. It outlines the ten most significant legal challenges organizations encounter when using generative AI in the workplace. Intellectual property rights emerge as a primary concern: the article underscores the need for organizations to recognize the potential for infringement and to proactively safeguard their intellectual property. Additional legal issues include data privacy and security, liability for infringement, and content enforcement.

Privacy Concerns About ChatGPT

ChatGPT Racking Up Privacy Complaints in EU, Regulators Asked to Investigate, CPO Magazine, April 17, 2023

According to the article, following concerns raised by European privacy watchdogs about ChatGPT's compliance with the EU's General Data Protection Regulation (GDPR), the European Data Protection Board (EDPB) plans to establish a dedicated task force to investigate ChatGPT. Spain's data protection agency has also asked the EU privacy watchdog to assess privacy concerns regarding OpenAI's ChatGPT, and Italy has temporarily banned the tool, citing possible violations of EU privacy regulations.

ChatGPT: A Menace To Privacy, Above the Law, April 13, 2023

ChatGPT, an advanced language model, offers numerous applications while also presenting privacy concerns. It accumulates data from unspecified online sources and retains it indefinitely, raising the possibility of tracking users' online behavior and building profiles of them. Moreover, ChatGPT does not provide a legal basis for processing the personal data it acquires, nor does it allow users to invoke their "right to be forgotten" or to rectify personal information. Users should therefore remain vigilant about the privacy implications of using ChatGPT.

ChatGPT Has a Big Privacy Problem: Italy's Recent Ban of OpenAI's Generative Text Tool May Just Be the Beginning of ChatGPT's Regulatory Woes, Wired, April 4, 2023

The use of ChatGPT has been temporarily prohibited in Italy due to concerns about its compliance with the General Data Protection Regulation (GDPR). Italy is the first Western nation to impose such a ban on the AI-powered chatbot; the Italian data protection authority ordered the ban, citing potential privacy infringements.

Copyright Infringement Issues Involving Training Data

Franzen, Grisham and Other Prominent Authors Sue OpenAI, New York Times, Alexandra Alter and Elizabeth A. Harris, Sept. 20, 2023

Prominent novelists such as John Grisham and Jonathan Franzen, alongside the Authors Guild, have filed a lawsuit against OpenAI, accusing it of copyright infringement for using their books to train the ChatGPT chatbot. They claim that ChatGPT can produce “derivative works” resembling their books without compensation or notification, harming the market for authors. The lawsuit reflects growing concern within the creative industries about AI's impact, and it highlights an ongoing debate over the intersection of AI and copyright law and the potential for AI to disrupt traditional industries, as legal frameworks struggle to adapt to emerging AI technologies.

'New York Times' Considers Legal Action Against OpenAI as Copyright Tensions Swirl, NPR, Bobby Allyn, August 16, 2023

A potential lawsuit by the Times against OpenAI would escalate concerns over copyright protection amid the rise of generative AI like ChatGPT, which the paper views as a competitor. The fear is exacerbated by tech firms integrating such AI into search engines, driving less traffic to publishers. If OpenAI were found liable for copyright infringement, it could face severe financial penalties and be required to destroy the infringing datasets. The dispute reflects growing tension between AI companies and copyright holders, with legal experts anticipating a long battle to define the boundaries of "fair use" for AI-generated content.

Sarah Silverman Takes Legal Action: Sues Meta and OpenAI for Copyright Infringement, MSN, July 13, 2023

The article reports that comedian and author Sarah Silverman has sued both Meta and OpenAI for copyright infringement. The lawsuits allege that her book, The Bedwetter, was copied without consent and used as training data for the companies' AI language models. The case raises questions about AI's impact on intellectual property rights and the responsibility of tech companies for how they source training material. Silverman's legal action highlights the need for clearer regulations and guidelines to address the ethical and legal challenges posed by training AI on copyrighted works.

Joseph Saveri Law Firm, LLP and Matthew Butterick File Class Action Against Meta Platforms, Inc. for Copyright Infringement, DMCA Violations, Negligence, Unlawful Competition, and Unjust Enrichment, West Virginia News, July 10, 2023

A class of authors has filed a class action lawsuit against Meta Platforms, Inc. The suit seeks to address alleged intellectual property violations arising from Meta's use of the authors' copyrighted works to train its AI models.

2 authors say OpenAI 'ingested' their books to train ChatGPT. Now they're suing, and a 'wave' of similar court cases may follow, Business Insider, July 9, 2023

OpenAI, the organization behind ChatGPT, is facing a copyright lawsuit from two book authors. The suit alleges that ChatGPT was trained on copyrighted content without proper authorization, infringing the authors' rights. OpenAI says it has made efforts to use public domain data and comply with copyright law, but the authors claim copyrighted text was also used in the training process. The outcome of this legal battle could have significant implications for AI language models and their use of copyrighted materials.

Legal Doomsday for Generative AI ChatGPT If Caught Plagiarizing or Infringing, Warns AI Ethics And AI Law, Forbes, February 23, 2023

In the article, an AI law and ethics expert cautions that generative AI like ChatGPT may face legal trouble if implicated in plagiarism or copyright infringement. The author expresses concern that ChatGPT could facilitate the creation of copied content without proper attribution, translate copyrighted materials without permission, or produce new art or music that infringes existing copyrights. The expert urges generative AI developers to implement preventive measures against such misuse and calls on users to be aware of the potential legal risks of employing these technologies.

Legal Risks in General

The Legal Issues Presented by Generative AI, MIT Sloan, Dylan Walsh, August 28, 2023

The article discusses how generative artificial intelligence raises novel legal questions about data use and how content will be regulated. Generative AI tools are powerful new instruments for individuals and businesses, but they also raise concerns about data privacy and security, and because these models consume huge amounts of data from all corners of the world, they raise questions of attribution. The author surveys ongoing lawsuits related to generative AI, including one brought by several coders against GitHub, Microsoft, and OpenAI centered on GitHub Copilot, which converts commands written in plain English into computer code in dozens of different coding languages. In another instance, several visual artists filed a class action lawsuit against the companies behind the image generators Stable Diffusion, Midjourney, and DreamUp, all of which generate images from users' text prompts; the case alleges that the AI tools violate copyrights by scraping images from the internet to train the models. In a separate lawsuit, Getty Images alleges that the use of its images to train Stable Diffusion's models infringes its copyrights.

How Corporations Can Take On 'Supersized' Risk Of AI, Law360, May 17, 2023

Regina Jones, chief legal officer of Baker Hughes, emphasizes the need for general counsel to understand and harness the power of artificial intelligence (AI) while remaining aware of its risks. Jones believes AI can be both a tool for good and a substantive threat, so it is crucial for companies to have a plan to mitigate risks and ensure responsible AI use. She suggests that general counsel work closely with the chief information officer and other leaders to understand the use cases and risks of AI in their organizations; assessing AI risk across the legal spectrum and involving third-party suppliers in the evaluation process are also important steps. Jones argues that companies should be active participants in establishing ethical rules and responsible AI development. Ryan McConnell, a legal professional, stresses the need to create a corporate governance model for AI risk, highlighting monitoring, leadership, risk management, training, and governance as essential components. Both experts underscore the importance of preparing for AI's rapid advancement and potential risks.