ChatGPT and Generative AI Legal Research Guide

Ethical Issues in Legal Practice

Legal Guidance for Responsible AI and Ethics, LexBlog, Rob Scott, May 10, 2023

This article emphasizes the importance of implementing AI responsibly and ethically, outlining the critical roles of legal professionals in guiding businesses through AI regulations and best practices. It covers legal support for transparency, explainability, fairness, non-discrimination, privacy, data protection, accountability, and governance. It also discusses stakeholder engagement and the necessity of legal expertise in ensuring AI technologies align with ethical standards and business values.

In Case of ‘Real Lawyers Against A Robot Lawyer,’ Federal Court Dismisses Law Firm’s Suit Against DoNotPay for Unauthorized Law Practice, LawSites, Bob Ambrogi, November 21, 2023

A federal court in Illinois dismissed a lawsuit brought by the law firm MillerKing against the self-help legal service DoNotPay. MillerKing had sued DoNotPay for false association and false advertising, claiming it misleads consumers with promises of legal services despite not being a licensed law firm. The court ruled that MillerKing lacked standing because it failed to demonstrate any concrete injury, such as lost revenue or damaged reputation, resulting from DoNotPay's actions. The judge also contrasted MillerKing's traditional law practice with DoNotPay's web-based, AI-driven services. DoNotPay faces other lawsuits in California: one remains pending, while another was dismissed after being sent to arbitration. DoNotPay's founder, Joshua Browder, views the dismissal as a significant precedent for AI litigation.

Here's An Analogy for AI Legal Ethics: Outsourcing, Law360, Daniel Connolly, September 9, 2023

At a Federal Bar Association panel, speakers likened the use of new AI tools in legal work to outsourcing tasks to humans, emphasizing the need for understanding and oversight. They referenced the American Bar Association's legal ethics opinions on outsourcing, suggesting that similar principles should apply to AI use. While AI can enhance operational efficiency and speed, its adoption raises concerns about client confidentiality and potential misuse. The panelists noted the irony of using AI such as ChatGPT to understand AI tools' own terms of service, and they suggested involving lawyers in the development of AI tools to ensure quality and ethical use, underscoring that AI is a tool whose value is determined by how it is applied.

Beyond the Hype: Lessons on auditing AI systems from the front lines, ABA Journal, July 17, 2023

The article discusses the challenges and importance of auditing AI systems to ensure transparency, fairness, and accountability. It highlights the complexities of AI auditing, including the need for diverse expertise and cooperation among stakeholders. The author emphasizes the significance of learning from real-world experiences, sharing insights on common pitfalls and potential solutions. By detailing lessons from auditing AI systems, the article aims to guide organizations in responsibly deploying and managing AI technologies to foster trust and address ethical concerns in the rapidly evolving field of artificial intelligence.

New York State Bar Association Task Force To Address Emerging Policy Challenges Related to Artificial Intelligence, New York State Bar Association, July 17, 2023

The New York State Bar Association has formed a task force to address emerging policy challenges related to AI. The task force aims to explore the legal and ethical implications of AI's increasing use in various fields and develop guidelines to ensure responsible AI implementation. The focus is on maintaining privacy, avoiding bias, and promoting fairness while harnessing AI's potential benefits.

US Judge Orders Lawyers to Sign AI Pledge, Warning 'They Make Stuff Up', Reuters, May 31, 2023

A federal judge in Texas, U.S. District Judge Brantley Starr, has mandated that lawyers appearing before him certify either that they did not use artificial intelligence (AI) to draft their court filings or that any AI-drafted material was verified by a human. This requirement, believed to be the first of its kind in the federal courts, aims to caution lawyers about the potential for AI tools to generate fictitious cases; lawyers who rely on AI-generated information without verifying it themselves may face sanctions. Judge Starr explained that while AI tools like ChatGPT are powerful and have applications in law, they should not be used for legal briefing due to their propensity for creating fabricated content and exhibiting bias. He adopted the requirement after attending an artificial intelligence panel that showcased how AI platforms can create bogus cases; although he considered banning the use of AI entirely, he ultimately settled on the certification requirement. The move follows a recent incident in which a federal judge in Manhattan threatened sanctions against a lawyer for citing fictitious cases generated by ChatGPT.

Lawyer Facing Punishment Says He ‘Greatly Regrets’ Using ChatGPT In Lawsuit After AI Program Cited At Least 6 Nonexistent Cases, Hollywood Unlocked, May 31, 2023

Lawyer Steven A. Schwartz is facing potential punishment after admitting that he used the AI program ChatGPT to research a lawsuit filed on behalf of his client. His affidavit revealed that ChatGPT supplied six fictitious cases, which were cited to support the client's claim against Avianca Airlines. The lawsuit alleged that the client was injured by a serving cart on a flight to New York, but there were no real prior rulings to support that stance. Schwartz said he had never used ChatGPT before, was unaware that its content could be false, and had no intention of deceiving the courts or the defendants. He now faces potential sanctions, and a hearing has been scheduled for next month. He pledged not to use ChatGPT again unless he could fully verify its claims.

A Lawyer used ChatGPT to Cite Bogus Cases. What are the Ethics?, Reuters, May 30, 2023

New York lawyer Steven Schwartz is facing sanctions after drafting an error-filled brief with the AI language model ChatGPT that cited six non-existent court decisions. Schwartz, who expressed regret, is scheduled for a sanctions hearing on June 8. The incident has reignited concerns over the use of AI in legal practice. While the American Bar Association's ethics rules don't explicitly address AI, experts argue that several existing rules apply: the Duty of Competence, which requires lawyers to provide competent representation and accurate information and not to over-rely on technology tools; the Duty of Confidentiality, which guards against inadvertent or unauthorized disclosure of client information; and the Responsibilities Regarding Nonlawyer Assistance, which require lawyers to supervise non-human assistance so that it conforms to the rules of professional conduct. Experts envision future rules requiring AI proficiency, given the technology's potentially transformative impact on legal practice.

Lawyers Breathe A Sigh Of Relief: They Can Turn Off Chat History For ChatGPT, Above the Law, May 2, 2023

According to the article, the new option to turn off chat history in ChatGPT is a significant development for lawyers because it allows them to use ChatGPT without worrying that their confidential data will be saved or used to improve the program.

Is Generative AI Such As ChatGPT Going To Undermine The Famed Attorney-Client Privilege, Frets AI Law And AI Ethics, Forbes, March 30, 2023

According to the article, generative AI such as ChatGPT can create realistic-looking text and images, which could be used to undermine attorney-client privilege, the protection afforded to confidential communications between lawyers and their clients. For example, a third party could use generative AI to create a fake email that appears to come from a lawyer in order to obtain confidential information from a client, or to fabricate documents that appear to come from a lawyer in order to mislead or deceive a court. The article argues that the legal system needs to address these risks, which could include developing new rules and regulations, or updating existing ones, to meet the unique challenges posed by the technology.

Do Professional Ethics Rules Allow You to Have a Robot Write Your Brief?, Law.com, March 21, 2023

According to an article on Law.com, the American Bar Association's Model Rules of Professional Conduct do not prohibit the use of artificial intelligence in legal work, but they do require that lawyers maintain competence in their work. The article discusses how AI can be used to write briefs and other legal documents, and it notes that the use of AI raises ethical questions about the role of lawyers and the quality of legal work.

ChatGPT – What are the Risks to Law Firms?, Legal IT Insider, March 14, 2023

The article explains that ChatGPT is a powerful AI chatbot that can be used for a variety of tasks in the legal profession. However, it poses risks such as confidentiality breaches, malpractice, ethical violations, and job losses. Law firms should be aware of these risks before using ChatGPT in their practices and take steps to mitigate them.

Artificial Intelligence Cannot Substitute for Actual Legal Intelligence, Law.com, March 13, 2023

This article discusses how artificial intelligence has been used in the legal profession for many years, though its capabilities were long limited. In recent years, AI has made significant progress and is now being used for a wider range of tasks, including contract review, legal research, and document drafting. AI could automate many tasks currently performed by lawyers, which could lead to job losses in the profession. The author argues, however, that AI cannot substitute for actual legal intelligence and that lawyers will still be needed to provide legal services.

Does ChatGPT Produce Fishy Briefs?, ABA Journal, February 21, 2023

This article examines the quality and reliability of ChatGPT-generated briefs and memos. It argues that while ChatGPT can produce coherent and persuasive text, it can also make factual errors, commit logical fallacies, and run afoul of ethical rules. It advises lawyers to use ChatGPT with caution and supervision.

ChatGPT Is Impressive, But Can (and Should) It Be Used in Legal?, Legaltech News, December 15, 2022

This article explores the possibilities and challenges of using ChatGPT, an AI chatbot based on GPT-3.5, in the legal industry. It highlights some examples of how ChatGPT has been used for research, drafting and education purposes, but also warns about the ethical, legal and technical risks involved.