ChatGPT and Generative AI Legal Research Guide

Law School Policies About Using Generative AI for Papers and Exams

Law schools have adopted a variety of policies on the use of generative AI for papers and exams. Some examples are listed below.

Fordham University School of Law

School of Law Memorandum on Academic Integrity

"This memorandum outlines our obligation to academic integrity as an aspect of professional responsibility and notes some important policies regarding final examinations and papers.

You will be required to certify your adherence to the following honor code for every examination:

By submitting this exam, I certify that I have not consulted, collaborated, or shared any information with anyone, nor have I utilized unauthorized materials, including any artificial intelligence or machine-learning tools, during this exam."

Georgia State University College of Law

Georgia State Law Honor Code

"A student may be charged with a violation of this Code if a student acts in a manner not otherwise directly covered in this Code that shows an intentional disregard for the ethical standards of the legal profession or the fundamental values of Georgia State College of Law community. Examples of this kind of conduct include, but are not limited to:

(1) Failing to indicate clearly to the instructor that one’s written work was submitted after the announced deadline for such submissions;

(2) Fabricating references or citations in any written work submitted for credit;

(3) Engaging in prohibited collaboration in course work;

(4) Using artificial intelligence (“AI”) models and applications, including but not limited to machine learning, deep learning, neural network, natural language processing or any predictive language models or applications, to complete an assignment or assessment unless the use of AI is specifically permitted by the course instructor. Legal research systems, word processing programs and their standard tools (e.g., spelling, grammar, and plagiarism checks) or automatic cite checkers — i.e. machine processes that do not create content but review student-created content for common errors or omissions — are not considered AI or an AI tool unless otherwise expressly stated by the instructor consistent with Section 3.2."

Mitchell Hamline School of Law

Student Conduct

“3.3 “Plagiarism” means the act of using words and ideas that are not one’s own and representing them as one’s own without proper attribution or credit. The use of another person or an artificial intelligence content-generator’s words or ideas must be given adequate documentation whether used in direct quotation or in summary or paraphrase. Plagiarism includes, but is not limited to, submitting the work of another or a content-generator as one’s own whether intentional or not.”

Harvard Law School

Harvard Law School Statement on Use of AI Large Language Models (like ChatGPT, Google Bard, and Casetext’s CoCounsel) in Academic Work, including Exams

“Section V provides notice that students violating the School’s expectations regarding academic honesty in exams, papers, or other work will be subject to disciplinary action.

Pursuant to these policies, the use of AI large language models (such as ChatGPT), in preparing to write, or writing, academic work for courses, including papers and reaction papers, or in preparing to write, or writing, exams is prohibited unless expressly identified in writing by the instructor as an appropriate resource for the academic work or exam in the instructor’s course. Instructors permitting use of generative AI outputs may require students to disclose the generative AI outputs relied upon, and further show exactly how and where. If not expressly identified in writing by the instructor, any use of AI large language models will be considered academic dishonesty and not the student’s own work and will be subject to disciplinary action subject to sanctions in accordance with the Law School’s Administrative Board procedures and the Statement of the Administrative Board Concerning Sanctions for Academic Dishonesty.”

University of California Berkeley School of Law

Berkeley appears to be the first law school to adopt a formal policy on student use of generative artificial intelligence such as ChatGPT.

Spring 2023 Final Exam Procedures

"Generative AI is software, for example, ChatGPT, that can perform advanced processing of text at skill levels that at least appear similar to a human’s. Generative AI software is quickly being adopted in legal practice, and many internet services and ordinary programs will soon include generative AI software. At the same time, Generative AI presents risks to our shared pedagogical mission. For this reason, we adopt the following default rule, which enables some uses of Generative AI but also bans uses of  Generative AI that would be plagiaristic if Generative AI’s output had been composed by a human author. 

The class of generative AI software:

– May be used to perform research in ways similar to search engines such as Google, for correction of grammar, and for other functions attendant to completing an assignment. The software may not be used to compose any part of the submitted assignment.

– May not be used for any purpose in any exam situation.

– Never may be employed for a use that would constitute plagiarism if the generative AI source were a human or organizational author. For discussion of plagiarism, see https://gsi.berkeley.edu/gsi-guide-contents/academic-misconduct-intro/plagiarism/  

Instructors have discretion to deviate from the default rule, provided that they do so in writing and with appropriate notice."

University of California, Irvine School of Law

Academic Honor Code

Violations of the Code may include, but shall not be limited to, the following student acts or acts that a student reasonably should have known would benefit the student or assist another student in committing a violation: 

  1. Unauthorized materials: The use of any materials not expressly authorized by the instructor in an examination or other academic endeavor, when the student knew or should have known that such use was not expressly authorized. 
  2. Unauthorized use of electronic or software tools: Unauthorized use of any electronic or software tool, including but not limited to artificial intelligence-based tools.

University of Washington School of Law

Law School Honor Code

"Academic misconduct includes:

  1. “Cheating” which includes, but is not limited to:

a. The use of unauthorized assistance in taking quizzes, tests, or examinations, or completing assignments;

b. The acquisition, use, or distribution of unpublished materials created by another student without the express permission of the original author(s);

c. Using online sources, such as solution manuals or artificial intelligence interfaces without the permission of the instructor to complete assignments, exams, tests, or quizzes; or

d. Requesting, hiring, or otherwise encouraging someone to take a course, exam, test, or complete assignments for a student."

Washburn University School of Law

Honor Code and Procedures for Law Students

"a. To facilitate the Washburn Law community’s interaction with these technologies in anticipation of a more longstanding policy on their use, Washburn Law adopts this interim policy:

  i. Students shall not use the output of Generative AI for any graded or required course work or co-curricular activities, unless approved by the instructor or faculty advisor (Faculty) in accordance with paragraphs ii. and iii.
  ii. Faculty members may develop more specific terms and conditions for the use of Generative AI in their courses or the co-curricular activities they supervise. They may, for instance, allow students to use Generative AI tools for graded or ungraded course-work or school-related activities, but only under certain conditions, disclosures, or supervision. Students may also be required or advised to avoid or mitigate the risk of harmful or unlawful uses, such as generating outputs that are biased or discriminatory, constitute privacy infractions, risk plagiarism, or violate licensing restrictions. Faculty may also choose to allow the use of some Generative AI tools but not others.
  iii. Where there is any uncertainty regarding permissible uses of Generative AI tools for school-related work, students must consult with the appropriate Faculty member before engaging in the activity.
  iv. A student's knowing or reckless disregard of this policy may be considered academic impropriety and trigger an honor code investigation.

If a law student commits academic improprieties which are not discovered until after graduation, the student's graduation will not prevent prosecution for those improprieties. If, as a result of imposition of sanctions, the student no longer meets the requirements for graduation, the student's law degree will be withdrawn, as will any certifications to bar authorities."

Law School Policies About Using Generative AI for Admissions Essays

University of Michigan Law School

Michigan appears to be the first law school to adopt a formal policy against the use of generative AI for admissions essays.

Written Submissions: Personal Statement, Optional Essays, and Addenda:

"The University of Michigan Law School has long understood that enrolling students with a broad range of perspectives and experiences generates a vibrant culture of comprehensive debate and discussion. Written submissions are an extremely helpful tool for evaluating potential contributions to our community. Please note that for all written submissions, we expect that the work is the applicant’s own, meaning that the ideas and expressions originated with the applicant, and that the applicant wrote all drafts and the final product. Applicants ought not use ChatGPT or other artificial intelligence tools as part of their drafting process. Applicants may, however, ask pre-law advisors, mentors, friends, or others for basic proofreading assistance and general feedback and critiques."

Arizona State University College of Law

"The Sandra Day O’Connor College of Law at Arizona State University, ranked the nation’s most innovative university since 2016, announces that applicants to its degree programs are permitted to use generative artificial intelligence (AI) in the preparation of their application and certify that the information they submit is accurate, beginning in August 2023. 

The use of large language model (LLM) tools such as ChatGPT, Google Bard and others has accelerated in the past year. Its use is also prevalent in the legal field. In our mission to educate and prepare the next generation of lawyers and leaders, law schools also need to embrace the use of technology such as AI with a comprehensive approach. 

“Our law school is driven by an innovative mindset. By embracing emerging technologies, and teaching students the ethical responsibilities associated with technology, we will enhance legal education and break down barriers that may exist for prospective students. By incorporating generative AI into our curriculum, we prepare students for their future careers across all disciplines,” says Willard H. Pedrick Dean and Regents Professor of Law Stacy Leeds."