Ethical Issues Raised by Lawyers’ Use of ChatGPT

At the last Austin Bench Bar conference, I presented with U.S. District Judge Robert Pitman on the ethical issues of using ChatGPT. This article summarizes some of the ethical issues raised in that discussion, along with some potential solutions that legal service providers are trying to use.

ChatGPT is a computer program operated by OpenAI. It provides a chatbot interface to a “large language model” that has been trained on a significant portion of the text on the internet, along with books, case law, statutes, and other reference material. You can simply type, “Write 10 requests for production against a plaintiff in a personal injury case,” and the program will quickly generate 10 commonly used requests.

Pro se litigants will likely make extensive use of ChatGPT, as the resulting legal drafting is based on thousands of examples of lawyer work product and is thus often indistinguishable from a lawyer’s draft. ChatGPT and similar programs will likely replace some paralegal and attorney work in initial legal drafting. This raises ethical questions about the unauthorized practice of law. On the flip side, it has substantial potential for improving access to justice.

But a major problem with ChatGPT is that it “hallucinates”: it makes up cases that do not exist. ChatGPT generates new text based on many prior examples, and some of that “new text” can be made-up cases. Two lawyers in New York were sanctioned in June 2023 for submitting a brief written by ChatGPT with citations to nonexistent cases.

A radio host in Georgia has sued OpenAI for defamation because ChatGPT made up a fake lawsuit against him for embezzling money. ChatGPT displays a disclaimer explaining that its outputs are not always reliable.

U.S. District Judge Brantley Starr of the Northern District of Texas now requires attorneys to certify either that no portion of any filing was generated by an AI tool like ChatGPT or that a human being has checked any AI-generated text. Judge Stephen Vaden of the U.S. Court of International Trade reportedly became the second judge to institute such a requirement.

Thus, lawyers using ChatGPT must navigate ethical questions of competence and candor to the tribunal. I have spoken with a legal vendor (who asked not to be named) working on a ChatGPT-based product that addresses this problem by limiting case citations to a database of actual cases. But given how ChatGPT fundamentally works (generating new text from a huge database of prior text without any true understanding), errors are likely to remain.

Another serious ethical concern with ChatGPT is the lawyer’s duty of confidentiality. Information provided to ChatGPT may be stored and used to train the “large language model” further. Thus, lawyers should not include privileged or otherwise confidential information in prompts for ChatGPT. However, the legal service provider I spoke with indicated they had an arrangement with OpenAI under which their clients’ queries were not stored or used to train ChatGPT.

Thus, while ChatGPT raises various ethical concerns, lawyers are finding ways to incorporate it into their practice with caution. Stay tuned, as ChatGPT and other AI programs will have a powerful effect on the legal profession.