ChatGPT for Lawyers

In this article, we first discuss recent developments in the field of generative Artificial Intelligence (AI) in general, and a new generation of chat bots in particular. We then focus on one chat bot that is getting a lot of attention: ChatGPT. What is ChatGPT? What can it do, and what are its limits? Finally, we look at the relevance of ChatGPT for lawyers.

Introduction

We are witnessing the emergence of a new generation of chat bots that are more powerful than ever before. (We discussed legal chat bots before, here and here). Several of them excel in conversation, and some of these conversational chat bots recently made headlines on several occasions. In December 2022, for example, the DoNotPay chat bot renegotiated a customer’s contract with Comcast’s chat bot and managed to save 120 USD per year. (You read that correctly: two bots effectively renegotiated a contract). Shortly afterwards, a computer using a cloned voice of a customer was connected to the DoNotPay chat bot. It called a company’s support desk, and the speaking chat bot successfully negotiated a reduction of a commercial penalty with a live support agent. The search engine You.com has added a conversational chat bot that lets people ask a question and receive the reply in conversational form rather than as a list of links. Microsoft has announced that its Bing search engine will start offering a conversational interface as well.

Conversational chat bots are a form of generative AI. Generative AI has made tremendous progress in other fields as well, such as the creation of digital artwork, filters and effects for all kinds of digital media, and the generation of documents. These can be any kind of documents: legal documents, blog or magazine articles, papers, programming code… Only days ago, the CNET technology website revealed that it had been publishing articles written entirely by generative AI since November 2022. Over a period of two months, it published 74 articles that were written by a bot, and the readers did not notice.

One chat bot in particular has been in the news on a nearly daily basis since it was launched in November 2022. Its name is ChatGPT and the underlying technology has also been used in some of the examples mentioned above.

What is ChatGPT?

ChatGPT stands for Chat Generative Pre-trained Transformer. Wikipedia describes it as “a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge.”

In other words, it’s a very advanced chat bot that can carry a conversation. It remembers the previous questions you asked and the answers it gave. Because it was trained on a large-scale collection of texts retrieved from the Internet, it can converse on a wide variety of topics. And because it is built on a large natural language model, it is quite articulate.
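
To make the idea of “remembering previous questions” more concrete, the sketch below shows one common way a conversational interface keeps context: it simply resends the earlier questions and answers together with each new question. This is an illustration only, written against OpenAI’s Python package (the pre-1.0 ChatCompletion interface); the model name, the prompts, and the choice of API are assumptions for the example, and this is not how ChatGPT itself is implemented internally.

```python
# Minimal sketch of a conversational interface that keeps context.
# Assumes the `openai` Python package (pre-1.0 interface) and an
# OPENAI_API_KEY environment variable; model name is a placeholder.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The running conversation: every turn is appended and resent with the next question.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."}
]

def ask(question: str) -> str:
    """Send a question together with the conversation so far and return the answer."""
    conversation.append({"role": "user", "content": question})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=conversation,   # the full history, so the model can refer back
    )
    answer = response["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is a non-disclosure agreement?"))
# The follow-up works because the earlier question and answer are resent.
print(ask("Can you summarise that in one sentence?"))
```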

What can it do and what are the limits?

Its primary use is probably as a knowledge search engine. You can ask a question just like you would in any search engine, but the feedback it gives does not consist of a series of links. Instead, it draws on the texts it ingested during training and provides you with a summary text containing the reply to the question you asked.

But it doesn’t stop there, as the examples mentioned above illustrate. You can ask it to write a paper or an article on a chosen topic, and you can determine the tone and style of the output. Lecturers have used it to prepare lectures. Many users have asked it to write poetry on topics of their choice; they could even ask for sonnets or limericks, and it obliged, most of the time with impressive results. It succeeds wonderfully well in carrying a philosophical discussion. Programmers have asked it to write program code. It does a great job of describing existing artwork. In short, if the desired output is text-based, chances are ChatGPT can deliver. As one reporter remarked, the possibilities are endless.

There are, of course, limitations. If the data sets it learned from contained errors, false information, or biases, the system will inherit those. A reporter who asked ChatGPT to write a product review noted that the writing style and the structure of the article were very professional, but that the content was largely wrong: many of the specifications it gave belonged to the predecessor of the product it was asked to review. In other words, a review by a person with the required knowledge is still needed.

Sometimes it does not understand the question, and the question needs to be rephrased. At other times the answers are excessively verbose with little valuable content. (I guess that dataset contained speeches by politicians). There are still plenty of topics it has no reliable knowledge of. When you ask it for legal advice, it will tell you it is not qualified to give any. (But if you rephrase the question, you may get an answer anyway, and it may or may not be accurate). Some of the programming code it produced appeared to be copied from sites used by developers, which would constitute a copyright infringement. And much of the suggested programming code turned out to be insufficiently secure. For those reasons, several sites like Stack Overflow have banned replies generated by ChatGPT.

Several other concerns have been voiced as well. As the example of CNET shows, these new generative AI bots have the potential to eliminate the need for a human writer. ChatGPT can also write an entire essay within seconds, making it easier for students to cheat or to avoid learning how to write properly. Another concern is the possible spread of misinformation: if you know enough of the sources in the data set that a chat bot learns from, you could deliberately flood them with false information.

What is the Relevance of ChatGPT for Lawyers?

Lawyers have been using generative AI for a while. It has proven successful in drafting and reviewing contracts and other legal documents. Bots like DoNotPay, LawDroid, and Hello Divorce are successfully assisting in legal matters on a daily basis. For these existing legal bots, ChatGPT can provide a user-friendly conversational interface that makes them easier to use.

When it comes to ChatGPT itself, several lawyers have reported on their experiences and tests with the system. It turned out that it could mimic the work of lawyers with varying degrees of success. For some tasks, it did a great job: it successfully wrote a draft rental agreement, for example, and it did a good job of comparing different versions of a legal document and highlighting the differences. But in other tests, the information it provided was inaccurate or plainly wrong, for example because it confused different legal concepts.

And the concerns that apply to generative AI in general also apply to ChatGPT. These include concerns about bias and discrimination, about privacy and compliance with existing privacy and data protection regulations like the GDPR, and about fake news and misleading content. For ChatGPT, the issue of intellectual property rights was raised as well. The organization behind ChatGPT claims it never copies texts verbatim, but tests with programming code appear to show otherwise. (You can’t really paraphrase programming code).

Given the success of and interest in ChatGPT, the usual question was raised: will AI replace the need for lawyers? The answer remains the same: no, it won’t. At present, the results are often very impressive, but they are not reliable enough. Still, the progress that has been made shows that it will keep getting better at performing some of the tasks that lawyers do. It is good at gathering information, at summarizing it, and at comparing texts. And only days ago (13 January 2023), it was reported that ChatGPT had successfully passed the evidence portion of a bar exam. But lawyers are still needed when it comes to critical thinking or the elaborate application of legal principles.

Conclusion

A new generation of chat bots is showing us the future. Even though tremendous progress has been made, there are still many scenarios where they fall short. Still, they are improving every single day. And while supervision is currently still needed to check the results, they can already offer valuable assistance. As one lecturer put it, instead of spending a whole day preparing a lecture, he lets ChatGPT write a first draft, and then he needs only an hour to review and correct it.

The same applies to lawyers. The legal texts ChatGPT generates can be hit and miss, and supervision is needed. In its current state, the chat bot is comparable to a first- or second-year law student doing an internship: it can save you time, but you have to review what it does and correct it where necessary. Tom Martin from LawDroid puts it as follows: “If lawyers frame Generative AI as a push button solution, then it will likely be deemed a failure because some shortcoming can be found with the output from someone’s point of view. On the other hand, if success is defined as productive collaboration, then expectations may be better aligned with Generative AI’s strengths.”

 

Sources: