Tag Archives: generative AI

Liability for AI errors

Who is responsible when AI makes errors? In the past year, we have seen several cases in which AI companies were taken to court. Parents of someone who died by suicide sued for negligence. There have also been defamation cases where generative AI produced a factually incorrect response. In one example, a radio host sued OpenAI because ChatGPT produced a summary falsely claiming he had embezzled funds. There have also been cases involving product liability and contractual liability, among others. In this article, we explore several scenarios where liability for AI errors came into play. We look at the different types of liability and at how to mitigate liability for AI errors.

Please note that this article is not meant to provide legal advice. It is merely a theoretical exploration.

Criminal vs civil liability

Liability for AI hallucinations is a complex and rapidly evolving legal area, with plenty of voids and grey areas. Cliffe Dekker Hofmeyr rightfully refers to it as a legal minefield.

There have been cases based on both civil and criminal liability. At present, most AI-related liability falls under civil law, because the claims concern compensation for harm, violation of private rights, or disputes between private parties. In many cases, the courts ruled that people are warned about hallucinations and use AI at their own risk. But other cases have shown that companies can be held liable for chatbot errors, and that legal professionals can face sanctions for relying on AI-generated but fictitious information.

Criminal liability connected to generative AI is currently rare because AI models lack intent. But there are scenarios where criminal law can be triggered (see below).

Let us have a look at different types of liability.

Types of liability for AI errors

Defamation and Reputation Harm

A first series of cases involves defamation and reputation harm. Chatbots can generate false statements about individuals or organisations, sometimes with great specificity and apparent authority. When these falsehoods cause reputational damage, defamation law becomes relevant.

Early cases such as Walters v. OpenAI – the radio host mentioned above – illustrate how courts are beginning to test whether AI developers can be held responsible for hallucinated statements that damage someone’s reputation. In this case, the court ruled in favour of OpenAI. The court argued that Walters couldn’t prove negligence or actual malice, and that OpenAI’s explicit warnings about hallucinations weighed against liability. Thus far, defamation cases have largely been dismissed on those grounds.

Negligence and Duty of Care

Some lawsuits allege that AI systems failed to exercise reasonable care in situations where foreseeable harm was possible. Think of incidents of self-harm or of the AI giving dangerous instructions.

Cases like Raine v. OpenAI and suits against Character.ai argue that developers owed a duty to implement safeguards, detect crises, or issue proper warnings. The argument is that failure to do so contributed to severe harm or even death. These cases are presently (December 2025) ongoing, and the courts have not ruled yet.

Wrongful Death and Serious Psychological Harm

Several lawsuits allege that chatbots induced, worsened, or failed to de-escalate suicidal ideation. Thus far, all cases that were taken to court have been in the US. Families of the victims argue that the systems were designed in ways that made such harm foreseeable.

This category overlaps with negligence but remains distinct. Wrongful-death statutes in the US create their own remedies and set higher standards for three key elements: proximate causation, foreseeability, and the duty to protect vulnerable users.

Misrepresentation, Bad Advice, and Professional Liability

Although a chatbot is not itself a licensed professional, users often treat it as one. When a model produces incorrect legal, medical, financial, or technical advice that leads to material harm, plaintiffs may frame the issue as negligent misrepresentation or unlicensed practice through automation.

In the Mata v. Avianca sanctions case, for example, lawyers relied on non-existent precedents that were fabricated by ChatGPT. The lawyers were fined. This case demonstrates how professional users may be held responsible.

The case also raises the question of whether the model provider shares liability. Thus far, providers have escaped liability on the grounds mentioned before, i.e., that users are explicitly warned that the AI may provide them with incorrect information.

Product Liability and Defective Design

Some lawsuits frame chatbots as consumer products with design defects, inadequate safety systems, or insufficient warnings. Under this theory, the output is seen not merely as “speech” but as behaviour of a product that must meet baseline safety expectations. Claims of failure to implement guardrails, insufficient content filtering, or design choices that make harmful outcomes foreseeable fall under this category.

Contractual Liability and Terms-of-Service Breaches

AI systems are governed by contractual agreements between the user and the provider. AI developers may face contract liability if they fail to deliver promised functionality, violate their own service terms, or misrepresent their product’s capabilities. However, companies often use contractual clauses to protect themselves. These protective clauses limit liability, require arbitration, or disclaim responsibility for AI outputs. These clauses become contentious when actual harm occurs.

Copyright Infringement

Quite a few court cases involve copyright infringement, with authors and creators claiming that training generative AI on their works without permission infringes their copyright. There is also a chance that the AI will use parts of their works in its responses, or that responses will be generated using several different source materials that are copyright protected. So, yes, generative AI raises serious copyright concerns, both in training and in output generation.

Thus far, we have witnessed litigation by authors, visual artists, and music publishers. In some places, copyright law has special rules that can hold AI companies responsible even if they didn’t directly copy someone’s work. These are called “contributory” and “vicarious” liability – meaning you can be liable for helping someone else infringe copyright, or for benefiting from infringement that happens under your control.

Because copyright law allows courts to award statutory damages (set amounts of money, without needing to prove actual financial harm), this is one of the biggest financial risks AI companies face.

The AI companies, on the other hand, claim that training an AI falls under the “fair use” doctrine.

Privacy, Data-Protection, and Intrusion Violations

Many lawsuits claim that AI systems collect, keep, use, or expose people’s personal information without their explicit permission. These cases involve breaking data privacy laws (like Europe’s GDPR), invading people’s privacy, or misusing sensitive information. For example, a lawsuit called Cousart v. OpenAI shows how companies can be sued simply for how they handle data during training – not just for what the AI says or does afterward.

Emotional, Cognitive, and Psychological Harm

New studies show that chatbots can change how people remember things, alter their beliefs, or cause them to become emotionally dependent. Some lawsuits claim that AI chatbots harm users through these psychological effects. Plaintiffs argue that companies either intentionally designed them this way or were careless in creating systems that make people dependent, reinforce false beliefs, or worsen existing mental health problems. We’ll likely see more of these cases as we learn more about how regular AI use affects people’s minds.

Regulatory and Compliance Liability

As governments create new laws specifically for AI, companies can get in trouble for not following rules about being transparent, allowing audits, and managing risks properly. This includes laws like the EU AI Act, the Digital Services Act, and special rules for industries like healthcare or finance. Regulators can impose fines, ban certain activities, or restrict how companies operate – even without anyone filing a lawsuit.

Emerging and Hybrid Theories

Because AI doesn’t fit neatly into existing legal categories, courts and legal experts are creating new mixed approaches to determine who’s responsible when something goes wrong. These include treating AI as if it’s acting on behalf of the company, applying free speech laws to AI-generated content, or creating entirely new legal responsibilities for how algorithms influence people. As judges handle more AI cases, these hybrid approaches may eventually become their own distinct areas of law.

How to mitigate liability for AI errors

The following four suggestions can help mitigate the risks of liability.

  • Implement human oversight: critical decisions should not be made solely by AI without human review.
  • Provide training for the users: train employees on the limitations of AI tools and the importance of verifying information.
  • Use technical safeguards: limit an AI’s access to sensitive data and implement technical solutions to check the accuracy of its outputs (see the toy sketch after this list).
  • Conduct risk assessments: before deployment, assess the potential harms of AI use and develop governance and response procedures.
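
To make the third point more concrete, here is a toy Python sketch of one possible accuracy check: it flags sentences in an AI answer that share few words with the source documents the answer is supposed to rely on. The overlap heuristic and the threshold are illustrative assumptions, not a production-grade guardrail.

```python
# Toy safeguard: flag AI-output sentences with little word overlap with the
# source documents. Heuristic and threshold are illustrative assumptions.

def flag_unsupported(output: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return sentences of `output` that are poorly supported by `sources`."""
    source_words = set(" ".join(sources).lower().split())
    flagged = []
    for sentence in output.split("."):
        words = set(sentence.lower().split())
        if words and len(words & source_words) / len(words) < threshold:
            flagged.append(sentence.strip())
    return flagged

sources = ["The contract was signed on 1 March 2024 by both parties."]
answer = "The contract was signed on 1 March 2024. The penalty clause is 5 million euros."
print(flag_unsupported(answer, sources))  # flags the unsupported penalty sentence
```

In practice, such checks are combined with access controls, content filtering, and human review rather than used on their own.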


Retrieval Augmented Generation

In previous articles, we talked about generative AI, its benefits, and the risks it comes with. One such risk is that generative AI can hallucinate. It also doesn’t have access to the professional information your organisation keeps internally. Retrieval augmented generation (RAG) addresses both issues. In this article, we answer the following questions: What is retrieval augmented generation? What are the benefits? And how can you use retrieval augmented generation with Copilot & SharePoint?

What is retrieval augmented generation?

Wikipedia defines retrieval augmented generation (or RAG) as “a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, LLMs do not respond to user queries until they refer to a specified set of documents. These documents supplement information from the LLM’s pre-existing training data. This allows LLMs to use domain-specific and/or updated information that is not available in the training data. For example, this helps LLM-based chatbots access internal company data or generate responses based on authoritative sources. RAG improves large language models (LLMs) by incorporating information retrieval before generating responses.”

In other words, RAG enhances large language models by connecting them to external knowledge sources. Instead of relying solely on the information the model learned during training, RAG first retrieves relevant documents or data from a database, or your knowledge base. It then uses that retrieved information to generate more accurate and up-to-date responses.

The basic idea is simple: when you ask a question, the system searches through a collection of documents (like company files, research papers, or websites) to find relevant information. Then it feeds both your question and those retrieved documents to the language model. The model uses this context to produce an answer that’s grounded in your specific data rather than just its own general training knowledge. So, these are the three steps of retrieval augmented generation (a minimal code sketch follows the list):

  • Retrieval: When a user asks a question, the RAG system searches an external knowledge base (like a company’s specific documents) for relevant information.
  • Augmentation: The retrieved information is then added to the original prompt, creating an “augmented” request.
  • Generation: The large language model (LLM) then generates a response based on this augmented prompt, using the external data to provide a more specific and accurate answer.
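
As a minimal sketch of these three steps, the following Python fragment uses a toy keyword retriever and a placeholder generate() step. In a real system, the retriever would query a search or vector index and generate() would call an LLM API; the document collection here is purely illustrative.

```python
# Minimal RAG sketch: retrieve -> augment -> generate.
# retrieve() is a toy keyword matcher; generate() is a placeholder for an LLM call.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Score each document by word overlap with the query (toy retrieval)."""
    query_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda doc: len(query_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def augment(query: str, passages: list[str]) -> str:
    """Build an augmented prompt that grounds the model in retrieved text."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Placeholder for the LLM call in a real system."""
    return f"[LLM answer grounded in a prompt of {len(prompt)} characters]"

docs = ["Our firm's leave policy grants 25 days of annual leave.",
        "Client files are stored in the document management system."]
question = "How many days of annual leave do we get?"
print(generate(augment(question, retrieve(question, docs))))
```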

This approach solves several common problems with standard LLMs: a) it reduces hallucinations because the model bases its answers on actual retrieved text, b) it allows the system to access current information beyond the model’s training cutoff date, and c) it lets you use domain-specific knowledge without having to retrain the entire model. RAG is particularly useful for applications like customer support systems that need company-specific information. It is also useful for research assistants that work with scientific literature, or in any scenario where you need accurate answers based on a particular knowledge base.

Now, when you start researching retrieval augmented generation, you will often encounter the terms pipes or pipelines. These refer to the processing steps that transform a user’s query into a final response. They’re essentially the workflow or data flow that connects the different components of the RAG system. The “pipe” metaphor comes from Unix pipes, where data flows from one process to another.

Different RAG implementations can have varying pipeline architectures. Some are simple with just query, retrieve, and generate stages. Others are complex with multiple retrieval steps, feedback loops, or parallel processing paths.
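
To illustrate the pipeline idea, here is a minimal Python sketch in which each stage is a function that transforms a shared state, so stages can be added, swapped, or rearranged. The stage names and logic are hypothetical stand-ins for real components.

```python
from functools import reduce

# Minimal pipeline sketch: each stage transforms a shared state dict,
# Unix-pipe style. The stage logic is a hypothetical stand-in.

def parse_query(state: dict) -> dict:
    state["query"] = state["raw_input"].strip()
    return state

def retrieve(state: dict) -> dict:
    state["passages"] = ["(retrieved passage)"]  # plug in a real retriever here
    return state

def generate(state: dict) -> dict:
    state["answer"] = f"Answer to '{state['query']}' using {len(state['passages'])} passage(s)"
    return state

def run_pipeline(state: dict, stages) -> dict:
    """Pipe the state through each stage in order."""
    return reduce(lambda s, stage: stage(s), stages, state)

result = run_pipeline({"raw_input": "  What is RAG?  "},
                      [parse_query, retrieve, generate])
print(result["answer"])
```

A more elaborate pipeline would simply insert extra stages, e.g., a re-ranking step between retrieve and generate.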

What are the benefits?

RAG offers several benefits that make it attractive for real-world applications.

The fact that it offers access to current and specific information is perhaps the most obvious advantage. Since the model retrieves information from your own database or documents, it can work with a) data that’s more recent than its training cutoff or b) highly specialized knowledge that wasn’t in its original training data. This means companies can get accurate answers about their latest policies, recent research papers, or proprietary information. For law firms, depending on how you set it up, it can have access to your legal documentation, your knowledge base, your case files, and/or your documents.

As mentioned in the introduction, reduced hallucinations are another major benefit. When language models generate answers purely from their training, they sometimes confidently state incorrect information. RAG grounds the model’s responses in actual retrieved documents: it cites or bases its answers on real sources rather than just making things up. The result is output that is more reliable and trustworthy.

Another significant benefit is cost-effectiveness. With RAG, you don’t need to fine-tune or retrain large language models every time your information changes. Instead, you simply update your document database, and the RAG system will retrieve the new information. This is far cheaper and faster than retraining models, which requires substantial computational resources and technical expertise.

RAG also addresses the issues of transparency and traceability because you can see which documents the system retrieved to answer a question. This makes it easier to verify answers, debug problems, and build trust with users who can check the sources themselves.

A final benefit is referred to as domain adaptability. It means that you can quickly deploy the same base model across different domains or use cases by simply swapping out the document collection it retrieves from. One model can serve medical applications, legal research, or customer support just by changing the underlying knowledge base.

Retrieval augmented generation with Copilot & SharePoint

Interesting for law firms that use Copilot and SharePoint is that Copilot can be used in combination with SharePoint to enable RAG responses. Microsoft has made this integration quite powerful.

How does it work? Microsoft 365 Copilot offers a retrieval API that allows developers to ground generative AI responses in organizational data stored in SharePoint, OneDrive, and Copilot connectors. This means you can build custom AI solutions that retrieve relevant text snippets from SharePoint without needing to replicate or re-index the data elsewhere. The API understands user context and intent, performs query transformations, and returns highly relevant results from your Microsoft 365 content.

This approach offers several advantages for RAG implementations. You don’t need to set up separate vector databases: you can skip the traditional RAG setup that involves chunking, embedding, and indexing documents. The API automatically respects existing access controls and governance policies, which ensures security and compliance. Additionally, you can combine SharePoint data with other Microsoft 365 and third-party sources to create richer, more comprehensive responses.
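
For developers, a call to this retrieval API could look roughly like the Python sketch below. Be warned that the endpoint URL, request fields, and response shape shown here are assumptions based on Microsoft’s announcements, not verified signatures; check the current Microsoft Graph documentation before relying on them.

```python
import requests

# Illustrative sketch only: the endpoint, body fields, and response shape are
# assumptions; consult the current Microsoft Graph documentation.
GRAPH_ENDPOINT = "https://graph.microsoft.com/beta/copilot/retrieval"  # assumed URL

def retrieve_snippets(access_token: str, query: str) -> list[dict]:
    """Ask the (assumed) retrieval endpoint for grounded snippets from SharePoint."""
    response = requests.post(
        GRAPH_ENDPOINT,
        headers={"Authorization": f"Bearer {access_token}",
                 "Content-Type": "application/json"},
        json={
            "queryString": query,          # assumed field name
            "dataSource": "sharePoint",    # assumed field name
            "maximumNumberOfResults": 10,  # assumed field name
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("retrievalHits", [])  # assumed response shape
```

The returned snippets can then be passed as context to the LLM of your choice, exactly as in the generic RAG pipeline described earlier.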

For personal experimentation

If you would like to first experiment on your own, you can try Google’s NotebookLM, which implements RAG technology. It’s an AI-powered research and writing assistant that helps users summarize and understand information from uploaded sources or specific websites.


AI Agents are the next big thing

In our previous article, we looked at legal technology predictions for 2025. Several experts predicted that AI agents would be the most important evolution. So, let’s have a closer look. In this article, we will answer the following questions: “What are AI agents?” and “Why are they important?” We will also talk about AI agents in legal technology.

What are AI Agents?

An artificial intelligence (AI) agent is a software program that can autonomously interact with its environment, collect data, and use that data to perform self-determined tasks to meet predetermined goals. Humans set the goals, but an AI agent independently chooses the best actions to achieve them. In other words, it is a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its workflow and utilizing available tools. AI agents may improve their performance by learning or by acquiring new knowledge.

IBM explains that “AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions. These agents can be deployed in various applications to solve complex tasks in various enterprise contexts from software design and IT automation to code-generation tools and conversational assistants. They use the advanced natural language processing techniques of large language models (LLMs) to comprehend and respond to user inputs step-by-step and determine when to call on external tools.”
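
The core of such an agent is a loop: observe the current state, decide on the next action (typically by asking an LLM), execute a tool, and repeat until the goal is met. Here is a minimal Python sketch; the tool functions and the decide() planner are stubs standing in for real tools and a real LLM call, and all names are hypothetical.

```python
# Minimal agent-loop sketch: decide() stands in for an LLM planner that picks
# the next tool; the tools are stubs. All names are hypothetical.

def search_case_law(query: str) -> str:
    return f"3 cases found for '{query}'"          # stub research tool

def draft_summary(notes: str) -> str:
    return f"Draft summary based on: {notes}"      # stub drafting tool

TOOLS = {"search": search_case_law, "draft": draft_summary}

def decide(goal: str, history: list[str]):
    """Stand-in for the LLM call that plans the next (tool, argument) step."""
    if not history:
        return ("search", goal)        # first: gather information
    if len(history) == 1:
        return ("draft", history[-1])  # then: act on what was found
    return None                        # goal reached: stop the loop

def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    while (step := decide(goal, history)) is not None:
        tool_name, argument = step
        history.append(TOOLS[tool_name](argument))  # act and record the result
    return history

print(run_agent("limitation periods for contract claims"))
```

Real agent frameworks add planning, memory, and safety checks around this loop, but the observe-decide-act cycle stays the same.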

Why are they important?

Some refer to agentic AI as the third wave of the AI revolution. The first wave was predictive analytics, where AI could crunch large datasets to discover patterns and make predictions. The second wave was generative AI, which uses deep learning and large language models (LLMs) to perform natural language processing tasks. And now, the third wave consists of AI agents that can autonomously handle complex tasks.

Because they can autonomously handle complex tasks better than ever before, AI agents can change the way we work. One headline gives the example of an AI agent that can reduce programming projects from months to days. There already are e-commerce agents, sales and marketing agents, customer support agents, and hospitality agents, as well as dynamic pricing systems, content recommendation systems, autonomous vehicles, and manufacturing robots, for example. And they all can do work that was previously done by humans.

AI agents clearly offer several benefits. They can dramatically improve productivity, as they can handle complex tasks without human supervision or intervention. And because processes are automated, this also reduces costs. AI agents can also be used to do research, which in turn allows for more informed decisions. AI agents also lead to an improved customer experience because they can “personalize product recommendations, provide prompt responses, and innovate to improve customer engagement, conversion, and loyalty.”

But, as with any breakthrough in AI, it is important to remain aware that there always is a dark side, too. Already there are warnings about ransomware AI agents, which work autonomously and are far more sophisticated than their predecessors.

AI Agents in legal technology

For quite a while now, legal technology has been using bots that automate certain processes. In a way, AI agents are the next generation of bots. Many legal technology experts predicted that 2025 would be the year of the legal AI agents.

A selection of predictions on AI agents in legal technology

The National Law Review, also quoted in last month’s article, interviewed more than sixty experts on legal technology. Several of them talked about AI agents in legal technology. Here is a selection of quotes.

Gabe Teninbaum stated that “The biggest surprise in legal AI in 2025 will be the emergence of agentic AI—systems capable of taking autonomous, goal-driven actions within set parameters. These tools won’t just assist lawyers but will independently draft contracts, conduct negotiations, and even manage compliance, pushing the profession to redefine what it means to “practice law.”” And “by 2025, legal AI will shift from supporting tools to decision-making partners, with agentic systems managing tasks like compliance monitoring and preliminary dispute resolution. The surprise won’t be AI’s capability—it will be the speed at which clients demand its adoption.”

Nicola Shaver said, “Agentic AI, with the capability to automate legal workflows end-to-end, will become more prevalent in 2025, as will AI-enabled workflows generally. We will see a move away from the chatbot model to generative AI that is built into the systems where lawyers work and that mimics the way lawyers work, making it easier to adopt. Lawyers should expect to access custom apps for their legal practice areas in places like their document management or practice management systems and will adopt the tools that they like at a deeper level. In 2025, some lawyers will be using generative AI on a daily basis without even noticing it, since it will be an enabler of so many systems in the back end with less of the prompting burden sitting with end users.”

Tom Martin echoes a similar sentiment, calling Agentic AI “a transformative leap in the direct provision of legal services, driven by strengthening multimodal AI models, agentic capabilities, seamless machine-level orchestration, and evolving regulations governing AI-driven legal entities. This shift won’t just streamline existing workflows; it will redefine the way legal services are conceived, delivered, and experienced.”

Jon M. Garon observes that, “The potential for user-operated agents will grow exponentially as these apps create the power to automate calendaring, meeting coordination, note-taking, work-out buddies, and much more, becoming true personal assistants. Lawyers will need to be careful that the agents do not disclose personal or client data, but with that problem solved, these will grow into a significant new market. ”

Evan Shenkman explains it as follows: “Think about tools that can listen in on depositions, trials, or client intake meetings, and provide the attorney — in real-time — with AI-powered guidance and assistance (issue spotting, identifying inconsistencies or falsehoods, etc.) based on the tool’s prior review and analysis of the entire case file. Or tools that can continually review the case docket, and then unilaterally alert the attorney of what just happened, what now needs to be done, and include GenAI-created proposed drafts based on prior firm samples. These tools are already in the works and will be mainstream soon enough. ”

Benefits of AI Agents in legal technology

The benefits AI Agents will bring to the field of legal technology apply not only to lawyers, but to all legal service providers, including alternative legal service providers.

One of the primary advantages of AI agents in the legal field is their ability to enhance efficiency and reduce costs. Bots have already been doing that to a certain extent by automating repetitive tasks such as document review, legal research, and contract analysis. AI agents are expected to take this automation to a new level, where entire workflows and more complex tasks will be handled as well. This will free up valuable time for attorneys to focus on more complex and strategic aspects of their work. It not only increases productivity but also reduces the likelihood of human error, leading to more accurate outcomes.

The capability to process and analyse large volumes of data at speed is particularly beneficial in legal research: AI can quickly sift through case law, statutes, and regulations to provide relevant information and insights.

Another significant benefit is improved client service. By providing real-time updates and centralized document management, these agents encourage better collaboration within legal teams. This leads to more cohesive workflows and ensures that all team members are informed and aligned. All of this contributes to enhancing the client experience. (Several experts, some of whom are quoted above, predict that client demand will be a major factor in the adoption of AI agents.)

AI agents also support transfer learning, which enables them to apply knowledge gained in one context to new, related tasks. This reduces the need for extensive retraining and allows legal professionals to leverage AI capabilities across various areas of law.


Microsoft CoPilot for Lawyers

Microsoft has started integrating generative AI in its products and services. In our previous article, we talked about SharePoint Syntex. In this article, we have a look at its much talked-about CoPilot. What is CoPilot? What can it do? What are the benefits of CoPilot? And what are the benefits of Microsoft CoPilot for lawyers? Finally, we look at the availability of CoPilot.

What is CoPilot?

Microsoft Copilot is a new AI assistant that can help you with various tasks across Windows, Microsoft 365, Bing, and Edge. It is an AI-powered productivity tool that uses large language models (LLMs) and integrates your data, e.g., with the Microsoft Graph and Microsoft 365 apps and services. It can answer your questions, generate content, suggest actions, and more. Copilot provides real-time intelligent assistance, enabling users to enhance their creativity, productivity, and skills.

CoPilot is not just one product or service, and that has led to some confusion. Microsoft has made different versions of CoPilot available, depending on your needs and preferences. At present, there are three versions that are most relevant.

First, there is CoPilot in Windows. This is the basic version of CoPilot that comes with Windows 11. You can launch it by clicking on its icon on the taskbar or by pressing the Windows logo key + C. CoPilot in Windows can help you with common tasks such as searching the web, organizing your windows, and adjusting your PC settings. You can also ask CoPilot questions and get relevant answers fast. For example, you can ask “What is the capital of South Africa?” and CoPilot will show you the answer along with a map and a link to learn more. CoPilot in Windows is being rolled out gradually and will be available in both Windows 10 and 11.

Next, there is the Microsoft 365 CoPilot. Let me first point out that there is some inconsistency in the use of the name. If you have Microsoft 365, a version of CoPilot will work alongside popular Microsoft 365 apps such as Word, Excel, PowerPoint, Outlook, Teams, and more. But the name is typically used more specifically for the CoPilot version for enterprise users of Microsoft 365. This is the premium version of CoPilot that requires a license for Microsoft 365 E3 or Microsoft 365 E5, and a separate license for Microsoft 365 CoPilot. (Read: you will have to pay extra). You can use the Microsoft 365 CoPilot setup guide in the Microsoft 365 admin centre to assign the required licenses to users. You can use Microsoft 365 CoPilot, e.g., to generate summaries of long documents in Word, create charts from data in Excel, design slides from keywords in PowerPoint, schedule meetings from emails in Outlook, and collaborate with teammates from chats in Teams.

Finally, there is Bing Chat (which has just also been renamed to CoPilot). This is an online version of CoPilot that uses Bing as the search engine. You can access Bing Chat by going to bing.com/chat or by clicking on the chat icon on the Bing homepage. “Bing Chat puts the power of AI into your online search” is how Microsoft puts it.

What can it do?

You can use Bing Chat for various purposes such as travel planning, community organizing, comparison shopping, or anything you search for on the web. You can use Bing Chat, e.g., to find the best deals on flights and hotels, get recommendations for local attractions and restaurants, join or create groups for common interests or causes, compare prices and features of different products or services, or explore any topic you are curious about.

Microsoft 365 Copilot can assist you in creating, editing, and improving your documents, emails, presentations, and more. It can help you write faster, better, and more confidently by generating text, suggesting edits, providing feedback, and offering insights. You can use it to create documents, emails, presentations, reports, blogs, and more. It can suggest content, format, style, and grammar based on your data, the Microsoft Graph, the Microsoft 365 apps, and the web. It can even catch up on email threads by getting a summary of the conversation. It can also answer your questions and provide relevant information from trusted sources.

The latter also applies to Bing Chat and the version of CoPilot that comes with Windows. It is a general-purpose generative AI tool that can answer questions, write texts, program code, etc. It can transcribe meetings and summarize the discussion in simple language. It can generate text and images based on your prompts and topics. It can turn documents into presentations or vice versa.

What are the benefits of CoPilot?

Microsoft identifies several benefits CoPilot offers. It can help you save time and effort by automating tedious tasks and generating content faster. It can assist you in learning new skills and improve your writing by providing feedback and suggestions. It also helps you unleash your creativity and explore new possibilities by offering diverse and relevant ideas.

What are the benefits of Microsoft CoPilot for Lawyers?

More specifically for lawyers, Copilot offers the following benefits. It helps to research legal topics and find relevant information from reliable sources. It assists in drafting contracts, agreements, and other legal documents with accuracy and clarity. And it can help you communicate effectively with clients, colleagues, and judges by using appropriate tone and language.

Availability of CoPilot

There is a lot of uncertainty about the availability of the different versions of CoPilot.

According to Microsoft, Copilot is currently available in the US, the UK, and select countries in Asia and South America. However, due to Europe’s privacy protection laws, Copilot is currently unavailable there. Microsoft aims to expand its availability beyond the initial regions, and is in negotiations with the EU.

Let us first have a look at the availability of CoPilot outside of the EU.

Since Microsoft Copilot is being integrated across different Microsoft products, the release dates differ.

  • Copilot started rolling out on Windows 11 on September 26 through a Windows 11 update.
  • Copilot began rolling out to Bing and Edge about two months ago.
  • Microsoft 365 Copilot began rolling out for enterprise customers on November 1 and will roll out to non-enterprise users at a later date. The enterprise version supports several languages, including English, Spanish, Japanese, French, German, Portuguese, Italian, and Chinese Simplified. More languages are planned to be supported over the first half of 2024.

Within the EU

Officially Microsoft 365 CoPilot for enterprise users is not yet available within the EU. However, several enterprise users who have their information hosted on Microsoft Azure servers within Europe have reported that Microsoft 365 CoPilot for enterprise users is available to them.

Copilot in Windows is available in Europe as a limited preview, meaning that it is not fully functional and may have some bugs or errors. Copilot for Sales is also available in preview, meaning that it is still under development and may change over time.

Microsoft has stated that it will comply with both the EU and the UK data protection laws and will ensure that its customers can continue to use its services without disruption. Microsoft has also announced that it will offer a new option for its customers in the EU: the EU Data Boundary. This option, planned for the end of 2022, will allow customers to choose to have their core customer data stored and processed within the EU only. It will cover Microsoft 365 CoPilot as well as other online services.

If you are interested in trying out Copilot in Europe, you may be able to bypass the regional restriction by running `microsoft-edge://?ux=copilot&tcp=1&source=taskbar` in the Run Command box. However, this may not work for all users and may violate the Digital Markets Act that disallows market monopoly. And you do so at your own risk.


Generative AI

In a previous article, we talked about ChatGPT. It is a prime example of generative AI (artificial intelligence). In this article, we will explore generative AI in a bit more detail. We’ll answer questions like, “What is Generative AI?”, “Why is Generative AI important?”, “What can it do?”, “What are the downsides?”, and “What are the Generative AI applications for lawyers?”.

What is Generative AI?

A website dedicated to generative AI defines it as “the part of Artificial Intelligence that can generate all kinds of data, including audio, code, images, text, simulations, 3D objects, videos, and so forth. It takes inspiration from existing data, but also generates new and unexpected outputs, breaking new ground in the world of product design, art, and many more.” (generativeai.net)

The definition Sabrina Ortiz gives on ZDNet is complementary: “All it refers to is AI algorithms that generate or create an output, such as text, photo, video, code, data, and 3D renderings, from data they are trained on. The premise of generative AI is to create content, as opposed to other forms of AI, which might be used for other purposes, such as analysing data or helping to control a self-driving car.” As such, Generative AI is a type of machine learning that is specifically designed to create (generate) content.

Two types of generative AI have been making headlines. There are programs that can create visual art, like Midjourney or DALL-E 2. And there are applications like ChatGPT that can generate almost any desired text output and excel at conversation in natural language.

Why is Generative AI important?

Generative AI is still in its early stages, and already it can perform impressive tasks. As it grows and becomes more powerful, it will fundamentally change the way we operate and live. Many experts agree it will have an impact that is at least as big as the introduction of the Internet. Just think of how much a part of our daily lives the Internet has become. Generative AI, too, is expected to become fully integrated into our lives, and it is expected to do so quickly. One expert predicts that on average we will have new and twice as powerful generative AI systems every 18 months. Only four months after ChatGPT 3.5 was released, a new, more powerful, more accurate, and more sophisticated version 4.0 followed on 14 March 2023. The new version is a first step towards multimodal generative AI, i.e., AI that can work with several media simultaneously: text, graphics, video, audio. It can create output of over 25 000 words of text, which allows it to be more creative and collaborative. And it’s safer and faster.

Let us next have a look at what generative AI can already do, and what it will be able to do soon.

What can it do?

One of the first areas where generative AI made major breakthroughs was the creation of visual art. Sabrina Ortiz explains, “Generative AI art is created by AI models that are trained on existing art. The model is trained on billions of images found across the internet. The model uses this data to learn styles of pictures and then uses this insight to generate new art when prompted by an individual through text.” There are several free AI art generators that you can try out for yourself.

We already know from our previous article that ChatGPT can create virtually any text output. It can write emails and other correspondence, papers, a range of legal documents including contracts, programming code, episodes of TV series, etc. It can assist in research, make summaries of text, describe artwork, etc.

More and more search engines are starting to use generative AI as well. Bing, DuckDuckGo, and You.com, e.g., all already have a chat interface. When you ask a question, you get an answer in natural language, instead of a list of URLs. Bing even gives the references that it based its feedback on. Google is expected to launch its own generative AI enabled search engine soon.

With regard to programming specifically, one of the major platforms for developers, GitHub, announced that it now has an AI Copilot for Business: an AI-powered developer tool that can write code, debug, and give feedback on existing code, and that can propose fixes for issues it detects in the code.

Google’s MusicLM can already write music upon request, and a similar offering was announced for the new ChatGPT version 4, too. YouTube has also announced that it will start offering generative AI assistance for video creation.

Generative AI tools can be useful writing assistants. An article on g2.com lists 48 free writing assistants, though many of them use a freemium model. Writer’s block may soon be a thing of the past, as several of these writing assistants only need a keyword to start producing a first draft. You even get to choose the writing style.

Generative AI can also accelerate scientific research and increase our knowledge. It can, e.g., lower healthcare costs and speed up drug development.

In Britain, a nightclub successfully organized a dance event where the DJ was an AI bot.

All existing chatbots can get an upgrade where they will become far better at natural language conversations. And generative AI integrated with the right customer processes will improve customer experience.

As you can see, even though we’re only at the beginning of the generative AI revolution, the possibilities are endless.

What are the downsides?

At present, generative AI tools are mostly tools that assist, and their output needs to be supervised. ChatGPT, e.g., sometimes gives incorrect answers. Worse, it can just make things up: an experiment with a legal chatbot discovered that the bot simply started lying because it had concluded that that was the most effective way to get the desired end result. So, there are no guarantees that the produced output is correct. And the AI system does not care whether what it does is morally or legally acceptable. Extra safeguards will have to be built in, which is why there are several calls to regulate AI.

There also is an ongoing debate about intellectual property rights. If a program takes an existing image and merely applies one or more filters, does this infringe on the intellectual property of the original artist? Where do you draw the line? And who owns the copyright on what generative AI creates? If, e.g., a pharmaceutical company uses an AI tool to create a new drug, who can take a patent? Is it the pharmaceutical company, the company that created the AI tool, or the AI tool itself?

And as generative AI becomes better, it will transform the knowledge and creative marketplaces, which will inevitably lead to the loss of jobs.

Generative AI applications for lawyers

As a result of the quick progress in generative AI, existing legal chatbots are already being upgraded. A first improvement has to do with user convenience and user-friendliness, because users can now interact with the bots through a natural language interface. The new generation of bots understands more and is also expected to become faster, safer, and more accurate. The new ChatGPT 4 scored in the 90th percentile on the bar exams, where ChatGPT 3 – only a few months earlier – barely passed some of the exams.

Virtual Legal Assistants (VLAs) are getting more and more effective in:

  • Legal research
  • Drafting, reviewing, and summarizing legal documents: contracts, demand letters, discovery demands, nondisclosure agreements, employment agreements, etc.
  • Correspondence
  • Creative collaboration
  • Brainstorming, etc.

As mentioned before, at present these AI assistants are just that, i.e., assistants. They can create draft versions of legal documents, but those still need revision by an actual human lawyer. These VLAs still make errors. But at the same time, they can already considerably enhance productivity by saving you a lot of time. And they are getting better and better fast, as the example of the bar exams confirms.


ChatGPT for Lawyers

In this article, we will first talk about recent evolutions in the field of generative Artificial Intelligence (AI) in general, and about a new generation of chat bots. Then we focus on one in particular that is getting a lot of attention, i.e., ChatGPT. What is ChatGPT? What can it do, and what are its limits? Finally, we look at the relevance of ChatGPT for lawyers.

Introduction

We are witnessing the emergence of a new generation of chat bots that are more powerful than ever before. (We discussed legal chat bots before, here and here). Several of them excel in conversation. Some of these conversationalist chat bots recently made headlines on several occasions. In a first example, in December 2022, the DoNotPay chat bot renegotiated a customer’s contract with Comcast’s chat bot and managed to save 120 USD per year. (You read that correctly: two bots effectively renegotiating a contract). Shortly afterwards, a computer using a cloned voice of a customer was connected to the DoNotPay chat bot. A call was made to the support desk of a company, and the speaking chat bot successfully negotiated a reduction of a commercial penalty with a live person. The search engine You.com has added a conversational chat bot that allows people to ask a question and receive the reply in a conversational format rather than as a list of links. Microsoft has announced that its Bing search engine will start offering a conversational interface as well.

Conversationalist chat bots are a form of generative AI. Generative AI has made tremendous progress in other fields, like the creation of digital artwork, filters and effects for all kinds of digital media, and the generation of documents. These can be any documents: legal documents, blog or magazine articles, papers, programming code… Only days ago, the CNET technology website revealed that it had been publishing articles written entirely by generative AI since November 2022. Over a period of two months, it published 74 articles that were written by a bot, and the readers did not notice.

One chat bot in particular has been in the news on a nearly daily basis since it was launched in November 2022. Its name is ChatGPT and the underlying technology has also been used in some of the examples mentioned above.

What is ChatGPT?

ChatGPT stands for Chat Generative Pre-trained Transformer. Wikipedia describes it as “a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge.”

In other words, it’s a very advanced chat bot that can carry a conversation. It remembers previous questions you asked and the answers it gave. Because it was trained on a large-scale database of texts, retrieved from the Internet, it can converse on a wide variety of topics. And because it was trained on natural language models, it is quite articulate.

What can it do and what are the limits?

Its primary use probably is as a knowledge search engine. You can ask a question just like you ask a question in any search engine. But the feedback it gives does not consist of a series of links. Instead, it consults what it has scanned beforehand and provides you with a summary text containing the reply to the question you asked.

But it doesn’t stop there, as the examples we have already mentioned illustrate. You can ask it to write a paper or an article on a chosen topic. You can determine the tone and style of the output. Lecturers have used it to prepare lectures. Many users asked it to write poetry on topics of their choice. They could even ask it to write sonnets or limericks, and it obliged. And most of the time, with impressive results. It succeeds wonderfully well in carrying a philosophical discussion. Programmers have asked it to write program code, etc. It does a great job of describing existing artwork. In short, if the desired output is text-based, chances are ChatGPT can deliver. As one reporter remarked, the possibilities are endless.

There are of course limitations. If the data sets it learned from contained errors, false information, or biases, the system will inherit those. A reporter who asked ChatGPT to write a product review commented on how the writing style and the structure of the article were very professional, but that the content was largely wrong. Many of the specifications it gave were from the predecessor of the product it was asked to review. In other words, a review by a person who has the required knowledge is still needed.

Sometimes, it does not understand the question, and it needs to be rephrased. On the other hand, sometimes the answers are excessively verbose with little valuable content. (I guess that dataset contained speeches by politicians). There still are plenty of topics that it has no reliable knowledge of. When you ask it if it can give you some legal advice, it will tell you it is not qualified to do so. (But if you rephrase the question, you may get an answer anyway, and it may or may not be accurate). Some of the programming code appeared to be copied from sites used by developers, which would constitute a copyright infringement. And much of the suggested programming code turned out to be insufficiently secure. For those reasons, several sites like StackOverflow are banning replies that are generated by ChatGPT.

Several other concerns have been voiced as well. As the example of CNET shows, these new generative AI bots have the potential to eliminate the need for a human writer. ChatGPT can also write an entire essay within seconds, making it easier for students to cheat or to avoid learning how to write properly. Another concern is the possible spread of misinformation. If you know enough of the sources of the dataset that the chatbot learns from, you could deliberately flood it with false information.

What is the Relevance of ChatGPT for Lawyers?

Lawyers have been using generative AI for a while. It has proven to be successful in drafting and reviewing contracts and other legal documents. Bots like DoNotPay, Lawdroid, and HelloDivorce successfully assist in legal matters on a daily basis. For these existing legal bots, ChatGPT can provide a user-friendly conversationalist interface that makes them easier to use.

When it comes to ChatGPT itself, several lawyers have reported on their experiences and tests with the system. It turned out that it could mimic the work of lawyers with varying degrees of success. For some items, it did a great job: it, e.g., successfully wrote a draft rental agreement, and it did a good job at comparing different versions of a legal document and highlighting the differences. But in other tests, the information it provided was inaccurate or plain wrong, e.g., where it confused different concepts.

And the concerns that apply to generative AI in general also apply to ChatGPT. These include concerns about bias and discrimination, privacy and compliance with existing privacy and data protection regulations like the GDPR, and fake news and misleading content. For ChatGPT, the issue of intellectual property rights was raised as well. The organization behind ChatGPT claims it never copies texts verbatim, but tests with programming code appear to show differently. (You can’t really paraphrase programming code).

Given the success of and interest in ChatGPT, the usual question was raised whether AI will replace the need for lawyers. And the answer stays the same: no, it won’t. At present, the results are often very impressive, but they are not reliable enough. Still, the progress that has been made shows that it will get better and better at performing some of the tasks that lawyers do. It is good at gathering information, at summarizing it, and at comparing texts. And only days ago (13 January 2023), the American Bar Association announced that ChatGPT had successfully passed one of its bar exams on evidence. But lawyers are still needed when it comes to critical thinking or the elaborate application of legal principles.

Conclusion

A new generation of chat bots is showing us the future. Even though tremendous progress has been made, there are still many scenarios where they’re not perfect. Still, they are improving every single day. And while at present supervision is still needed to check the results, they can offer valuable assistance. As one lecturer put it, instead of spending a whole day preparing a lecture, he lets ChatGPT do the preparation for him and write a first draft. He then only needs one hour to review and correct it.

For lawyers, too, the same applies. The legal texts it generates can be a hit and miss, and supervision is needed. You could think of the current status where the chat bot is like a first- or second-year law student doing an internship. They can save you time, but you have to review what they’re doing and correct where necessary. Tom Martin from Lawdroid puts it as follows: “If lawyers frame Generative AI as a push button solution, then it will likely be deemed a failure because some shortcoming can be found with the output from someone’s point of view. On the other hand, if success is defined as productive collaboration, then expectations may be better aligned with Generative AI’s strengths.”
