Tag Archives: artificial intelligence

Legal Technology Predictions for 2026

Sticking to the tradition of new year predictions, here is a selection of legal technology predictions for 2026. By now, AI has become ubiquitous. So, it shouldn’t come as a surprise that most authors spend plenty of time on AI-related predictions. We also discuss market-related predictions, as well as predictions about the law firm and legal practice. As can be expected, these three categories overlap.

AI-related predictions

The rise of AI in the law firm seems unstoppable at present. All authors in the sources below give their own AI-related predictions. These are the common themes.

AI becomes the defining force in legal technology

AI is going to reshape legal technology in a big way. It is becoming a fundamental part of how law firms work. AI doesn’t just write things for you. It manages entire tasks from start to finish. It can sort through cases, pull together documents, and help you make decisions. Most experts mention the same trend: AI tools that can think through problems, make plans, and handle complex legal work on their own. Basically, we’re shifting from AI as an assistant to AI as a genuine work partner. AI is fundamentally changing how lawyers do their jobs. It assists in everything from case management to predicting outcomes to organizing workflows.

AI is also changing how clients and lawyers interact from the very first conversation. More and more clients are showing up with advice they’ve already gotten from an AI tool. As a result, law firms need their own AI systems to sort through and make sense of what clients bring to the table before a human lawyer even gets involved. The firms that have already woven AI into their daily operations are going to cash in on that head start. The ones still dragging their feet risk getting left behind.

The shift from Generative AI to Agentic AI

As mentioned above, the biggest tech shift coming in 2026 is how AI works. We’re moving away from the old approach where you ask AI a question and it gives you an answer. Instead, we’re getting AI agents that can handle entire projects from beginning to end. Right now, you might need to constantly tell AI what to do next just to draft a single document. But in 2026, these AI agents will work more like digital colleagues. They’ll sort through new cases on their own, dig up relevant documents, fact-check information, and put together organized reports. And they do all of this without you needing to hold their hand through every step. This kind of “deep research” ability means AI is graduating from being a glorified word processor to something that can orchestrate complicated legal work. As a result, lawyers can oversee entire workflows without getting bogged down in every little detail.
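As a rough sketch of what such an agentic pipeline could look like under the hood, the toy Python below chains intake triage, document retrieval, verification, and reporting. Every function and data field here is a hypothetical stand-in, not any vendor’s API; a real agent would call a language model and a document store at each step.

```python
# A minimal, illustrative sketch of an "agentic" intake pipeline.
# Every function here is a hypothetical stand-in for an LLM or database call.

def triage(case: dict) -> str:
    """Classify an incoming matter (stand-in for an LLM classification call)."""
    return "contract" if "contract" in case["summary"].lower() else "general"

def retrieve_documents(category: str) -> list[str]:
    """Fetch relevant materials (stand-in for a document-store query)."""
    library = {
        "contract": ["precedent_A.pdf", "clause_bank.docx"],
        "general": ["intake_checklist.pdf"],
    }
    return library.get(category, [])

def verify(documents: list[str]) -> list[str]:
    """Keep only documents that pass a (placeholder) fact-check."""
    return [d for d in documents if d.endswith((".pdf", ".docx"))]

def report(case: dict, category: str, documents: list[str]) -> dict:
    """Assemble an organized summary for human review."""
    return {
        "client": case["client"],
        "category": category,
        "sources": documents,
        "needs_lawyer_signoff": True,  # the human stays in the loop
    }

def run_agent(case: dict) -> dict:
    # The agent chains the steps without per-step human prompting,
    # but the final output is still flagged for lawyer review.
    category = triage(case)
    docs = verify(retrieve_documents(category))
    return report(case, category, docs)

result = run_agent({"client": "Acme NV", "summary": "Dispute over a supply contract."})
print(result["category"])  # contract
```

The point of the sketch is the control flow, not the stubs: the lawyer supervises the final report rather than prompting each intermediate step.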

Data-driven strategy and predictive analytics

Law firms are starting to realize that all the data they’ve been collecting – billing records, case results, which clients are actually profitable – is worth its weight in gold. In 2026, using predictive analytics isn’t cutting-edge anymore; it’s just how things are done. Firms will tap into these insights to predict how judges might rule, get ahead of opposing counsel’s next move, and figure out which types of cases actually make them money.
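To make this concrete, here is a toy illustration, with invented numbers and field names, of the simplest version of such an analysis: aggregating billing records by practice area to see which matter types actually make money.

```python
from collections import defaultdict

# Fictional billing records: (practice_area, fees_billed, cost_of_delivery)
records = [
    ("employment", 12_000, 9_500),
    ("employment", 8_000, 8_400),
    ("ip",         30_000, 14_000),
    ("ip",         22_000, 11_000),
    ("tenancy",     3_000,  3_600),
]

# Total margin per practice area.
margin = defaultdict(float)
for area, fees, cost in records:
    margin[area] += fees - cost

# Rank practice areas by total margin, highest first.
ranking = sorted(margin.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)  # [('ip', 27000.0), ('employment', 2100.0), ('tenancy', -600.0)]
```

Real predictive analytics goes well beyond this, but even this level of aggregation already answers the "which cases make us money" question from the firm's own data.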

Market-related legal technology predictions

Several authors pay attention to changes in the legal services market in 2026.

Market consolidation and shakeouts

The legal AI market is heading for major consolidation in 2026, with more than half of current providers expected to be acquired or shut down. Companies offering basic AI features without distinctive value will struggle, while survivors will be those providing specialized capabilities and measurable results. Major vendors are moving from standalone products to integrated platforms that connect across the legal ecosystem.

Meanwhile, traditional Big Law firms face growing competition from alternative legal service providers and AI-powered firms that can handle standardized work like contracts and compliance at lower costs. Some AI vendors may even begin offering legal services directly, blurring the line between technology and law firm. This efficiency revolution is forcing the entire industry to reimagine how legal services are structured and priced.

Evolution of Revenue Models and Billing

AI and automation are making legal work so much faster that the old billable hour model is starting to fall apart. By 2026, we’re likely to see more firms experimenting with alternative fee arrangements that involve mixed pricing approaches: maybe hourly rates for some work, flat fees for other projects, and even subscription-style arrangements based on results. Why? Well, when AI can knock out routine tasks in a fraction of the time, firms need to shift their focus to charging for the high-level thinking and judgment that only experienced lawyers can provide. But this creates a new challenge: firms will need much better financial tracking for each case to make sure they’re still profitable, while also giving clients the transparency and fair pricing they’re increasingly demanding.
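A back-of-the-envelope calculation, with invented numbers, shows why this shift happens and why per-matter cost tracking becomes essential: under hourly billing, AI efficiency shrinks revenue, while under a flat fee it raises the effective rate.

```python
# Invented numbers: a routine contract review before and after AI assistance.
hourly_rate = 250          # EUR per hour
hours_before_ai = 10
hours_with_ai = 2
flat_fee = 1_200           # what the firm proposes to charge for the matter

hourly_revenue_before = hourly_rate * hours_before_ai   # 2500
hourly_revenue_with_ai = hourly_rate * hours_with_ai    # 500

# Under pure hourly billing, the efficiency gain cuts revenue by 80%.
# Under a flat fee, the same efficiency raises the effective hourly rate.
effective_rate = flat_fee / hours_with_ai               # 600.0 per hour
print(hourly_revenue_with_ai, effective_rate)  # 500 600.0
```

The flat fee only works out for the firm if it knows its true time cost per matter, which is exactly the financial tracking the paragraph above describes.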

Cybersecurity as a Competitive Differentiator

As law firms rely more heavily on AI tools and cloud-based systems, cybersecurity is something clients actively care about and ask for. By 2026, clients will expect “zero-trust” security setups and automatic encryption as basic requirements. They’ll choose firms that can prove they have strong vendor security and handle data safely. Firms with rock-solid security systems are already using that as a major selling point to win over high-value clients, and that trend is only going to accelerate.

Predictions about the law firm and legal practice

The remaining predictions largely have to do with the law firm and legal practice. They can be grouped in four categories.

The Shift to Unified Legal Platforms

The legal industry is moving away from using multiple disconnected tools toward unified cloud platforms that act as a single system of record. Instead of juggling separate software for billing, document management, HR, and client communication, law firms and in-house legal departments are adopting integrated environments that bring everything together in one place.

These unified platforms combine matter management, document automation, spend analytics, and risk tracking into a comprehensive solution. This consolidation creates a “single source of truth” that gives legal teams better visibility into case status, resource allocation, and spending patterns while eliminating the inefficiencies of managing siloed systems.

This shift is especially beneficial for small and mid-sized firms. Cloud-native platforms make enterprise-level automation and security accessible to firms of all sizes. As legal workflows become more automated and data-driven, these integrated platforms are democratizing access to sophisticated technology across the entire legal industry.

The Rise of the AI-Augmented Legal Professional

The role of the lawyer is evolving to require new technical skills. In 2026, successful lawyers will need to master AI tools and data analysis, not just traditional legal research. This means learning how to effectively prompt AI systems and oversee their outputs. This shift is changing how junior lawyers learn. Rather than spending years on repetitive tasks like document review, they’ll develop skills through AI-powered mentorship and practice simulations. The goal isn’t to replace lawyers with technology, but to free them from routine work so they can focus on what humans do best: strategic thinking, ethical judgment, and building client relationships. Technology handles the repetitive tasks while lawyers concentrate on the work that truly requires human expertise.

Strategic Imperatives: Adoption, ROI, and Skills

The key to success in 2026 isn’t just buying AI tools; it’s using them strategically. Law firms need to carefully select secure technology partners, redesign their workflows around AI, and establish clear metrics to track efficiency gains and client outcomes. Firms that started experimenting with AI in 2024-25 will pull ahead of their competitors by deploying these tools at scale, delivering faster service at lower costs. We’ve reached a point where those who waited will struggle to catch up.

Client Experience and Business Model Innovation

AI is transforming how law firms interact with clients and charge for their services. In 2026, firms are expected to use AI not just to work faster internally, but to provide clients with better transparency and responsiveness. They will use tools such as real-time dashboards and collaborative portals. And, as mentioned above, as AI handles routine tasks, traditional hourly billing will increasingly give way to fixed fees and subscription models. This shift responds to client demands for predictable, value-based pricing rather than simply paying for the time lawyers spend on a matter.

Conclusion

In 2026, legal technology will be characterized by mature AI systems, fewer but more powerful platforms, and fundamental changes in how law firms operate. Firms that adopt integrated cloud platforms, predictive tools, and advanced AI will outpace their competitors, while those that resist change risk becoming irrelevant. Every technology decision will need to prioritize security and client trust, and lawyers will increasingly need both legal expertise and tech skills. The legal industry has moved past the experimentation phase. Technology is now reshaping the profession’s core structure, transforming from a helpful tool into an essential strategic partner.

 

Sources:

Liability for AI errors

Who is responsible when AI makes errors? In the last year, we have seen several cases where AI companies were taken to court. Parents of a person who died by suicide sued for negligence. There have also been defamation cases, because generative AI produced factually incorrect responses. In one example, a radio host sued OpenAI because ChatGPT produced a summary falsely claiming he had embezzled funds. There have also been cases involving product liability and contractual liability, among others. So, in this article, we will explore several scenarios where liability for AI errors came into play. We look at the different types of liability and at how to mitigate the liability for AI errors.

Please note that this article is not meant to provide legal advice. It is merely a theoretical exploration.

Criminal vs civil liability

Liability for AI hallucinations is both a complex and a rapidly evolving legal area, with plenty of voids and grey areas. Cliffe Dekker Hofmeyr rightfully refers to it as a legal minefield.

There have been cases based on civil and on criminal liability. The situation at present is that most AI-related liability falls under civil law. This is because the claims concern compensation for harm, violation of private rights, or disputes between private parties. In many cases, the courts ruled that people are warned about hallucinations and that they use AI at their own risk. But other cases have shown that companies can be held liable for chatbot errors, and that legal professionals can face sanctions for relying on AI-generated but fictitious information.

Criminal liability connected to generative AI is currently rare because AI models lack intent. But there are scenarios where criminal law can be triggered. (See below).

Let us have a look at different types of liability.

Types of liability for AI errors

Defamation and Reputation Harm

A first series of cases involves defamation and reputation harm. Chatbots can generate false statements about individuals or organisations, sometimes with great specificity and apparent authority. When these falsehoods cause reputational damage, defamation law becomes relevant.

Early cases such as Walters v. OpenAI – the radio host mentioned above – illustrate how courts are beginning to test whether AI developers can be held responsible for hallucinated statements that damage someone’s reputation. In this case, the court ruled in favour of OpenAI. The court argued that Walters couldn’t prove negligence or actual malice, and that OpenAI’s explicit warnings about hallucinations weighed against liability. Thus far, defamation cases have largely been dismissed on those grounds.

Negligence and Duty of Care

Some lawsuits allege that AI systems failed to exercise reasonable care in situations where foreseeable harm was possible. Think of incidents such as self-harm, or of the AI giving dangerous instructions.

Cases like Raine v. OpenAI and suits against Character.ai argue that developers owed a duty to implement safeguards, detect crises, or issue proper warnings. The argument is that failure to do so contributed to severe harm or even death. These cases are presently (December 2025) ongoing, and the courts have not ruled yet.

Wrongful Death and Serious Psychological Harm

Several lawsuits allege that chatbots induced, worsened, or failed to de-escalate suicidal ideation. Thus far, all cases that were taken to court have been in the US. Families of victims argue that the systems were designed in ways that made such harm foreseeable.

This category overlaps with negligence but remains distinct. Wrongful-death statutes in the US create their own remedies and set higher standards for three key elements: proximate causation, foreseeability, and the duty to protect vulnerable users.

Misrepresentation, Bad Advice, and Professional Liability

Although a chatbot is not itself a licensed professional, users often treat it as one. When a model produces incorrect legal, medical, financial, or technical advice that leads to material harm, plaintiffs may frame the issue as negligent misrepresentation or unlicensed practice through automation.

In the Mata v. Avianca sanctions case, for example, lawyers relied on non-existent precedents that were fabricated by ChatGPT. The lawyers were fined. This case demonstrates how professional users may be held responsible.

The case also raises questions about whether the model provider shares liability. Thus far, they have escaped liability on the same grounds as mentioned before, i.e., that the user is explicitly warned that the AI may provide them with incorrect information.

Product Liability and Defective Design

Some lawsuits frame chatbots as consumer products with design defects, inadequate safety systems, or insufficient warnings. Under this theory, the output is seen not merely as “speech” but as behaviour of a product that must meet baseline safety expectations. Claims of failure to implement guardrails, insufficient content filtering, or design choices that make harmful outcomes foreseeable fall under this category.

Contractual Liability and Terms-of-Service Breaches

AI systems are governed by contractual agreements between the user and the provider. AI developers may face contract liability if they fail to deliver promised functionality, violate their own service terms, or misrepresent their product’s capabilities. However, companies often use contractual clauses to protect themselves. These protective clauses limit liability, require arbitration, or disclaim responsibility for AI outputs. These clauses become contentious when actual harm occurs.

Copyright Infringement

Quite a few court cases involve copyright infringement, where authors and other creators claim that training generative AI on their works without their permission constitutes copyright infringement. There is also a chance that the AI will use parts of their works in its responses, or that responses will be generated using several different source materials that are copyright protected. So, yes, generative AI raises serious copyright concerns, both in training and in output generation.

Thus far, we have witnessed litigation by authors, visual artists, and music publishers. In some places, copyright law has special rules that can hold AI companies responsible even if they didn’t directly copy someone’s work. These are called “contributory” and “vicarious” liability – meaning you can be liable for helping someone else infringe copyright, or for benefiting from infringement that happens under your control.

Because copyright law allows courts to award statutory damages (set amounts of money, without needing to prove actual financial harm), this is one of the biggest financial risks AI companies face.

The AI companies, on the other hand, claim that training an AI falls under the “fair use” doctrine.

Privacy, Data-Protection, and Intrusion Violations

Many lawsuits claim that AI systems collect, keep, use, or expose people’s personal information without their explicit permission. These cases involve breaking data privacy laws (like Europe’s GDPR), invading people’s privacy, or misusing sensitive information. For example, a lawsuit called Cousart v. OpenAI shows how companies can be sued simply for how they handle data during training – not just for what the AI says or does afterward.

Emotional, Cognitive, and Psychological Harm

New studies show that chatbots can change how people remember things, alter their beliefs, or cause them to become emotionally dependent. Some lawsuits claim that AI chatbots harm users through these psychological effects. Plaintiffs argue that companies either intentionally designed them this way or were careless in creating systems that make people dependent, reinforce false beliefs, or worsen existing mental health problems. We’ll likely see more of these cases as we learn more about how regular AI use affects people’s minds.

Regulatory and Compliance Liability

As governments create new laws specifically for AI, companies can get in trouble for not following rules about being transparent, allowing audits, and managing risks properly. This includes laws like the EU AI Act, the Digital Services Act, and special rules for industries like healthcare or finance. Regulators can impose fines, ban certain activities, or restrict how companies operate – even without anyone filing a lawsuit.

Emerging and Hybrid Theories

Because AI doesn’t fit neatly into existing legal categories, courts and legal experts are creating new mixed approaches to determine who’s responsible when something goes wrong. These include treating AI as if it’s acting on behalf of the company, applying free speech laws to AI-generated content, or creating entirely new legal responsibilities for how algorithms influence people. As judges handle more AI cases, these hybrid approaches may eventually become their own distinct areas of law.

How to mitigate liability for AI errors

The following four suggestions can help mitigate the risks of liability.

Implement human oversight: critical decisions should not be made solely by AI without human review.

Provide training for the users: train employees on the limitations of AI tools and the importance of verifying information.

Use technical safeguards: limit an AI’s access to sensitive data and implement technical solutions to check the accuracy of its outputs.

Conduct risk assessments: before deployment, assess the potential harms of AI use and develop governance and response procedures.
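To illustrate the third suggestion, here is a minimal sketch of one possible technical safeguard: stripping obvious personal identifiers from a prompt before it reaches an external AI service. The regex patterns are deliberately simplistic examples; a production system would rely on a dedicated PII-detection tool and a maintained pattern set.

```python
import re

# Simplistic, illustrative redaction patterns -- real deployments would use
# a proper PII-detection library, not two hand-written regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the AI call."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

safe = redact("Client jan.peeters@example.com, phone +32 478 12 34 56, disputes the invoice.")
print(safe)  # Client [EMAIL], phone [PHONE], disputes the invoice.
```

The same pattern (transform or filter the data before it leaves the firm) applies to limiting which documents an AI tool can read in the first place.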

 

Sources:

Legal chatbots in 2025

We have in the past dedicated articles to legal chatbots in 2016 and 2019. It is time for an update. In this article, we discuss trends and adoption of legal chatbots, as well as existing regulation. Then we look at legal chatbots for consumers and legal chatbots for law firms. We will do so for the US (because it is still the market leader), the UK, and the EU.

Trends and adoption

The US has seen a rapid growth of bots / AI agents in law firms: AI adoption in US law firms surged from 19% in 2023 to 79% in 2024, with chatbots playing a central role. This market expansion is still ongoing: the US legal tech market is projected to reach $32.54 billion by 2026, with chatbots as one of the main drivers.

In the UK, adoption is most advanced in large, business-to-business (B2B) law firms. Chatbots are integrated with legal analytics, project management, and contract management systems. In contrast, the business-to-consumer (B2C) market is slower to adopt. Legal chatbots are most popular in firms with large-scale, commoditized services. Chatbot and AI adoption is slowed down by a lack of awareness and uncertainty about the role of AI: over one-third of UK legal professionals remain uncertain about the application of generative AI and chatbots in legal work.

In the EU on the other hand, we are witnessing an increasing adoption. There is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs. Chatbots are also seen as tools to improve access to justice, particularly for underserved populations and cross-border matters. At the same time, there are ongoing ethical and legal debates, as there are concerns about accuracy, liability, and bias in AI-generated legal advice.

Regulation

In recent years, there has been a move towards regulating the use of AI, which also affects the use of legal chatbots. There is a need for transnational regulation, but thus far, each region just does its own thing.

In the US, we are confronted with fragmented regulation. The US lacks a comprehensive federal AI law. As a result, regulation is piecemeal: we are dealing with a) state-level initiatives and b) professional (ethical) conduct rules that guide how lawyers can use AI. When it comes to legal chatbots specifically, there is a requirement for professional oversight. In other words, chatbots cannot independently practice law, and human supervision is required to avoid unauthorized practice and to ensure accuracy. And of course, law firms must consider privacy and security when using legal bots. Compliance with privacy laws is essential, especially when handling sensitive client data.

It is worth noting that the FTC (Federal Trade Commission) has made clear that bots cannot market themselves as “robot lawyers” or a substitute for licensed counsel without substantiation. Its 2024 enforcement against DoNotPay (a consumer rights bot we discussed in previous articles) resulted in a $193,000 penalty and strict advertising restrictions. This FTC ruling is widely cited as the line in the sand for consumer legal AI claims.

Furthermore, the American Bar Association’s first formal opinion on generative AI (Formal Opinion 512, 2024) says lawyers must a) understand the capabilities and limits of AI, b) protect confidentiality, c) supervise outputs, and d) be candid with courts and clients. They do not need to be “AI experts,” but they can’t delegate professional judgment to a bot. Several bar associations and courts have issued similar guidance.

The UK relies on flexible, sector-specific laws and regulation, with a focus on transparency, explainability, and data protection (UK GDPR). In addition, legal professionals must ensure chatbots comply with professional ethical standards, including confidentiality and competence.

In the EU, we find regulation on both the EU level, as well as on the national level. On the EU level, the GDPR and the EU AI Act are the most important regulations. The GDPR has strict data privacy requirements which also apply to chatbot operations, especially with sensitive legal data. The EU AI Act introduces risk-based regulation, with high-risk applications (like legal advice) facing stricter requirements for transparency, accuracy, and human oversight.

Apart from the EU regulations, we also find that some National Bar Associations have issued their own regulations. As a result, in some countries only licensed lawyers can provide legal advice. This effectively limits the chatbot scope and/or requires professional supervision.

Legal chatbots for consumers

In previous articles on legal chatbots, we mainly discussed legal chatbots for consumers. What they all have in common is that they facilitate access to legal information. They democratize legal knowledge, making it more accessible to the public. (Links in the introduction). Overall, there still is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs.

Legal chatbots for law firms

Apart from chatbots for consumers, in recent years we have also witnessed an increase in the number of legal chatbots for law firms. What are they used for?

  • Automation of routine tasks: chatbots automate legal research, contract review, and administrative work.
  • Document automation: bots are assisting lawyers with the creation and review of standard legal documents.
  • Legal research: AI chatbots can scan and summarize large volumes of legal documents and precedents rapidly.
  • Client engagement and intake: they are also used to handle initial queries, provide information, and schedule appointments, and they can direct clients to appropriate services or professionals.
  • A better consumer experience: some law firms use their own legal chatbots to offer consumer services. By doing so, they enhance accessibility in areas like small claims, tenancy issues, and basic legal advice.

Conclusion

Legal chatbots have become an essential part of legal services in the US, UK, and Europe. Big law firms and routine legal services have been the quickest to adopt these technologies, but now we’re seeing more tools that help everyday people access legal help.

Regulatory frameworks are evolving rapidly, with the EU leading in comprehensive risk-based regulation, the UK favouring sector-specific guidance, and the US maintaining a fragmented, state-driven approach. Across all regions, the focus is on balancing innovation with ethical, professional, and data privacy safeguards.

At present, the US is still leading the way when it comes to legal chatbots. Most research and drafting bots originate in the US (Thomson Reuters, Lexis, Harvey, Bloomberg). The UK, on the other hand, is presenting itself as a contract-review hub: tools like Luminance and Robin AI grew out of the UK’s startup ecosystem. Continental European firms use a mix of US/UK platforms under GDPR controls, but also homegrown tools like ClauseBase and Legito for contract and document automation.

 

Sources:

 

Using AI for legal research

How safe is using AI for legal research? On the one hand, AI is making quick progress and keeps getting better. The arrival of a new generation of AI agents will only speed up that process. On the other hand, we keep seeing headlines about law firms being fined for using AI that referred to non-existent legislation and jurisprudence. In this article, we look at a) how AI is reshaping legal research, b) the risks and accuracy concerns of using AI in legal research, and c) possible mitigation strategies. Finally, d) we look at using AI for legal research on non-US law.

How is AI reshaping legal research – benefits

AI has been having a significant impact on legal research, and generative AI has certainly sped up that process. Many law firms are using generative AI to assist them with their legal research. It is easy and convenient, as they can ask questions in natural language, rather than having to study some query language. And now that most generative AIs have started offering more advanced research agents that can provide sources, AI has become even more attractive. So, AI is significantly reshaping legal research in several impactful ways. Most of those are beneficial.

One of the most noticeable changes is the enhanced speed and efficiency it brings. AI tools are capable of sifting through vast volumes of legal data in seconds, identifying relevant information much faster than a human could. This efficiency saves lawyers considerable time and resources.

Beyond speed, AI can also improve the accuracy and depth of insight in legal research. By analysing large datasets, AI can detect patterns and extract insights that might go unnoticed by human researchers. It can also flag potential errors or inconsistencies in legal documents, helping to ensure the accuracy and reliability of the information used. But caution is needed, as we will discuss below.

Another major advantage is the broader access to legal information that AI provides. These tools can draw from a wide array of sources, including statutes, case law, legal journals, and specialized databases. This comprehensive reach allows lawyers to develop a fuller understanding of the legal issues they face.

Natural Language Processing (NLP) and machine learning further enhance AI’s capabilities in the legal field. NLP enables AI to comprehend the meaning within legal texts. This allows it to extract key information and identify relevant precedents. Meanwhile, machine learning algorithms can analyse historical case data to predict outcomes. This gives lawyers valuable insights into the strengths and weaknesses of their cases.

AI is also increasingly being integrated into established legal research platforms. This integration improves the efficiency and comprehensiveness of legal research.

However, as AI becomes more embedded in legal practice, responsible usage is essential. Ensuring accuracy, upholding ethical standards, and maintaining regulatory compliance are critical. Lawyers must treat AI as a supportive tool rather than a standalone solution, and it remains vital to verify any information generated by AI systems. Because there are still considerable risks involved in using AI for legal research.

Risks and accuracy concerns of using AI in legal research

In a recent case in California, a judge found that nine of the twenty-seven quoted sources were non-existent. The two law firms involved (one had delegated research to the other) were fined USD 31,000. If you follow the news on legal AI, you know this is a common problem. Apart from that, AI output still often is biased, too. Let’s have a closer look at both issues.

Accuracy concerns

AI systems can produce inaccurate, incomplete, or misleading legal information. This is particularly the case when dealing with complex cases, with nuanced legal concepts, or when legislation or jurisprudence has changed recently.

Even worse are AI “hallucinations”. As witnessed in the example above, AI can generate plausible but factually incorrect information. It is therefore crucial to verify all AI-generated output against credible sources. The Californian example highlights how serious this risk is: one in three of the quoted sources did not exist.

The example also illustrates the risk of reliance on AI without oversight. You cannot assume the AI knows what it’s doing. Over-reliance on AI without thorough human review can lead to errors that compromise case outcomes and erode client trust.

Bias and ethical concerns

In previous articles, we pointed out that AI inherits and reflects all the biases of the data pool that it was trained upon. This can lead to unfair or discriminatory legal outcomes. So, bias in AI algorithms is a first concern.

Many AI systems cannot explain how they reached their conclusions, or they fail to mention sources. Lack of transparency and accountability, therefore, is a second issue. The algorithms used by AI systems can be opaque, making it difficult to understand how decisions are made and hold the AI system accountable.

Clients may not fully understand the role of AI in their legal representation. This can easily undermine their trust. Clear communication is essential.

As with any online tool lawyers use that shares client information, there are data privacy and confidentiality concerns.

Finally, there is the aspect of professional responsibility. Lawyers have a duty to supervise AI-generated work, ensuring it is accurate and ethical. They also must communicate with clients about the use of AI tools.

Mitigation strategies

It is possible to counteract these risks by implementing some mitigation strategies.

  • Always verify AI-generated results against credible legal databases and primary sources.
  • Actively oversee and review AI-generated work to ensure accuracy, as well as ethical compliance.
  • Be transparent with clients about the use of AI tools.
  • Implement robust data security measures to protect client information and comply with privacy regulations.
  • Adhere to ethical guidelines and professional responsibilities when using AI in legal practice.
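To illustrate the first mitigation point, part of a citation check can be automated. The sketch below is illustrative only: the trusted-citation set is a hypothetical stand-in for a query against a credible legal database, and the regular expression covers only one simple citation format.

```python
# Illustrative only: a partial, automated citation check. The "verified"
# set below is a hypothetical stand-in for a lookup in a credible legal
# database, and the pattern covers only one simple citation format.
import re

VERIFIED_CITATIONS = {
    "Smith v. Jones, 123 F.3d 456",
    "Doe v. Acme Corp., 789 F.2d 101",
}

# One-word first party, capitalized second party, federal reporter cite.
CITATION = re.compile(
    r"[A-Z][\w.]+ v\. [A-Z][\w.]*(?: [A-Z][\w.&]*)*, \d+ F\.\d?d \d+"
)

def flag_unverified(ai_text):
    """Return cited cases in the AI output that cannot be verified."""
    return [c for c in CITATION.findall(ai_text) if c not in VERIFIED_CITATIONS]

draft = ("As held in Smith v. Jones, 123 F.3d 456, and confirmed in "
         "Roe v. Nowhere Inc., 555 F.3d 999, the claim fails.")
print(flag_unverified(draft))  # -> ['Roe v. Nowhere Inc., 555 F.3d 999']
```

Every flagged citation still needs a human check; the point is merely that unverifiable citations can be surfaced automatically before work product leaves the firm.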

What about using AI for legal research on non-US law?

Most of the advances in generative AI are being made in the US, and the EU is catching up. How well do the generative AI platforms perform when it comes to non-US law? And are they available in other languages?

Let’s start with the language question: all the major generative AI engines are available in Dutch and French.

Then, what about non-US law? We ran some tests on European law, more specifically on the GDPR, and overall, these tests went well. We did not test on recent legislation or jurisprudence.

We also briefly ran some tests with Belgian law. We thought art. 1382 of the Civil Code would be an interesting test case, given that it was recently replaced by a new book 6 on extra-contractual liability. We ran the test on ChatGPT, Copilot, Gemini, Claude, Perplexity, Grok, and You.com. Only four out of seven (ChatGPT, Copilot, Gemini, and Grok) pointed out that art. 1382 CC had been replaced. The other three, Claude, Perplexity, and You.com, did not mention book 6 on extra-contractual liability at all.

So, while caution and supervision are already needed for US and EU law, it is even more the case for the law of EU member states, where several generative AI platforms were not (yet) aware of recent legislation.

Conclusion

Using AI for legal research holds promise, but supervision is still very much needed. The examples above show that these platforms can still hallucinate, and that they may not be aware of recent changes in legislation or jurisprudence.

 

AI Agents are the next big thing

In our previous article, we looked at legal technology predictions for 2025. Several experts predicted that AI agents would be the most important evolution. So, let’s have a closer look. In this article, we will answer the following questions: “What are AI agents?” and “Why are they important?” We will also discuss AI agents in legal technology.

What are AI Agents?

An artificial intelligence (AI) agent is a software program that can autonomously interact with its environment, collect data, and use that data to perform self-determined tasks to meet predetermined goals. Humans set the goals, but an AI agent independently chooses the best actions it needs to perform to achieve those goals. In other words, it is a system or program that is capable of autonomously performing tasks on behalf of a user or another system by designing its own workflow and utilizing the tools available to it. AI agents may also improve their performance over time by learning or acquiring new knowledge.
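A minimal sketch may make the “goal in, self-chosen actions out” idea concrete. Everything below (the task-list environment, the shortest-task “planning” rule, and the class names) is invented for illustration; a real agent would use an LLM to plan and external tools to act.

```python
# A toy "agentic" loop: the human sets the goal; the agent repeatedly
# perceives, decides, and acts until the goal is met. The environment,
# the planning rule, and all names here are invented for illustration.

class TaskEnvironment:
    """A pretend matter file: a list of open tasks the agent can work on."""

    def __init__(self, tasks):
        self.open_tasks = list(tasks)
        self.log = []

    def observe(self):
        return list(self.open_tasks)

    def complete(self, task):
        self.open_tasks.remove(task)
        self.log.append(f"completed: {task}")


class SimpleAgent:
    def __init__(self, goal_reached):
        self.goal_reached = goal_reached  # goal predicate, set by the human

    def decide(self, observations):
        # Stand-in for LLM-based planning: just pick the shortest task name.
        return min(observations, key=len)

    def run(self, env, max_steps=10):
        # The perceive -> decide -> act loop, with no step-by-step prompting.
        for _ in range(max_steps):
            observations = env.observe()
            if self.goal_reached(observations):
                break
            env.complete(self.decide(observations))
        return env.log


env = TaskEnvironment(["review contract", "file brief", "triage intake"])
agent = SimpleAgent(goal_reached=lambda tasks: not tasks)
print(agent.run(env))  # the agent clears every task without further input
```

The human only specifies the end state (no open tasks); which action to take at each step, and in what order, is left entirely to the agent.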

IBM explains that “AI agents can encompass a wide range of functionalities beyond natural language processing including decision-making, problem-solving, interacting with external environments and executing actions. These agents can be deployed in various applications to solve complex tasks in various enterprise contexts from software design and IT automation to code-generation tools and conversational assistants. They use the advanced natural language processing techniques of large language models (LLMs) to comprehend and respond to user inputs step-by-step and determine when to call on external tools.”

Why are they important?

Some refer to agentic AI as the third wave of the AI revolution. The first wave was predictive analytics, where AI could crunch large datasets to discover patterns and make predictions. The second wave was generative AI, which uses deep learning and large language models (LLMs) to perform natural language processing tasks. And now, the third wave consists of AI agents that can autonomously handle complex tasks.

Because they can handle complex tasks autonomously, and better than ever before, AI agents can change the way we work. One headline gives the example of an AI agent that can reduce programming work from months to days. There already are e-commerce agents, sales and marketing agents, customer support agents, and hospitality agents, as well as dynamic pricing systems, content recommendation systems, autonomous vehicles, and manufacturing robots, for example. And they can all do work that was previously done by humans.

AI agents clearly offer several benefits. They can dramatically improve productivity, as they can handle complex tasks without human supervision or intervention. And because processes are automated, costs are reduced as well. AI agents can also be used to do research, which in turn allows users to make informed decisions. AI agents also lead to an improved customer experience because they can “personalize product recommendations, provide prompt responses, and innovate to improve customer engagement, conversion, and loyalty.”

But, as with any breakthrough in AI, it is important to remain aware that there is always a dark side, too. There are already warnings about ransomware AI agents, which work autonomously and are far more sophisticated than their predecessors.

AI Agents in legal technology

For quite a while now, legal technology has been using bots that automate certain processes. In a way, AI agents are the next generation of bots. Many legal technology experts predicted that 2025 would be the year of the legal AI agents.

A selection of predictions on AI agents in legal technology

The National Law Review, also quoted in last month’s article, interviewed more than sixty experts on legal technology. Several of them talked about AI agents in legal technology. Here is a selection of quotes.

Gabe Teninbaum stated that “The biggest surprise in legal AI in 2025 will be the emergence of agentic AI—systems capable of taking autonomous, goal-driven actions within set parameters. These tools won’t just assist lawyers but will independently draft contracts, conduct negotiations, and even manage compliance, pushing the profession to redefine what it means to “practice law.”” And “by 2025, legal AI will shift from supporting tools to decision-making partners, with agentic systems managing tasks like compliance monitoring and preliminary dispute resolution. The surprise won’t be AI’s capability—it will be the speed at which clients demand its adoption.”

Nicola Shaver said, “Agentic AI, with the capability to automate legal workflows end-to-end, will become more prevalent in 2025, as will AI-enabled workflows generally. We will see a move away from the chatbot model to generative AI that is built into the systems where lawyers work and that mimics the way lawyers work, making it easier to adopt. Lawyers should expect to access custom apps for their legal practice areas in places like their document management or practice management systems and will adopt the tools that they like at a deeper level. In 2025, some lawyers will be using generative AI on a daily basis without even noticing it, since it will be an enabler of so many systems in the back end with less of the prompting burden sitting with end users.”

Tom Martin echoes a similar sentiment, calling Agentic AI “a transformative leap in the direct provision of legal services, driven by strengthening multimodal AI models, agentic capabilities, seamless machine-level orchestration, and evolving regulations governing AI-driven legal entities. This shift won’t just streamline existing workflows; it will redefine the way legal services are conceived, delivered, and experienced.”

Jon M. Garon observes that, “The potential for user-operated agents will grow exponentially as these apps create the power to automate calendaring, meeting coordination, note-taking, work-out buddies, and much more, becoming true personal assistants. Lawyers will need to be careful that the agents do not disclose personal or client data, but with that problem solved, these will grow into a significant new market. ”

Evan Shenkman explains it as follows: “Think about tools that can listen in on depositions, trials, or client intake meetings, and provide the attorney — in real-time — with AI-powered guidance and assistance (issue spotting, identifying inconsistencies or falsehoods, etc.) based on the tool’s prior review and analysis of the entire case file. Or tools that can continually review the case docket, and then unilaterally alert the attorney of what just happened, what now needs to be done, and include GenAI-created proposed drafts based on prior firm samples. These tools are already in the works and will be mainstream soon enough. ”

Benefits of AI Agents in legal technology

The benefits AI Agents will bring to the field of legal technology apply not only to lawyers, but to all legal service providers, including alternative legal service providers.

One of the primary advantages of AI agents in the legal field is their ability to enhance efficiency and reduce costs. Bots have already been doing that to a certain extent by automating repetitive tasks such as document review, legal research, and contract analysis. AI agents are expected to take this automation to a new level, where entire workflows and more complex tasks will be handled as well. This will free up valuable time for attorneys to focus on more complex and strategic aspects of their work. It not only increases productivity but also reduces the likelihood of human error, leading to more accurate outcomes.

The capability to process and analyse large volumes of data at high speed is particularly beneficial in legal research: AI can quickly sift through case law, statutes, and regulations to provide relevant information and insights.

Another significant benefit is the improved client service. By providing real-time updates and centralized document management, these agents encourage better collaboration within legal teams. This leads to more cohesive workflows and ensures that all team members are informed and aligned. All of this contributes to enhancing the client experience. (Several experts, some of whom are quoted above, predict that client demand will be a major factor in the adoption of AI agents).

AI agents also support transfer learning, which enables them to apply knowledge gained in one context to new, related tasks. This reduces the need for extensive retraining and allows legal professionals to leverage AI capabilities across various areas of law.

 

Legal technology predictions for 2025

At the end of the year and the beginning of a new one, many publications give their predictions for the new year. In this article, we will go over a selection of legal technology predictions for 2025. We can group them into four categories: legal technology predictions that do not involve AI, predictions on legal issues involving AI, predictions on AI in legal services, and finally, some other legal technology predictions on AI.

Legal technology predictions that do not involve AI

While most of the authors focus on the growing impact of AI, there also are legal technology predictions that do not involve it.

A first set of predictions has to do with client demands. Authors anticipate a significant further proliferation of blockchain, cryptocurrencies, and smart contracts. This will result in a growing demand for lawyers who are versed in these matters. Experts also predict that clients’ expectations will keep on rising, and that law firms will have to adapt to that demand. Already, the legal industry is witnessing a shift towards more client-centric services. Overall, experts also predict a growing demand for legal services for SMBs.

A second set of predictions has to do with the investments law firms will be making. Experts predict an overall increase in investments in technology, and more specifically, apart from AI, increases in spending on knowledge management and on cybersecurity.

Cybersecurity remains a critical concern for law firms, especially with the growing reliance on digital tools and AI. The sector is expected to invest more in cyber resilience strategies to counter potential threats, ensuring the protection of sensitive legal data and maintaining client trust. General counsel and chief legal officers need to up their game when it comes to cybersecurity.

Finally, experts expect the billable hour to further decline, and fixed fees and subscription billing to increase.

Predictions on legal issues involving AI

Several authors also focus on legal issues involving AI. On the one hand, there is the topic of regulating AI, and on the other hand, there is the topic of litigation.

Both the EU and the Council of Europe (CoE) published their frameworks on regulating AI. Unlike the EU AI Act, the Council of Europe’s Treaty is open to all countries that want to sign up. More sign-ups are expected. When it comes to the US, the situation is unclear, as the incoming Trump administration may withdraw from the CoE Treaty. Most experts do not expect the Trump administration to impose its own framework. Several authors do see initiatives both at the state level and at the level of local bar associations. The latter may impose ethical rules regarding the use of AI in law firms, especially when it comes to lawyers using generative AI.

There also is an anticipated increase in litigation related to AI tools and practices. One of the areas where experts predict more litigation involves the disputes over unauthorized use of copyrighted materials for AI training. They also expect an increase of product liability lawsuits involving AI-systems. And an increase in litigation is also anticipated when it comes to AI-induced biases in processes like job screening, and potential antitrust violations stemming from AI-driven pricing tools.

Predictions on AI in legal services

Most of the predictions, however, focus on how Artificial Intelligence will impact the delivery of legal services. And the topic that is most talked about is the introduction of AI agents in the delivery of legal service. Some call it the most important evolution for 2025.

So, what are we talking about? An AI agent is a software program designed to operate independently, perceiving its environment, analysing information, and taking actions to achieve specific goals. It gathers data through sensors or input systems, processes this data using logic or machine learning models, and performs tasks or interacts with its surroundings based on its objectives. These agents are widely used in applications such as virtual assistants, self-driving cars, and automated decision-making systems, allowing them to function without constant human intervention. So, you can think of them as the next generation of bots: more advanced and more versatile. And in 2025, they are expected to have a huge impact on the delivery of legal services and on the way that law firms and legal departments operate. We will discuss AI agents more in depth in a follow-up article.

AI is also becoming more integrated into all aspects of the delivery of legal services, from optimizing and automating workflows to enhancing knowledge management and handling specific tasks autonomously. Most experts anticipate that all cloud-based software for lawyers and law firms will integrate more AI into its systems. Overall, authors also predict that generative AI will become better and more specialized in specific legal areas.

Several authors talk about how artificial intelligence is already leading to a sharp increase in the productization of legal services. This applies to law firms and legal departments, but also to alternative legal service providers. Some expect hybrid lawyers and/or self-service legal platforms to become as ubiquitous as online banking. Some even anticipate that more and more lawyers will start collaborating with robot lawyers. And for the first time, some predict that within five years, the combination of advances in AI and breakthroughs in quantum computing will start replacing entry-level lawyers.

Other legal technology predictions on AI

Some experts also made some other legal technology predictions on AI. They are optimistic that Generative AI will improve access to justice, and that we will see courts start using Generative AI as well, in order to become more effective. They also expect a consolidation movement in the market of legal technology service providers. Finally, some expect that Legal AI and Generative AI will become part of the law school curriculum.

 

Recent Artificial Intelligence Regulations (2024)

In the past, we have discussed the need for artificial intelligence regulations. The first important initiative was the OECD establishing a series of non-binding guidelines in 2019. Another milestone was the EU AI Act of March 2024. Also in 2024, several other important regulatory frameworks were introduced. In this article, we will look at the Council of Europe Framework Convention on AI, at the United Nations’ AI Resolution, and at other artificial intelligence regulations and initiatives, including the Responsible AI in the Military Domain (REAIM) summit in Seoul.

Council of Europe Framework Convention on AI

The full name of the Council of Europe’s Framework Convention on AI is the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. It is the first legally binding international treaty that specifically focuses on regulating artificial intelligence (AI) in line with fundamental rights and values. Its provisions are meant to ensure that AI systems respect human rights, support democratic principles, and adhere to the rule of law.

The convention is an initiative of the Council of Europe, which is an international organization that was founded in 1949. Its goals are similar to the UN’s Declaration of Human Rights. It has 46 member states and focuses on promoting human rights, democracy, and the rule of law in Europe.

The history of the framework convention on AI began in 2020, when the Council recognized the need for a legal framework for AI. In 2021, it launched discussions among member states and experts. The aim was to draft a convention that would safeguard fundamental rights while fostering innovation. In 2023, the Council presented a draft of the convention. The framework was officially adopted on 17 May 2024 and opened for signature on 5 September 2024 to countries both within and outside Europe, making it a globally significant agreement. Notably, apart from the 46 Council of Europe member states, another 11 countries, including the US, have signed it as well. More may follow.

Much like the EU AI Act, the convention introduces a risk-based approach, addressing the design, deployment, and decommissioning of AI systems. It emphasizes transparency, accountability, and fairness while encouraging responsible innovation. High-risk AI applications, such as those with the potential to harm human rights, are subject to strict oversight. The treaty also allows flexibility for private actors to comply through alternative methods and includes exemptions for research and national security purposes.

This framework is crucial as it provides a common international standard for managing AI’s potential benefits and risks. It promotes trust in AI technologies by ensuring safeguards against misuse and unintended consequences while fostering innovation. The convention aligns closely with the European Union’s AI Act, reinforcing a shared commitment to ethical AI governance on a global scale.

The treaty is also important because of its ability to shape how AI is integrated into societies, balancing innovation with protecting democratic values. It seeks to protect individuals’ rights. AI systems can make decisions that affect people’s lives, such as in job recruitment or law enforcement, so the convention’s guarantee that these systems are fair and transparent is crucial. The convention also promotes accountability. It requires AI developers and users to take responsibility for their systems. This helps build trust between the public and technology. Furthermore, the convention supports democracy. It emphasizes the need for public participation in discussions about AI. This ensures that diverse voices are heard in shaping policies. Finally, it sets a precedent and standard for other countries. If Europe leads in AI regulation, other regions may follow. This can create a global framework for responsible AI use.

The United Nations’ AI Resolution

On March 21, 2024, the United Nations General Assembly adopted its first-ever, non-binding resolution on artificial intelligence (AI). This resolution promotes the development of “safe, secure, and trustworthy” AI systems. It is another significant step in creating global norms for managing AI, and it also aims to ensure the technology benefits humanity while addressing its risks. The resolution was led by the United States and co-sponsored by 123 countries, receiving unanimous support from all 193 UN member states.

Here, too, the history of this resolution traces back to the rapid growth of AI technology. As AI started to impact various sectors, concerns about its effects on society grew. We are talking about issues like privacy, bias, and the potential for misuse, which all became prominent. In response, the UN began discussions about how to address these challenges. In 2023, member states began drafting the resolution. After extensive negotiations, the resolution was adopted in March 2024.

The resolution recognizes the transformative potential of AI in addressing global challenges, such as achieving the United Nations’ Sustainable Development Goals. It encourages international cooperation to bridge digital divides, especially between developed and developing countries. One of its goals is to ensure equitable access to AI technologies. Member states are urged to regulate AI systems to protect human rights and privacy, avoid risks, and promote innovation.

Like other regulatory initiatives, this one also underscores the need for global collaboration in governing AI. There is a growing consensus that international regulation is critical to harnessing its benefits responsibly. The resolution aligns with similar efforts, like the European Union’s AI Act and the Council of Europe’s Framework Convention. It emphasizes the importance of ethical, human-centric AI development. It aims to prevent harm while promoting trust in AI systems globally.

This resolution’s importance lies in its acknowledgment of AI’s dual potential: as a tool for progress and a source of risks if left unchecked. It, too, provides a foundation for international frameworks to guide AI use in a way that supports sustainable development and safeguards fundamental rights.

Other artificial intelligence regulations and initiatives

The Global AI Safety Summit

The Global AI Safety Summit is a recurring international conference that discusses the safety and regulation of artificial intelligence (AI). The first Global AI Safety Summit was held on November 1–2, 2023 at Bletchley Park in Milton Keynes, United Kingdom. The summit’s goals included:

  • Developing a shared understanding of the risks of frontier AI
  • Establishing areas for collaboration on AI safety research
  • Launching the UK Artificial Intelligence Safety Institute
  • Testing frontier AI systems against potential harms

The summit was concluded with the Bletchley Declaration, which was signed by 28 nations.

The second Global AI Safety Summit, the AI Seoul Summit, was co-hosted by Britain and South Korea in May 2024. (A separate summit on military AI was also held in Seoul, in September 2024; see below.) The third Global AI Safety Summit will be held in February 2025 in France.

The Global AI Safety Summit brings together international governments, leading AI companies, civil society groups, and research experts. The summit aims to a) consider the risks of AI, especially at the frontier of development, b) discuss how those risks can be mitigated through internationally coordinated action, and c) understand and mitigate the risks of emerging AI while seizing its opportunities. The overall goal is the prevention and mitigation of harms from AI, whether deliberate or accidental. These harms could be physical, psychological, or economic.

The Responsible AI in the Military Domain (REAIM) summit in Seoul

The Responsible AI in the Military Domain (REAIM) summit was held in Seoul on 10 September 2024. About 60 countries, including the United States, endorsed a “blueprint for action” to govern the responsible use of artificial intelligence (AI) in the military. Notably, China did not endorse this blueprint.

The summit was a follow-up to one held in The Hague in 2023, where countries agreed upon a non-binding call to action on the topic.

The US AI Safety Summit

A separate global AI safety summit was planned by the Biden administration in the US. The idea was similar: to bring the leading stakeholders together to identify key issues and to suggest ideas for a regulatory framework. President-elect Trump, however, indicated that he would undo any such framework, so the plans were put on hold.

 

An introduction to AI computers for lawyers

AI computers are being called the biggest development in the PC industry in 25 years. Experts believe they could also trigger a refresh cycle in the PC industry. In this article, we will answer the following questions. What are AI computers? What are the benefits of AI computers, and what are the benefits for lawyers? Do you, as a lawyer, need to get one? What are the challenges and limitations for legal work?

What are AI computers?

So, what are AI computers? The term was launched by Intel. They describe it as follows: an AI PC has a CPU, a GPU, and an NPU, each with specific AI acceleration capabilities. An NPU, or neural processing unit, is a specialized accelerator that handles artificial intelligence (AI) and machine learning (ML) tasks right on your PC instead of sending data to be processed in the cloud. The GPU and CPU can also process these workloads, but the NPU is especially good at low-power AI calculations. The AI PC represents a fundamental shift in how our computers operate. It is not a solution in search of a problem. Instead, it promises to be a huge improvement for everyday PC usage.

In other words, AI PCs are regular personal computers that are supercharged with specialized hardware and software. These are specifically designed to handle tasks involving artificial intelligence and machine learning. When it comes to the hardware, what stands out is the presence of an NPU, i.e., a Neural Processing Unit. Its job is to accelerate AI workloads, particularly those that require real-time processing, like voice recognition, image processing, and deep learning applications.

AI PCs also run specialized software stacks, frameworks, and libraries tailored for Artificial intelligence and Machine Learning workloads. “The distinction between AI software and ‘normal’ software lies in how each type of application processes the work you ask it to do. A conventional application just provides pre-defined tools not unlike the specialty tools in a toolbox: you must learn the best way to use those tools, and you need personal experience in using them to be effective on the project at hand. It’s all up to you, every step of the way. In contrast, AI software can learn, make decisions, and tackle complex creative tasks in the same way a human might. That learning capability gives you a new kind of tool that can simply do the job for you at your request, because it has been trained to do so. This fundamental difference enables AI software to automate complex tasks, offer personalized experiences, and process vast amounts of data efficiently, transforming how we interact with our computers.”

Benefits of AI computers

Why were AI computers created in the first place? Generative AI has become extremely popular. But it puts high workloads on the cloud servers AI is using. The idea is to share that workload with the PCs of the users. And for that, you need to have powerful PCs with the necessary hardware and software. In short, AI computers are beneficial for the users, as well as for the manufacturers and AI service providers.

Benefits for users

Experts have identified many potential benefits for the users. AI PCs can boost productivity, enhance creativity, and improve user experience. Below are some of the key advantages the literature mentions, in random order.

Enhanced and accelerated performance for AI Tasks: AI PCs are equipped with hardware specifically designed to tackle demanding AI applications. This translates to faster processing of complex calculations and data analysis, crucial for tasks like video editing, scientific simulations, and training AI models. This acceleration can significantly speed up the training and inference of deep learning models. And other applications like video conferencing, e.g., can also greatly benefit from this enhanced performance.

Improved efficiency and automation: AI features can automate repetitive tasks, freeing you up for more strategic work. Imagine software that automatically categorizes your files or optimizes battery life based on usage patterns.

Improved power efficiency: AI accelerators like TPUs are designed to be power-efficient, consuming less energy while delivering high performance. Laptop batteries, e.g., will last longer before needing recharging. AI PCs can lead to lower operating costs and a smaller environmental footprint.

Personalized User Experience: AI can learn your preferences and adjust settings accordingly. Brightness, keyboard responsiveness, and even video call framing could adapt to your needs, creating a more comfortable and efficient work environment.

Boosted Creativity: some AI PCs come with built-in creative tools that can generate ideas, translate languages, or even write different creative text formats based on your prompts. This can be a game-changer for designers, writers, and anyone looking for a spark of inspiration.

Enhanced Security: AI-powered security features can constantly monitor for threats and potential breaches, offering an extra layer of protection for your data.

Benefits for chip manufacturers and for service providers

The new AI computers do not only benefit the users. As mentioned before, having part of the workload done on the users’ side also considerably reduces the workload on the servers of the AI service providers. One expert even estimates that, “By end of 2025, 75% of enterprise-managed data will be processed outside the data centre.” So, service providers will have to invest less in infrastructure.

At the same time, AI PCs can be useful in the data centre, too. Two important benefits they offer are scalability and a faster time-to-market. Many AI PCs support multiple AI accelerators, allowing for scaling up the computational power by adding more accelerators as needed. This scalability enables handling larger and more complex AI models and workloads. The accelerated performance of AI PCs can also significantly reduce the time required for training AI models, enabling faster iteration and deployment cycles for AI applications and solutions.

The introduction of a new type of personal computers is of course also good news for the manufacturers, as it creates a new – and booming – market. It should not come as a surprise then, that all major chip manufacturers like Intel, Nvidia, AMD, and Qualcomm have started making NPU chips. Apple, too, has announced new chips that are AI optimized. It is safe to assume that soon all new PCs, laptops, and tablets will be AI computers.

Benefits for lawyers

All of this raises the question: do you, as a lawyer, need one? Well, apart from the abovementioned general benefits, AI computers can offer lawyers specific benefits, too. They can, e.g., significantly enhance the efficiency of legal practices by automating routine tasks such as document review, legal research, eDiscovery, and contract analysis. Experts anticipate the following benefits.

Improved Legal Research: AI can analyse vast amounts of legal documents, regulations, precedents, and case law, helping lawyers identify relevant precedents and arguments much faster. This can save significant time and effort compared to traditional research methods.

Contract analysis and enhanced due diligence: AI can sift through contracts and financial records, highlighting potential risks and areas requiring closer scrutiny during due diligence. This is typically a time-consuming task for lawyers, one that AI can perform very quickly while also improving the accuracy and efficiency of legal reviews.

Legal document analysis, review, and drafting assistance: AI-powered tools can help lawyers draft legal documents by suggesting language, identifying inconsistencies, and ensuring compliance with regulations. AI models can also be trained to analyse and extract relevant information from large volumes of legal documents, contracts, and case files. The computational power of AI PCs can speed up this process significantly.

Predictive analytics: with the help of AI PCs, lawyers can develop predictive models to analyse the potential outcomes of legal cases based on historical data and various factors.

Natural language processing (NLP): AI PCs can be used to train and deploy NLP models for tasks like legal document summarization, information extraction, and sentiment analysis.
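To make the summarization task above concrete, here is a deliberately naive, purely illustrative sketch of extractive summarization in Python: it scores each sentence by how frequent its words are in the document and keeps the top-scoring ones. This is a toy under stated assumptions, not how commercial legal NLP tools work (those rely on trained language models); the function name and the sample text are invented for the example.

```python
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 1) -> str:
    """Naive extractive summary: keep the sentences whose words
    occur most often in the document overall, in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Overall word frequencies across the whole document.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s: str) -> int:
        # A sentence scores the sum of its words' document frequencies.
        return sum(freq[w] for w in re.findall(r"[a-z']+", s.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Re-emit the chosen sentences in their original order.
    return " ".join(s for s in sentences if s in top)

doc = ("The contract contains an indemnity clause. "
       "The indemnity clause caps liability at the contract value. "
       "Payment is due within thirty days.")
print(summarize(doc, 1))
# prints: The indemnity clause caps liability at the contract value.
```

The middle sentence wins because its words ("indemnity", "clause", "contract") recur throughout the document; real summarizers use far richer signals, but the principle of scoring and selecting sentences is the same.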

Challenges and limitations for legal work

At present, however, AI computers are still facing some challenges and limitations when it comes to legal work. While AI PCs can provide computational advantages, many legal applications may not require the full power of these specialized systems. For routine legal work, such as drafting documents or conducting basic research, regular desktop or laptop computers might suffice.

AI computers still have limited judgment and creativity. The core tasks of lawyers often involve legal reasoning, strategy, and creative problem-solving, areas where AI is still not very advanced. AI PCs can’t replace a lawyer’s ability to analyse complex situations, develop persuasive arguments, or adapt to unexpected circumstances in court.

There is also the issue of data dependence and accuracy: the effectiveness of AI tools heavily relies on the quality and completeness of the data they’re trained on. Legal data can be complex and nuanced, and errors in the data can lead to inaccurate or misleading results.

The benefits may not justify the higher costs. AI PCs can be significantly more expensive than traditional PCs. For lawyers who don’t handle a high volume of complex legal matters that heavily rely on AI-powered research or due diligence, the cost may therefore not be justified.

Conclusion

AI PCs can be a valuable tool for lawyers, especially for tasks like legal research and due diligence. However, they should not be seen as a replacement for human lawyers: AI is best used to augment a lawyer’s skills and expertise, not to replace them. And at present, AI computers may be overkill for day-to-day legal work, where existing computers can handle the workload and the extra cost of an AI PC is not justified.

It is also important to consider that AI computers rely on new and still-evolving technology. AI PCs are a relatively new concept, and their functionalities are still under development. The “killer application” that justifies the potentially higher cost might not be here yet. Add to that that, to fully benefit from AI features, you will need compatible software that can leverage the AI capabilities of your PC.

The decision to invest in AI PCs for legal work would depend on factors such as the specific use cases, the volume of data or workload, the complexity of the AI models required, and the potential return on investment. Law firms or legal departments with a significant focus on AI-driven legal technologies may find AI PCs more beneficial than those with more traditional workflows. But for many lawyers, a traditional PC with good legal research software might still be the most practical solution.

 

Sources:

 

The EU AI Act

In previous articles, we discussed the dangers of AI and the need for AI regulations. On 13 March 2024, the European Parliament approved the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.” The act originated as a proposal by the European Commission of 21 April 2021, and it is usually referred to by its short name, the Artificial Intelligence Act or the EU AI Act. In this article, we look at the following questions: what is the EU AI Act, and what is its philosophy? We discuss the limited-risk and high-risk applications. Finally, we also look at the act’s entry into force and at the penalties it provides for.

What is the EU AI Act?

As the full title suggests, it is a regulation that lays down harmonised rules on artificial intelligence across the EU. Rather than focusing on specific applications, it deals with the risks that applications pose, and categorizes them accordingly. The Act imposes stringent requirements for high-risk categories to ensure safety and fundamental rights are upheld. The Act’s recent endorsement follows a political agreement reached in December 2023, reflecting the EU’s commitment to a balanced approach that fosters innovation while addressing ethical concerns.

Philosophy of the EU AI Act

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular for small and medium-sized enterprises (SMEs). The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.

The AI Act ensures that Europeans can trust what AI has to offer. Because AI applications and frameworks can change rapidly, the EU chose to address the risks that applications pose. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. The Act distinguishes four levels of risk:

  • Unacceptable risk: applications with unacceptable risk are never allowed. All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
  • High risk: to be allowed high risk applications must meet stringent requirements to ensure safety and fundamental rights are upheld. We have a look at those below.
  • Limited risk: applications are considered to pose limited risk when they lack transparency, i.e., when users may not know what their data are used for or what that usage implies. Limited-risk applications can be allowed if they comply with specific transparency obligations.
  • Minimal or no risk: The AI Act allows the free use of minimal-risk and no risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

Let us have a closer look at the limited and high-risk applications.

Limited Risk Applications

As mentioned, limited risk refers to the risks associated with a lack of transparency. The AI Act introduces specific obligations to ensure that humans are sufficiently informed when necessary. E.g., when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine, so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.

High Risk Applications

Under the EU AI Act, high-risk AI systems are subject to strict regulatory obligations due to their potential impact on safety and fundamental rights.

What are high risk applications?

Unlike applications posing unacceptable risk, which are banned outright, high-risk AI systems are those that may negatively affect people’s safety or fundamental rights and are therefore allowed only under strict conditions. These systems are classified into three main categories: a) those covered by EU harmonisation legislation, b) those that are safety components of certain products, and c) those listed as involving high-risk uses. Specifically, high-risk AI includes applications in critical infrastructure, such as traffic control and utilities management, as well as biometric and emotion recognition systems. It also applies to AI used for decision-making in education and employment.

What are the requirements for high-risk applications?

High-risk AI systems under the EU AI Act are subject to a comprehensive set of requirements designed to ensure their safety, transparency, and compliance with EU standards. These systems must have a risk management system in place to assess and mitigate potential risks throughout the AI system’s lifecycle. Data governance and management practices are crucial to ensure the quality and integrity of the data used by the AI, including provisions for data protection and privacy. Providers must also create detailed technical documentation that covers all aspects of the AI system, from its design to deployment and maintenance.

Furthermore, high-risk AI systems require robust record-keeping mechanisms to trace the AI’s decision-making process. This is essential for accountability and auditing purposes. Transparency is another key requirement, necessitating clear and accessible information to be provided to users and ensuring they understand the AI system’s capabilities and limitations. Human oversight is mandated to ensure that AI systems do not operate autonomously without human intervention, particularly in critical decision-making processes. Lastly, these systems must demonstrate a high level of accuracy, robustness, and cybersecurity to prevent errors and protect against threats.

The EU AI Act’s entry into force

As a regulation, the act enters into force shortly after its publication in the Official Journal of the EU, and most of its provisions will become applicable two years later. This gives member states the opportunity to align their legislation, and it gives providers a two-year window to adapt to the regulation.

The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation with the member states.

Penalties

The EU AI Act enforces a tiered penalty system to ensure compliance with its regulations. For the most severe violations, particularly those involving prohibited AI systems, fines can reach up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. Lesser offenses, such as providing incorrect, incomplete, or misleading information to authorities, may result in penalties up to €7.5 million or 1% of the total worldwide annual turnover. These fines are designed to be proportionate to the nature of the infringement and the size of the entity, reflecting the seriousness of non-compliance within the AI sector.
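The tiered caps described above boil down to simple arithmetic: the applicable maximum is the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch in Python, using the figures quoted above (the function name is our own, not from the Act):

```python
def max_fine(worldwide_turnover_eur: float, severe: bool) -> float:
    """Maximum fine under the tiered system described above:
    the higher of a fixed cap and a share of worldwide annual turnover."""
    if severe:
        # Most severe violations, e.g. use of prohibited AI systems:
        # up to EUR 35 million or 7% of worldwide annual turnover.
        return max(35_000_000, 0.07 * worldwide_turnover_eur)
    # Lesser offenses, e.g. supplying incorrect information to authorities:
    # up to EUR 7.5 million or 1% of worldwide annual turnover.
    return max(7_500_000, 0.01 * worldwide_turnover_eur)

# A company with EUR 1 billion turnover: 7% = EUR 70m exceeds the EUR 35m floor.
print(max_fine(1_000_000_000, severe=True))   # 70000000.0
# The same company for a lesser offense: 1% = EUR 10m exceeds the EUR 7.5m floor.
print(max_fine(1_000_000_000, severe=False))  # 10000000.0
```

Because the rule takes whichever amount is higher, the fixed floor binds for smaller companies while the turnover percentage binds for large ones, which is how the Act keeps penalties proportionate to the size of the entity.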

Conclusion

The EU AI Act represents a significant step in the regulation of artificial intelligence within the EU. It sets a precedent as the first comprehensive legal framework on AI worldwide. The Act mandates a high level of diligence, including risk assessment, data quality, transparency, human oversight, and accuracy for high-risk systems. Providers and deployers must also adhere to strict requirements regarding registration, quality management, monitoring, record-keeping, and incident reporting. This framework aims to ensure that AI systems operate safely, ethically, and transparently within the EU. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.

 

Sources:

 

The 2024 law firm

Usually, at the beginning of a new year, we look back at the trends in legal technology of the past year. Unfortunately, the reports needed to do that are not available yet. So instead, with Lamiroy Consulting turning 30 in February 2024, we will have a look at the 2024 law firm and at how law firms have evolved over the last decades. We will discuss a range of topics concerning technology and automation in the 2024 law firm, including artificial intelligence and communications, and we will see how the cloud, remote work, and virtual law offices have changed law firms.

Technology and automation in the 2024 law firm

Let us go back in time. The early 80s saw the introduction of the first personal computers and home computers. Word processors had been around for a few years before that, but were found in only a very small minority of law firms at the time. The Internet as we know it did not exist yet. By 1994, things had started to change, and a legal technology revolution was on the horizon. Fast forward to 2024: law firms that don’t use computers or equivalent mobile devices are an endangered, if not extinct, species. Many operational processes in the law firm have been automated.

So, it is safe to say that over the past decades, technology and automation have transformed the legal industry in many ways. According to a report by The Law Society, some of the factors that have contributed to this transformation include increasing workloads and complexity of work, changing demographic mix of lawyers, and greater client pressure on costs and speed.

Two of the most significant changes have been the introduction of the internet, with its cloud technologies, and of Artificial Intelligence (AI). Most of the evolutions described below have been made possible by the Internet.

Artificial Intelligence

The introduction of Artificial Intelligence (AI) has been one of the main factors driving a substantial transformation of the legal industry. One of its main benefits has been that it notably improved attorney efficiency. AI plays a key role in tasks such as sophisticated writing, research, and document drafting, significantly expediting processes that traditionally could take weeks.

Communications

Law firms have moved from traditional methods of communication such as snail mail to more modern methods. These include email, client portals, and cloud-based communications, like Teams and SharePoint. They allow people to share documents with different levels of permissions, ranging from reading and commenting to editing.

Client portals have become increasingly popular in recent years, allowing clients to access their legal documents and communicate with their attorneys in real-time. This has made it easier for clients to stay informed about their cases and has improved the efficiency of law firms.

Cloud, Remote work, and Virtual Law Office

The legal industry has experienced a notable surge in remote work and virtual law offices. Much of this has been particularly accelerated by the COVID-19 pandemic. Virtual law offices, facilitated by cloud-based practice management software, offer attorneys the flexibility to work from anywhere, leading to increased flexibility and reduced overhead costs for law firms. The cloud has played a crucial role in this shift. It allows virtual lawyers to run fully functional law firms on any device with significantly lower costs compared to on-premise solutions.

Digital Marketing and Online Presence

The legal industry has also witnessed major changes in its marketing practices over the past decades, adapting to the internet era. Recent studies indicate that one-third of potential clients initiate their search for an attorney online, which emphasizes the importance of a strong online presence for law firms to stay competitive. Law firms now prioritize digital marketing through channels like social media, email, SEO, and websites. Whether marketing the entire firm or individual lawyers, a robust digital strategy is essential for establishing credibility and connecting with potential clients. Personal branding is crucial for individual lawyers, highlighting achievements and values, while law firms should focus on building trust through a comprehensive digital marketing strategy.

Billing and Financial Changes in the 2024 Law Firm

Another area where the legal industry has undergone significant changes is in billing and financial practices. In the past, law firms relied on traditional billing methods such as paper invoices and checks. However, with the advent of technology, law firms have shifted to digital billing methods such as electronic invoices and online payment systems. This has made the billing process more efficient and streamlined. In addition to digital billing methods, law firms have also adopted new financial practices such as trust accounting. Trust accounting is a method of accounting that is used to manage funds that are held in trust for clients. This is particularly important for law firms that handle client funds, such as personal injury or real estate law firms.

Over the last decades, alternative fee arrangements (AFAs) have also significantly impacted the legal industry. They did so by offering pricing models distinct from traditional hourly billing. AFAs, including fixed fees, contingency fees, and blended rates, have gained popularity as clients seek greater transparency and predictability in legal fees. A recent study identified 22 law firms excelling in integrating AFAs into their service delivery, earning praise from clients for improved pricing and value. The study underscores a client preference for firms providing enhanced pricing and value. This emphasizes how AFAs not only contribute to better relationships between law firms and clients but also demonstrate the firms’ commitment, fostering trust and credibility.

Legal Research and Analytics

Legal research and analytics have also been revolutionised over the last decades, making the process more efficient and accessible. We have seen primary and secondary legal research publications become available online. Facilitated by information and communication technologies, this has replaced traditional storage methods like compact discs or print media. This shift has not only increased accessibility but also allowed legal professionals to conduct more comprehensive and accurate research. Furthermore, the emergence of legal analytics has empowered professionals to enhance legal strategy, resource management, and matter forecasting by identifying patterns and trends in legal data.

Client Expectations

Another notable change is that clients’ expectations of lawyers have evolved significantly. A recent survey highlights that 79% of legal clients consider efficiency and productivity crucial, indicating a demand for more effective legal services. Additionally, clients now expect increased accessibility and responsiveness from their lawyers, prompting law firms to integrate new technologies such as client portals and online communication tools. Transparency in fees and billing practices is also a priority for clients, leading to the growing adoption of alternative fee arrangements by law firms. (Cf. above).

Globalization

Finally, globalization has significantly impacted the legal industry, forcing law firms to adapt to a changing global landscape and to a heightened demand for legal services across borders. Many European law firms these days are members of an international legal network, with branches in many EU countries. A recent study highlights the emergence of a new corporate legal ecosystem in emerging economies like India, Brazil, and China, which presents opportunities for law firms to expand globally. In response to the globalization of business law and the increasing demand from transnational companies, law firms are transforming their practices by merging across borders and creating mega practices with professionals spanning multiple countries. This shift has prompted the adoption of new managerial practices and strategies to effectively manage global operations within these law firms.

Sources: