Legal Technology Predictions for 2026

Sticking to the tradition of new year predictions, here is a selection of legal technology predictions for 2026. By now, AI has become ubiquitous. So, it shouldn’t come as a surprise that most authors spend plenty of time on AI-related predictions. We also discuss market-related predictions, as well as predictions about the law firm and legal practice. As can be expected, these three categories overlap.

AI-related predictions

The rise of AI in the law firm seems unstoppable at present. All authors in the sources below give their own AI-related predictions. These are the common themes.

AI becomes the defining force in legal technology

AI is set to reshape legal technology in a big way and is becoming a fundamental part of how law firms work. It no longer just writes things for you; it manages entire tasks from start to finish. It can sort through cases, pull together documents, and help you make decisions. Most experts mention the same trend: AI tools that can think through problems, make plans, and handle complex legal work on their own. Basically, we’re shifting from AI as an assistant to AI as a genuine work partner, one that assists in everything from case management to predicting outcomes to organizing workflows.

AI is also changing how clients and lawyers interact from the very first conversation. More and more clients are showing up with advice they’ve already gotten from an AI tool. As a result, law firms need their own AI systems to sort through and make sense of what clients bring to the table before a human lawyer even gets involved. The firms that have already woven AI into their daily operations are going to cash in on that head start. The ones still dragging their feet risk getting left behind.

The shift from Generative AI to Agentic AI

As mentioned above, the biggest tech shift coming in 2026 is how AI works. We’re moving away from the old approach where you ask AI a question and it gives you an answer. Instead, we’re getting AI agents that can handle entire projects from beginning to end. Right now, you might need to constantly tell AI what to do next just to draft a single document. But in 2026, these AI agents will work more like digital colleagues. They’ll sort through new cases on their own, dig up relevant documents, fact-check information, and put together organized reports. And they do all of this without you needing to hold their hand through every step. This kind of “deep research” ability means AI is graduating from being a glorified word processor to something that can orchestrate complicated legal work. As a result, lawyers can oversee entire workflows without getting bogged down in every little detail.

Data-driven strategy and predictive analytics

Law firms are starting to realize that all the data they’ve been collecting – billing records, case results, which clients are actually profitable – is worth its weight in gold. In 2026, using predictive analytics isn’t cutting-edge anymore; it’s just how things are done. Firms will tap into these insights to predict how judges might rule, get ahead of opposing counsel’s next move, and figure out which types of cases actually make them money.

Market-related legal technology predictions

Several authors pay attention to changes in the legal services market in 2026.

Market consolidation and shakeouts

The legal AI market is heading for major consolidation in 2026, with more than half of current providers expected to be acquired or shut down. Companies offering basic AI features without distinctive value will struggle, while survivors will be those providing specialized capabilities and measurable results. Major vendors are moving from standalone products to integrated platforms that connect across the legal ecosystem.

Meanwhile, traditional Big Law firms face growing competition from alternative legal service providers and AI-powered firms that can handle standardized work like contracts and compliance at lower costs. Some AI vendors may even begin offering legal services directly, blurring the line between technology and law firm. This efficiency revolution is forcing the entire industry to reimagine how legal services are structured and priced.

Evolution of Revenue Models and Billing

AI and automation are making legal work so much faster that the old billable hour model is starting to fall apart. By 2026, we’re likely to see more firms experimenting with alternative fee arrangements that involve mixed pricing approaches: maybe hourly rates for some work, flat fees for other projects, and even subscription-style arrangements based on results. Why? Well, when AI can knock out routine tasks in a fraction of the time, firms need to shift their focus to charging for the high-level thinking and judgment that only experienced lawyers can provide. But this creates a new challenge: firms will need much better financial tracking for each case to make sure they’re still profitable, while also giving clients the transparency and fair pricing they’re increasingly demanding.

Cybersecurity as a Competitive Differentiator

As law firms rely more heavily on AI tools and cloud-based systems, cybersecurity has become something clients actively care about and ask for. By 2026, clients will expect “zero-trust” security setups and automatic encryption as basic requirements. They’ll choose firms that can prove they have strong vendor security and handle data safely. Firms with rock-solid security systems are already using that as a major selling point to win over high-value clients, and that trend is only going to accelerate.

Predictions about the law firm and legal practice

The remaining predictions largely have to do with the law firm and legal practice. They can be grouped into four categories.

The Shift to Unified Legal Platforms

The legal industry is moving away from using multiple disconnected tools toward unified cloud platforms that act as a single system of record. Instead of juggling separate software for billing, document management, HR, and client communication, law firms and in-house legal departments are adopting integrated environments that bring everything together in one place.

These unified platforms combine matter management, document automation, spend analytics, and risk tracking into a comprehensive solution. This consolidation creates a “single source of truth” that gives legal teams better visibility into case status, resource allocation, and spending patterns while eliminating the inefficiencies of managing siloed systems.

This shift is especially beneficial for small and mid-sized firms. Cloud-native platforms make enterprise-level automation and security accessible to firms of all sizes. As legal workflows become more automated and data-driven, these integrated platforms are democratizing access to sophisticated technology across the entire legal industry.

The Rise of the AI-Augmented Legal Professional

The role of the lawyer is evolving to require new technical skills. In 2026, successful lawyers will need to master AI tools and data analysis, not just traditional legal research. This means learning how to effectively prompt AI systems and oversee their outputs. This shift is changing how junior lawyers learn. Rather than spending years on repetitive tasks like document review, they’ll develop skills through AI-powered mentorship and practice simulations. The goal isn’t to replace lawyers with technology, but to free them from routine work so they can focus on what humans do best: strategic thinking, ethical judgment, and building client relationships. Technology handles the repetitive tasks while lawyers concentrate on the work that truly requires human expertise.

Strategic Imperatives: Adoption, ROI, and Skills

The key to success in 2026 isn’t just buying AI tools; it’s using them strategically. Law firms need to carefully select secure technology partners, redesign their workflows around AI, and establish clear metrics to track efficiency gains and client outcomes. Firms that started experimenting with AI in 2024-25 will pull ahead of their competitors by deploying these tools at scale, delivering faster service at lower costs. We’ve reached a point where those who waited will struggle to catch up.

Client Experience and Business Model Innovation

AI is transforming how law firms interact with clients and charge for their services. In 2026, firms are expected to use AI not just to work faster internally, but to provide clients with better transparency and responsiveness, through tools like real-time dashboards and collaborative portals. And, as mentioned above, as AI handles routine tasks, traditional hourly billing will increasingly give way to fixed fees and subscription models. This shift responds to client demands for predictable, value-based pricing rather than simply paying for the time lawyers spend on a matter.

Conclusion

In 2026, legal technology will be characterized by mature AI systems, fewer but more powerful platforms, and fundamental changes in how law firms operate. Firms that adopt integrated cloud platforms, predictive tools, and advanced AI will outpace their competitors, while those that resist change risk becoming irrelevant. Every technology decision will need to prioritize security and client trust, and lawyers will increasingly need both legal expertise and tech skills. The legal industry has moved past the experimentation phase. Technology is now reshaping the profession’s core structure, transforming from a helpful tool into an essential strategic partner.

 

Sources:

Liability for AI errors

Who is responsible when AI makes errors? In the last year, we have seen several cases where AI companies were taken to court. Parents of a user who died by suicide sued for negligence. There have also been defamation cases where generative AI produced a factually incorrect response. In one example, a radio host sued OpenAI because ChatGPT produced a summary falsely claiming he had embezzled funds. There have also been cases involving product liability and contractual liability, among others. So, in this article, we explore several scenarios where liability for AI errors came into play. We look at the different types of liability and at how to mitigate the liability for AI errors.

Please note that this article is not meant to provide legal advice. It is merely a theoretical exploration.

Criminal vs civil liability

Liability for AI hallucinations is both a complex and a rapidly evolving legal area, with plenty of voids and grey areas. Cliffe Dekker Hofmeyr rightfully refers to it as a legal minefield.

There have been cases based on civil and on criminal liability. At present, most AI-related liability falls under civil law, because the claims concern compensation for harm, violation of private rights, or disputes between private parties. In many cases, the courts ruled that users are warned about hallucinations and therefore use AI at their own risk. But other cases have shown that companies can be held liable for chatbot errors, and that legal professionals can face sanctions for relying on AI-generated but fictitious information.

Criminal liability connected to generative AI is currently rare because AI models lack intent. But there are scenarios where criminal law can be triggered. (See below).

Let us have a look at different types of liability.

Types of liability for AI errors

Defamation and Reputation Harm

A first series of cases involves defamation and reputation harm. Chatbots can generate false statements about individuals or organisations, sometimes with great specificity and apparent authority. When these falsehoods cause reputational damage, defamation law becomes relevant.

Early cases such as Walters v. OpenAI – the radio host mentioned above – illustrate how courts are beginning to test whether AI developers can be held responsible for hallucinated statements that damage someone’s reputation. In this case, the court ruled in favour of OpenAI. The court argued that Walters couldn’t prove negligence or actual malice, and that OpenAI’s explicit warnings about hallucinations weighed against liability. Thus far, defamation cases have largely been dismissed on those grounds.

Negligence and Duty of Care

Some lawsuits allege that AI systems failed to exercise reasonable care in situations where foreseeable harm was possible. Think of incidents such as self-harm or the AI giving dangerous instructions.

Cases like Raine v. OpenAI and suits against Character.ai argue that developers owed a duty to implement safeguards, detect crises, or issue proper warnings. The argument is that failure to do so contributed to severe harm or even death. These cases are presently (December 2025) ongoing, and the courts have not ruled yet.

Wrongful Death and Serious Psychological Harm

Several lawsuits allege that chatbots induced, worsened, or failed to de-escalate suicidal ideation. Thus far, all cases that were taken to court have been in the US. Families of victims argue that the systems were designed in ways that made such harm foreseeable.

This category overlaps with negligence but remains distinct. Wrongful-death statutes in the US create their own remedies and set higher standards for three key elements: proximate causation, foreseeability, and the duty to protect vulnerable users.

Misrepresentation, Bad Advice, and Professional Liability

Although a chatbot is not itself a licensed professional, users often treat it as one. When a model produces incorrect legal, medical, financial, or technical advice that leads to material harm, plaintiffs may frame the issue as negligent misrepresentation or unlicensed practice through automation.

In the Mata v. Avianca sanctions case, for example, lawyers relied on non-existent precedents that were fabricated by ChatGPT. The lawyers were fined. This case demonstrates how professional users may be held responsible.

The case also raises questions about whether the model provider shares liability. Thus far, they have escaped liability on the same grounds as mentioned before, i.e., that the user is explicitly warned that the AI may provide them with incorrect information.

Product Liability and Defective Design

Some lawsuits frame chatbots as consumer products with design defects, inadequate safety systems, or insufficient warnings. Under this theory, the output is seen not merely as “speech” but as behaviour of a product that must meet baseline safety expectations. Claims of failure to implement guardrails, insufficient content filtering, or design choices that make harmful outcomes foreseeable fall under this category.

Contractual Liability and Terms-of-Service Breaches

AI systems are governed by contractual agreements between the user and the provider. AI developers may face contract liability if they fail to deliver promised functionality, violate their own service terms, or misrepresent their product’s capabilities. However, companies often use contractual clauses to protect themselves. These protective clauses limit liability, require arbitration, or disclaim responsibility for AI outputs. These clauses become contentious when actual harm occurs.

Copyright Infringement

Quite a few court cases involve copyright infringement, with authors and other creators claiming that training generative AI on their works without their permission constitutes a copyright infringement. There is also a chance that the AI will use parts of their works in its responses, or that responses will be generated using several different source materials that are copyright protected. So, yes, generative AI raises serious copyright concerns, both in training and in output generation.

Thus far, we have witnessed litigation by authors, visual artists, and music publishers. In some places, copyright law has special rules that can hold AI companies responsible even if they didn’t directly copy someone’s work. These are called “contributory” and “vicarious” liability – meaning you can be liable for helping someone else infringe copyright, or for benefiting from infringement that happens under your control.

Because copyright law allows courts to award set amounts of money as damages (without needing to prove actual financial harm), this is one of the biggest financial risks AI companies face.

The AI companies, on the other hand, claim that training an AI falls under the “fair use” doctrine.

Privacy, Data-Protection, and Intrusion Violations

Many lawsuits claim that AI systems collect, keep, use, or expose people’s personal information without their explicit permission. These cases involve breaking data privacy laws (like Europe’s GDPR), invading people’s privacy, or misusing sensitive information. For example, a lawsuit called Cousart v. OpenAI shows how companies can be sued simply for how they handle data during training – not just for what the AI says or does afterward.

Emotional, Cognitive, and Psychological Harm

New studies show that chatbots can change how people remember things, alter their beliefs, or cause them to become emotionally dependent. Some lawsuits claim that AI chatbots harm users through these psychological effects. Plaintiffs argue that companies either intentionally designed them this way or were careless in creating systems that make people dependent, reinforce false beliefs, or worsen existing mental health problems. We’ll likely see more of these cases as we learn more about how regular AI use affects people’s minds.

Regulatory and Compliance Liability

As governments create new laws specifically for AI, companies can get in trouble for not following rules about being transparent, allowing audits, and managing risks properly. This includes laws like the EU AI Act, the Digital Services Act, and special rules for industries like healthcare or finance. Regulators can impose fines, ban certain activities, or restrict how companies operate – even without anyone filing a lawsuit.

Emerging and Hybrid Theories

Because AI doesn’t fit neatly into existing legal categories, courts and legal experts are creating new mixed approaches to determine who’s responsible when something goes wrong. These include treating AI as if it’s acting on behalf of the company, applying free speech laws to AI-generated content, or creating entirely new legal responsibilities for how algorithms influence people. As judges handle more AI cases, these hybrid approaches may eventually become their own distinct areas of law.

How to mitigate liability for AI errors

The following four suggestions can help mitigate the risks of liability.

  • Implement human oversight: critical decisions should not be made solely by AI without human review.
  • Provide training for the users: train employees on the limitations of AI tools and the importance of verifying information.
  • Use technical safeguards: limit an AI’s access to sensitive data and implement technical solutions to check the accuracy of its outputs.
  • Conduct risk assessments: before deployment, assess the potential harms of AI use and develop governance and response procedures.

 

Sources:

Retrieval Augmented Generation

In previous articles, we talked about generative AI, its benefits, and the risks that it comes with. One such risk is the fact that generative AI can hallucinate. It also doesn’t have access to your own professional information. Retrieval augmented generation (RAG) addresses both issues. In this article, we answer the following questions: What is retrieval augmented generation? What are the benefits? And how can you use retrieval augmented generation with Copilot & SharePoint?

What is retrieval augmented generation?

Wikipedia defines retrieval augmented generation (or RAG) as “a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, LLMs do not respond to user queries until they refer to a specified set of documents. These documents supplement information from the LLM’s pre-existing training data. This allows LLMs to use domain-specific and/or updated information that is not available in the training data. For example, this helps LLM-based chatbots access internal company data or generate responses based on authoritative sources. RAG improves large language models (LLMs) by incorporating information retrieval before generating responses.”

In other words, RAG enhances large language models by connecting them to external knowledge sources. Instead of relying solely on the information the model learned during training, RAG first retrieves relevant documents or data from a database, or your knowledge base. It then uses that retrieved information to generate more accurate and up-to-date responses.

The basic idea is simple: when you ask a question, the system searches through a collection of documents (like company files, research papers, or websites) to find relevant information. Then it feeds both your question and those retrieved documents to the language model. The model uses this context to produce an answer that’s grounded in your specific data rather than just its own general training knowledge. So, those are the three steps of retrieval augmented generation:

  • Retrieval: When a user asks a question, the RAG system searches an external knowledge base (like a company’s specific documents) for relevant information.
  • Augmentation: The retrieved information is then added to the original prompt, creating an “augmented” request.
  • Generation: The large language model (LLM) then generates a response based on this augmented prompt, using the external data to provide a more specific and accurate answer.
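
To make these three steps concrete, here is a minimal sketch in Python. It assumes a hypothetical `search_knowledge_base` retriever (the implementation depends entirely on your document store) and uses the OpenAI Python client for the generation step; any chat-capable LLM would work the same way.

```python
# Minimal RAG sketch. `search_knowledge_base` is a hypothetical helper;
# plug in your own document store (search index, vector database, ...).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def search_knowledge_base(question: str, top_k: int = 3) -> list[str]:
    """Return the top_k most relevant passages from your own knowledge
    base. The implementation is deployment-specific."""
    raise NotImplementedError("replace with your own retriever")


def rag_answer(question: str) -> str:
    # Step 1 - Retrieval: search the external knowledge base.
    passages = search_knowledge_base(question)

    # Step 2 - Augmentation: add the retrieved text to the prompt.
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Step 3 - Generation: the LLM answers from the augmented prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```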

This approach solves several common problems with standard LLMs: a) it reduces hallucinations because the model bases its answers on actual retrieved text, b) it allows the system to access current information beyond the model’s training cutoff date, and c) it lets you use domain-specific knowledge without having to retrain the entire model. RAG is particularly useful for applications like customer support systems that need company-specific information. It is also useful for research assistants that work with scientific literature, or in any scenario where you need accurate answers based on a particular knowledge base.

Now, when you start researching retrieval augmented generation, you will often encounter the terms pipes or pipelines. These refer to the processing steps that transform a user’s query into a final response. They’re essentially the workflow or data flow that connects the different components of the RAG system. The “pipe” metaphor comes from Unix pipes, where data flows from one process to another.

Different RAG implementations can have varying pipeline architectures. Some are simple with just query, retrieve, and generate stages. Others are complex with multiple retrieval steps, feedback loops, or parallel processing paths.
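
As a rough illustration, a pipeline can be modelled as a chain of stages that each transform a shared state. The stage names below (`retrieve`, `rerank`, `augment`, `generate`) are placeholders for this sketch, not any specific library’s API.

```python
from typing import Callable

# A stage takes the current state (query, retrieved passages, draft
# answer, ...) and returns an updated state.
Stage = Callable[[dict], dict]


def run_pipeline(state: dict, stages: list[Stage]) -> dict:
    # Data flows from one stage to the next, like a Unix pipe.
    for stage in stages:
        state = stage(state)
    return state


# A simple pipeline:
#   run_pipeline({"query": q}, [retrieve, augment, generate])
# A more elaborate one with an extra reranking step:
#   run_pipeline({"query": q}, [retrieve, rerank, augment, generate])
```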

What are the benefits?

RAG offers several benefits that make it attractive for real-world applications.

The fact that it offers access to current and specific information is perhaps the most obvious advantage. Since the model retrieves information from your own database or documents, it can work with data that is a) more recent than its training cutoff or b) highly specialized and absent from its original training data. This means companies can get accurate answers about their latest policies, recent research papers, or proprietary information. For law firms, depending on how you set it up, it can have access to your legal documentation, your knowledge base, your case files and/or documents.

As mentioned in the introduction, reduced hallucinations are another major benefit. When language models generate answers purely from their training, they sometimes confidently state incorrect information. RAG grounds the model’s responses in actual retrieved documents, so it cites or bases its answers on real sources rather than just making things up. The result is output that is more reliable and trustworthy.

Another significant benefit is cost-effectiveness. With RAG you don’t need to fine-tune or retrain large language models every time your information changes. Instead, you simply update your document database, and the RAG system will retrieve the new information. This is far cheaper and faster than retraining models. After all, that requires substantial computational resources and technical expertise.

RAG also addresses the issues of transparency and traceability because you can see which documents the system retrieved to answer a question. This makes it easier to verify answers, debug problems, and build trust with users who can check the sources themselves.

A final benefit is referred to as domain adaptability. It means that you can quickly deploy the same base model across different domains or use cases by simply swapping out the document collection it retrieves from. One model can serve medical applications, legal research, or customer support just by changing the underlying knowledge base.

Retrieval augmented generation with Copilot & SharePoint

For law firms that use Copilot and SharePoint, it is interesting to know that Copilot can be used in combination with SharePoint to enable RAG responses. Microsoft has made this integration quite powerful.

How does it work? Microsoft 365 Copilot offers a retrieval API that allows developers to ground generative AI responses in organizational data stored in SharePoint, OneDrive, and Copilot connectors. This means you can build custom AI solutions that retrieve relevant text snippets from SharePoint without needing to replicate or re-index the data elsewhere. The API understands user context and intent, performs query transformations, and returns highly relevant results from your Microsoft 365 content.

This approach offers several advantages for RAG implementations. You don’t need to set up separate vector databases: you can skip the traditional RAG setup that involves embedding, chunking, and indexing documents. The API automatically respects existing access controls and governance policies, which ensures security and compliance. Additionally, you can combine SharePoint data with other Microsoft 365 and third-party sources to create richer, more comprehensive responses.
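
The sketch below shows what a minimal call to the retrieval API could look like. The endpoint path, request fields, and response shape follow Microsoft’s beta documentation at the time of writing and should be treated as assumptions; check the current Microsoft Graph documentation before building on them.

```python
# Hedged sketch: grounding a response in SharePoint content via the
# Microsoft 365 Copilot Retrieval API (beta). Endpoint and field names
# are assumptions based on the beta docs and may change.
import requests

access_token = "<OAuth 2.0 token from your Microsoft Entra ID app>"

response = requests.post(
    "https://graph.microsoft.com/beta/copilot/retrieval",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "queryString": "What is our policy on client data retention?",
        "dataSource": "sharePoint",        # search SharePoint content
        "maximumNumberOfResults": 5,
    },
    timeout=30,
)
response.raise_for_status()

# Each hit carries the source document and relevant text extracts that
# you can feed to an LLM as grounding context.
for hit in response.json().get("retrievalHits", []):
    print(hit.get("webUrl"))
```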

For personal experimentation

If you would like to first experiment on your own, you can try Google’s NotebookLM, which implements RAG technology. It’s an AI-powered research and writing assistant that helps users summarize and understand information from uploaded sources or specific websites.

Sources:

 

Legal chatbots in 2025

We have in the past dedicated articles to legal chatbots in 2016 and 2019. It is time for an update. In this article, we discuss trends and adoption of legal chatbots, as well as existing regulation. Then we look at legal chatbots for consumers and legal chatbots for law firms. We will do so for the US (because it is still the market leader), the UK, and the EU.

Trends and adoption

The US has seen a rapid growth of bots / AI agents in law firms: AI adoption in US law firms surged from 19% in 2023 to 79% in 2024, with chatbots playing a central role. This market expansion is still ongoing: the US legal tech market is projected to reach $32.54 billion by 2026, with chatbots as one of the main drivers.

In the UK, adoption is most advanced in large, business-to-business (B2B) law firms. Chatbots are integrated with legal analytics, project management, and contract management systems. In contrast, the business-to-consumer (B2C) market is slower to adopt. Legal chatbots are most popular in firms with large-scale, commoditized services. Chatbot and AI adoption is slowed down by a lack of awareness and uncertainty about the role of AI: over one-third of UK legal professionals remain uncertain about the application of generative AI and chatbots in legal work.

In the EU, on the other hand, we are witnessing increasing adoption. There is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs. Chatbots are also seen as tools to improve access to justice, particularly for underserved populations and cross-border matters. At the same time, there are ongoing ethical and legal debates, as there are concerns about accuracy, liability, and bias in AI-generated legal advice.

Regulation

In recent years, there has been a move towards regulating the use of AI, which also affects the use of legal chatbots. There is a need for transnational regulation, but thus far, each region just does its own thing.

In the US, we are confronted with fragmented regulation. The US lacks a comprehensive federal AI law. As a result, regulation is piecemeal: we are dealing with a) state-level initiatives and b) professional (ethical) conduct rules that guide how lawyers can use AI. When it comes to legal chatbots specifically, there is a requirement for professional oversight. In other words, chatbots cannot independently practice law, and human supervision is required to avoid unauthorized practice and to ensure accuracy. And of course, law firms must consider privacy and security when using legal bots. Compliance with privacy laws is essential, especially when handling sensitive client data.

It is worth noting that the FTC (Federal Trade Commission) has made clear that bots cannot market themselves as “robot lawyers” or a substitute for licensed counsel without substantiation. Its 2024 enforcement against DoNotPay (a consumer rights bot we discussed in previous articles) resulted in a $193,000 penalty and strict advertising restrictions. This FTC ruling is widely cited as the line in the sand for consumer legal AI claims.

Furthermore, the American Bar Association’s first formal opinion on generative AI (Formal Opinion 512, 2024) says lawyers must a) understand the capabilities and limits of AI, b) protect confidentiality, c) supervise outputs, and d) be candid with courts and clients. They do not need to be “AI experts,” but they can’t delegate professional judgment to a bot. Several bar associations and courts have issued similar guidance.

The UK relies on flexible, sector-specific laws and regulation, with a focus on transparency, explainability, and data protection (UK GDPR). In addition, legal professionals must ensure chatbots comply with professional ethical standards, including confidentiality and competence.

In the EU, we find regulation on both the EU level, as well as on the national level. On the EU level, the GDPR and the EU AI Act are the most important regulations. The GDPR has strict data privacy requirements which also apply to chatbot operations, especially with sensitive legal data. The EU AI Act introduces risk-based regulation, with high-risk applications (like legal advice) facing stricter requirements for transparency, accuracy, and human oversight.

Apart from the EU regulations, we also find that some National Bar Associations have issued their own regulations. As a result, in some countries only licensed lawyers can provide legal advice. This effectively limits the chatbot scope and/or requires professional supervision.

Legal chatbots for consumers

In previous articles on legal chatbots, we mainly discussed legal chatbots for consumers. What they all have in common is that they facilitate access to legal information. They democratize legal knowledge, making it more accessible to the public. (Links in the introduction). Overall, there still is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs.

Legal chatbots for law firms

Apart from chatbots for consumers, in recent years we have also witnessed an increase in the number of legal chatbots for law firms. What are they used for?

  • Automation of routine tasks: chatbots automate legal research, contract review, and administrative work.
  • Document automation: bots are assisting lawyers with the creation and review of standard legal documents.
  • Legal research: AI chatbots can scan and summarize large volumes of legal documents and precedents rapidly.
  • Client engagement and intake: they are also used to handle initial queries, provide information, and schedule appointments, and they can direct clients to appropriate services or professionals.
  • Provide a better consumer experience: some law firms use their own legal chatbots to offer consumer services. By doing so, they enhance accessibility in areas like small claims, tenancy issues, and basic legal advice.

Conclusion

Legal chatbots have become an essential part of legal services in the US, UK, and Europe. Big law firms and routine legal services have been the quickest to adopt these technologies, but now we’re seeing more tools that help everyday people access legal help.

Regulatory frameworks are evolving rapidly, with the EU leading in comprehensive risk-based regulation, the UK favouring sector-specific guidance, and the US maintaining a fragmented, state-driven approach. Across all regions, the focus is on balancing innovation with ethical, professional, and data privacy safeguards.

At present, the US is still leading the way when it comes to legal chatbots. Most research/drafting bots originate in the US (Thomson Reuters, Lexis, Harvey, Bloomberg). The UK, on the other hand, is presenting itself as a contract-review hub: tools like Luminance and Robin AI grew out of the UK’s startup ecosystem. Continental European firms use a mix of US/UK platforms under GDPR controls, but also homegrown tools like ClauseBase and Legito for contract/document automation.

 

Sources:

 

Legal issues with stablecoins

In the previous article, we talked about what stablecoins are, why they matter, and what different types of stablecoins there are. In this follow-up article, we look at the main legal issues: the qualification issues with stablecoins, the new regulatory frameworks, and some other risks and legal issues.

Qualification issues with stablecoins

The legal qualification of stablecoins remains one of the most debated issues, as they do not fit neatly into existing legal categories. The core challenge lies in determining whether stablecoins should be treated as money, securities, commodities, or something else entirely. This classification has significant implications for which regulators have jurisdiction, and which legal rules apply. In many jurisdictions, a key issue is whether a stablecoin qualifies as a security.

In the European Union, the Markets in Crypto-Assets Regulation (MiCA) resolves this ambiguity to a large extent by creating new categories specifically for stablecoins: “e-money tokens” and “asset-referenced tokens”. E-money tokens are those that are pegged to a single currency and resemble traditional electronic money under the E-Money Directive. Asset-referenced tokens are broader and can include tokens backed by baskets of currencies or commodities. This approach avoids trying to fit stablecoins into outdated categories like securities or commodities and instead regulates them on their own terms.

In the UK, the Financial Conduct Authority (FCA) does not generally treat fiat-backed stablecoins as securities unless they exhibit investment characteristics. However, the upcoming regulatory framework under the Financial Services and Markets Act 2023 will grant the Bank of England and FCA more tools to supervise stablecoins used for payments. At present (August 2025), they have not published any regulations yet.

In the United States, the Securities and Exchange Commission (SEC) has suggested that certain stablecoins, particularly those offering interest-bearing features or tied to investment mechanisms, may fall under the definition of securities. However, fiat-backed payment stablecoins like USDC or USDP, which simply maintain a 1:1 peg to a currency and do not generate returns for holders, are more often considered outside the scope of securities regulation. At the same time, the Commodity Futures Trading Commission (CFTC) has taken the position that some stablecoins may qualify as commodities. In a 2023 enforcement action, the CFTC referred to assets like tether (USDT) as commodities under the Commodity Exchange Act. This has added to the regulatory uncertainty in the U.S., where overlapping authorities and inconsistent classifications have left issuers and users in legal limbo.

In essence, the legal qualification of stablecoins hinges on their structure and function. If they are used for payments and are fully backed by fiat currency reserves, they are more likely to be treated as payment instruments or e-money. If they are algorithmic, generate returns, or have speculative components, they may fall under securities or commodities laws. Regulatory frameworks are required to resolve the ambiguity and uncertainty stablecoin issuers face. Which brings us to …

Regulatory Frameworks

These days, the regulation of stablecoins is rapidly evolving. Regulatory initiatives focus on concerns about consumer protection, financial stability, and the risks of unregulated digital assets. Both the European Union and the United States have recently introduced or implemented significant legislative frameworks to address these concerns.

As mentioned above, in the European Union, stablecoins fall under the Markets in Crypto-Assets Regulation (MiCA). MiCA was formally adopted in 2023 and began phasing in from June 2024. MiCA distinguishes between different types of crypto assets. It introduces specific provisions for “e-money tokens” (which are pegged to a single fiat currency) and “asset-referenced tokens” (which may be backed by a basket of assets or commodities). Issuers of these stablecoins are required to obtain authorization from national competent authorities and must meet stringent governance, capital, and reserve requirements. MiCA also imposes obligations on crypto-asset service providers, ensuring oversight of issuance, custody, and trading. The European Central Bank has highlighted the importance of this framework to prevent the fragmentation of the digital finance market and to protect consumers.

In the United States, after years of regulatory ambiguity, Congress has recently made progress toward a unified approach. In July 2024, the Clarity for Payment Stablecoins Act was passed by the House Financial Services Committee and gained bipartisan traction. This bill focuses specifically on payment stablecoins, such as those issued by Circle (USDC) and Paxos (USDP), and introduces a clear licensing regime. Under this legislation, stablecoin issuers must either be state-licensed nonbank entities or federally approved institutions regulated by the Federal Reserve. The bill also imposes strict reserve backing requirements, limits on rehypothecation of reserve assets, and detailed disclosure obligations to increase transparency. In July 2025, the GENIUS Act – the first federal regulatory framework for stablecoins – was passed in Congress. It creates a new licensing regime for payment stablecoin issuers and is the first major crypto-related legislation to be passed by both chambers of Congress. The bill was signed into law on 18 July 2025.

Regulators in both areas understood that stablecoins might have a big impact once they become widely used. In the EU, MiCA includes special oversight mechanisms for “significant” stablecoins, allowing the European Banking Authority to step in. Similarly, in the U.S., the President’s Working Group on Financial Markets believes the federal government needs to regulate companies that issue stablecoins, especially the big ones that process lots of payments.

Outside the EU and U.S., countries like Japan and the UK are also catching up. Japan already passed a law in 2022 that allows only licensed banks and trust companies to issue stablecoins, while the UK’s Financial Services and Markets Act 2023 granted the Bank of England new powers to oversee systemic digital settlement assets, including fiat-backed stablecoins.

Other risks and legal issues with Stablecoins

Apart from classification and regulatory frameworks, stablecoins raise several other legal issues and risks. These have to do with financial stability, consumer protection, monetary sovereignty, and data governance. These concerns are particularly significant given the potential for stablecoins to scale rapidly across borders and integrate with mainstream financial services.

A first issue is operational risk, especially the risk of technical failure, cyberattacks, or fraud within the stablecoin infrastructure. Since most stablecoins rely on centralized issuers or custodians, the reliability of reserve management and smart contracts is critical. A failure in these systems could cause a loss of peg, mass redemptions, or loss of user funds. In the previous article we mentioned the 2022 collapse of TerraUSD, an algorithmic stablecoin. Its collapse exposed how vulnerabilities in design can destabilize not only a single token but also the broader market. The Financial Stability Board (FSB) has emphasized the importance of robust governance and risk management frameworks to prevent such collapses. Its October 2023 report outlines these concerns in detail.

Another legal concern is redemption rights. Users need clear, enforceable rights to redeem stablecoins for fiat currency on demand. In practice, many stablecoin issuers include disclaimers or reserve the right to delay or deny redemptions under certain conditions. This raises questions about contractual enforceability and consumer protection, particularly in jurisdictions without clear legal protections for token holders. The IMF has raised similar concerns in its global policy papers, especially when stablecoins operate across borders where legal remedies may be unclear or unenforceable.

There are also anti-money laundering (AML) and counter-terrorist financing (CTF) concerns. Stablecoins offer a relatively stable value and fast, borderless transfers, which make them attractive for illicit use. Many stablecoin platforms operate with limited KYC (Know Your Customer) procedures or allow anonymous transfers via decentralized protocols. Regulators have warned that this can undermine AML frameworks and create enforcement gaps.

Another major legal issue is monetary sovereignty. Central banks have raised concerns that widespread use of privately issued stablecoins could erode control over national currencies and monetary policy, especially in developing countries. If a stablecoin pegged to the US dollar becomes a dominant means of payment in another country, it can cause de facto dollarization and limit a central bank’s ability to manage inflation or respond to economic shocks.

Finally, data privacy and surveillance pose emerging legal and ethical challenges. Stablecoin providers often collect and process sensitive personal and financial data. In jurisdictions like the EU, such processing is subject to the General Data Protection Regulation (GDPR). But questions remain about how decentralized systems can comply with data minimization, user consent, and the right to erasure. Moreover, law enforcement access to stablecoin transaction data creates a tension between privacy rights and regulatory compliance.

Together, these issues show that the legal questions surrounding stablecoins involve much more than just classification or licensing. Since stablecoins touch on financial law, contracts, data protection, monetary policy, and consumer rights, both companies and users face significant legal risks until we get better, more coordinated regulations worldwide.

 

Sources:

An introduction to stablecoins for lawyers

Stablecoins are in the news on a daily basis. Major banks are considering releasing their own stablecoins. Governments have started regulating them. On 18 July 2025, for example, the US government signed a law to create a regulatory regime for dollar-pegged stablecoins. So, this is the first of two articles on stablecoins. In this article, we answer the following questions: What are stablecoins? Why do they matter? What are the different types of stablecoins? And how do they relate to other cryptocurrencies? In the next article, we will then focus on the legal aspects of stablecoins.

What are stablecoins?

Wikipedia describes a stablecoin as “… a type of cryptocurrency where the value of the digital asset is supposed to be pegged to a reference asset, which is either fiat money, exchange-traded commodities (such as precious metals or industrial metals), or another cryptocurrency.”

So, what are we talking about? Stablecoins are a category of digital assets that are specifically designed to maintain a stable value. They achieve this by being pegged to a reference asset, typically a fiat currency like the US dollar or euro. Traditional cryptocurrencies such as Bitcoin or Ethereum are known for their price volatility. Stablecoins on the other hand aim to offer the benefits of blockchain technology, such as fast, borderless, and programmable transactions, but without the associated price fluctuations.

Why do they matter?

Stablecoins have become the backbone of many digital asset transactions. They offer the benefits of cryptocurrency, like speed, low cost, and global reach, while avoiding its biggest flaw: volatility. With stable value, they make it possible to trade, lend, borrow, and transfer money on blockchain platforms without worrying about large price swings. This makes them attractive not just to crypto users, but also to businesses, fintech companies, and even central banks.

Stablecoins are already widely used on cryptocurrency exchanges as quote currencies (e.g., USDT or USDC pairs). They enable traders to move in and out of volatile assets without relying on fiat bank transfers. In cross-border payments, stablecoins enable near-instant remittances with lower fees compared to traditional money transfer services. This is especially the case in regions with unstable banking systems. Additionally, stablecoins are used for borrowing, lending, and staking. They allow users to earn interest or participate in decentralized governance while maintaining exposure to a relatively stable asset. They have also become critical tools for avoiding capital controls and hyperinflation in countries with unstable currencies.

Let’s put things in perspective: in 2024, over 70% of trading volume on major cryptocurrency exchanges was settled using stablecoins. Beyond trading, stablecoins are now used for cross-border payments, employee payroll, and remittances. Some stablecoins are accepted by merchants and payment processors, integrating them into the real economy. Stablecoins have even seen growing use in countries with unstable currencies, where they serve as a hedge against inflation. This trend is expected to grow, especially in emerging markets.

Notably, the total supply of stablecoins has grown significantly: as of mid-2025, the combined market capitalization of major stablecoins exceeds 150 billion USD.

What are the different types of stablecoins?

Wikipedia mentions three different types, but the literature mentioned in the sources below also discusses a fourth one. Stablecoins can be grouped into distinct types based on how they maintain their price stability. Each of these approaches reflects a different mechanism for achieving price parity with a target asset.

Fiat backed

Fiat-collateralized stablecoins are backed 1:1 by reserves held in traditional financial institutions. These reserves can include bank deposits, short-term government securities, or other low-risk instruments. When a user purchases a stablecoin, the issuer stores an equivalent amount of fiat currency in reserve. Examples of this type include USD Coin (USDC), Tether US (USDT) and Tether EU (EURT). These coins are generally considered the most stable, though they rely heavily on the trustworthiness and transparency of the issuer. Concerns over reserve backing have occasionally led to regulatory scrutiny, particularly in Tether US’ case. For example, in 2021, the US Commodity Futures Trading Commission (CFTC) fined Tether $41 million for misleading statements about its reserves.

Cryptocurrency backed

Crypto-collateralized stablecoins are backed by other cryptocurrencies rather than fiat. These stablecoins are typically overcollateralized to account for the volatility of their underlying assets. A prominent example is DAI, which is issued by the MakerDAO protocol and backed by Ethereum and other assets deposited in smart contracts. Users lock up collateral that exceeds the value of the DAI issued, helping to maintain its dollar peg. This model removes the need for centralized custodians but introduces complexity and vulnerability during market downturns, as seen in March 2020 when rapid price drops led to liquidations within the Maker system.
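
A small worked sketch of the overcollateralization logic follows. The 150% minimum ratio is an illustrative figure, not MakerDAO’s actual parameter set.

```python
MIN_COLLATERAL_RATIO = 1.5  # illustrative 150% minimum, not a real protocol parameter


def max_stablecoin_mintable(eth_locked: float, eth_price_usd: float) -> float:
    """How much stablecoin a user can mint against locked collateral."""
    collateral_value = eth_locked * eth_price_usd
    return collateral_value / MIN_COLLATERAL_RATIO


def is_liquidatable(eth_locked: float, eth_price_usd: float, debt: float) -> bool:
    """A position can be liquidated once its collateral ratio falls below
    the minimum, e.g. after a sharp drop in the ETH price."""
    return (eth_locked * eth_price_usd) / debt < MIN_COLLATERAL_RATIO


# Example: locking 10 ETH at $2,000 allows minting up to ~$13,333 of
# stablecoin; if ETH then falls to $1,500, that debt becomes liquidatable.
print(max_stablecoin_mintable(10, 2000))    # 13333.33...
print(is_liquidatable(10, 1500, 13333.33))  # True
```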

Algorithmic stablecoins

Algorithmic stablecoins use smart contracts and market incentives to maintain a peg without relying on collateral. Instead of being backed by assets, these coins regulate supply algorithmically. When the price drops below the peg, the protocol may reduce supply by incentivizing users to burn coins. If the price rises above the peg, new coins are minted. This model is theoretically elegant but has proved highly unstable in practice. The most notorious failure in this category was TerraUSD (UST), which lost its peg in May 2022 and collapsed entirely, wiping out over $40 billion in market value. This event highlighted the systemic risks algorithmic stablecoins pose when trust and liquidity disappear.
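
In pseudocode, one simple proportional (“rebase”-style) supply rule looks like the sketch below. Real protocols such as Terra used more complex mint-and-burn mechanisms, so treat this purely as an illustration of the principle.

```python
def rebase(total_supply: float, market_price: float, peg: float = 1.0) -> float:
    """Illustrative proportional supply adjustment.

    If the coin trades above the peg, the protocol mints new coins
    (expanding supply pushes the price back down). If it trades below
    the peg, supply is contracted, e.g. by incentivizing holders to
    burn coins, which pushes the price back up.
    """
    return total_supply * (market_price / peg)


supply = 1_000_000
supply = rebase(supply, market_price=1.05)  # above peg -> supply grows 5%
supply = rebase(supply, market_price=0.95)  # below peg -> supply shrinks 5%
```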

Commodity backed

In addition to these main types, a niche category exists for commodity-backed stablecoins. These are pegged to physical goods such as gold or oil. Paxos Gold (PAXG), for example, is backed by physical gold stored in London vaults, allowing users to own fractionalized gold on the blockchain. These stablecoins combine the technological advantages of digital tokens with the perceived safety of tangible assets.

How they relate to other cryptocurrencies

In the broader ecosystem of cryptocurrencies, stablecoins play an essential role. Traditional cryptocurrencies like Bitcoin are often used as speculative assets or long-term stores of value (sometimes called “digital gold”). Stablecoins, on the other hand, serve as more practical tools for day-to-day transactions, on-chain liquidity, and as units of account within decentralized finance (DeFi) platforms. Their price stability makes them an ideal medium for settling trades, providing collateral, and earning yield in lending protocols. So, they function as a much-needed bridge between volatile cryptocurrencies and traditional financial systems, as they allow users to store and transfer value on blockchain networks with a degree of price predictability.

 

Sources:

Using AI for legal research

How safe is using AI for legal research? On the one hand, AI is making quick progress and keeps getting better. The arrival of a new generation of AI agents will only speed up that process. But on the other hand, we keep getting headlines where law firms are being fined for using AI that referred to non-existing legislation and jurisprudence. In this article, we look at a) how AI is reshaping legal research, b) at the risks and accuracy concerns of using AI in legal research, c) at possible mitigation strategies. Finally, d) we look at using AI for legal research on non-US law.

How is AI reshaping legal research – benefits

AI has been having a significant impact on legal research, and generative AI has certainly sped up that process. Many law firms are using generative AI to assist them with their legal research. It is easy and convenient, as they can ask questions in natural language, rather than having to study some query language. And now that most generative AIs have started offering more advanced research agents that can provide sources, AI has become even more attractive. So, AI is significantly reshaping legal research in several impactful ways. Most of those are beneficial.

One of the most noticeable changes is the enhanced speed and efficiency it brings. AI tools are capable of sifting through vast volumes of legal data in seconds, identifying relevant information much faster than a human could. This efficiency saves lawyers considerable time and resources.

Beyond speed, AI can also improve the accuracy and depth of insight in legal research. By analysing large datasets, AI can detect patterns and extract insights that might go unnoticed by human researchers. It can also flag potential errors or inconsistencies in legal documents, helping to ensure the accuracy and reliability of the information used. But caution is needed, as we will discuss below.

Another major advantage is the broader access to legal information that AI provides. These tools can draw from a wide array of sources, including statutes, case law, legal journals, and specialized databases. This comprehensive reach allows lawyers to develop a fuller understanding of the legal issues they face.

Natural Language Processing (NLP) and machine learning further enhance AI’s capabilities in the legal field. NLP enables AI to comprehend the meaning within legal texts. This allows it to extract key information and identify relevant precedents. Meanwhile, machine learning algorithms can analyse historical case data to predict outcomes. This gives lawyers valuable insights into the strengths and weaknesses of their cases.

AI is also increasingly being integrated into established legal research platforms. This integration improves the efficiency and comprehensiveness of legal research.

However, as AI becomes more embedded in legal practice, responsible usage is essential. Ensuring accuracy, upholding ethical standards, and maintaining regulatory compliance are critical. Lawyers must treat AI as a supportive tool rather than a standalone solution, and it remains vital to verify any information generated by AI systems. Because there are still considerable risks involved in using AI for legal research.

Risks and accuracy concerns of using AI in legal research

In a recent case in California, a judge found that nine of the twenty-seven quoted sources were non-existent. The two law firms involved (one had delegated research to the other) were fined $31,000. If you follow the news on legal AI, you know this is a common problem. Apart from that, AI still often is biased, too. Let’s have a closer look at both issues.

Accuracy concerns

AI systems can produce inaccurate, incomplete, or misleading legal information. This is particularly the case when dealing with complex cases, with nuanced legal concepts, or when legislation or jurisprudence has changed recently.

Even worse are AI “hallucinations”. As witnessed in the example above, AI can generate plausible but factually incorrect information. It is therefore crucial to verify all AI-generated output against credible sources. The Californian example highlights how serious this risk is: one in three of the quoted sources did not exist.

The example also illustrates the risk of reliance on AI without oversight. You cannot assume the AI knows what it’s doing. Over-reliance on AI without thorough human review can lead to errors that compromise case outcomes and erode client trust.

Bias and ethical concerns

In previous articles, we pointed out that AI inherits and reflects all the biases of the data pool that it was trained upon. This can lead to unfair or discriminatory legal outcomes. So, bias in AI algorithms is a first concern.

Many AI systems cannot explain how they reached their conclusions, or they fail to mention sources. Lack of transparency and accountability, therefore, is a second issue. The algorithms used by AI systems can be opaque, making it difficult to understand how decisions are made and hold the AI system accountable.

Clients may not fully understand the role of AI in their legal representation. This can easily undermine their trust. Clear communication is essential.

As with any online tool lawyers use that shares client information, there are data privacy and confidentiality concerns.

Finally, there is the aspect of professional responsibility. Lawyers have a duty to supervise AI-generated work, ensuring it is accurate and ethical. They also must communicate with clients about the use of AI tools.

Mitigation strategies

It is possible to counteract these risks by implementing some mitigation strategies.

  • Always verify AI-generated results against credible legal databases and primary sources.
  • Actively oversee and review AI-generated work to ensure accuracy, as well as ethical compliance.
  • Be transparent with clients about the use of AI tools.
  • Implement robust data security measures to protect client information and comply with privacy regulations.
  • Adhere to ethical guidelines and professional responsibilities when using AI in legal practice.
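
To illustrate the first point, here is a minimal sketch of what an automated citation check could look like. The citation strings and the verified set are hypothetical placeholders: a real workflow would query an authoritative legal database rather than a hard-coded list, and a human would still review every flagged item.

    # Sketch of flagging AI-quoted citations that cannot be verified.
    # "verified_citations" stands in for a lookup in a trusted legal database.
    verified_citations = {
        "Smith v. Jones (2021)",       # hypothetical entries
        "Doe v. Acme Corp (2019)",
    }

    def unverified(ai_citations):
        """Return the citations that could not be matched to a trusted source."""
        return [c for c in ai_citations if c not in verified_citations]

    ai_output = ["Smith v. Jones (2021)", "Brown v. State (2020)"]  # hypothetical AI answer
    flagged = unverified(ai_output)
    if flagged:
        print("Flag for human review:", flagged)   # -> ['Brown v. State (2020)']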

What about using AI for legal research on non-US law?

Most of the advances in generative AI are being made in the US, and the EU is catching up. How well do the generative AI platforms perform when it comes to non-US law? And are they available in other languages?

Let’s start with the language question: all the major generative AI engines are available in Dutch and French.

Then, what about non-US law? We ran some tests on European law, more specifically on the GDPR, and overall, these tests went well. We did not test on recent legislation or jurisprudence.

We also briefly tested Belgian law. We thought art. 1382 of the Civil Code would make an interesting test case, given that it was recently replaced by a new Book 6 on extra-contractual liability. We ran the test on ChatGPT, Copilot, Gemini, Claude, Perplexity, Grok, and You.com. Only four of the seven – ChatGPT, Copilot, Gemini, and Grok – pointed out that art. 1382 CC had been replaced. The other three – Claude, Perplexity, and You.com – did not mention Book 6 on extra-contractual liability at all.

So, while caution and supervision are already needed for US and EU law, it is even more the case for the law of EU member states, where several generative AI platforms were not (yet) aware of recent legislation.

Conclusion

Using AI for legal research holds promise, but supervision is still very much needed. The examples above show that AI systems can still hallucinate, and that they may not be aware of recent changes in legislation or jurisprudence.

 


An introduction to online dispute resolution

In a recent article, we discussed online courts and how the pandemic was a catalyst for their adoption. The pandemic was also a catalyst for online dispute resolution (ODR) systems, which is what we look at in this article. We answer the following questions: What is online dispute resolution? What are the methods of online dispute resolution? What are the benefits? What are the limitations and risks?

What is online dispute resolution?

Wikipedia explains it well: “Online dispute resolution (ODR) is a form of dispute resolution which uses technology to facilitate the resolution of disputes between parties. It primarily involves negotiation, mediation or arbitration, or a combination of all three. In this respect it is often seen as being the online equivalent of alternative dispute resolution (ADR). However, ODR can also augment these traditional means of resolving disputes by applying innovative techniques and online technologies to the process.”

So, online dispute resolution systems are digital platforms that facilitate the resolution of disputes without requiring parties to meet in person, and without lawyers. They started appearing as soon as online transactions did: many online retailers preferred to settle disputes online rather than go through lawyers and the courts. At first, then, online dispute resolution was used to settle disputes over online transactions, mainly with online retailers. But because it proved to be an effective way to resolve conflicts, it soon started being used for other disputes as well, and its popularity increased further during the pandemic. In recent years, ODR systems have evolved significantly and represent a major shift in how conflicts can be resolved.

What are the methods of online dispute resolution?

Initially, online dispute resolution happened almost entirely outside of the courts, and by consensus. By now, there are two main categories of online dispute resolution: consensual and adjudicative methods.

Consensual methods

Consensual ODR methods focus on mutual agreement between the parties.

Online negotiation is one of the simplest forms. It can be entirely automated, with software platforms facilitating the exchange of offers. It can also be assisted, involving digital communication tools such as email, chat, or video conferencing. This method is especially common for minor disputes, like those arising from online marketplaces.

Online mediation involves a neutral third party (the mediator) who assists the disputing parties in reaching a voluntary settlement. The mediator does not impose a solution but helps clarify issues, encourage dialogue, and explore possible compromises. Mediation is well-suited for cases where the parties have an interest in maintaining a relationship, and it benefits from the confidentiality and flexibility that online platforms can offer.

Hybrid systems are a combination of negotiation and mediation, using structured phases and automation to encourage resolution. For example, platforms like eBay and PayPal use hybrid ODR processes that automatically prompt parties to negotiate, and if that fails, escalate the case to human mediators.
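
To make that escalation logic concrete, here is a minimal sketch of such a hybrid flow in Python. The round limit and the settlement test are simplifying assumptions of ours, not a description of how eBay or PayPal actually implement their processes.

    # Sketch of a hybrid ODR flow: automated negotiation rounds first,
    # escalation to a human mediator if the parties stay apart.
    MAX_ROUNDS = 3   # assumed limit on automated negotiation rounds

    def hybrid_odr(offers):
        """offers: list of (claimant_demand, respondent_offer) per round."""
        for round_no, (demand, offer) in enumerate(offers[:MAX_ROUNDS], start=1):
            if offer >= demand:                     # respondent meets the demand
                return f"Settled automatically in round {round_no}"
        return "Escalated to a human mediator"      # automation failed to settle

    print(hybrid_odr([(500, 300), (450, 400)]))     # -> Escalated to a human mediator
    print(hybrid_odr([(500, 300), (425, 430)]))     # -> Settled automatically in round 2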

Adjudicative methods

On the adjudicative side, ODR includes mechanisms where a decision is made by a third party or authority, and this decision is binding.

Online arbitration replicates traditional arbitration processes, with a neutral arbitrator reviewing submissions and rendering a binding decision. This method is commonly used for commercial or contractual disputes and may involve document-only reviews or virtual hearings.

Another form of adjudicative ODR is platform-based adjudication. These systems are often used by e-commerce websites. First, users submit their evidence, such as communication logs or transaction records. Then, a decision is made by a platform official or through an automated system. This approach is efficient but can sometimes lack transparency or the option for appeal.
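
As a toy illustration of such a flow – the record fields and the decision rule are entirely hypothetical – a platform-based adjudication step could be structured like this:

    # Sketch of platform-based adjudication: parties submit evidence,
    # and a simple automated rule produces a provisional decision.
    # Real platforms use far richer evidence and human review.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        buyer_evidence: list      # e.g. communication logs
        seller_evidence: list     # e.g. transaction records
        delivery_confirmed: bool  # hypothetical key fact

    def auto_decide(claim):
        """Illustrative rule: refund unless delivery was confirmed."""
        if claim.delivery_confirmed:
            return "Claim rejected: delivery confirmed"
        return "Refund granted: no proof of delivery"

    print(auto_decide(Claim(["chat log"], ["invoice"], delivery_confirmed=False)))
    # -> Refund granted: no proof of delivery

The hard-coded rule also shows where the transparency concern comes from: unless the platform explains the rule, the losing party has no way to contest it.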

Finally, some jurisdictions are now developing online courts that offer digital litigation options. These systems allow parties to file claims, submit evidence, and receive rulings entirely online. The UK’s online civil money claims court is one example of a government-led effort to modernize access to justice.

What are the benefits?

Online Dispute Resolution (ODR) provides several key advantages over traditional conflict resolution methods.

A first benefit, already mentioned above, is the efficiency of ODR. Digital ODR platforms streamline the process by automating scheduling, communication, and document submission, which significantly reduces the time to resolution compared to court proceedings.

Another benefit is accessibility. Because disputes are resolved online, parties do not have to travel or meet in person. This is especially beneficial for cross-border commerce.

ODR systems are also cost-effective. Online dispute resolution reduces expenses such as travel, venue hire, legal representation, and court fees, making dispute resolution more affordable, particularly for low-value disputes.

Online dispute resolution also offers a higher degree of convenience. Parties can engage in the process remotely, at their preferred time, using their own devices. This makes it ideal for small businesses and for individuals with busy schedules.

ODR is less adversarial, too. Virtual mediation or negotiation tends to create a more constructive atmosphere: the fact that the parties are trying to avoid going to court helps them express themselves more openly and makes it easier to find common ground.

The final benefits of ODR are transparency and accountability. The entire dispute resolution process is captured in digital records, which provide an auditable trail of communications. This promotes fairness, prevents misunderstandings, and aids enforcement or appeals.

What are the limitations and risks?

While online dispute resolution offers many benefits, it also has several limitations that may affect its fairness and effectiveness.

For starters, there is the issue of digital inequality. Not all users have equal internet access, suitable devices, or digital literacy, which creates imbalances between parties, especially in cross-border disputes.

We must also take procedural unfairness into account. Some platforms use opaque processes or automated decisions without clear explanations or appeal options. This can easily make users feel that the system lacks empathy.

As with all online services, there are always privacy and security concerns. Sensitive data is collected and exchanged, and inadequate cybersecurity measures can lead to data breaches or identity theft.

Another possible problem with ODR is enforceability. While online arbitration can be binding, negotiation or mediation outcomes may require traditional legal enforcement, reducing ODR’s time and cost benefits.

Experience has also shown that ODR is often unsuitable for complex disputes. Emotionally sensitive or legally intricate cases may need the human judgment and face-to-face interaction that ODR cannot fully replicate.

When using an ODR platform, there is also a risk of platform bias. If the platform is controlled by one of the disputing parties, neutrality and fairness may be compromised, whether intentionally or through algorithmic influence.

Finally, there is the issue of a lack of regulation. Many ODR systems operate in legal grey areas, which makes it difficult to ensure consistent quality, transparency, and accountability.

Conclusion

Online dispute resolution presents clear advantages in terms of accessibility, efficiency, and affordability. At the same time, it is not without significant risks: digital inequality, procedural opacity, privacy concerns, enforcement difficulties, limited suitability for complex cases, potential platform bias, and regulatory uncertainty. All these risks and limitations point to the need for cautious implementation. Ensuring that ODR complements rather than replaces traditional legal mechanisms – especially in sensitive or high-stakes disputes – is essential for maintaining trust and fairness in the process.

PS: the EU had its own ODR platform, but that has been discontinued as of 20 March 2025, in favour of a more encompassing alternative dispute resolution approach.

 

Unbundled Legal Services

Unbundled legal services are gaining in popularity. In this article, we explore the following questions: What are unbundled legal services? What are the benefits? What are the risks of using unbundled legal services?

What are unbundled legal services?

So, what are we talking about? Unbundled legal services (also called “limited scope representation” or “à la carte legal services”) are a legal service delivery model in which attorneys provide specific, clearly defined services for discrete portions of a client’s legal matter, rather than handling the entire case from start to finish. In traditional full-service representation, an attorney handles everything related to a case. With unbundled services, clients can pick and choose which specific tasks they want professional help with, while handling other aspects themselves.

The concept was introduced by Forrest S. Mosten, in his book, Unbundling legal services: a guide to delivering legal services a la carte, which was published in 2000. The practice of unbundling legal services first took off in family law. Since then, the use of unbundled legal services has been increasing. This growth is driven by a rising demand for affordable legal solutions, a growing number of self-represented litigants, and clients’ desire for more control over their legal affairs. Lawyers have responded with greater acceptance and support for this model, recognizing its potential to bridge the justice gap.

By now, unbundled legal services have become quite common, and they are expected to become even more so. In a way, one could also label all the services offered by Alternative Legal Service Providers (ALSPs) as unbundled legal services: they do exactly the same thing, focusing on – and specializing in – one aspect of the legal process. From that perspective, it is not just clients but also lawyers who benefit from unbundling, as lawyers can outsource specific tasks, either to ALSPs or to other lawyers.

These days, we can find plenty of examples of unbundled legal services:

  • Legal coaching (giving advice on how clients can represent themselves)
  • Transactional guidance
  • Negotiation advice for a specific matter
  • Limited litigation and court appearances, like a court appearance for a specific hearing only
  • Document preparation or review, and document drafting (think of contracts, wills, etc.)
  • Agreement review
  • Case evaluation
  • Settlement evaluation
  • Legal research on a particular issue
  • Consultation on strategy or specific legal questions

What are the benefits?

The main purpose of unbundled legal services is to save costs for the client. For clients, this model offers a cost-effective solution: it makes legal assistance more accessible to those who might not be able to afford comprehensive representation. As such, unbundled legal services also improve access to justice. And they give clients greater control over and involvement in their legal matters, as clients can choose which aspects to handle independently and which to delegate to an attorney.

For attorneys, offering unbundled services can expand their client base, enhance job satisfaction by tailoring services to specific client needs, and improve financial outcomes through upfront payments.

For both parties, unbundled legal services offer greater flexibility.

What are the risks of using unbundled legal services?

While unbundled legal services increase access to justice and affordability, they come with several notable risks for both attorneys and their clients.

For attorneys, there are significant potential malpractice concerns: if scope limitations aren’t clearly documented, clients may fail to meet their responsibilities, whether through inability or through a lack of understanding of what is expected of them. Many lawyers also struggle to maintain appropriate boundaries when clients inevitably want assistance beyond the agreed scope. There is the additional challenge of providing competent representation with only limited involvement in a case. Attorneys also have an ethical duty to verify that clients are competent to handle portions of their own case. Add to that the fact that negative outcomes can reflect poorly on the attorney, even when those outcomes stem from the client’s handling of the case.

For clients, there’s often a misunderstanding about the limited nature of representation. Many clients struggle with inadequate handling of case aspects outside the attorney’s scope and may miss critical deadlines or procedural requirements. Complex legal matters can be difficult to navigate without continuous guidance, potentially leading to worse outcomes than with full representation.

Several mitigation strategies can help address these risks. Clear, written agreements that explicitly define the scope of services are essential. Thorough documentation of all communications and advice provides protection for both parties. Regular assessment of the client’s capacity to handle their portions of the case helps prevent problems. Establishing mechanisms to modify the scope when necessary provides flexibility. And thorough client education about exactly what the attorney will and won’t handle helps set proper expectations.

Conclusion

Unbundled services have become increasingly popular as the legal market evolves to meet client demand for more cost-effective and flexible legal solutions. They represent a significant shift from the traditional model of comprehensive representation toward more accessible, client-centred legal services, benefiting both clients and attorneys. But while unbundled services can be valuable when implemented properly, they also require careful management of expectations and boundaries to protect both attorneys and clients.

 
