All posts by Manuel Lamiroy

An introduction to legal analytics

These days, many law firms are using legal analytics. We briefly discussed the topic before, in our article on Machine Learning. In this article, we answer the following questions: What are legal analytics? What are the prominent types of legal analytics? What are the benefits, and what are the challenges of legal analytics?

What are legal analytics?

Legal analytics refers to the use of data analysis, statistics, and technology to extract insights from legal information. In essence, it’s applying data science to the legal field. Legal analytics draws from a range of sources to identify patterns and trends. These include court records, case outcomes, judicial decisions, litigation histories, contracts, regulatory filings, and other legal documents. It enables legal professionals to predict litigation outcomes, evaluate judge behaviour, assess risk, and optimize legal operations.

Types of legal analytics

The relevant literature mentions many different types of legal analytics. These are the most prominent ones.

Litigation analytics examines court behaviour and procedural data – including judges, opposing counsel, and motion outcomes – to inform strategic case planning. A key subset, judicial analytics, focuses on individual judges’ historical rulings, such as how often they grant motions, typical damage awards, and case timelines. Law firms use these insights to gauge their chances of success in specific courts or before particular judges. This helps them tailor arguments and manage forum risk.

Contract analytics uses AI to extract structured information from large volumes of agreements. It identifies clauses, deviations, risk exposure, and compliance gaps. It is widely used in due diligence, regulatory compliance, and large-scale commercial transactions. It is often embedded in document management or contract lifecycle management systems. Corporate and transactional analytics are used more specifically in due diligence regarding Mergers and Acquisitions. They assess a target company’s litigation exposure, regulatory history, contract obligations, and legal risk profile.
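To make the clause-identification idea concrete, here is a deliberately simple, hypothetical sketch: a keyword scan that flags potentially risky contract language. The pattern names and sample text are invented for illustration; real contract analytics tools rely on trained NLP models rather than keyword matching.

```python
import re

# Hypothetical risk patterns for illustration only; production tools
# use trained language models, not simple keyword matching.
RISK_PATTERNS = {
    "unlimited_liability": re.compile(r"unlimited liability", re.I),
    "auto_renewal": re.compile(r"automatic(ally)?\s+renew", re.I),
    "unilateral_termination": re.compile(r"terminate .* at any time", re.I),
}

def flag_clauses(contract_text):
    """Return the names of all risk patterns found in the contract text."""
    return [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(contract_text)]

sample = ("This agreement shall automatically renew for successive "
          "one-year terms. The supplier accepts unlimited liability "
          "for data breaches.")
flags = flag_clauses(sample)  # flags both the renewal and liability clauses
```

Even this toy version shows why the approach scales: the same scan runs unchanged over one agreement or ten thousand.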

Predictive analytics are used to estimate the probability of outcomes based on historical data. While predictions are never certain, they can provide useful probability ranges to assist with settlement strategy and risk management. Such tools can, e.g., estimate the likelihood that a motion to dismiss will succeed before a specific judge, or project a likely damages range in a given type of commercial dispute.
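As a minimal sketch of the underlying idea, the probability that a motion succeeds before a given judge can be approximated from historical outcomes with a frequency estimate (add-one smoothing keeps small samples away from the 0% and 100% extremes). The data and judge names below are hypothetical; commercial tools use far richer models and features.

```python
# Hypothetical historical records: (judge, motion type, granted?)
history = [
    ("Judge A", "motion_to_dismiss", True),
    ("Judge A", "motion_to_dismiss", False),
    ("Judge A", "motion_to_dismiss", True),
    ("Judge B", "motion_to_dismiss", False),
    ("Judge B", "motion_to_dismiss", False),
]

def grant_probability(history, judge, motion_type):
    """Estimate P(granted) for a judge/motion pair, with add-one smoothing."""
    granted = total = 0
    for j, m, outcome in history:
        if j == judge and m == motion_type:
            total += 1
            granted += outcome  # bool adds as 0 or 1
    return (granted + 1) / (total + 2)

p = grant_probability(history, "Judge A", "motion_to_dismiss")  # 0.6
```

The point is the shape of the reasoning, not the arithmetic: outputs are probability estimates that inform strategy, never guarantees.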

Another category of legal analytics focuses on operational efficiency. It is used for competitive intelligence and firm management. Attorney and firm analytics can profile opposing counsel or potential hires based on their track record. This includes things like case history, success rates, favoured arguments, and courtroom behaviour, which is useful for lateral hiring decisions. Beyond this, firms also analyse billing patterns, matter duration, profitability by practice area, and client retention trends, reflecting broader developments in legal operations and law firm management software.

Regulatory and compliance analytics monitor regulatory activity, enforcement actions, and agency decisions. They help organizations understand compliance risk and anticipate how regulators might act in a given area.

Intellectual property analytics track patent filings, litigation trends, licensing activity, and the behaviour of patent assertion entities (also known as “patent trolls”). These analytics are widely used in the tech and pharma industries.

Apart from these, there are many other types, like docket and case management analytics, legal spend and operations analytics (often as part of law firm analytics), descriptive, diagnostic, and prescriptive legal analytics, etc.

What are the benefits?

The articles mentioned below list several benefits. Here are the most common ones.

Better decision-making: Perhaps the most fundamental benefit of legal analytics is replacing gut instinct with evidence. Legal analytics help lawyers move beyond intuition and anecdote to assess procedural risks more accurately. This allows lawyers to better evaluate whether to file, settle, move for summary judgment, or adjust procedural tactics. In other words, legal analytics allows for more informed choices about litigation strategy, settlement timing, forum selection, and case valuation.

Improved outcomes: By analysing thousands of similar cases, legal analytics helps lawyers and clients form realistic expectations about how a case is likely to resolve. This reduces surprises, manages client expectations, and helps litigation funders and insurers price risk more accurately.

Improved risk management: Clients expect data-driven risk assessments, and legal analytics delivers exactly that. It can work proactively, flagging risks before they escalate: unusual contract clauses, emerging enforcement trends, or jurisdiction-specific litigation exposure. Even a rough probability range helps clients make smarter business decisions.

Faster due diligence: Contract analytics can process large volumes of agreements rapidly, identifying non-standard clauses, risk exposure, and compliance gaps. In transactions, this means faster deal timelines, lower diligence costs, and a reduced likelihood of overlooked liabilities.

Competitive intelligence and differentiation: Legal analytics also provides a meaningful competitive edge. It allows lawyers to understand how opposing counsel argues, how a judge tends to rule, or how a particular court handles certain claim types. This gives legal teams strategic intelligence that was historically only available to lawyers with decades of local experience. This is particularly attractive to sophisticated corporate clients already accustomed to data analytics in finance, operations, and strategy. In some markets, the ability to provide data-backed insight is becoming a baseline expectation rather than a distinguishing luxury.

Cost efficiency: Legal analytics helps in-house legal departments benchmark outside counsel fees, identify billing inefficiencies, and forecast matter costs more reliably. This makes legal budgets more predictable and easier to justify to finance teams.

Supporting legal innovation: More broadly, legal analytics is driving a cultural shift in the legal profession toward greater transparency, accountability, and data literacy. It is pushing law firms and legal departments to operate more like modern businesses.

What are the challenges?

While the benefits are promising, legal analytics still faces many challenges.

Data quality and completeness: Legal analytics is only as reliable as the data it draws on, and that data is frequently incomplete. Court records vary enormously in how they are formatted and maintained across jurisdictions, many decisions are never published, and older records may not be digitised at all. Settlement data – often crucial for realistic outcome assessment – is typically confidential and therefore absent from public datasets. Available court data can differ significantly across jurisdictions and countries. Gaps in the data inevitably produce blind spots in the analysis.

Privacy and confidentiality concerns: Legal data is highly sensitive, and its use for analytics purposes raises serious privacy and confidentiality concerns. Feeding client communications, billing records, or contract repositories into third-party or cloud-based platforms creates cybersecurity exposure and risks touching on attorney-client privilege. Clients may demand assurances regarding data storage, cross-border transfers, and compliance with applicable privacy regimes. Robust data governance therefore becomes an essential consideration for any firm adopting analytics tools.

Interpretability and explainability: Legal analytics demands a level of statistical and technical literacy that is not traditionally part of legal education. Data does not interpret itself. Analytics can misread correlation as causation, misunderstand confidence intervals, or ignore sample size limitations. All of these can lead to flawed strategic decisions. This challenge is compounded by the fact that many predictive models function as black boxes: they produce outputs without clearly explaining why. In a legal context, reasoning and justification are fundamental. This lack of transparency is therefore a significant problem: lawyers and judges are trained to evaluate arguments, not algorithmic scores.

Algorithmic and systemic bias: A significant concern is that historical legal data frequently reflects systemic bias. Past outcomes may embody disparate treatment, whether in sentencing, enforcement patterns, or procedural rulings. Analytics models trained on that data will then replicate, and potentially amplify, those disparities. This raises serious ethical and jurisprudential concerns, particularly in criminal justice and regulatory contexts.

Regulatory and ethical uncertainty: The use of predictive tools in legal decision-making raises other unresolved ethical questions as well. Bar associations and courts are still grappling with how to regulate the use of AI and analytics in legal practice. There is limited clear guidance in most jurisdictions.

Cost and accessibility: Advanced legal analytics tools are expensive, and their costs extend beyond licensing fees to include integration, training, and internal process redesign. In practice, this means they are primarily accessible to large law firms and well-resourced corporations. This risks widening the gap between well-funded and under-resourced parties, which is the opposite of the access-to-justice potential that analytics theoretically offers. Adoption is further complicated by cultural resistance: senior practitioners may be reluctant to embrace data-driven tools if they perceive them as undermining professional judgment. And without firm-wide buy-in, analytics is likely to remain underutilised.

Jurisdictional fragmentation: Legal systems vary dramatically in their transparency, digitisation, procedural structure, and publication practices. This fragmentation is a significant obstacle for legal analytics. Analytics tools that function well in one context may be far less effective in another, particularly when applied across civil and common law jurisdictions. This limits scalability and complicates cross-border application considerably.

Conclusion

Legal analytics has much to offer, but at present it still faces many challenges.

 

Sources:

 

Client satisfaction of legal consumers – what and why

One of the legal technology predictions for 2026 is that the client experience becomes more important. In the past, we have already discussed the client-centred law firm. In this article we look at the related concept of client satisfaction, and how it applies to legal consumers. We discuss the following topics: What is client satisfaction? Why does it matter? What are key factors in client satisfaction? How can it be measured? How to improve client satisfaction?

What is client satisfaction?

In the context of professional services, client satisfaction means how well a service meets or exceeds what clients expect and need. Clients judge this based on the quality of work, how well you communicate, how quickly you respond, and their overall experience. Client satisfaction is often used to predict loyalty and whether clients will continue doing business with the firm.

For lawyers and law firms, client satisfaction is how positively clients view their legal services from start to finish. This includes clear communication, professionalism, acceptable outcomes, and how they’re treated throughout the process. It reflects not only the legal results but the entire client experience, i.e., how well the firm manages expectations and keeps clients informed and supported.

Why does it matter?

Client satisfaction is not only a measure of performance, but also a crucial strategy for gaining referrals, repeat business, and building a strong reputation. It affects a law firm in four ways.

Client satisfaction directly influences both the reputation and long-term success of a legal practice. Law firms that consistently meet or exceed client expectations build stronger trust with their clients. This in turn results in client loyalty: it encourages repeat engagements, and it increases the likelihood of positive word-of-mouth referrals and testimonials. These help attract new business without proportional marketing spend. Satisfied clients are also more likely to view the firm as credible and professional. This reinforces the firm’s brand and competitive positioning in the legal marketplace.

Client satisfaction also impacts operational and financial performance. Firms that prioritise clear communication, responsiveness, and personalised service often experience higher client retention rates and lower complaint volumes. This can reduce the time and costs associated with dispute resolution and improve billing realisation. Moreover, when clients are satisfied, relationships strengthen, which allows firms to better understand client needs. This then helps a law firm deliver legal solutions that genuinely align with client goals. This in turn strengthens trust and encourages sustainable, long-term engagement.

Third, there are shifting client expectations in a changing legal market. Corporate and individual clients increasingly compare law firms not only to other firms, but to service providers in finance, consulting, and technology. Clients now expect responsiveness, predictability, and transparency as standard; they are no longer special advantages. Digital literacy, price sensitivity, and time pressure have changed what clients perceive as “good legal service.”

Client satisfaction is not merely a marketing concept, but a strategic and economic asset. High satisfaction results in client retention, cross-selling, reduced price pressure, and reputational strength. It contributes to long-term firm sustainability and competitive positioning.

What are key factors in client satisfaction?

The articles mentioned in the sources identify eight key factors.

Communication quality and accessibility: communication remains the single most cited driver of client satisfaction. It applies to clarity of advice, frequency of updates, and the ability of lawyers to translate legal complexity into actionable understanding. Accessibility is another important factor: clients appreciate availability outside traditional office hours, use of secure client portals, and responsiveness across channels such as email, video calls, and messaging platforms.

Transparency in pricing and value perception: clients often associate dissatisfaction not with high fees as such, but with unexpected fees or unclear billing logic. Clients prefer transparent fee structures, scoped mandates, and budgets. Often, they also prefer alternative fee arrangements. The focus should be on perceived value: how firms can demonstrate that their advice, risk management, and outcomes justify the cost.

Process efficiency and client journey design: client satisfaction is strongly influenced by how “frictionless” it feels to work with a firm. This includes onboarding, conflict checks, document handling, turnaround times, and matter closure. In previous articles, we discussed the idea of mapping the client journey and treating legal service delivery as a process that can be designed, measured, and improved.

Use of technology to enhance the client experience: examples include document automation to reduce delays, matter tracking dashboards, AI-assisted research that shortens response times, and secure collaboration tools. The key question is how technology improves speed, accuracy, and transparency from the client’s perspective.

Empathy, trust, and relationship management: legal matters are often high-stakes and emotionally charged. Clients appreciate empathy, listening skills, and the ability to understand the client’s broader business or personal context. Trust is built not only through legal competence, but through consistency, honesty about risks, and realistic expectation management.

Feedback, measurement, and continuous improvement: a mature approach to client satisfaction treats it as something measurable rather than anecdotal. Make sure you include items like client feedback mechanisms, post-matter reviews, Net Promoter Scores (see below), and structured debriefs. Importantly, it should also address how firms close the loop by acting on feedback and communicating improvements back to clients.

Internal culture and incentives: client satisfaction is ultimately shaped by internal firm dynamics. Things like workload pressures, billing targets, and partner incentives influence client-facing behaviour. Firms that align internal rewards with long-term client relationships rather than short-term billable hours tend to achieve higher satisfaction and retention.

Risk, errors, and complaint handling: how a firm handles mistakes or disputes can matter more than the mistake itself. Handle complaints in a way that incorporates transparency, speed of response, accountability, and learning from errors. This is particularly relevant from a professional responsibility and reputational risk perspective.

How can it be measured?

The key factors above mention measuring client satisfaction. How can that be done?

Client satisfaction can be measured through a combination of quantitative metrics, client feedback tools, and behavioural indicators. Together, they reveal how well a law firm’s services are meeting client expectations.

One easy way is to ask clients to rate their experience using numbered scales (like 1-5 or 1-10). They can rate their overall satisfaction or specific things like how well you communicated or how quickly you responded. You then calculate the average of all these ratings to get a Customer Satisfaction (CSAT) score.
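The CSAT calculation just described is a simple average; as a sketch (with made-up ratings on a 1-5 scale):

```python
def csat_average(ratings):
    """Average of client ratings on a fixed scale (e.g. 1-5)."""
    return sum(ratings) / len(ratings)

# Hypothetical post-matter ratings on a 1-5 scale
ratings = [5, 4, 4, 3, 5]
score = csat_average(ratings)  # 4.2
```

The same average can be computed per question (communication, responsiveness) to see which aspect of the service drags the overall score down.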

Another widely adopted metric for measuring client loyalty and satisfaction is the Net Promoter Score (NPS). It asks clients one central question: “How likely are you to recommend this law firm to a friend or colleague?” Clients respond on a scale from 0 to 10 and based on their answers they are categorised as promoters (9-10), passives (7-8), or detractors (0-6). The NPS is then calculated by subtracting the percentage of detractors from the percentage of promoters, yielding a score that can range from −100 (all detractors) to +100 (all promoters). Higher NPS scores indicate stronger client satisfaction and loyalty. Tracking changes in NPS over time helps firms identify a) trends in client experience and b) areas needing improvement.
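The NPS calculation described above can be sketched in a few lines (the survey responses are invented for illustration):

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical survey responses on the 0-10 scale
responses = [10, 9, 8, 7, 6, 10, 3, 9]
score = nps(responses)  # 4 promoters, 2 detractors out of 8 -> 25
```

Note that passives (7-8) count toward the total number of responses but toward neither group, which is why a firm full of merely "satisfied" clients can still post a modest NPS.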

But numbers alone don’t tell the whole story. You can also ask clients open-ended questions in surveys or interviews to understand why they feel a certain way. Additionally, you can look at their actual behaviour, like whether they stay with you, refer others, or hire you again. This shows their true satisfaction level beyond what they say in ratings.

How to improve client satisfaction?

Finally, some suggestions to improve client satisfaction:

Set Realistic Expectations: be upfront about timelines, potential delays, and realistic outcomes from the start.

Use Client Portals: use secure platforms for sharing documents, updates, and communication.

Seek Feedback: regularly ask clients for input via surveys to identify and fix issues quickly.

Be Proactive: anticipate questions and provide updates before clients have to ask.

Simplify Language: avoid jargon, so clients understand their situation and options.

 

Sources:

Legal Technology Predictions for 2026

Sticking to the tradition of new year predictions, here is a selection of legal technology predictions for 2026. By now, AI has become ubiquitous. So, it shouldn’t come as a surprise that most authors spend plenty of time on AI-related predictions. We also discuss market-related predictions, as well as predictions about the law firm and legal practice. As can be expected, these three categories overlap.

AI-related predictions

The rise of AI in the law firm seems unstoppable at present. All authors in the sources below give their own AI-related predictions. These are the common themes.

AI becomes the defining force in legal technology

AI is going to reshape legal technology in a big way. It is becoming a fundamental part of how law firms work. AI doesn’t just write things for you. It manages entire tasks from start to finish. It can sort through cases, pull together documents, and help you make decisions. Most experts mention the same trend: AI tools that can think through problems, make plans, and handle complex legal work on their own. Basically, we’re shifting from AI as an assistant to AI as a genuine work partner. AI is fundamentally changing how lawyers do their jobs. It assists in everything from case management to predicting outcomes to organizing workflows.

AI is also changing how clients and lawyers interact from the very first conversation. More and more clients are showing up with advice they’ve already gotten from an AI tool. As a result, law firms need their own AI systems to sort through and make sense of what clients bring to the table before a human lawyer even gets involved. The firms that have already woven AI into their daily operations are going to cash in on that head start. The ones still dragging their feet risk getting left behind.

The shift from Generative AI to Agentic AI

As mentioned above, the biggest tech shift coming in 2026 is how AI works. We’re moving away from the old approach where you ask AI a question and it gives you an answer. Instead, we’re getting AI agents that can handle entire projects from beginning to end. Right now, you might need to constantly tell AI what to do next just to draft a single document. But in 2026, these AI agents will work more like digital colleagues. They’ll sort through new cases on their own, dig up relevant documents, fact-check information, and put together organized reports. And they do all of this without you needing to hold their hand through every step. This kind of “deep research” ability means AI is graduating from being a glorified word processor to something that can orchestrate complicated legal work. As a result, lawyers can oversee entire workflows without getting bogged down in every little detail.

Data-driven strategy and predictive analytics

Law firms are starting to realize that all the data they’ve been collecting – billing records, case results, which clients are actually profitable – is worth its weight in gold. In 2026, using predictive analytics isn’t cutting-edge anymore; it’s just how things are done. Firms will tap into these insights to predict how judges might rule, get ahead of opposing counsel’s next move, and figure out which types of cases actually make them money.

Market-related legal technology predictions

Several authors pay attention to changes in the legal services market in 2026.

Market consolidation and shakeouts

The legal AI market is heading for major consolidation in 2026, with more than half of current providers expected to be acquired or shut down. Companies offering basic AI features without distinctive value will struggle, while survivors will be those providing specialized capabilities and measurable results. Major vendors are moving from standalone products to integrated platforms that connect across the legal ecosystem.

Meanwhile, traditional Big Law firms face growing competition from alternative legal service providers and AI-powered firms that can handle standardized work like contracts and compliance at lower costs. Some AI vendors may even begin offering legal services directly, blurring the line between technology and law firm. This efficiency revolution is forcing the entire industry to reimagine how legal services are structured and priced.

Evolution of Revenue Models and Billing

AI and automation are making legal work so much faster that the old billable hour model is starting to fall apart. By 2026, we’re likely to see more firms experimenting with alternative fee arrangements that involve mixed pricing approaches: maybe hourly rates for some work, flat fees for other projects, and even subscription-style arrangements based on results. Why? Well, when AI can knock out routine tasks in a fraction of the time, firms need to shift their focus to charging for the high-level thinking and judgment that only experienced lawyers can provide. But this creates a new challenge: firms will need much better financial tracking for each case to make sure they’re still profitable, while also giving clients the transparency and fair pricing they’re increasingly demanding.

Cybersecurity as a Competitive Differentiator

As law firms rely more heavily on AI tools and cloud-based systems, cybersecurity is something clients actively care about and ask for. By 2026, clients will expect “zero-trust” security setups and automatic encryption as basic requirements. They’ll choose firms that can prove they have strong vendor security and handle data safely. Firms with rock-solid security systems are already using that as a major selling point to win over high-value clients, and that trend is only going to accelerate.

Predictions about the law firm and legal practice

The remaining predictions largely have to do with the law firm and legal practice. They can be grouped in four categories.

The Shift to Unified Legal Platforms

The legal industry is moving away from using multiple disconnected tools toward unified cloud platforms that act as a single system of record. Instead of juggling separate software for billing, document management, HR, and client communication, law firms and in-house legal departments are adopting integrated environments that bring everything together in one place.

These unified platforms combine matter management, document automation, spend analytics, and risk tracking into a comprehensive solution. This consolidation creates a “single source of truth” that gives legal teams better visibility into case status, resource allocation, and spending patterns while eliminating the inefficiencies of managing siloed systems.

This shift is especially beneficial for small and mid-sized firms. Cloud-native platforms make enterprise-level automation and security accessible to firms of all sizes. As legal workflows become more automated and data-driven, these integrated platforms are democratizing access to sophisticated technology across the entire legal industry.

The Rise of the AI-Augmented Legal Professional

The role of the lawyer is evolving to require new technical skills. In 2026, successful lawyers will need to master AI tools and data analysis, not just traditional legal research. This means learning how to effectively prompt AI systems and oversee their outputs. This shift is changing how junior lawyers learn. Rather than spending years on repetitive tasks like document review, they’ll develop skills through AI-powered mentorship and practice simulations. The goal isn’t to replace lawyers with technology, but to free them from routine work so they can focus on what humans do best: strategic thinking, ethical judgment, and building client relationships. Technology handles the repetitive tasks while lawyers concentrate on the work that truly requires human expertise.

Strategic Imperatives: Adoption, ROI, and Skills

The key to success in 2026 isn’t just buying AI tools, it’s using them strategically. Law firms need to carefully select secure technology partners, redesign their workflows around AI, and establish clear metrics to track efficiency gains and client outcomes. Firms that started experimenting with AI in 2024-25 will pull ahead of their competitors by deploying these tools at scale, delivering faster service at lower costs. We’ve reached a point where those who waited will struggle to catch up.

Client Experience and Business Model Innovation

AI is transforming how law firms interact with clients and charge for their services. In 2026, firms are expected to use AI not just to work faster internally, but to provide clients with better transparency and responsiveness. They use tools like real-time dashboards and collaborative portals. And, as mentioned above, as AI handles routine tasks, traditional hourly billing will increasingly give way to fixed fees and subscription models. This shift responds to client demands for predictable, value-based pricing rather than simply paying for the time lawyers spend on a matter.

Conclusion

In 2026, legal technology will be characterized by mature AI systems, fewer but more powerful platforms, and fundamental changes in how law firms operate. Firms that adopt integrated cloud platforms, predictive tools, and advanced AI will outpace their competitors, while those that resist change risk becoming irrelevant. Every technology decision will need to prioritize security and client trust, and lawyers will increasingly need both legal expertise and tech skills. The legal industry has moved past the experimentation phase. Technology is now reshaping the profession’s core structure, transforming from a helpful tool into an essential strategic partner.

 

Sources:

Liability for AI errors

Who is responsible when AI makes errors? In the past year, we have seen several cases in which AI companies were taken to court. The parents of a person who died by suicide sued for negligence. There have also been defamation cases in which generative AI produced factually incorrect statements. In one example, a radio host sued OpenAI because ChatGPT produced a summary falsely claiming he had embezzled funds. There have also been cases involving product liability and contractual liability, among others. In this article, we explore several scenarios where liability for AI errors came into play. We look at the different types of liability and at how to mitigate liability for AI errors.

Please note that this article is not meant to provide legal advice. It is merely a theoretical exploration.

Criminal vs civil liability

Liability for AI hallucinations is both a complex and a rapidly evolving legal area, with plenty of voids and grey areas. Cliffe Dekker Hofmeyr rightfully refers to it as a legal minefield.

There have been cases based on civil and on criminal liability. The situation at present is that most AI-related liability falls under civil law. This is because the claims concern compensation for harm, violation of private rights, or disputes between private parties. In many cases, the courts ruled that people are warned about hallucinations and that they use AI at their own risk. But other cases have shown that companies can be held liable for chatbot errors, and that legal professionals can face sanctions for relying on AI-generated but fictitious information.

Criminal liability connected to generative AI is currently rare because AI models lack intent. But there are scenarios where criminal law can be triggered (see below).

Let us have a look at different types of liability.

Types of liability for AI errors

Defamation and Reputation Harm

A first series of cases involves defamation and reputation harm. Chatbots can generate false statements about individuals or organisations, sometimes with great specificity and apparent authority. When these falsehoods cause reputational damage, defamation law becomes relevant.

Early cases such as Walters v. OpenAI – the radio host mentioned above – illustrate how courts are beginning to test whether AI developers can be held responsible for hallucinated statements that damage someone’s reputation. In this case, the court ruled in favour of OpenAI. The court argued that Walters couldn’t prove negligence or actual malice, and that OpenAI’s explicit warnings about hallucinations weighed against liability. Thus far, defamation cases have largely been dismissed on those grounds.

Negligence and Duty of Care

Some lawsuits allege that AI systems failed to exercise reasonable care in situations where foreseeable harm was possible. Think of incidents of self-harm, or of the AI giving dangerous instructions.

Cases like Raine v. OpenAI and suits against Character.ai argue that developers owed a duty to implement safeguards, detect crises, or issue proper warnings. The argument is that failure to do so contributed to severe harm or even death. These cases are presently (December 2025) ongoing, and the courts have not ruled yet.

Wrongful Death and Serious Psychological Harm

Several cases allege that chatbots induced, worsened, or failed to de-escalate suicidal ideation. Thus far, all cases that were taken to court have been in the US. Families of victims argue that the systems were designed in ways that made such harm foreseeable.

This category overlaps with negligence but remains distinct. Wrongful-death statutes in the US create their own remedies and set higher standards for three key elements: proximate causation, foreseeability, and the duty to protect vulnerable users.

Misrepresentation, Bad Advice, and Professional Liability

Although a chatbot is not itself a licensed professional, users often treat it as one. When a model produces incorrect legal, medical, financial, or technical advice that leads to material harm, plaintiffs may frame the issue as negligent misrepresentation or unlicensed practice through automation.

In the Mata v. Avianca sanctions case, for example, lawyers relied on non-existent precedents that had been fabricated by ChatGPT. The lawyers were fined. This case demonstrates how professional users can be held responsible.

The case also raises questions about whether the model provider shares liability. Thus far, they have escaped liability on the same grounds as mentioned before, i.e., that the user is explicitly warned that the AI may provide them with incorrect information.

Product Liability and Defective Design

Some lawsuits frame chatbots as consumer products with design defects, inadequate safety systems, or insufficient warnings. Under this theory, the output is seen not merely as “speech” but as behaviour of a product that must meet baseline safety expectations. Claims of failure to implement guardrails, insufficient content filtering, or design choices that make harmful outcomes foreseeable fall under this category.

Contractual Liability and Terms-of-Service Breaches

AI systems are governed by contractual agreements between the user and the provider. AI developers may face contract liability if they fail to deliver promised functionality, violate their own service terms, or misrepresent their product’s capabilities. However, companies often use contractual clauses to protect themselves. These protective clauses limit liability, require arbitration, or disclaim responsibility for AI outputs. These clauses become contentious when actual harm occurs.

Copyright Infringement

Quite a few court cases involve copyright infringement, with authors and creators claiming that training generative AI on their works without permission infringes their copyright. There is also a chance that the AI will use parts of their works in its responses, or that responses will be generated using several different source materials that are copyright protected. So, yes, generative AI raises serious copyright concerns, both in training and in output generation.

Thus far, we have witnessed litigation by authors, visual artists, and music publishers. In some places, copyright law has special rules that can hold AI companies responsible even if they didn’t directly copy someone’s work. These are called “contributory” and “vicarious” liability – meaning you can be liable for helping someone else infringe copyright, or for benefiting from infringement that happens under your control.

Because copyright law allows courts to award set amounts of money as damages (without needing to prove actual financial harm), this is one of the biggest financial risks AI companies face.

The AI companies, on the other hand, claim that training an AI falls under the “fair use” doctrine.

Privacy, Data Protection, and Intrusion Violations

Many lawsuits claim that AI systems collect, keep, use, or expose people’s personal information without their explicit permission. These cases involve breaking data privacy laws (like Europe’s GDPR), invading people’s privacy, or misusing sensitive information. For example, a lawsuit called Cousart v. OpenAI shows how companies can be sued simply for how they handle data during training – not just for what the AI says or does afterward.

Emotional, Cognitive, and Psychological Harm

New studies show that chatbots can change how people remember things, alter their beliefs, or cause them to become emotionally dependent. Some lawsuits claim that AI chatbots harm users through these psychological effects. Plaintiffs argue that companies either intentionally designed them this way or were careless in creating systems that make people dependent, reinforce false beliefs, or worsen existing mental health problems. We’ll likely see more of these cases as we learn more about how regular AI use affects people’s minds.

Regulatory and Compliance Liability

As governments create new laws specifically for AI, companies can get in trouble for not following rules about being transparent, allowing audits, and managing risks properly. This includes laws like the EU AI Act, the Digital Services Act, and special rules for industries like healthcare or finance. Regulators can impose fines, ban certain activities, or restrict how companies operate – even without anyone filing a lawsuit.

Emerging and Hybrid Theories

Because AI doesn’t fit neatly into existing legal categories, courts and legal experts are creating new mixed approaches to determine who’s responsible when something goes wrong. These include treating AI as if it’s acting on behalf of the company, applying free speech laws to AI-generated content, or creating entirely new legal responsibilities for how algorithms influence people. As judges handle more AI cases, these hybrid approaches may eventually become their own distinct areas of law.

How to mitigate liability for AI errors

The following four suggestions can help mitigate the risks of liability.

  • Implement human oversight: critical decisions should not be made solely by AI without human review.
  • Provide training for users: train employees on the limitations of AI tools and the importance of verifying information.
  • Use technical safeguards: limit an AI’s access to sensitive data and implement technical solutions to check the accuracy of its outputs.
  • Conduct risk assessments: before deployment, assess the potential harms of AI use and develop governance and response procedures.

 


Retrieval Augmented Generation

In previous articles, we talked about generative AI, its benefits, and the risks it comes with. One such risk is that generative AI can hallucinate. It also has no access to the information your organization keeps internally. Retrieval augmented generation (RAG) addresses both issues. In this article, we answer the following questions: What is retrieval augmented generation? What are the benefits? And how can you use retrieval augmented generation with Copilot & SharePoint?

What is retrieval augmented generation?

Wikipedia defines retrieval augmented generation (or RAG) as “a technique that enables large language models (LLMs) to retrieve and incorporate new information. With RAG, LLMs do not respond to user queries until they refer to a specified set of documents. These documents supplement information from the LLM’s pre-existing training data. This allows LLMs to use domain-specific and/or updated information that is not available in the training data. For example, this helps LLM-based chatbots access internal company data or generate responses based on authoritative sources. RAG improves large language models (LLMs) by incorporating information retrieval before generating responses.”

In other words, RAG enhances large language models by connecting them to external knowledge sources. Instead of relying solely on the information the model learned during training, RAG first retrieves relevant documents or data from a database, or your knowledge base. It then uses that retrieved information to generate more accurate and up-to-date responses.

The basic idea is simple: when you ask a question, the system searches through a collection of documents (like company files, research papers, or websites) to find relevant information. Then it feeds both your question and those retrieved documents to the language model. The model uses this context to produce an answer that’s grounded in your specific data rather than just its own general training knowledge. So, those are the three steps of retrieval augmented generation:

  • Retrieval: When a user asks a question, the RAG system searches an external knowledge base (like a company’s specific documents) for relevant information.
  • Augmentation: The retrieved information is then added to the original prompt, creating an “augmented” request.
  • Generation: The large language model (LLM) then generates a response based on this augmented prompt, using the external data to provide a more specific and accurate answer.
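The three steps above can be sketched in a few lines of Python. Note that the toy knowledge base, the keyword-overlap retriever, and the stubbed generate() function below are illustrative placeholders only: a real system would use vector search over embeddings and an actual LLM call.

```python
# Minimal RAG pipeline sketch: retrieve -> augment -> generate.
# The knowledge base, scoring method, and generate() stub are
# illustrative assumptions, not a real vector search or LLM integration.

def retrieve(query, documents, top_k=2):
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc.lower().split())), doc)
        for doc in documents
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def augment(query, retrieved):
    """Combine the retrieved passages and the question into one prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieved)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def generate(prompt):
    """Stand-in for an LLM call; a real system sends the prompt to a model."""
    return f"[model response grounded in a prompt of {len(prompt)} characters]"

knowledge_base = [
    "The firm's leave policy grants 25 days of annual leave.",
    "Client files must be stored in the document management system.",
    "Invoices are issued on the first business day of each month.",
]

query = "How many days of annual leave does the leave policy grant?"
retrieved = retrieve(query, knowledge_base)
answer = generate(augment(query, retrieved))
print(retrieved[0])  # the leave-policy document ranks first
```

In a production setup, only the retrieval and generation components change (vector database, embedding model, LLM API); the overall retrieve-augment-generate flow stays the same.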

This approach solves several common problems with standard LLMs: it a) reduces hallucinations, because the model bases its answers on actual retrieved text, b) allows the system to access current information beyond the model’s training cutoff date, and c) lets you use domain-specific knowledge without having to retrain the entire model. RAG is particularly useful for applications like customer support systems that need company-specific information. It is also useful for research assistants that work with scientific literature, or in any scenario where you need accurate answers based on a particular knowledge base.

Now, when you start researching retrieval augmented generation, you will often encounter the terms pipes or pipelines. These refer to the sequence of processing steps that transform a user’s query into a final response; they are essentially the workflow or data flow that connects the different components of a RAG system. The “pipe” metaphor comes from Unix pipes, where data flows from one process to another.

Different RAG implementations can have varying pipeline architectures. Some are simple with just query, retrieve, and generate stages. Others are complex with multiple retrieval steps, feedback loops, or parallel processing paths.

What are the benefits?

RAG offers several benefits that make it attractive for real-world applications.

The fact that it offers access to current and specific information is perhaps the most obvious advantage. Since the model retrieves information from your own database or documents, it can work with data that is a) more recent than its training cutoff or b) highly specialized knowledge that wasn’t in its original training data. This means companies can get accurate answers about their latest policies, recent research papers, or proprietary information. Depending on how you set it up, a law firm’s RAG system can have access to its legal documentation, knowledge base, case files, and/or documents.

As mentioned in the introduction, reduced hallucinations are another major benefit. When language models generate answers purely from their training data, they sometimes confidently state incorrect information. RAG grounds the model’s responses in actual retrieved documents, making it cite or base its answers on real sources rather than just making things up. The result is output that is more reliable and trustworthy.

Another significant benefit is cost-effectiveness. With RAG, you don’t need to fine-tune or retrain large language models every time your information changes. Instead, you simply update your document database, and the RAG system will retrieve the new information. This is far cheaper and faster than retraining models, which requires substantial computational resources and technical expertise.

RAG also addresses the issues of transparency and traceability because you can see which documents the system retrieved to answer a question. This makes it easier to verify answers, debug problems, and build trust with users who can check the sources themselves.

A final benefit is referred to as domain adaptability. It means that you can quickly deploy the same base model across different domains or use cases by simply swapping out the document collection it retrieves from. One model can serve medical applications, legal research, or customer support just by changing the underlying knowledge base.

Retrieval augmented generation with Copilot & SharePoint

For law firms that use Copilot and SharePoint, it is interesting to know that the two can be combined to enable RAG responses. Microsoft has made this integration quite powerful.

How does it work? Microsoft 365 Copilot offers a retrieval API that allows developers to ground generative AI responses in organizational data stored in SharePoint, OneDrive, and Copilot connectors. This means you can build custom AI solutions that retrieve relevant text snippets from SharePoint without needing to replicate or re-index the data elsewhere. The API understands user context and intent, performs query transformations, and returns highly relevant results from your Microsoft 365 content.

This approach offers several advantages for RAG implementations. You don’t need to set up separate vector databases: you can skip the traditional RAG setup of embedding, chunking, and indexing documents. The API automatically respects existing access controls and governance policies, which ensures security and compliance. Additionally, you can combine SharePoint data with other Microsoft 365 and third-party sources to create richer, more comprehensive responses.
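As a rough illustration of what such a grounding call might look like, the sketch below assembles a request for the retrieval API. The endpoint URL and the payload field names used here are assumptions for illustration only; the actual contract should be taken from Microsoft’s documentation.

```python
# Hypothetical sketch of grounding a prompt via the Microsoft 365 Copilot
# retrieval API over SharePoint content. The endpoint path and payload
# field names below are ASSUMPTIONS for illustration, not verified details.

ASSUMED_ENDPOINT = "https://graph.microsoft.com/beta/copilot/retrieval"

def build_retrieval_request(query, data_source="sharePoint", max_results=5):
    """Assemble a JSON body for a retrieval call (field names assumed)."""
    return {
        "queryString": query,
        "dataSource": data_source,
        "maximumNumberOfResults": max_results,
    }

body = build_retrieval_request("What is our standard indemnity clause?")

# An authenticated Microsoft Graph client would then POST `body` to
# ASSUMED_ENDPOINT and pass the returned text snippets to the model as
# grounding context, along the lines of:
#   response = session.post(ASSUMED_ENDPOINT, json=body)

print(body["queryString"])
```

Because the API applies the caller’s existing Microsoft 365 permissions, the retrieved snippets only ever include content that the user is already allowed to see.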

For personal experimentation

If you would like to first experiment on your own, you can try Google’s NotebookLM, which implements RAG technology. It’s an AI-powered research and writing assistant that helps users summarize and understand information from uploaded sources or specific websites.


 

Legal chatbots in 2025

We have dedicated articles to legal chatbots in the past, in 2016 and 2019. It is time for an update. In this article, we discuss trends and adoption of legal chatbots, as well as existing regulation. Then we look at legal chatbots for consumers and legal chatbots for law firms. We do so for the US (because it is still the market leader), the UK, and the EU.

Trends and adoption

The US has seen rapid growth of bots and AI agents in law firms: AI adoption in US law firms surged from 19% in 2023 to 79% in 2024, with chatbots playing a central role. This market expansion is still ongoing: the US legal tech market is projected to reach $32.54 billion by 2026, with chatbots as one of the main drivers.

In the UK, adoption is most advanced in large, business-to-business (B2B) law firms, where chatbots are integrated with legal analytics, project management, and contract management systems. The business-to-consumer (B2C) market, in contrast, is slower to adopt. Legal chatbots are most popular in firms with large-scale, commoditized services. Adoption is slowed by a lack of awareness and uncertainty about the role of AI: over one-third of UK legal professionals remain uncertain about the application of generative AI and chatbots in legal work.

In the EU, on the other hand, we are witnessing increasing adoption. There is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs. Chatbots are also seen as tools to improve access to justice, particularly for underserved populations and cross-border matters. At the same time, ethical and legal debates are ongoing, as there are concerns about accuracy, liability, and bias in AI-generated legal advice.

Regulation

In recent years, there has been a move towards regulating the use of AI, which also affects the use of legal chatbots. There is a need for transnational regulation, but thus far, each region just does its own thing.

In the US, we are confronted with fragmented regulation. The US lacks a comprehensive federal AI law. As a result, regulation is piecemeal: we are dealing with a) state-level initiatives and b) professional (ethical) conduct rules that guide how lawyers can use AI. When it comes to legal chatbots specifically, there is a requirement for professional oversight. In other words, chatbots cannot independently practice law, and human supervision is required to avoid unauthorized practice and to ensure accuracy. And of course, law firms must consider privacy and security when using legal bots. Compliance with privacy laws is essential, especially when handling sensitive client data.

It is worth noting that the FTC (Federal Trade Commission) has made clear that bots cannot market themselves as “robot lawyers” or a substitute for licensed counsel without substantiation. Its 2024 enforcement against DoNotPay (a consumer rights bot we discussed in previous articles) resulted in a $193,000 penalty and strict advertising restrictions. This FTC ruling is widely cited as the line in the sand for consumer legal AI claims.

Furthermore, the American Bar Association’s first formal opinion on generative AI (Formal Opinion 512, 2024) says lawyers must a) understand the capabilities and limits of AI, b) protect confidentiality, c) supervise outputs, and d) be candid with courts and clients. They do not need to be “AI experts,” but they can’t delegate professional judgment to a bot. Several bar associations and courts have issued similar guidance.

The UK relies on flexible, sector-specific laws and regulation, with a focus on transparency, explainability, and data protection (UK GDPR). In addition, legal professionals must ensure chatbots comply with professional ethical standards, including confidentiality and competence.

In the EU, we find regulation on both the EU level, as well as on the national level. On the EU level, the GDPR and the EU AI Act are the most important regulations. The GDPR has strict data privacy requirements which also apply to chatbot operations, especially with sensitive legal data. The EU AI Act introduces risk-based regulation, with high-risk applications (like legal advice) facing stricter requirements for transparency, accuracy, and human oversight.

Apart from the EU regulations, we also find that some National Bar Associations have issued their own regulations. As a result, in some countries only licensed lawyers can provide legal advice. This effectively limits the chatbot scope and/or requires professional supervision.

Legal chatbots for consumers

In previous articles on legal chatbots (links in the introduction), we mainly discussed legal chatbots for consumers. What they all have in common is that they facilitate access to legal information and democratize legal knowledge, making it more accessible to the public. Overall, there is still a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs.

Legal chatbots for law firms

Apart from chatbots for consumers, in recent years we have also witnessed an increase in the number of legal chatbots for law firms. What are they used for?

  • Automation of routine tasks: chatbots automate legal research, contract review, and administrative work.
  • Document automation: bots are assisting lawyers with the creation and review of standard legal documents.
  • Legal research: AI chatbots can scan and summarize large volumes of legal documents and precedents rapidly.
  • Client engagement and intake: they are also used to handle initial queries, provide information, and schedule appointments, and they can direct clients to appropriate services or professionals.
  • Provide a better consumer experience: some law firms use their own legal chatbots to offer consumer services. By doing so, they enhance accessibility in areas like small claims, tenancy issues, and basic legal advice.

Conclusion

Legal chatbots have become an essential part of legal services in the US, UK, and Europe. Big law firms and routine legal services have been the quickest to adopt these technologies, but now we’re seeing more tools that help everyday people access legal help.

Regulatory frameworks are evolving rapidly, with the EU leading in comprehensive risk-based regulation, the UK favouring sector-specific guidance, and the US maintaining a fragmented, state-driven approach. Across all regions, the focus is on balancing innovation with ethical, professional, and data privacy safeguards.

At present, the US is still leading the way when it comes to legal chatbots. Most research/drafting bots originate in the US (Thomson Reuters, Lexis, Harvey, Bloomberg). The UK, on the other hand, is presenting itself as a contract-review hub: tools like Luminance and Robin AI grew out of the UK’s startup ecosystem. Continental European firms use a mix of US/UK platforms under GDPR controls, but also homegrown tools like ClauseBase and Legito for contract/document automation.

 


 

Legal issues with stablecoins

In the previous article, we talked about what stablecoins are, why they matter, and what different types of stablecoins there are. In this follow-up article, we look at the main legal issues: qualification issues, new regulatory frameworks, and several other risks and legal issues with stablecoins.

Qualification issues with stablecoins

The legal qualification of stablecoins remains one of the most debated issues, as they do not fit neatly into existing legal categories. The core challenge lies in determining whether stablecoins should be treated as money, securities, commodities, or something else entirely. This classification has significant implications for which regulators have jurisdiction, and which legal rules apply. In many jurisdictions, a key issue is whether a stablecoin qualifies as a security.

In the European Union, the Markets in Crypto-Assets Regulation (MiCA) resolves this ambiguity to a large extent by creating new categories specifically for stablecoins: “e-money tokens” and “asset-referenced tokens”. E-money tokens are those that are pegged to a single currency and resemble traditional electronic money under the E-Money Directive. Asset-referenced tokens are broader and can include tokens backed by baskets of currencies or commodities. This approach avoids trying to fit stablecoins into outdated categories like securities or commodities and instead regulates them on their own terms.

In the UK, the Financial Conduct Authority (FCA) does not generally treat fiat-backed stablecoins as securities unless they exhibit investment characteristics. However, the upcoming regulatory framework under the Financial Services and Markets Act 2023 will grant the Bank of England and the FCA more tools to supervise stablecoins used for payments. At present (August 2025), they have not published any regulations yet.

In the United States, the Securities and Exchange Commission (SEC) has suggested that certain stablecoins, particularly those offering interest-bearing features or tied to investment mechanisms, may fall under the definition of securities. However, fiat-backed payment stablecoins like USDC or USDP, which simply maintain a 1:1 peg to a currency and do not generate returns for holders, are more often considered outside the scope of securities regulation. At the same time, the Commodity Futures Trading Commission (CFTC) has taken the position that some stablecoins may qualify as commodities. In a 2023 enforcement action, the CFTC referred to tethered assets like USDT as commodities under the Commodity Exchange Act. This has added to the regulatory uncertainty in the U.S., where overlapping authorities and inconsistent classifications have left issuers and users in legal limbo.

In essence, the legal qualification of stablecoins hinges on their structure and function. If they are used for payments and are fully backed by fiat currency reserves, they are more likely to be treated as payment instruments or e-money. If they are algorithmic, generate returns, or have speculative components, they may fall under securities or commodities laws. Regulatory frameworks are required to resolve the ambiguity and uncertainty stablecoin issuers face. Which brings us to …

Regulatory Frameworks

These days, the regulation of stablecoins is rapidly evolving. Regulatory initiatives focus on concerns about consumer protection, financial stability, and the risks of unregulated digital assets. Both the European Union and the United States have recently introduced or implemented significant legislative frameworks to address these concerns.

As mentioned above, in the European Union, stablecoins fall under the Markets in Crypto-Assets Regulation (MiCA). MiCA was formally adopted in 2023 and began phasing in from June 2024. MiCA distinguishes between different types of crypto assets. It introduces specific provisions for “e-money tokens” (which are pegged to a single fiat currency) and “asset-referenced tokens” (which may be backed by a basket of assets or commodities). Issuers of these stablecoins are required to obtain authorization from national competent authorities and must meet stringent governance, capital, and reserve requirements. MiCA also imposes obligations on crypto-asset service providers, ensuring oversight of issuance, custody, and trading. The European Central Bank has highlighted the importance of this framework to prevent the fragmentation of the digital finance market and to protect consumers.

In the United States, after years of regulatory ambiguity, Congress has recently made progress toward a unified approach. In July 2024, the Clarity for Payment Stablecoins Act was passed by the House Financial Services Committee and gained bipartisan traction. This bill focuses specifically on payment stablecoins, such as those issued by Circle (USDC) and Paxos (USDP), and introduces a clear licensing regime. Under this legislation, stablecoin issuers must either be state-licensed nonbank entities or federally approved institutions regulated by the Federal Reserve. The bill also imposes strict reserve backing requirements, limits on rehypothecation of reserve assets, and detailed disclosure obligations to increase transparency. In July 2025, the GENIUS Act – the first federal regulatory framework for stablecoins – was passed in Congress. It creates a new licensing regime for payment stablecoin issuers and is the first major crypto-related legislation to be passed by both chambers of Congress. The bill was signed into law on 18 July 2025.

Regulators in both areas understood that stablecoins might have a big impact once they become widely used. In the EU, MiCA includes special oversight mechanisms for “significant” stablecoins, allowing the European Banking Authority to step in. Similarly, in the U.S., the President’s Working Group on Financial Markets believes the federal government needs to regulate companies that issue stablecoins, especially the big ones that process lots of payments.

Outside the EU and U.S., countries like Japan and the UK are also catching up. Japan already passed a law in 2022 that allows only licensed banks and trust companies to issue stablecoins, while the UK’s Financial Services and Markets Act 2023 granted the Bank of England new powers to oversee systemic digital settlement assets, including fiat-backed stablecoins.

Other risks and legal issues with Stablecoins

Apart from classification and regulatory frameworks, stablecoins raise several other legal issues and risks. These have to do with financial stability, consumer protection, monetary sovereignty, and data governance. These concerns are particularly significant given the potential for stablecoins to scale rapidly across borders and integrate with mainstream financial services.

A first issue is operational risk, especially the risk of technical failure, cyberattacks, or fraud within the stablecoin infrastructure. Since most stablecoins rely on centralized issuers or custodians, the reliability of reserve management and smart contracts is critical. A failure in these systems could cause a loss of peg, mass redemptions, or loss of user funds. In the previous article, we mentioned the 2022 collapse of TerraUSD, an algorithmic stablecoin. Its collapse exposed how vulnerabilities in design can destabilize not only a single token but also the broader market. The Financial Stability Board (FSB) has emphasized the importance of robust governance and risk management frameworks to prevent such collapses. Its October 2023 report outlines these concerns in detail.

Another legal concern is redemption rights. Users need clear, enforceable rights to redeem stablecoins for fiat currency on demand. In practice, many stablecoin issuers include disclaimers or reserve the right to delay or deny redemptions under certain conditions. This raises questions about contractual enforceability and consumer protection, particularly in jurisdictions without clear legal protections for token holders. The IMF has raised similar concerns in its global policy papers, especially when stablecoins operate across borders where legal remedies may be unclear or unenforceable.

There are also anti-money laundering (AML) and counter-terrorist financing (CTF) concerns. Stablecoins offer a relatively stable value and fast, borderless transfers, which make them attractive for illicit use. Many stablecoin platforms operate with limited KYC (Know Your Customer) procedures or allow anonymous transfers via decentralized protocols. Regulators have warned that this can undermine AML frameworks and create enforcement gaps.

Another major legal issue is monetary sovereignty. Central banks have raised concerns that widespread use of privately issued stablecoins could erode control over national currencies and monetary policy, especially in developing countries. If a stablecoin pegged to the US dollar becomes a dominant means of payment in another country, it can cause de facto dollarization and limit a central bank’s ability to manage inflation or respond to economic shocks.

Finally, data privacy and surveillance pose emerging legal and ethical challenges. Stablecoin providers often collect and process sensitive personal and financial data. In jurisdictions like the EU, such processing is subject to the General Data Protection Regulation (GDPR). But questions remain about how decentralized systems can comply with data minimization, user consent, and the right to erasure. Moreover, law enforcement access to stablecoin transaction data creates a tension between privacy rights and regulatory compliance.

Together, these issues show that the legal landscape for stablecoins involves much more than just classification or licensing. Since stablecoins touch on financial law, contracts, data protection, monetary policy, and consumer rights, both companies and users face significant legal risks until better, more coordinated regulations emerge worldwide.

 


An introduction to stablecoins for lawyers

Stablecoins are in the news on a daily basis. Major banks are considering releasing their own stablecoins, and governments have started regulating them. On 18 July 2025, for example, the US government signed a law creating a regulatory regime for dollar-pegged stablecoins. This is the first of two articles on stablecoins. In this article, we answer the following questions: What are stablecoins? Why do they matter? What are the different types of stablecoins? And how do they relate to other cryptocurrencies? In the next article, we will focus on the legal aspects of stablecoins.

What are stablecoins?

Wikipedia describes a stablecoin as “… a type of cryptocurrency where the value of the digital asset is supposed to be pegged to a reference asset, which is either fiat money, exchange-traded commodities (such as precious metals or industrial metals), or another cryptocurrency.”

So, what are we talking about? Stablecoins are a category of digital assets specifically designed to maintain a stable value. They achieve this by being pegged to a reference asset, typically a fiat currency like the US dollar or the euro. Traditional cryptocurrencies such as Bitcoin or Ethereum are known for their price volatility. Stablecoins, on the other hand, aim to offer the benefits of blockchain technology, such as fast, borderless, and programmable transactions, without the associated price fluctuations.

Why do they matter?

Stablecoins have become the backbone of many digital asset transactions. They offer the benefits of cryptocurrency, like speed, low cost, and global reach, while avoiding its biggest flaw: volatility. With stable value, they make it possible to trade, lend, borrow, and transfer money on blockchain platforms without worrying about large price swings. This makes them attractive not just to crypto users, but also to businesses, fintech companies, and even central banks.

Stablecoins are already widely used on cryptocurrency exchanges as quote currencies (e.g., USDT or USDC pairs). They enable traders to move in and out of volatile assets without relying on fiat bank transfers. In cross-border payments, stablecoins enable near-instant remittances with lower fees compared to traditional money transfer services. This is especially the case in regions with unstable banking systems. Additionally, stablecoins are used for borrowing, lending, and staking. They allow users to earn interest or participate in decentralized governance while maintaining exposure to a relatively stable asset. They have also become critical tools for avoiding capital controls and hyperinflation in countries with unstable currencies.

Let’s put things in perspective: in 2024, over 70% of trading volume on major cryptocurrency exchanges was settled using stablecoins. Beyond trading, stablecoins are now used for cross-border payments, employee payroll, and remittances. Some stablecoins are accepted by merchants and payment processors, integrating them into the real economy. Stablecoins have even seen growing use in countries with unstable currencies, where they serve as a hedge against inflation. This trend is expected to grow, especially in emerging markets.

Notably, the total supply of stablecoins has grown significantly: as of mid-2025, the combined market capitalization of major stablecoins exceeds 150 billion USD.

What are the different types of stablecoins?

Wikipedia mentions three different types, but the literature listed in the sources below also discusses a fourth one. Stablecoins can be grouped into distinct types based on how they maintain their price stability. Each approach reflects a different mechanism for achieving price parity with a target asset.

Fiat backed

Fiat-collateralized stablecoins are backed 1:1 by reserves held in traditional financial institutions. These reserves can include bank deposits, short-term government securities, or other low-risk instruments. When a user purchases a stablecoin, the issuer stores an equivalent amount of fiat currency in reserve. Examples of this type include USD Coin (USDC), Tether US (USDT) and Tether EU (EURT). These coins are generally considered the most stable, though they rely heavily on the trustworthiness and transparency of the issuer. Concerns over reserve backing have occasionally led to regulatory scrutiny, particularly in Tether US’ case. For example, in 2021, the US Commodity Futures Trading Commission (CFTC) fined Tether $41 million for misleading statements about its reserves.
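The 1:1 issuance-and-redemption mechanism can be sketched in a few lines of Python. This is a deliberately simplified illustration (the class and method names are our own; real issuers hold mixed reserve assets and use off-chain custody):

```python
class FiatBackedIssuer:
    """Toy model of a fiat-collateralized stablecoin issuer."""

    def __init__(self):
        self.reserve_usd = 0.0   # fiat held in custody
        self.supply = 0.0        # stablecoins in circulation

    def mint(self, usd_deposited: float) -> None:
        """A customer deposits fiat; the issuer mints an equal amount of coins."""
        self.reserve_usd += usd_deposited
        self.supply += usd_deposited

    def redeem(self, coins: float) -> float:
        """Coins are burned and the equivalent fiat is paid out."""
        if coins > self.supply:
            raise ValueError("cannot redeem more than the circulating supply")
        self.supply -= coins
        self.reserve_usd -= coins
        return coins

    def fully_backed(self) -> bool:
        """The invariant regulators and auditors care about."""
        return self.reserve_usd >= self.supply
```

The model makes clear why reserve transparency matters: the peg holds only as long as `fully_backed()` stays true, which is exactly what the attestations demanded by regulators are meant to verify.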

Cryptocurrency backed

Crypto-collateralized stablecoins are backed by other cryptocurrencies rather than fiat. These stablecoins are typically overcollateralized to account for the volatility of their underlying assets. A prominent example is DAI, which is issued by the MakerDAO protocol and backed by Ethereum and other assets deposited in smart contracts. Users lock up collateral that exceeds the value of the DAI issued, helping to maintain its dollar peg. This model removes the need for centralized custodians but introduces complexity and vulnerability during market downturns, as seen in March 2020 when rapid price drops led to liquidations within the Maker system.
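The over-collateralization arithmetic can be illustrated with a short Python sketch. The 150% ratio and the function names here are illustrative assumptions, not MakerDAO's actual parameters:

```python
def max_stablecoin_issuable(collateral_value_usd: float,
                            collateral_ratio: float = 1.5) -> float:
    """Maximum stablecoin that can be minted against locked collateral.

    A collateral_ratio of 1.5 means 150% over-collateralization.
    """
    return collateral_value_usd / collateral_ratio


def is_undercollateralized(collateral_value_usd: float,
                           debt_usd: float,
                           collateral_ratio: float = 1.5) -> bool:
    """A position becomes eligible for liquidation when its collateral
    value falls below the required ratio times its outstanding debt."""
    return collateral_value_usd < debt_usd * collateral_ratio


# Locking $15,000 of crypto at a 150% ratio allows minting up to $10,000.
print(max_stablecoin_issuable(15_000))         # 10000.0
# If the collateral drops to $12,000 against $10,000 of debt, liquidation looms.
print(is_undercollateralized(12_000, 10_000))  # True
```

This is the dynamic behind the March 2020 Maker liquidations mentioned above: a sharp drop in the collateral's price pushes many positions below the required ratio at once.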

Algorithmic stablecoins

Algorithmic stablecoins use smart contracts and market incentives to maintain a peg without relying on collateral. Instead of being backed by assets, these coins regulate supply algorithmically. When the price drops below the peg, the protocol may reduce supply by incentivizing users to burn coins. If the price rises above the peg, new coins are minted. This model is theoretically elegant but has proved highly unstable in practice. The most notorious failure in this category was TerraUSD (UST), which lost its peg in May 2022 and collapsed entirely, wiping out over $40 billion in market value. This event highlighted the systemic risks algorithmic stablecoins pose when trust and liquidity disappear.
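A minimal sketch of such a supply rule, assuming a simple proportional "rebase" with a damping factor k (our own simplification; real protocols such as Terra used a more complex mint-and-burn arbitrage with a sister token rather than a direct rebase):

```python
def rebase_supply(market_price: float, current_supply: float,
                  peg: float = 1.0, k: float = 0.5) -> float:
    """One supply-adjustment step of a hypothetical algorithmic stablecoin.

    Above the peg, supply expands (minting) to push the price down;
    below the peg, supply contracts (burning) to push it back up.
    k < 1 damps the adjustment to avoid overshooting.
    """
    deviation = (market_price - peg) / peg
    return current_supply * (1 + k * deviation)


# Trading at $1.10: mint roughly 5% more coins.
print(round(rebase_supply(1.10, 1_000_000)))  # 1050000
# Trading at $0.90: burn roughly 5% of the supply.
print(round(rebase_supply(0.90, 1_000_000)))  # 950000
```

The fragility discussed above is visible even in this toy model: contraction only restores the peg if holders believe the coin will recover; once confidence and liquidity disappear, burning supply cannot stop a run.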

Commodity backed

In addition to these main types, a niche category exists for commodity-backed stablecoins. These are pegged to physical goods such as gold or oil. Paxos Gold (PAXG), for example, is backed by physical gold stored in London vaults, allowing users to own fractionalized gold on the blockchain. These stablecoins combine the technological advantages of digital tokens with the perceived safety of tangible assets.

How they relate to other cryptocurrencies

In the broader cryptocurrency ecosystem, stablecoins play an essential role. Traditional cryptocurrencies like Bitcoin are often used as speculative assets or long-term stores of value (sometimes called “digital gold”). Stablecoins, on the other hand, serve as practical tools for day-to-day transactions, on-chain liquidity, and as units of account within decentralized finance (DeFi) platforms. Their price stability makes them an ideal medium for settling trades, providing collateral, and earning yield in lending protocols. They thus function as a much-needed bridge between volatile cryptocurrencies and traditional financial systems, allowing users to store and transfer value on blockchain networks with a degree of price predictability.


Sources:

Using AI for legal research

How safe is using AI for legal research? On the one hand, AI is making rapid progress and keeps getting better, and the arrival of a new generation of AI agents will only speed up that process. On the other hand, we keep seeing headlines about law firms being fined for using AI that referred to non-existent legislation and jurisprudence. In this article, we look at a) how AI is reshaping legal research, b) the risks and accuracy concerns of using AI in legal research, and c) possible mitigation strategies. Finally, d) we look at using AI for legal research on non-US law.

How is AI reshaping legal research – benefits

AI has had a significant impact on legal research, and generative AI has certainly sped up that process. Many law firms use generative AI to assist with their legal research. It is easy and convenient, as they can ask questions in natural language rather than having to master a query language. And now that most generative AIs offer more advanced research agents that can provide sources, AI has become even more attractive. So, AI is significantly reshaping legal research in several impactful ways, most of them beneficial.

One of the most noticeable changes is the enhanced speed and efficiency it brings. AI tools are capable of sifting through vast volumes of legal data in seconds, identifying relevant information much faster than a human could. This efficiency saves lawyers considerable time and resources.

Beyond speed, AI can also improve the accuracy and depth of insight in legal research. By analysing large datasets, AI can detect patterns and extract insights that might go unnoticed by human researchers. It can also flag potential errors or inconsistencies in legal documents, helping to ensure the accuracy and reliability of the information used. But caution is needed, as we will discuss below.

Another major advantage is the broader access to legal information that AI provides. These tools can draw from a wide array of sources, including statutes, case law, legal journals, and specialized databases. This comprehensive reach allows lawyers to develop a fuller understanding of the legal issues they face.

Natural Language Processing (NLP) and machine learning further enhance AI’s capabilities in the legal field. NLP enables AI to comprehend the meaning within legal texts. This allows it to extract key information and identify relevant precedents. Meanwhile, machine learning algorithms can analyse historical case data to predict outcomes. This gives lawyers valuable insights into the strengths and weaknesses of their cases.

AI is also increasingly being integrated into established legal research platforms. This integration improves the efficiency and comprehensiveness of legal research.

However, as AI becomes more embedded in legal practice, responsible usage is essential. Ensuring accuracy, upholding ethical standards, and maintaining regulatory compliance are critical. Lawyers must treat AI as a supportive tool rather than a standalone solution, and it remains vital to verify any information generated by AI systems, because there are still considerable risks involved in using AI for legal research.

Risks and accuracy concerns of using AI in legal research

In a recent case in California, a judge found that nine of the twenty-seven quoted sources were non-existent. The two law firms involved (one had delegated research to the other) were fined 31,000 USD. If you follow the news on legal AI, you will know this is a common problem. Apart from that, AI output is still often biased. Let’s have a closer look at both issues.

Accuracy concerns

AI systems can produce inaccurate, incomplete, or misleading legal information. This is particularly the case when dealing with complex cases, with nuanced legal concepts, or when legislation or jurisprudence has changed recently.

Even worse are AI “hallucinations”. As the example above shows, AI can generate plausible but factually incorrect information. It is therefore crucial to verify all AI-generated output against credible sources. The California example highlights how serious this risk is: one in three of the quoted sources did not exist.

The example also illustrates the risk of reliance on AI without oversight. You cannot assume the AI knows what it’s doing. Over-reliance on AI without thorough human review can lead to errors that compromise case outcomes and erode client trust.

Bias and ethical concerns

In previous articles, we pointed out that AI inherits and reflects all the biases of the data pool that it was trained upon. This can lead to unfair or discriminatory legal outcomes. So, bias in AI algorithms is a first concern.

Many AI systems cannot explain how they reached their conclusions, or they fail to mention sources. Lack of transparency and accountability, therefore, is a second issue. The algorithms used by AI systems can be opaque, making it difficult to understand how decisions are made and hold the AI system accountable.

Clients may not fully understand the role of AI in their legal representation. This can easily undermine their trust. Clear communication is essential.

As with any online tool that shares client information, there are data privacy and confidentiality concerns when lawyers use AI.

Finally, there is the aspect of professional responsibility. Lawyers have a duty to supervise AI-generated work, ensuring it is accurate and ethical. They also must communicate with clients about the use of AI tools.

Mitigation strategies

It is possible to counteract these risks by implementing some mitigation strategies.

  • Always verify AI-generated results against credible legal databases and primary sources.
  • Actively oversee and review AI-generated work to ensure accuracy, as well as ethical compliance.
  • Be transparent with clients about the use of AI tools.
  • Implement robust data security measures to protect client information and comply with privacy regulations.
  • Adhere to ethical guidelines and professional responsibilities when using AI in legal practice.

What about using AI for legal research on non-US law?

Most of the advances in generative AI are being made in the US, and the EU is catching up. How well do the generative AI platforms perform when it comes to non-US law? And are they available in other languages?

Let’s start with the language question: all the major generative AI engines are available in Dutch and French.

Then, what about non-US law? We ran some tests with European law, more specifically on the GDPR, and overall, these tests went well. We did not test on recent legislation or jurisprudence.

We also briefly ran some tests with Belgian law. We thought art. 1382 of the Civil Code would be an interesting test case, given that it was recently replaced by a new Book 6 on extra-contractual liability. We ran the test on ChatGPT, Copilot, Gemini, Claude, Perplexity, Grok, and You.com. Only four out of seven pointed out that art. 1382 CC had been replaced: ChatGPT, Copilot, Gemini, and Grok. The other three (Claude, Perplexity, and You.com) did not mention Book 6 on extra-contractual liability at all.

So, while caution and supervision are already needed for US and EU law, it is even more the case for the law of EU member states, where several generative AI platforms were not (yet) aware of recent legislation.

Conclusion

Using AI for legal research holds promise, but supervision is still very much needed. The examples above show that AI tools can still hallucinate and that they may not be aware of recent changes in legislation or jurisprudence.


Sources: