
Legal chatbots in 2025

We have in the past dedicated articles to legal chatbots, in 2016 and in 2019. It is time for an update. In this article, we discuss trends and adoption of legal chatbots, as well as existing regulation. Then we look at legal chatbots for consumers and legal chatbots for law firms. We will do so for the US (because it is still the market leader), the UK, and the EU.

Trends and adoption

The US has seen rapid growth of bots and AI agents in law firms: AI adoption in US law firms surged from 19% in 2023 to 79% in 2024, with chatbots playing a central role. This market expansion is still ongoing: the US legal tech market is projected to reach $32.54 billion by 2026, with chatbots as one of the main drivers.

In the UK, adoption is most advanced in large, business-to-business (B2B) law firms, where chatbots are integrated with legal analytics, project management, and contract management systems. The business-to-consumer (B2C) market, in contrast, is slower to adopt. Legal chatbots are most popular in firms with large-scale, commoditized services. Chatbot and AI adoption is slowed down by a lack of awareness and uncertainty about the role of AI: over one-third of UK legal professionals remain uncertain about how generative AI and chatbots apply to legal work.

In the EU, on the other hand, adoption is increasing. There is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs. Chatbots are also seen as tools to improve access to justice, particularly for underserved populations and in cross-border matters. At the same time, ethical and legal debates continue, with concerns about accuracy, liability, and bias in AI-generated legal advice.

Regulation

In recent years, there has been a move towards regulating the use of AI, which also affects the use of legal chatbots. There is a need for transnational regulation, but thus far, each region has taken its own approach.

In the US, we are confronted with fragmented regulation. The US lacks a comprehensive federal AI law, so regulation is piecemeal: we are dealing with a) state-level initiatives and b) professional (ethical) conduct rules that guide how lawyers can use AI. When it comes to legal chatbots specifically, professional oversight is required. In other words, chatbots cannot independently practice law, and human supervision is required to avoid unauthorized practice and to ensure accuracy. And of course, law firms must consider privacy and security when using legal bots: compliance with privacy laws is essential, especially when handling sensitive client data.

It is worth noting that the FTC (Federal Trade Commission) has made clear that bots cannot market themselves as “robot lawyers” or a substitute for licensed counsel without substantiation. Its 2024 enforcement against DoNotPay (a consumer rights bot we discussed in previous articles) resulted in a $193,000 penalty and strict advertising restrictions. This FTC ruling is widely cited as the line in the sand for consumer legal AI claims.

Furthermore, the American Bar Association’s first formal opinion on generative AI (Formal Opinion 512, 2024) says lawyers must a) understand the capabilities and limits of AI, b) protect confidentiality, c) supervise outputs, and d) be candid with courts and clients. They do not need to be “AI experts,” but they can’t delegate professional judgment to a bot. Several bar associations and courts have issued similar guidance.

The UK relies on flexible, sector-specific laws and regulation, with a focus on transparency, explainability, and data protection (UK GDPR). In addition, legal professionals must ensure that chatbots comply with professional ethical standards, including confidentiality and competence.

In the EU, we find regulation at both the EU and the national level. At the EU level, the GDPR and the EU AI Act are the most important regulations. The GDPR imposes strict data privacy requirements that also apply to chatbot operations, especially with sensitive legal data. The EU AI Act introduces risk-based regulation, with high-risk applications (like legal advice) facing stricter requirements for transparency, accuracy, and human oversight.

Apart from the EU regulations, some national bar associations have issued their own rules. As a result, in some countries only licensed lawyers may provide legal advice, which effectively limits what chatbots can do and/or requires professional supervision.

Legal chatbots for consumers

In previous articles on legal chatbots (links in the introduction), we mainly discussed legal chatbots for consumers. What they all have in common is that they facilitate access to legal information: they democratize legal knowledge, making it more accessible to the public. Overall, there still is a steady rise in chatbot use for routine legal tasks, especially among consumers and SMEs.

Legal chatbots for law firms

Apart from chatbots for consumers, in recent years we have also witnessed an increase in the number of legal chatbots for law firms. What are they used for?

  • Automation of routine tasks: chatbots automate legal research, contract review, and administrative work.
  • Document automation: bots are assisting lawyers with the creation and review of standard legal documents.
  • Legal research: AI chatbots can scan and summarize large volumes of legal documents and precedents rapidly.
  • Client engagement and intake: they are also used to handle initial queries, provide information, and schedule appointments, and they can direct clients to appropriate services or professionals.
  • Better consumer experience: some law firms use their own legal chatbots to offer consumer services. By doing so, they enhance accessibility in areas like small claims, tenancy issues, and basic legal advice.

Conclusion

Legal chatbots have become an essential part of legal services in the US, UK, and Europe. Big law firms and routine legal services have been the quickest to adopt these technologies, but now we’re seeing more tools that help everyday people access legal help.

Regulatory frameworks are evolving rapidly, with the EU leading in comprehensive risk-based regulation, the UK favouring sector-specific guidance, and the US maintaining a fragmented, state-driven approach. Across all regions, the focus is on balancing innovation with ethical, professional, and data privacy safeguards.

At present, the US is still leading the way when it comes to legal chatbots. Most research and drafting bots originate in the US (Thomson Reuters, Lexis, Harvey, Bloomberg). The UK, on the other hand, is presenting itself as a contract-review hub: tools like Luminance and Robin AI grew out of the UK's startup ecosystem. Continental European firms use a mix of US/UK platforms under GDPR controls, but also homegrown tools like ClauseBase and Legito for contract and document automation.

 


Legal issues with stablecoins

In the previous article, we talked about what stablecoins are, why they matter, and what different types of stablecoins there are. In this follow-up article, we look at the main legal issues: qualification issues with stablecoins, the new regulatory frameworks, and some other risks and legal issues.

Qualification issues with stablecoins

The legal qualification of stablecoins remains one of the most debated issues, as they do not fit neatly into existing legal categories. The core challenge lies in determining whether stablecoins should be treated as money, securities, commodities, or something else entirely. This classification has significant implications for which regulators have jurisdiction, and which legal rules apply. In many jurisdictions, a key issue is whether a stablecoin qualifies as a security.

In the European Union, the Markets in Crypto-Assets Regulation (MiCA) resolves this ambiguity to a large extent by creating new categories specifically for stablecoins: “e-money tokens” and “asset-referenced tokens”. E-money tokens are those that are pegged to a single currency and resemble traditional electronic money under the E-Money Directive. Asset-referenced tokens are broader and can include tokens backed by baskets of currencies or commodities. This approach avoids trying to fit stablecoins into outdated categories like securities or commodities and instead regulates them on their own terms.

In the UK, the Financial Conduct Authority (FCA) does not generally treat fiat-backed stablecoins as securities unless they exhibit investment characteristics. However, the upcoming regulatory framework under the Financial Services and Markets Act 2023 will grant the Bank of England and the FCA more tools to supervise stablecoins used for payments. As of August 2025, no such regulations have been published yet.

In the United States, the Securities and Exchange Commission (SEC) has suggested that certain stablecoins, particularly those offering interest-bearing features or tied to investment mechanisms, may fall under the definition of securities. However, fiat-backed payment stablecoins like USDC or USDP, which simply maintain a 1:1 peg to a currency and do not generate returns for holders, are more often considered outside the scope of securities regulation. At the same time, the Commodity Futures Trading Commission (CFTC) has taken the position that some stablecoins may qualify as commodities: in a 2023 enforcement action, the CFTC referred to stablecoins such as Tether (USDT) as commodities under the Commodity Exchange Act. This has added to the regulatory uncertainty in the U.S., where overlapping authorities and inconsistent classifications have left issuers and users in legal limbo.

In essence, the legal qualification of stablecoins hinges on their structure and function. If they are used for payments and are fully backed by fiat currency reserves, they are more likely to be treated as payment instruments or e-money. If they are algorithmic, generate returns, or have speculative components, they may fall under securities or commodities laws.
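To make the distinction concrete, here is a minimal sketch in Python of the classification heuristic just described. It is an illustrative simplification for this article, not a legal test: the parameters are hypothetical, and any real classification depends on the jurisdiction and the full facts of a token's design.

```python
def classify_stablecoin(fiat_backed: bool, used_for_payments: bool,
                        pays_returns: bool, algorithmic: bool) -> str:
    """Rough heuristic mirroring the discussion above; not legal advice."""
    if algorithmic or pays_returns:
        # Yield-bearing or algorithmic designs drift toward
        # securities or commodities regulation.
        return "possibly a security or commodity"
    if fiat_backed and used_for_payments:
        # Fully reserved payment tokens resemble e-money
        # (MiCA's 'e-money token' category).
        return "likely a payment instrument / e-money token"
    return "unclear: depends on jurisdiction and structure"

# A USDC-style token: fully fiat-backed, used for payments, no yield.
print(classify_stablecoin(True, True, False, False))
```

No issuer can rely on a heuristic like this, of course: regulatory frameworks are required to resolve the ambiguity and uncertainty stablecoin issuers face. Which brings us to …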

Regulatory Frameworks

These days, the regulation of stablecoins is rapidly evolving. Regulatory initiatives focus on concerns about consumer protection, financial stability, and the risks of unregulated digital assets. Both the European Union and the United States have recently introduced or implemented significant legislative frameworks to address these concerns.

As mentioned above, in the European Union, stablecoins fall under the Markets in Crypto-Assets Regulation (MiCA). MiCA was formally adopted in 2023 and began phasing in from June 2024. MiCA distinguishes between different types of crypto assets. It introduces specific provisions for “e-money tokens” (which are pegged to a single fiat currency) and “asset-referenced tokens” (which may be backed by a basket of assets or commodities). Issuers of these stablecoins are required to obtain authorization from national competent authorities and must meet stringent governance, capital, and reserve requirements. MiCA also imposes obligations on crypto-asset service providers, ensuring oversight of issuance, custody, and trading. The European Central Bank has highlighted the importance of this framework to prevent the fragmentation of the digital finance market and to protect consumers.

In the United States, after years of regulatory ambiguity, Congress has recently made progress toward a unified approach. In July 2023, the Clarity for Payment Stablecoins Act was passed by the House Financial Services Committee and gained bipartisan traction. This bill focuses specifically on payment stablecoins, such as those issued by Circle (USDC) and Paxos (USDP), and introduces a clear licensing regime. Under this legislation, stablecoin issuers must either be state-licensed nonbank entities or federally approved institutions regulated by the Federal Reserve. The bill also imposes strict reserve backing requirements, limits on rehypothecation of reserve assets, and detailed disclosure obligations to increase transparency. In July 2025, the GENIUS Act – the first federal regulatory framework for stablecoins – was passed in Congress. It creates a new licensing regime for payment stablecoin issuers and is the first major crypto-related legislation to be passed by both chambers of Congress. The bill was signed into law on 18 July 2025.

Regulators in both areas understood that stablecoins might have a big impact once they become widely used. In the EU, MiCA includes special oversight mechanisms for “significant” stablecoins, allowing the European Banking Authority to step in. Similarly, in the U.S., the President’s Working Group on Financial Markets believes the federal government needs to regulate companies that issue stablecoins, especially the big ones that process lots of payments.

Outside the EU and U.S., countries like Japan and the UK are also catching up. Japan already passed a law in 2022 that allows only licensed banks and trust companies to issue stablecoins, while the UK’s Financial Services and Markets Act 2023 granted the Bank of England new powers to oversee systemic digital settlement assets, including fiat-backed stablecoins.

Other risks and legal issues with Stablecoins

Apart from classification and regulatory frameworks, stablecoins raise several other legal issues and risks. These have to do with financial stability, consumer protection, monetary sovereignty, and data governance. These concerns are particularly significant given the potential for stablecoins to scale rapidly across borders and integrate with mainstream financial services.

A first issue is operational risk, especially the risk of technical failure, cyberattacks, or fraud within the stablecoin infrastructure. Since most stablecoins rely on centralized issuers or custodians, the reliability of reserve management and smart contracts is critical. A failure in these systems could cause a loss of peg, mass redemptions, or loss of user funds. In the previous article, we mentioned the 2022 collapse of TerraUSD, an algorithmic stablecoin. Its collapse exposed how vulnerabilities in design can destabilize not only a single token but also the broader market. The Financial Stability Board (FSB) has emphasized the importance of robust governance and risk management frameworks to prevent such collapses; its October 2023 report outlines these concerns in detail.

Another legal concern is redemption rights. Users need clear, enforceable rights to redeem stablecoins for fiat currency on demand. In practice, many stablecoin issuers include disclaimers or reserve the right to delay or deny redemptions under certain conditions. This raises questions about contractual enforceability and consumer protection, particularly in jurisdictions without clear legal protections for token holders. The IMF has raised similar concerns in its global policy papers, especially when stablecoins operate across borders where legal remedies may be unclear or unenforceable.

There are also anti-money laundering (AML) and counter-terrorist financing (CTF) concerns. Stablecoins offer a relatively stable value and fast, borderless transfers, which make them attractive for illicit use. Many stablecoin platforms operate with limited KYC (Know Your Customer) procedures or allow anonymous transfers via decentralized protocols. Regulators have warned that this can undermine AML frameworks and create enforcement gaps.

Another major legal issue is monetary sovereignty. Central banks have raised concerns that widespread use of privately issued stablecoins could erode control over national currencies and monetary policy, especially in developing countries. If a stablecoin pegged to the US dollar becomes a dominant means of payment in another country, it can cause de facto dollarization and limit a central bank’s ability to manage inflation or respond to economic shocks.

Finally, data privacy and surveillance pose emerging legal and ethical challenges. Stablecoin providers often collect and process sensitive personal and financial data. In jurisdictions like the EU, such processing is subject to the General Data Protection Regulation (GDPR). But questions remain about how decentralized systems can comply with data minimization, user consent, and the right to erasure. Moreover, law enforcement access to stablecoin transaction data creates a tension between privacy rights and regulatory compliance.

Together, these issues show that the legal landscape for stablecoins involves much more than just classification or licensing. Since stablecoins touch on financial law, contracts, data protection, monetary policy, and consumer rights, both companies and users face significant legal risks until better, more coordinated regulations emerge worldwide.

 


Legal technology predictions for 2025

At the end of the year and the beginning of a new one, many publications give their predictions for the new year. In this article, we will go over a selection of legal technology predictions for 2025. We can group them into four categories: legal technology predictions that do not involve AI, predictions on legal issues involving AI, predictions on AI in legal services, and finally, some other legal technology predictions on AI.

Legal technology predictions that do not involve AI

While most of the authors focus on the growing impact of AI, there also are legal technology predictions that do not involve it.

A first set of predictions has to do with client demands. Authors anticipate a significant further proliferation of blockchain, cryptocurrencies, and smart contracts. This will result in a growing demand for lawyers who are versed in these matters. Experts also predict that clients’ expectations will keep on rising, and that law firms will have to adapt to that demand. Already, the legal industry is witnessing a shift towards more client-centric services. Overall, experts also predict a growing demand for legal services for SMBs.

A second set of predictions has to do with the investments law firms will be making. Experts predict an overall increase in investments in technology, and more specifically, apart from AI, increases in spending on knowledge management and on cybersecurity.

Cybersecurity remains a critical concern for law firms, especially with the growing reliance on digital tools and AI. The sector is expected to invest more in cyber resilience strategies to counter potential threats, ensuring the protection of sensitive legal data and maintaining client trust. General counsel and chief legal officers need to up their game when it comes to cybersecurity.

Finally, experts expect the billable hour to further decline, and fixed fees and subscription billing to increase.

Predictions on legal issues involving AI

Several authors also focus on legal issues involving AI. On the one hand, there is the topic of regulating AI, and on the other hand, there is the topic of litigation.

Both the EU and the Council of Europe (CoE) published their frameworks on regulating AI. Unlike the EU AI Act, the Council of Europe's Treaty is open to all countries that want to sign up, and more signatories are expected. When it comes to the US, the situation is unclear, as the incoming Trump administration may withdraw from the CoE Treaty. Most experts do not expect the Trump administration to impose its own framework. Several authors do see initiatives at both the state level and the level of local bar associations. The latter may impose ethical rules regarding the use of AI in law firms, especially when it comes to lawyers using generative AI.

There also is an anticipated increase in litigation related to AI tools and practices. One area where experts predict more litigation involves disputes over the unauthorized use of copyrighted materials for AI training. They also expect an increase in product liability lawsuits involving AI systems. And an increase in litigation is also anticipated when it comes to AI-induced biases in processes like job screening, and potential antitrust violations stemming from AI-driven pricing tools.

Predictions on AI in legal services

Most of the predictions, however, focus on how artificial intelligence will impact the delivery of legal services. The most talked-about topic is the introduction of AI agents in the delivery of legal services. Some call it the most important evolution for 2025.

So, what are we talking about? An AI agent is a software program designed to operate independently: it perceives its environment, analyses information, and takes actions to achieve specific goals. It gathers data through sensors or input systems, processes this data using logic or machine learning models, and performs tasks or interacts with its surroundings based on its objectives. Such agents are widely used in applications like virtual assistants, self-driving cars, and automated decision-making systems, allowing them to function without constant human intervention. You can think of them as the next generation of bots: more advanced and more versatile. In 2025, they are expected to have a huge impact on the delivery of legal services and on the way that law firms and legal departments operate. We will discuss AI agents more in depth in a follow-up article.
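For readers who want a more concrete picture of that perceive-analyse-act loop, here is a minimal sketch in Python. Everything in it is hypothetical: a real legal AI agent would delegate the analysis step to a large language model and the action step to firm systems, but the control flow is essentially this.

```python
class LegalIntakeAgent:
    """A toy agent: it perceives input, analyses it, and acts toward a goal.

    Hypothetical sketch. Real agents replace the hand-written rules in
    analyse() with an LLM, and connect act() to tools such as calendars
    and document management systems.
    """

    def __init__(self, goal: str):
        self.goal = goal

    def perceive(self, message: str) -> dict:
        # Gather data from the environment (here: a client message).
        return {"text": message.lower()}

    def analyse(self, observation: dict) -> str:
        # Decide on an action based on the observation and the goal.
        if "appointment" in observation["text"]:
            return "schedule_meeting"
        if "contract" in observation["text"]:
            return "route_to_contract_team"
        return "ask_clarifying_question"

    def act(self, action: str) -> str:
        # Perform the chosen action without human intervention.
        return f"[agent] executing: {action}"


agent = LegalIntakeAgent(goal="triage incoming client queries")
observation = agent.perceive("I'd like an appointment about a lease dispute")
print(agent.act(agent.analyse(observation)))  # [agent] executing: schedule_meeting
```

The point of the sketch is the autonomy: once given a goal, the agent runs the whole loop, from input to action, without a human driving each step.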

AI is also becoming more integrated in all aspects of the delivery of legal services, from optimizing and automating workflows to enhancing knowledge management and handling specific tasks autonomously. Most experts anticipate that all cloud-based software for lawyers and law firms will integrate more AI into their systems. Overall, authors also predict that generative AI will become better and more specialized in specific legal areas.

Several authors talk about how artificial intelligence is already leading to a sharp increase in the productization of legal services. This applies to law firms and legal departments, but also to alternative legal service providers. Some expect hybrid lawyers and/or self-service legal platforms to become as ubiquitous as online banking. Some even anticipate that more and more lawyers will start collaborating with robot lawyers. And for the first time, some predict that within five years, the combination of advances in AI and breakthroughs in quantum computing will start replacing entry-level lawyers.

Other legal technology predictions on AI

Some experts also made other legal technology predictions on AI. They are optimistic that generative AI will improve access to justice, and that courts, too, will start using generative AI to become more effective. They also expect a consolidation movement in the market of legal technology service providers. Finally, some expect that legal AI and generative AI will become part of law school curricula.

 


The EU AI Act

In previous articles, we discussed the dangers of AI and the need for AI regulations. On 13 March 2024, the European Parliament approved the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.” The act was a proposal by the EU Commission of 21 April 2021. The act is usually referred to by its short name of the Artificial Intelligence Act, or the EU AI Act. In this article we look at the following questions: what is the EU AI Act? What is the philosophy of the EU AI Act? We discuss the limited risk applications and the high-risk applications. Finally, we also look at the EU AI Act’s entry into force and the penalties.

What is the EU AI Act?

As the full title suggests, it is a regulation that lays down harmonised rules on artificial intelligence across the EU. Rather than focusing on specific applications, it deals with the risks that applications pose, and categorizes them accordingly. The Act imposes stringent requirements for high-risk categories to ensure safety and fundamental rights are upheld. The Act’s recent endorsement follows a political agreement reached in December 2023, reflecting the EU’s commitment to a balanced approach that fosters innovation while addressing ethical concerns.

Philosophy of the EU AI Act

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular for small and medium-sized enterprises (SMEs). The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.

The AI Act ensures that Europeans can trust what AI has to offer. Because AI applications and frameworks can change rapidly, the EU chose to address the risks that applications pose. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. The Act distinguishes four levels of risk:

  • Unacceptable risk: applications with unacceptable risk are never allowed. All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour.
  • High risk: to be allowed, high-risk applications must meet stringent requirements to ensure safety and fundamental rights are upheld. We have a look at those below.
  • Limited risk: applications are considered to pose limited risk when they lack transparency, and the users of the application may not know what their data are used for or what that usage implies. Limited risk applications can be allowed if they comply with specific transparency obligations.
  • Minimal or no risk: the AI Act allows the free use of minimal-risk and no-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

Let us have a closer look at the limited and high-risk applications.

Limited Risk Applications

As mentioned, limited risk refers to the risks associated with a lack of transparency. The AI Act introduces specific obligations to ensure that humans are sufficiently informed when necessary. For example, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.

High Risk Applications

Under the EU AI Act, high-risk AI systems are subject to strict regulatory obligations due to their potential impact on safety and fundamental rights.

What are high risk applications?

High-risk AI systems are those that can significantly affect people's safety or fundamental rights. These systems are classified into three main categories: a) those covered by EU harmonisation legislation, b) those that are safety components of certain products, and c) those listed as involving high-risk uses. Specifically, high-risk AI includes applications in critical infrastructure (such as traffic control and utilities management) and biometric and emotion recognition systems. It also applies to AI used in education and employment for decision-making processes.

What are the requirements for high-risk applications?

High-risk AI systems under the EU AI Act are subject to a comprehensive set of requirements designed to ensure their safety, transparency, and compliance with EU standards. These systems must have a risk management system in place to assess and mitigate potential risks throughout the AI system’s lifecycle. Data governance and management practices are crucial to ensure the quality and integrity of the data used by the AI, including provisions for data protection and privacy. Providers must also create detailed technical documentation that covers all aspects of the AI system, from its design to deployment and maintenance.

Furthermore, high-risk AI systems require robust record-keeping mechanisms to trace the AI’s decision-making process. This is essential for accountability and auditing purposes. Transparency is another key requirement, necessitating clear and accessible information to be provided to users and ensuring they understand the AI system’s capabilities and limitations. Human oversight is mandated to ensure that AI systems do not operate autonomously without human intervention, particularly in critical decision-making processes. Lastly, these systems must demonstrate a high level of accuracy, robustness, and cybersecurity to prevent errors and protect against threats.

The EU AI Act’s entry into force

The act enters into force twenty days after its publication in the Official Journal, and most of its provisions will become applicable two years later. This gives member states the opportunity to prepare, and it gives providers a two-year window to adapt to the regulation.

The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation with the member states.

Penalties

The EU AI Act enforces a tiered penalty system to ensure compliance with its regulations. For the most severe violations, particularly those involving prohibited AI systems, fines can reach up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. Lesser offenses, such as providing incorrect, incomplete, or misleading information to authorities, may result in penalties up to €7.5 million or 1% of the total worldwide annual turnover. These fines are designed to be proportionate to the nature of the infringement and the size of the entity, reflecting the seriousness of non-compliance within the AI sector.
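As a worked example of the "whichever is higher" rule, the sketch below computes the applicable cap for the two tiers mentioned above. The function name and the turnover figure are invented for illustration; only the amounts and percentages come from the Act as described here.

```python
def max_fine_eur(turnover_eur: float, tier: str) -> float:
    """Fine cap: a fixed amount or a share of worldwide annual
    turnover, whichever is higher."""
    tiers = {
        "prohibited_ai": (35_000_000, 0.07),   # most severe violations
        "misleading_info": (7_500_000, 0.01),  # incorrect info to authorities
    }
    fixed_cap, turnover_share = tiers[tier]
    return max(fixed_cap, turnover_share * turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70m) exceeds EUR 35m:
print(max_fine_eur(1_000_000_000, "prohibited_ai"))    # 70000000.0
print(max_fine_eur(1_000_000_000, "misleading_info"))  # 10000000.0
```

For smaller companies, the fixed amounts dominate; for large ones, the turnover percentage does, which is what makes the fines proportionate to the size of the entity.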

Conclusion

The EU AI Act represents a significant step in the regulation of artificial intelligence within the EU. It sets a precedent as the first comprehensive legal framework on AI worldwide. The Act mandates a high level of diligence, including risk assessment, data quality, transparency, human oversight, and accuracy for high-risk systems. Providers and deployers must also adhere to strict requirements regarding registration, quality management, monitoring, record-keeping, and incident reporting. This framework aims to ensure that AI systems operate safely, ethically, and transparently within the EU. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.

 


The dangers of artificial intelligence

Artificial intelligence (AI) is a powerful technology that can bring many benefits to society. However, AI also poses significant risks and challenges that need to be addressed with caution and responsibility. In this article, we explore the questions, “What are the dangers of artificial intelligence?”, and “Does regulation offer a solution?”

The possible dangers of artificial intelligence have been making headlines lately. First, Elon Musk and several experts called for a pause in the development of AI. They were concerned that we could lose control over AI considering how much progress has been made recently. They expressed their worries that AI could pose a genuine risk to society. A second group of experts, however, replied that Musk and his companions were severely overstating the risks involved and labelled them “needlessly alarmist”. But then a third group again warned of the dangers of artificial intelligence. This third group included people like Geoffrey Hinton, who has been called the godfather of AI. They even explicitly stated that AI could lead to the extinction of humankind.

Since those three groups stated their views, many articles have been written about the dangers of AI. And the calls to regulate AI have become louder than ever before. (We published an article on initiatives to regulate AI in October 2022). Several countries have started taking initiatives.

What are the dangers of artificial intelligence?

So, what are the dangers of artificial intelligence? As with any powerful technology, it can be used for nefarious purposes. It can be weaponized and used for criminal purposes. But even the proper use of AI holds inherent risks and can lead to unwanted consequences. Let us have a closer look.

A lot of attention has already been paid in the media to the errors, misinformation, and hallucinations of artificial intelligence. A tool like ChatGPT is programmed to sound convincing, not to be accurate. It gets its information from the Internet, but the Internet contains a lot of information that is not correct, and its answers will reflect this. Worse, because it is programmed to provide an answer whenever it can, it sometimes just makes things up. Such instances have been called hallucinations. In a lawsuit in the US, e.g., a lawyer had to admit that the precedents he had quoted did not exist and were fabricated by ChatGPT. (In a previous article on ChatGPT, we warned that any legal feedback it gives must be double-checked).

As soon as ChatGPT became available, cybercriminals started using it to their advantage. A second set of dangers therefore has to do with cybercrime and cybersecurity threats: AI can be exploited by malicious actors to launch sophisticated cyberattacks. This includes using AI algorithms to automate and enhance hacking techniques, identify vulnerabilities, and breach security systems. Phishing attacks have also become more sophisticated and harder to detect.

AI can also be used for cyber espionage and surveillance: AI can be employed for sophisticated cyber espionage activities, including intelligence gathering, surveillance, and intrusion into critical systems. Related to this is the risk of invasion of privacy and data manipulation. AI can collect and analyse massive amounts of personal data from various sources, such as social media, cameras, sensors, and biometrics. This can enable AI to infer sensitive information about people’s identities, preferences, behaviours, and emotions. AI can also use this data to track and monitor people’s movements, activities, and interactions. This can pose threats to human rights, such as freedom of expression, association, and assembly.

Increased usage of AI will also lead to the loss of jobs through automation. AI can perform many tasks faster and cheaper than humans, which could lead to unemployment and inequality. An article on ZDNet estimates that AI could automate 300 million jobs; approximately 28% of current jobs could be at risk.

There also is a risk of loss of control. As AI systems become more powerful, there is a risk that we will lose control over them. This could lead to AI systems making decisions that are harmful to humans, such as launching a nuclear attack or starting a war. This loss of control is also a major concern when it comes to the weaponization of AI. As AI technology advances, there is a worry that it could be weaponized by state or non-state actors. Autonomous weapon systems equipped with AI could potentially make lethal decisions without human intervention, leading to significant ethical and humanitarian concerns.

We already mentioned errors, misinformation, and hallucinations. Those are involuntary side-effects of AI.  A related danger of AI is the deliberate manipulation and misinformation of society through algorithms. AI can generate realistic and persuasive content, such as deepfakes, fake news, and propaganda, that can influence people’s opinions and behaviours. AI can also exploit people’s psychological biases and preferences to manipulate their choices and actions, such as online shopping, voting, and dating.

Generative AI tends to use existing data as its basis for creating new content. But this can cause issues of infringement of intellectual property rights. (We briefly discussed this in our article on generative AI).

Another risk inherent to the fact that AI learns from large datasets is bias and discrimination. If this data contains biases, then AI can amplify and perpetuate them. This poses a significant danger in areas such as hiring practices, lending decisions, and law enforcement, where biased AI systems can lead to unfair outcomes. And if AI technologies are not accessible or affordable for all, they could exacerbate existing social and economic inequalities.

Related to this are ethical implications. As AI systems become more sophisticated, they may face ethical dilemmas, such as decisions involving human life or the prioritization of certain values. Think, e.g., of self-driving vehicles when an accident cannot be avoided: do you sacrifice the driver if it means saving more lives? It is crucial to establish ethical frameworks and guidelines for the development and deployment of AI technologies. Encouraging interdisciplinary collaboration among experts in technology, ethics, and philosophy can help navigate these complex ethical challenges.

At present, there is insufficient regulation regarding the accountability and transparency of AI. As AI becomes increasingly autonomous, accountability and transparency become essential to address the potential unintended consequences of AI. In a previous article on robot law, we asked the question who is accountable when, e.g., a robot causes an accident. Is it the manufacturer, the owner, or – as AI becomes more and more self-aware – could it be the robot? Similarly, when ChatGPT provides false information, who is liable? In the US, Georgia radio host Mark Walters found that ChatGPT was spreading false information about him, accusing him of embezzling money. So, he is suing OpenAI, the creators of ChatGPT.

As the abovementioned example of the lawyer quoting non-existing precedents illustrated, there also is a risk of dependence and overreliance: Relying too heavily on AI systems without proper understanding or human oversight can lead to errors, system failures, or the loss of critical skills and knowledge.

Finally, there is the matter of superintelligence that several experts warn about. They claim that the development of highly autonomous AI systems with superintelligence surpassing human capabilities poses a potential existential risk. The ability of such systems to rapidly self-improve and make decisions beyond human comprehension raises concerns about control and ethical implications. Managing this risk requires ongoing interdisciplinary research, collaboration, and open dialogue among experts, policymakers, and society at large. On the other hand, one expert said that it is baseless to automatically assume that superintelligent AI will become destructive, just because it could. Still, the EU initiative includes the requirement of building in a compulsory kill switch that makes it possible to switch the AI off at any given moment.

Does regulation offer a solution?

In recent weeks, several countries have announced initiatives to regulate AI. The EU already had its own initiative. At the end of May, its tech chief Margrethe Vestager said she believed a draft voluntary code of conduct for generative AI could be drawn up “within the next weeks”, with a final proposal for industry to sign up “very, very soon”. The US, Australia, and Singapore also have submitted proposals to regulate AI.

Several of the abovementioned dangers can be addressed through regulation. Let us go over some examples.

Regulations for cybercrime and cybersecurity should emphasize strong cybersecurity measures, encryption standards, and continuous monitoring for AI-driven threats.

To counter cyber espionage and surveillance risks, we need robust cybersecurity practices, advanced threat detection tech, and global cooperation to share intelligence and establish norms against cyber espionage.

Privacy and data protection regulations should enforce strict standards, incentivize secure protocols, and impose severe penalties for breaches, safeguarding individuals and businesses from AI-enabled cybercrime.

To prevent the loss of jobs, societies need to invest in education and training for workers to adapt to the changing labour market and create new opportunities for human-AI collaboration.

Addressing AI weaponization requires international cooperation, open discussions, and establishing norms, treaties, or agreements to prevent uncontrolled development and use of AI in military applications.

To combat deepfakes and propaganda, we must develop ethical standards and regulations for AI content creation and dissemination. Additionally, educating people on critical evaluation and information verification is essential.

Addressing bias and discrimination involves ensuring diverse and representative training data, rigorous bias testing, and transparent processes for auditing and correcting AI systems. Ethical guidelines and regulations should promote fairness, accountability, and inclusivity.

When it comes to accountability and transparency, regulatory frameworks can demand that developers and organizations provide clear explanations of how AI systems make decisions. This enables better understanding, identification of potential biases or errors, and the ability to rectify any unintended consequences.

At the same time, regulation also has its limitations. While it is important, e.g., to regulate things like cybercrime or the weaponization of AI, it is also clear that regulation will not put an end to these practices. After all, by definition, cybercriminals don't tend to care about regulations. And despite the fact that several types of weapons of mass destruction have been outlawed, it is clear that they are still being produced and used by several actors. But regulation does help to hold transgressors accountable.

It is also difficult to assess how disruptive the impact of AI will be on society. Depending on how disruptive it is, additional measures may be needed.

Conclusion

We have reached a stage where AI has become so advanced that it will change the world and the way we live. This is already creating issues that need to be addressed. And as with any powerful technology, it can be abused. Those risks, too, need to be addressed. But while we must acknowledge these issues, it should also be clear that the benefits outweigh the risks, as long as we don’t get ahead of ourselves. At present, humans abusing AI are a greater danger than AI itself.

 


Artificial Intelligence Regulation

In previous articles, we have discussed how artificial intelligence is biased, and how this problem of biased artificial intelligence persists. As artificial intelligence (AI) is becoming ever more prevalent, this poses many ethical problems. The question was raised whether the industry could be trusted to self-regulate or whether legal frameworks would be necessary. In this article, we explore current initiatives for Artificial Intelligence regulation. We look at initiatives within the industry to regulate artificial intelligence, as well as at attempts to create legal frameworks for Artificial Intelligence. But first, we investigate why regulation is necessary.

Why is Artificial Intelligence Regulation necessary?

Last year, the Council of Europe published a paper in which it concluded that a legal framework was needed because there were substantive and procedural gaps. UNESCO, too, identified key issues in its Recommendation on Ethics in Artificial Intelligence. Similarly, in its White Paper on Trustworthy AI, the Mozilla Foundation identifies a series of key challenges that need to be addressed and that make regulation desirable. These are:

  • Monopoly and centralization: Large-scale AI requires a lot of resources and at present only a handful of tech giants have those. This has a stifling effect on innovation and competition.
  • Data privacy and governance:  Developing complex AI systems necessitates vast amounts of data. Many AI systems that are currently being developed by large tech companies harvest people’s personal data through invasive techniques, and often without their knowledge or explicit consent.
  • Bias and discrimination: As was discussed in previous articles, AI relies on computational models, data, and frameworks that reflect existing biases. This in turn results in biased or discriminatory outcomes.
  • Accountability and transparency: Many AI systems just present an outcome without being able to explain how that result was reached. This can be the product of the algorithms and machine learning techniques that are being used, or it may be by design to maintain corporate secrecy. Transparency is needed for accountability and to allow third-party validation.
  • Industry norms: Tech companies tend to build and deploy tech rapidly. As a result, many AI systems are embedded with values and assumptions that are not questioned in the development cycle.
  • Exploitation of workers: Research shows that tech workers who perform the invisible maintenance of AI are vulnerable to exploitation and overwork.
  • Exploitation of the environment: The amount of energy needed for AI data mining makes it very environmentally unfriendly. The development of large AI systems intensifies energy consumption and speeds up the extraction of natural resources.
  • Safety and security: Cybercriminals have embraced AI. They are able to carry out increasingly sophisticated attacks by exploiting AI systems.

For all these reasons, the regulation of AI is necessary. Many large tech companies still promote the idea that the industry should be allowed to regulate itself. Many countries, as well as the EU, on the other hand believe the time is ripe for governments to impose a legal framework to regulate AI.

Initiatives within the industry to regulate Artificial Intelligence

Firefox and the Mozilla Foundation

The Mozilla Foundation is one of the leaders in the field when it comes to promoting trustworthy AI. They already have launched several initiatives, including advocacy campaigns, responsible computer science challenges, research, funds, and fellowships. The Foundation also points out that “developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible.” They are convinced that the “best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world.”

IBM

IBM, too, promotes an ethical and trustworthy AI, and has created its own ethics board. It believes AI should be built on the following principles:

  • The purpose of AI is to augment human intelligence
  • Data and insights belong to their creator
  • Technology must be transparent and explainable

To that end, it identified five pillars:

  • Explainability: Good design does not sacrifice transparency in creating a seamless experience.
  • Fairness: Properly calibrated, AI can assist humans in making fairer choices.
  • Robustness: As systems are employed to make crucial decisions, AI must be secure and robust.
  • Transparency: Transparency reinforces trust, and the best way to promote transparency is through disclosure.
  • Privacy: AI systems must prioritize and safeguard consumers’ privacy and data rights.

Google

Google says it “aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good. We believe that AI should:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles.”

It also made it clear that it “will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It adds that this list may evolve.

Still, Google seems to have a troubled relationship with ethical AI. It notoriously disbanded its entire ethics board in 2019, replacing it with a team of ethical AI researchers. When subsequently, on separate occasions, two of those researchers were fired too, it again made headlines.

Facebook / Meta

Whereas others talk about trustworthy and ethical AI, Meta (the parent company of Facebook), on the other hand, has different priorities and talks about responsible AI. It, too, identifies five (or ten) pillars:

  1. Privacy & Security
  2. Fairness & Inclusion
  3. Robustness & Safety
  4. Transparency & Control
  5. Accountability & Governance

Legal frameworks for Artificial Intelligence

Apart from those initiatives within the industry, there are proposals for legal frameworks as well. Best known is the EU AI Act. Others are following suit.

The EU AI Act

The EU describes its AI act as “a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”

The text can be misleading as, effectively, the proposal distinguishes not three but four levels of risk for AI applications: 1) unacceptable risk, where applications are banned, 2) high risk, where applications must meet specific legal requirements, 3) low risk, where most of the time no regulation will be necessary, and 4) no risk, which does not have to be regulated at all.

By including an ‘unacceptable risk’ category, the proposal introduces the idea that certain types of AI applications should be forbidden because they violate basic human rights. All applications that manipulate human behaviour to deprive users of their free will, as well as systems that allow social scoring, fall into this category. Exceptions are allowed for military and law enforcement purposes.

High risk systems “include biometric identification, management of critical infrastructure (water, energy etc), AI systems intended for assignment in educational institutions or for human resources management, and AI applications for access to essential services (bank credits, public services, social benefits, justice, etc.), use for police missions as well as migration management and border control.” Again, there are exceptions, many of which have to do with cases where biometric identification is allowed. These include, e.g., missing children, suspects of terrorism, trafficking, and child pornography. The EU wants to create a database that keeps track of all high-risk applications.

Limited risk or low risk applications include various bots which companies use to interact with their customers. The idea here is that transparency is required: users must know, e.g., that they are interacting with a chatbot and what information the chatbot has access to.

All AI systems that do not pose any risk to citizens’ rights are considered no risk applications, for which no regulation is necessary. These applications include games, spam filters, etc.

Who does the EU AI Act apply to? As is the case with the GDPR, the EU AI Act does not apply exclusively to EU-based organizations and citizens. It also applies to anybody outside of the EU who is offering an AI application (product or service) within the EU, or if an AI system uses information about EU citizens or organizations. Furthermore, it also applies to systems outside of the EU that use results that are generated by AI systems within the EU.
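Stated as a rule, the Act applies as soon as any one of these connections to the EU exists. A minimal sketch, with hypothetical parameter names, purely to illustrate the scope just described:

```python
def eu_ai_act_applies(based_in_eu: bool,
                      offered_in_eu: bool,
                      uses_eu_data: bool,
                      uses_eu_generated_output: bool) -> bool:
    # Like the GDPR, the Act reaches beyond EU borders:
    # a single connection to the EU is enough.
    return (based_in_eu or offered_in_eu
            or uses_eu_data or uses_eu_generated_output)

# A non-EU vendor offering an AI service to EU customers is covered.
print(eu_ai_act_applies(False, True, False, False))  # True
```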

A work in progress: the EU AI Act is still very much a work in progress. The Commission made its proposal and now the legislators can give feedback. At present, more than a thousand amendments have been submitted. Some factions think the framework goes too far, while others claim it does not go far enough. Much of the discussions deal with both defining and categorizing AI systems.

Other noteworthy initiatives

Apart from the European AI Act, there are some other noteworthy initiatives.

Council of Europe: The Council of Europe (responsible for the European Convention on Human Rights) created its own Ad Hoc Committee on Artificial Intelligence. This Ad Hoc Committee published a paper in 2021, called A Legal Framework for AI Systems. The paper was a feasibility study that explored why a legal framework on the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law, is needed. It identified several substantive and procedural gaps and concluded that a comprehensive legal framework is needed, combining both binding and non-binding instruments.

UNESCO published a series of Recommendations on Ethics of Artificial Intelligence, which were endorsed by 193 countries in November 2021.

US: On 4 October 2022, the White House released the Blueprint for an AI Bill of Rights to set up a framework that can protect people from the negative effects of AI.

No government initiatives exist yet in the UK. But Cambridge University, on 16 September 2022, published a paper on A Legal Framework for Artificial Intelligence Fairness Reporting.

 
