
Generative AI

In a previous article, we talked about ChatGPT, which is a prime example of generative AI (artificial intelligence). In this article, we will explore generative AI in a bit more detail. We’ll answer questions like, “What is Generative AI?”, “Why is Generative AI important?”, “What can it do?”, “What are the downsides?”, and “What are the Generative AI applications for lawyers?”.

What is Generative AI?

A website dedicated to generative AI defines it as “the part of Artificial Intelligence that can generate all kinds of data, including audio, code, images, text, simulations, 3D objects, videos, and so forth. It takes inspiration from existing data, but also generates new and unexpected outputs, breaking new ground in the world of product design, art, and many more.” (generativeai.net)

The definition Sabrina Ortiz gives on ZDNet is complementary: “All it refers to is AI algorithms that generate or create an output, such as text, photo, video, code, data, and 3D renderings, from data they are trained on. The premise of generative AI is to create content, as opposed to other forms of AI, which might be used for other purposes, such as analysing data or helping to control a self-driving car.” As such, Generative AI is a type of machine learning that is specifically designed to create (generate) content.

Two types of generative AI have been making headlines. There are programs that can create visual art, like Midjourney or DALL-E 2. And there are applications like ChatGPT that can generate almost any desired text output and excel at conversation in natural language.

Why is Generative AI important?

Generative AI is still in its early stages, and already it can perform impressive tasks. As it grows and becomes more powerful, it will fundamentally change the way we operate and live. Many experts agree it will have an impact that is at least as big as the introduction of the Internet. Just think of how much a part of our daily lives the Internet has become. Generative AI, too, is expected to become fully integrated into our lives, and it is expected to do so quickly. One expert predicts that, on average, we will get new generative AI systems that are twice as powerful every 18 months. Only four months after ChatGPT 3.5 was released, a new, more powerful, more accurate, and more sophisticated version 4.0 followed on 14 March 2023. The new version is a first step towards multimodal generative AI, i.e., AI that can work with several media simultaneously: text, graphics, video, audio. It can create output of over 25 000 words of text, which allows it to be more creative and collaborative. And it is safer and faster.

Let us next have a look at what generative AI can already do, and what it will be able to do soon.

What can it do?

One of the first areas where generative AI made major breakthroughs was the creation of visual art. Sabrina Ortiz explains, “Generative AI art is created by AI models that are trained on existing art. The model is trained on billions of images found across the internet. The model uses this data to learn styles of pictures and then uses this insight to generate new art when prompted by an individual through text.” Several free AI art generators are available online that you can try out for yourself.
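To make the idea of “prompting through text” concrete, here is a minimal sketch of a text-to-image call. It assumes the OpenAI Python client (the pre-1.0 interface) and an API key; other art generators expose similar prompt-based interfaces, and the prompt and size below are purely illustrative.

```python
# A minimal text-to-image sketch, assuming the OpenAI Python client (pre-1.0
# interface) and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a watercolour painting of a courthouse at sunset",  # the text prompt steers the output
    n=1,              # number of images to generate
    size="512x512",   # requested resolution
)

print(response["data"][0]["url"])  # URL of the generated image
```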

We already know from our previous article that ChatGPT can create virtually any text output. It can write emails and other correspondence, papers, a range of legal documents including contracts, programming code, episodes of TV series, etc. It can assist in research, make summaries of text, describe artwork, etc.

More and more search engines are starting to use generative AI as well. Bing, DuckDuckGo, and You.com, e.g., all already have a chat interface. When you ask a question, you get an answer in natural language instead of a list of URLs. Bing even cites the references on which it based its answer. Google is expected to launch its own generative-AI-enabled search engine soon.

More specifically for programming, GitHub, one of the major platforms for developers, announced GitHub Copilot for Business, an AI-powered developer tool that can write code, debug, and give feedback on existing code, and that can propose fixes for issues it detects.

Google’s MusicLM can already write music upon request, and a similar offering has been announced for the new ChatGPT version 4, too. YouTube has also announced that it will start offering generative AI assistance for video creation.

Generative AI tools can be useful writing assistants. The article on g2.com, mentioned in the sources, lists 48 free writing assistants, though many of them use a freemium model. Writer’s block may soon be a thing of the past, as several of these writing assistants only need a key word to start producing a first draft. You even get to choose the writing style.

Generative AI can also accelerate scientific research and increase our knowledge. It can, e.g., lower healthcare costs and speed up drug development.

In Britain, a nightclub successfully organized a dance event where the DJ was an AI bot.

Existing chatbots can be upgraded to become far better at natural language conversations. And generative AI, integrated with the right customer processes, will improve customer experience.

As you can see, even though we’re only at the beginning of the generative AI revolution, the possibilities are endless.

What are the downsides?

At present, generative AI tools are mostly tools that assist, and their output needs to be supervised. ChatGPT, e.g., sometimes gives incorrect answers. Worse, it can simply make things up, and an experiment with a legal chatbot found that the bot started lying because it had concluded that this was the most effective way to get the desired end result. So, there are no guarantees that the produced output is correct. And the AI system does not care whether what it does is morally or legally acceptable. Extra safeguards will have to be built in, which is why there are several calls to regulate AI.

There also is an ongoing debate about intellectual property rights. If a program takes an existing image and merely applies one or more filters, does this infringe on the intellectual property of the original artist? Where do you draw the line? And who owns the copyright on what generative AI creates? If, e.g., a pharmaceutical company uses an AI tool to create a new drug, who can take a patent? Is it the pharmaceutical company, the company that created the AI tool, or the AI tool itself?

And as generative AI becomes better, it will transform the knowledge and creative marketplaces, which will inevitably lead to the loss of jobs.

Generative AI applications for lawyers

As a result of the quick progress in generative AI, existing legal chatbots are already being upgraded. A first improvement has to do with user convenience and user-friendliness: users can now interact with the bots through a natural language interface. The new generation of bots understands more and is also expected to become faster, safer, and more accurate. The new ChatGPT 4 scored in the 90th percentile on the bar exam, where ChatGPT 3 – only a few months earlier – barely passed some exams.

Virtual Legal Assistants (VLAs) are getting more and more effective in:

  • Legal research
  • Drafting, reviewing, and summarizing legal documents: contracts, demand letters, discovery demands, nondisclosure agreements, employment agreements, etc.
  • Correspondence
  • Creative collaboration
  • Brainstorming, etc.

As mentioned before, at present these AI assistants are just that, i.e., assistants. They can create draft versions of legal documents, but those still need to be revised by an actual human lawyer. These VLAs still make errors. At the same time, they can already considerably enhance productivity by saving you a lot of time. And they are improving fast, as the example of the bar exams confirms.
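As an illustration of what sits behind such an assistant, here is a minimal sketch of a document-summarization call, assuming the OpenAI Python client (pre-1.0 interface); the model name, prompts, and file name are illustrative, and the output would still need review by a human lawyer.

```python
# A minimal "virtual legal assistant" sketch that summarizes a contract,
# assuming the OpenAI Python client (pre-1.0 interface). Prompts, model name,
# and file name are illustrative; the output still needs human review.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

contract_text = open("contract.txt", encoding="utf-8").read()  # hypothetical input document

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a legal assistant. Summarize contracts in plain "
                    "language and flag any unusual clauses."},
        {"role": "user", "content": contract_text},
    ],
    temperature=0,  # keep the output as deterministic as possible
)

print(response["choices"][0]["message"]["content"])
```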

 

Sources:

 

Legal Technology Predictions for 2023

Towards the end of every calendar year, the American Bar Association publishes the results of its annual legal technology survey. Several legal service providers, experts, and reporters also analyse existing trends and make their own legal technology predictions for 2023. A number of items stand out that most of them pay attention to. In this article, we will look at automation, artificial intelligence, cloud-native solutions, virtual legal assistants, data privacy and cybersecurity, crypto technologies, blockchain, and smart contracts. We will briefly touch upon some other trends as well.

Automation

Automation remains a major driver of change in many industries. The legal sector is no exception, even though it lags behind many other sectors: lawyers seem to take longer to realize that automation is beneficial. Automation is making many processes in the legal industry faster, more efficient, and less expensive. It has proven successful in fields like legal research, e-discovery, and document review and management. In 2023, we can expect to see this trend continue, with a renewed focus on automating law firm administration and the creation and review of legal documents. Automated workflows can be used to streamline legal processes such as litigation support, e-discovery, and case management. Automation can also assist in organizing and tracking progress and regulatory changes, data collection, reporting, and communication. An increase in automation will help improve the accuracy of legal processes, reduce the risk of errors, and increase efficiency.

Artificial Intelligence

Artificial Intelligence is becoming ubiquitous. In many aspects of our lives, there now are AI solutions available that make life easier. In the legal sector, too, AI is starting to make waves. In all the above-mentioned examples of automation, AI is playing a crucial role. As mentioned above, AI has already been successfully assisting lawyers with legal research, with process and workflow automation, with the generation of legal documents, as well as with e-discovery. But those are still fairly simple applications of AI. It can do far more. These days, AI is also being used to digest vast volumes of text and voice conversations, identify patterns, or carry out impressive feats of predictive modelling. The virtual legal assistants that we’ll discuss below, too, are all AI applications. If properly used, AI can save law firms much time and money. In 2023, we can expect to see a more widespread adoption of AI in the legal sector. (More on Artificial Intelligence and the Law).

Cloud-Native Solutions

Cloud computing has been a game-changer for many industries. Previous reports had already revealed that lawyers, too, are relying more and more on cloud solutions. This should not come as a surprise, as cloud-based solutions provide many benefits, including reduced costs, increased scalability, and improved data security. They help lawyers and clients share files and data across disparate platforms rather than relying solely on email. Additionally, cloud-based solutions are more accessible, allowing law firms to work from anywhere and collaborate more effectively with clients and other stakeholders. In 2023, we can expect this trend to continue. (In the past, we have published articles on cloud solutions for lawyers, on managing your law firm in the cloud, and on lawyers in the cloud).

Virtual Legal Assistants (VLAs)

In the past, we have talked on several occasions about legal chatbots. Chatbots have sufficiently matured to now start playing the role of virtual legal assistants. VLAs are AI-powered chatbots that build on basic neural network computing models to harness the power of deep learning. They use artificial intelligence algorithms to assist law firms with various tasks. Gartner predicts VLAs can answer one-quarter of internal requests made to legal departments. They extend the operational capacity of law firms as well as of in-house corporate legal teams. As a result, they assist in reducing lawyers’ average response time and producing distinct service delivery efficiencies. Furthermore, as VLAs are a form of automation, all the benefits of automation apply here too: virtual legal assistants can help to improve the accuracy of legal work, reduce the risk of errors and increase efficiency. At present, virtual legal assistants are still primarily being used in uncomplicated and repetitive operations. Recent breakthroughs, however, indicate that they are already able to take on more complex tasks and will continue to do so.

Data Privacy and Cybersecurity

Ever since the GDPR, data privacy and cybersecurity have become increasingly important. In 2023, we can expect to see an ongoing emphasis on data privacy, as well as increased attention to cybersecurity in the legal sector. (The examples of high-profile Big Tech corporations receiving massive fines seem to be a good incentive.) Law firms have understood that they, too, need to make sure they have robust data privacy and cybersecurity measures in place to protect their clients’ confidential information. Several law firms also provide their clients with assistance on the legal aspects of data protection.

Crypto technologies, Blockchain, and smart contracts

The cryptocurrency market was volatile in 2022. That did not stop an increase in interest in the underlying crypto technologies. Experts predict a) more regulation of cryptocurrencies and crypto technologies, b) increased adoption of cryptocurrency, c) growing interest in decentralized finance (DeFi), and d) more attempts at cryptocurrency taxation. We are already witnessing an intensification of litigation with regard to cryptocurrency and crypto technologies, and this trend is expected to continue. Litigation about NFTs, e.g., is expected to increase rapidly.

Experts also expect an ongoing interest in and an increased adoption of Blockchain technology. Blockchain can be used to securely store and manage legal data, reducing the risk of data breaches and ensuring the integrity of legal records. Additionally, blockchain can be used to automate many legal processes, such as contract management and dispute resolution, by enabling the creation of smart contracts. As we mentioned in previous articles, smart contracts can streamline many legal processes, reducing the time and cost associated with contract management and dispute resolution. They can also help to increase the transparency and accountability of legal transactions, reducing the risk of fraud and improving the overall efficiency of legal processes.
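To give a feel for what “self-executing” means, here is a toy sketch of the escrow logic a smart contract might encode. Real smart contracts run on a blockchain platform such as Ethereum and are typically written in Solidity; this plain-Python version, with invented parties and amounts, only illustrates the concept of contract terms that execute automatically once their conditions are met.

```python
# A toy illustration of the "self-executing" idea behind smart contracts.
# Real smart contracts run on a blockchain; this sketch only shows the concept.
from dataclasses import dataclass

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    goods_delivered: bool = False
    released: bool = False

    def confirm_delivery(self) -> None:
        """The buyer confirms delivery; this is the contractual condition."""
        self.goods_delivered = True
        self._execute()

    def _execute(self) -> None:
        """Release payment automatically once the condition is satisfied."""
        if self.goods_delivered and not self.released:
            self.released = True
            print(f"Releasing {self.amount} from {self.buyer} to {self.seller}")

contract = EscrowContract(buyer="Alice", seller="Bob", amount=1500.0)
contract.confirm_delivery()  # triggers the automatic execution of the payment term
```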

Other Trends

The ABA survey report noted that law firms are spending more money on legal technology than ever before. In many cases, this involved investing more in tightening cybersecurity.

The trend to work remotely and to use video conferencing for virtual meetings that started during the pandemic is ongoing.

More than ever before, lawyers are paying attention to their own work experience, as well as to the user experience of their clients, by making their law firms more client-centred. There is an ongoing focus on work-life balance, not only for the lawyers but also for the other employees of law firms, and firms are finally starting to consider things like employee satisfaction.

While billable hours remain the most used fee model, there has been a noticeable increase in lawyers using a subscription fee model.

Finally, the trend that law firms are increasingly hiring people with hybrid profiles is continuing. By increasing cognitive diversity, law firms want to close the gap between professionals with knowledge of legal matters and those with enough legal tech expertise to manage the digitization and automation of workflows. Gartner predicts that by the end of 2023, one third of corporate legal departments will have a legal tech expert in charge of managing the digital transformation and automation of internal processes. Large law firms are also increasingly hiring lawyers that are familiar with business administration.

 

Sources:

ChatGPT for Lawyers

In this article, we will first talk about recent evolutions in the field of generative Artificial Intelligence (AI) in general, and about a new generation of chat bots. Then we focus on one in particular that is getting a lot of attention: ChatGPT. What is ChatGPT? What can it do, and what are its limits? Finally, we look at the relevance of ChatGPT for lawyers.

Introduction

We are witnessing the emergence of a new generation of chat bots that are more powerful than ever before. (We discussed legal chat bots before, here and here). Several of them excel in conversation. Some of these conversationalist chat bots recently made headlines on several occasions. In a first example, in December 2022, the DoNotPay chat bot renegotiated a customer’s contract with Comcast’s chat bot and managed to save 120 USD per year. (You read that correctly: two bots effectively renegotiated a contract). Shortly afterwards, a computer using a cloned voice of a customer was connected to the DoNotPay chat bot. A call was made to the support desk of a company, and the speaking chat bot successfully negotiated a reduction of a commercial penalty with a live person. The search engine You.com has added a conversational chat bot that allows people to ask a question and presents the reply in a conversational format rather than as a list of links. Microsoft has announced that its Bing search engine will start offering a conversational interface as well.

Conversationalist chat bots are a form of generative AI. Generative AI has made tremendous progress in other fields, like the creation of digital artwork, filters and effects for all kinds of digital media, and the generation of documents. These can be any documents: legal documents, blog or magazine articles, papers, programming code… Only days ago, the CNET technology website revealed that it had been publishing articles written entirely by generative AI since November 2022. Over a period of two months, it published 74 articles that were written by a bot, and readers did not notice.

One chat bot in particular has been in the news on a nearly daily basis since it was launched in November 2022. Its name is ChatGPT and the underlying technology has also been used in some of the examples mentioned above.

What is ChatGPT?

ChatGPT stands for Chat Generative Pre-trained Transformer. Wikipedia describes it as “a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI’s GPT-3 family of large language models and is fine-tuned (an approach to transfer learning) with both supervised and reinforcement learning techniques. ChatGPT was launched as a prototype on November 30, 2022, and quickly garnered attention for its detailed responses and articulate answers across many domains of knowledge.”

In other words, it is a very advanced chat bot that can carry a conversation. It remembers previous questions you asked and the answers it gave. Because it was trained on a large-scale database of texts retrieved from the Internet, it can converse on a wide variety of topics. And because it was trained on natural language, it is quite articulate.
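For readers who want to see what that conversational “memory” looks like when accessing the model programmatically, here is a minimal sketch, assuming the OpenAI Python client (pre-1.0 interface): through the API, the memory of earlier questions and answers is simply the message history that is sent along with every new request. (The ChatGPT web interface handles this for you.)

```python
# A minimal multi-turn conversation sketch, assuming the OpenAI Python client
# (pre-1.0 interface). The conversation "memory" is the message history that
# is resent with every request.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = [{"role": "user", "content": "What is a non-disclosure agreement?"}]

first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
answer = first["choices"][0]["message"]["content"]
messages.append({"role": "assistant", "content": answer})

# The follow-up only makes sense because the earlier exchange is included.
messages.append({"role": "user", "content": "Can you shorten that to two sentences?"})
second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)

print(second["choices"][0]["message"]["content"])
```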

What can it do and what are the limits?

Its primary use probably is as a knowledge search engine. You can ask a question just like you ask a question in any search engine. But the feedback it gives does not consist of a series of links. Instead, it consults what it has scanned beforehand and provides you with a summary text containing the reply to the question you asked.

But it doesn’t stop there, as the examples we have already mentioned illustrate. You can ask it to write a paper or an article on a chosen topic. You can determine the tone and style of the output. Lecturers have used it to prepare lectures. Many users asked it to write poetry on topics of their choice. They could even ask it to write sonnets or limericks, and it obliged. And most of the time, with impressive results. It succeeds wonderfully well in carrying a philosophical discussion. Programmers have asked it to write program code, etc. It does a great job of describing existing artwork. In short, if the desired output is text-based, chances are ChatGPT can deliver. As one reporter remarked, the possibilities are endless.

There are of course limitations. If the data sets it learned from contained errors, false information, or biases, the system will inherit those. A reporter who asked ChatGPT to write a product review commented on how the writing style and the structure of the article were very professional, but that the content was largely wrong. Many of the specifications it gave were from the predecessor of the product it was asked to review. In other words, a review by a person who has the required knowledge is still needed.

Sometimes, it does not understand the question, and it needs to be rephrased. On the other hand, sometimes the answers are excessively verbose with little valuable content. (I guess that dataset contained speeches by politicians). There still are plenty of topics that it has no reliable knowledge of. When you ask it if it can give you some legal advice, it will tell you it is not qualified to do so. (But if you rephrase the question, you may get an answer anyway, and it may or may not be accurate). Some of the programming code appeared to be copied from sites used by developers, which would constitute a copyright infringement. And much of the suggested programming code turned out to be insufficiently secure. For those reasons, several sites, like Stack Overflow, are banning replies that are generated by ChatGPT.

Several other concerns were also voiced. As the example of CNET shows, these new generative AI bots have the potential to eliminate the need for a human writer. ChatGPT can also write an entire essay within seconds, making it easier for students to cheat or to avoid learning how to write properly. Another concern is the possible spread of misinformation: if you know enough of the sources of the dataset that the chatbot learns from, you could deliberately flood it with false information.

What is the Relevance of ChatGPT for Lawyers?

Lawyers have been using generative AI for a while. It has proven to be successful in drafting and reviewing contracts and other legal documents. Bots like DoNotPay, Lawdroid, and HelloDivorce are successfully assisting in legal matters on a daily basis. For these existing legal bots, ChatGPT can provide a user-friendly conversational interface that makes them easier to use.

When it comes to ChatGPT itself, several lawyers have reported on their experiences and tests with the system. It turned out that it could mimic the work of lawyers with varying degrees of success. For some tasks, it did a great job: it successfully wrote a draft rental agreement, e.g., and it did a good job of comparing different versions of a legal document and highlighting the differences. But in other tests, the information it provided was inaccurate or plain wrong, where it, e.g., confused different concepts.

And the concerns that apply to generative AI in general also apply to ChatGPT. These include concerns about bias and discrimination, privacy and compliance with existing privacy and data protection regulations like the GDPR, and fake news and misleading content. For ChatGPT, the issue of intellectual property rights was raised as well. The organization behind ChatGPT claims it never copies texts verbatim, but tests with programming code appear to show otherwise. (You can’t really paraphrase programming code).

Given the success of and interest in ChatGPT, the usual question was raised whether AI will replace the need for lawyers. And the answer remains the same: no, it won’t. At present, the results are often very impressive, but they are not reliable enough. Still, the progress that has been made shows that it will get better and better at performing some of the tasks that lawyers do. It is good at gathering information, at summarizing it, and at comparing texts. And only days ago (13 January 2023), the American Bar Association announced that ChatGPT had successfully passed one of its bar exams on evidence. But lawyers are still needed when it comes to critical thinking or the elaborate application of legal principles.

Conclusion

A new generation of chat bots is showing us the future. Even though tremendous progress has been made, there are still many scenarios where they’re not perfect. Still, they are improving every single day. And while at present supervision is still needed to check the results, they can offer valuable assistance. As one lecturer put it, instead of spending a whole day preparing a lecture, he lets ChatGPT do the preparation for him and write a first draft. He then only needs one hour to review and correct it.

For lawyers, too, the same applies. The legal texts it generates can be hit and miss, and supervision is needed. You could think of the current generation of chat bots as being like a first- or second-year law student doing an internship: they can save you time, but you have to review what they’re doing and correct where necessary. Tom Martin from Lawdroid puts it as follows: “If lawyers frame Generative AI as a push button solution, then it will likely be deemed a failure because some shortcoming can be found with the output from someone’s point of view. On the other hand, if success is defined as productive collaboration, then expectations may be better aligned with Generative AI’s strengths.”

 

Sources:

 

Artificial Intelligence Regulation

In previous articles, we have discussed how artificial intelligence is biased, and how this problem of biased artificial intelligence persists. As artificial intelligence (AI) is becoming ever more prevalent, this poses many ethical problems. The question was raised whether the industry could be trusted to self-regulate or whether legal frameworks would be necessary. In this article, we explore current initiatives for Artificial Intelligence regulation. We look at initiatives within the industry to regulate artificial intelligence, as well as at attempts to create legal frameworks for Artificial Intelligence. But first we investigate why regulation is necessary.

Why is Artificial Intelligence Regulation necessary?

Last year, the Council of Europe published a paper in which it concluded that a legal framework was needed because there were substantive and procedural gaps. UNESCO, too, identified key issues in its Recommendation on the Ethics of Artificial Intelligence. Similarly, in its White Paper on Trustworthy AI, the Mozilla Foundation identifies a series of key challenges that need to be addressed and that make regulation desirable. These are:

  • Monopoly and centralization: Large-scale AI requires a lot of resources and at present only a handful of tech giants have those. This has a stifling effect on innovation and competition.
  • Data privacy and governance:  Developing complex AI systems necessitates vast amounts of data. Many AI systems that are currently being developed by large tech companies harvest people’s personal data through invasive techniques, and often without their knowledge or explicit consent.
  • Bias and discrimination: As was discussed in previous articles, AI relies on computational models, data, and frameworks that reflect existing biases. This in turn results in biased or discriminatory outcomes.
  • Accountability and transparency: Many AI systems just present an outcome without being able to explain how that result was reached. This can be the product of the algorithms and machine learning techniques that are being used, or it may be by design to maintain corporate secrecy. Transparency is needed for accountability and to allow third-party validation.
  • Industry norms: Tech companies tend to build and deploy tech rapidly. As a result, many AI systems are embedded with values and assumptions that are not questioned in the development cycle.
  • Exploitation of workers: Research shows that tech workers who perform the invisible maintenance of AI are vulnerable to exploitation and overwork.
  • Exploitation of the environment: The amount of energy needed for AI data mining makes it very environmentally unfriendly. The development of large AI systems intensifies energy consumption and speeds up the extraction of natural resources.
  • Safety and security: Cybercriminals have embraced AI. They are able to carry out increasingly sophisticated attacks by exploiting AI systems.

For all these reasons, the regulation of AI is necessary. Many large tech companies still promote the idea that the industry should be allowed to regulate itself. Many countries, as well as the EU, on the other hand believe the time is ripe for governments to impose a legal framework to regulate AI.

Initiatives within the industry to regulate Artificial Intelligence

Firefox and the Mozilla Foundation

The Mozilla Foundation is one of the leaders in the field when it comes to promoting trustworthy AI. They already have launched several initiatives, including advocacy campaigns, responsible computer science challenges, research, funds, and fellowships. The Foundation also points out that “developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible.” They are convinced that the “best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world.”

IBM

IBM, too, promotes an ethical and trustworthy AI, and has created its own ethics board. It believes AI should be built on the following principles:

  • The purpose of AI is to augment human intelligence
  • Data and insights belong to their creator
  • Technology must be transparent and explainable

To that end, it identified five pillars:

  • Explainability: Good design does not sacrifice transparency in creating a seamless experience.
  • Fairness: Properly calibrated, AI can assist humans in making fairer choices.
  • Robustness: As systems are employed to make crucial decisions, AI must be secure and robust.
  • Transparency: Transparency reinforces trust, and the best way to promote transparency is through disclosure.
  • Privacy: AI systems must prioritize and safeguard consumers’ privacy and data rights.

Google

Google says it “aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good.” It believes AI applications should:

  1. Be socially beneficial
  2. Avoid creating or reinforcing unfair bias
  3. Be built and tested for safety
  4. Be accountable to people
  5. Incorporate privacy design principles
  6. Uphold high standards of scientific excellence
  7. Be made available for uses that accord with these principles.

It also made it clear that it “will not design or deploy AI in the following application areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.”

It adds that that list may evolve.

Still, Google seems to have a troubled relationship with ethical AI. It notoriously disbanded its entire ethics board in 2019 and replaced it with a team of ethical AI researchers. When subsequently, on separate occasions, two of those researchers were fired too, it again made headlines.

Facebook / Meta

Whereas others talk about trustworthy and ethical AI, Meta (the parent company of Facebook) has different priorities and talks about responsible AI. It, too, identifies five (or, counting the pairs, ten) pillars:

  1. Privacy & Security
  2. Fairness & Inclusion
  3. Robustness & Safety
  4. Transparency & Control
  5. Accountability & Governance

Legal frameworks for Artificial Intelligence

Apart from those initiatives within the industry, there are proposals for legal frameworks as well. Best known is the EU AI Act. Others are following suit.

The EU AI Act

The EU describes its AI act as “a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”

The text can be misleading as, effectively, the proposal distinguishes not three but four levels of risk for AI applications: 1) unacceptable risk, where applications are banned, 2) high risk, where applications are subject to specific legal requirements, 3) low risk, where most of the time no regulation will be necessary, and 4) no risk, where no regulation is needed at all.

By including an ‘unacceptable risk’ category, the proposal introduces the idea that certain types of AI applications should be forbidden because they violate basic human rights. All applications that manipulate human behaviour to deprive users of their free will, as well as systems that allow social scoring, fall into this category. Exceptions are allowed for military and law enforcement purposes.

High risk systems “include biometric identification, management of critical infrastructure (water, energy etc), AI systems intended for assignment in educational institutions or for human resources management, and AI applications for access to essential services (bank credits, public services, social benefits, justice, etc.), use for police missions as well as migration management and border control.” Again, there are exceptions, many of which have to do with cases where biometric identification is allowed. These include, e.g., missing children, suspects of terrorism, trafficking, and child pornography. The EU wants to create a database that keeps track of all high-risk applications.

Limited risk or low risk applications include various bots that companies use to interact with their customers. The idea here is that transparency is required: users must know, e.g., that they are interacting with a chat bot and what information the chat bot has access to.

All AI systems that do not pose any risk to citizens’ rights are considered no-risk applications, for which no regulation is necessary. These applications include games, spam filters, etc.

Who does the EU AI Act apply to? As is the case with the GDPR, the EU AI Act does not apply exclusively to EU-based organizations and citizens. It also applies to anybody outside of the EU who is offering an AI application (product or service) within the EU, or if an AI system uses information about EU citizens or organizations. Furthermore, it also applies to systems outside of the EU that use results that are generated by AI systems within the EU.

A work in progress: the EU AI Act is still very much a work in progress. The Commission made its proposal, and now the legislators can give feedback. At present, more than a thousand amendments have been submitted. Some factions think the framework goes too far, while others claim it does not go far enough. Much of the discussion deals with defining and categorizing AI systems.

Other noteworthy initiatives

Apart from the European AI Act, there are some other noteworthy initiatives.

Council of Europe: The Council of Europe (responsible for the European Convention on Human Rights) created its own Ad Hoc Committee on Artificial Intelligence. This Ad Hoc Committee published a paper in 2021, called A Legal Framework for AI Systems. The paper was a feasibility study that explored why a legal framework on the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law, is needed. It identified several substantive and procedural gaps and concluded that a comprehensive legal framework is needed, combining both binding and non-binding instruments.

UNESCO published its Recommendation on the Ethics of Artificial Intelligence, which was endorsed by 193 countries in November 2021.

US: On 4 October 2022, the White House released the Blueprint for an AI Bill of Rights to set up a framework that can protect people from the negative effects of AI.

No government initiatives exist yet in the UK. But Cambridge University, on 16 September 2022, published a paper on A Legal Framework for Artificial Intelligence Fairness Reporting.

 

Sources:

The legal technology renaissance

In this article, we discuss the legal technology renaissance that is occurring. We look at the recent legal technology boom, at some examples, and at the benefits. We observe what is driving this renaissance and what obstacles it has to overcome. We look at the consequences for the legal market, and at how to make it work for you.

The Recent Legal Technology Boom

In recent years, we have experienced a veritable legal technology renaissance, or legal tech boom, as some call it. A multitude of factors contributed to this. We have more computing power than ever before, with cloud computers doing the heavy lifting. We have made significant breakthroughs in artificial intelligence. The legal market has been changing dramatically, and the legal technology market has followed suit. Finally, the pandemic, too, has been a catalyst for change. Law firms were forced to reorganize the way they work and invest in technology to be able to do so. In the process, many law firms took this as an opportunity to also invest in technology to improve their service delivery model. In June and July 2021, we witnessed the first legal technology IPOs.

This boom is expected to continue in the next few years. In fact, some say this renaissance is only starting, as the demand for legal expertise is exploding. Gartner, e.g., made the following predictions:

  1. By 2025, legal departments will increase their spend on legal technology threefold.
  2. By 2024, legal departments will replace 20% of generalist lawyers with non-lawyer staff.
  3. By 2024, legal departments will have automated 50% of legal work related to major corporate transactions.
  4. By 2025, corporate legal departments will capture only 30% of the potential benefit of their contract life cycle management investments.
  5. By 2025, at least 25% of spending on corporate legal applications will go to non-specialist technology providers.

Big law firms and legal departments have taken the lead in this legal technology renaissance. By now, mid-size law firms and small law firms are catching up and starting to reap the benefits as well.

Some examples

Let us have a look at some examples where legal technology has changed the ways law firms and legal departments operate. A first area has to do with streamlining the administrative operations of the law firm or legal department. Examples here include document automation, E-Billing, and E-Filing. A second area has to do with streamlining casework, where progress has been made with eDiscovery software and with case management software. In both areas, far more aspects of the overall process have been automated than ever before. A third area has to do with collaboration and the exchange of ideas. We are seeing a steady rise in online collaboration tools, in the use of AI-enabled chatbots and virtual legal assistants, in online education, and in video conferencing, where the pandemic resulted in a sharp increase in the available tools. Finally, major progress has also been made in the availability and usage of different kinds of analytics. These provide new insights into how law firms and legal departments can be run more efficiently. They also offer new insights into patterns when conducting legal research. Predictive analytics, e.g., make it possible to predict the chances of success in specific cases.
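As a rough illustration of that last point, here is a minimal predictive-analytics sketch using scikit-learn. The features and figures are made up purely for illustration; real systems are trained on large volumes of historical case data and far richer features.

```python
# A minimal predictive-analytics sketch: estimating the chance of success in a
# case from a few made-up features. Data and features are purely illustrative.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [claim amount (k EUR), counsel's years of
# experience, number of supporting precedents], and whether the case was won.
X = [
    [50, 2, 1],
    [120, 10, 4],
    [15, 5, 0],
    [200, 15, 6],
    [80, 1, 2],
    [30, 8, 3],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = case won, 0 = case lost

model = LogisticRegression().fit(X, y)

new_case = [[100, 7, 3]]
print(f"Estimated chance of success: {model.predict_proba(new_case)[0][1]:.0%}")
```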

Benefits

The benefits of this legal technology renaissance are threefold. A first benefit is greater efficiency and better service delivery: automation reduces errors, speeds up legal service delivery, and improves its quality. It also allows for greater scalability. A second benefit is the greater insight we gain, the result of Machine Learning and analytics, but also of analysing our workflows for automation so they can be optimized. Finally, the boom in legal technology is helping to bridge the Access to Justice gap.

What drives this legal technology renaissance?

There are three key concepts that are central to this legal technology renaissance. First, it is about automation. The technological progress that has been made makes it possible to automate far more of the legal service delivery process. The mantra has become to automate where possible, to increase productivity and efficiency. If law firms want to remain competitive, automation is inevitable.

A second aspect of this boom has to do with Legal Digital Transformation. The Global Tech Council describes it as the digitizing of all areas of legal expertise, including service delivery, workflow, procedures, team communication, and client interaction in the legal sector. The Internet has changed the way we live: we spend part of our lives online, in a digital world. With some delay, the legal sector is becoming part of that digital world, too.

Finally, the legal technology renaissance is about a new legal services delivery model that is more efficiency-driven, more client-centred, and provides all stakeholders with more insight.

Issues / Obstacles

Not everybody is reaping the benefits of these technological breakthroughs yet. Lawyers are traditionally rather conservative when it comes to adopting new technologies. Richard Tromans points to two main issues that are obstacles to greater adoption.

A first issue is “the belief that any of the above applications that relate to automation and improved workflows are somehow an answer in and of themselves, rather than part of a much larger integrated approach to legal services delivery.”

The other challenge stems from the fact that these technologies change the way law firms operate. It isn’t as simple as plug and play. The technologies may not meet over-elevated expectations. And the implementation of new technologies needs to be part of a bigger strategy around service delivery. In essence, these changes need an engagement from not only the IT team, but from the lawyers as well, who will need a hybrid mix of skills. Tromans warns that this can lead to disillusionment and people backing away.

Consequences on the legal market

This legal technology boom is changing the legal market. We already pointed out that it changes the way law firms and legal departments operate. As mentioned above, this technology boom is introducing new legal services delivery models that focus on being more client-centred, on increased efficiency, and increased insight.

A second consequence is the introduction of new players on the legal market. There are plenty of alternative legal service providers. Some of these offer services to legal consumers. These include, e.g., legal chatbots like DoNotPay or DivorceBot. Most of them, however, offer specialized services for law firms and legal departments. These include services like eDiscovery, document automation and review, legal research assistance, analytics, etc.

A third change has to do with the hybrid skill set that is needed in this changed service delivery model. More and more bar associations are opening up to changes in the corporate structure of firms offering legal services. Law firms are allowed to have shareholders, co-owners, and directors that are not lawyers. At the same time, corporate entities are being allowed to offer certain legal services. Some bars are even considering giving accreditation to some alternative legal service providers.

How to make the legal technology renaissance work for you

Making the legal technology renaissance work for you is not a guaranteed immediate success story. Here are some considerations that may be useful.

There are four key elements to planning your digital transformation process. The first two are the selection of 1) the best legal platform and 2) the best IT infrastructure for that platform. This includes deciding whether to host on-site or in the cloud, or to opt for a hybrid solution. 3) Understand that optimizing workflows involves legal engineering. And 4) if you are going to use AI-powered solutions, you will need Machine Learning support.

When choosing your legal platform, consider that the 2021 ABA Legal Tech Survey report observed that, as a rule, most solutions work out of the box and that no customization is required. Experience has also shown that using the solution out of the box directly allows you to reap more benefits, and to reap them faster.

Experience also demonstrated that an incremental implementation strategy tends to be more successful than a once-off big-bang transformation. Such a staged approach leads to success faster and more consistently.

Digital transformation projects tend to be more successful if the firm has some product champions, i.e., users who commit to familiarizing themselves with the solutions first. They can then assist others, show them how to reap the benefits of the new technologies, and convince others to start using them, too.

While implementing a digital transformation process, focus on business outcomes rather than on features. And set realistic ROI benchmarks.

Conclusion

The legal technology boom is disrupting the legal market for the better. As implementing these new technologies changes the way we work, some growing pains are to be expected. A balanced and staged implementation approach offers the biggest chances of success. To remain competitive in a changing market, law firms and legal departments have no choice but to adapt. Some fear that all these changes will make law firms obsolete. The experts don’t agree. Tromans points out that, while technology is very important in moving today’s legal sectors forward, there will undoubtedly always be a need for a human presence and personal connection with clients.

 

Sources:

Artificial Intelligence, Ethics, and Law

Every day, artificial intelligence (AI) is becoming more entrenched in our lives. Even cheap smart phones have cameras that use AI to optimize the pictures we are taking. Try getting online assistance for a problem you are facing, and you are likely to first be met by a chatbot rather than a person. We have self-driving cars, trucks, buses, taxis, trains, etc. AI can be a force for good, but it can also be a force for bad. Cybercriminals are using AI to steal identities and corporate secrets, to gain illegal access to systems, transfer funds, avoid police detection, etc. AI is being weaponized and militarized. This raises ethical concerns, and the possibility that legal frameworks will have to be implemented to address those concerns.

Let us first touch upon some of the ethical problems we are already being confronted with. The use of facial recognition software that is being implemented in airports and big cities raises both privacy and security concerns. The same concerns pertain to using big data for Machine Learning. In previous articles, we already paid attention to the problem of bias in AI, where the AI algorithms inherit our biases because those are reflected in the data sets they use. One of the areas where the ethical issues of AI really come to the forefront is self-driving vehicles. Let us explore that example in more depth.

Sometimes, traffic accidents cannot be avoided, and those may lead to fatalities. Imagine the brakes of your car stop functioning while you are driving down a street. Ahead of you, some children are getting out of a car that is standing still; in the lane for oncoming traffic, a truck is approaching; and on the far side of the road, some people are standing on the pavement talking. What do you do? And what is a self-driving car supposed to do? With self-driving cars, the car maker may have to make the decision for you.

In ethics, this problem is usually referred to as the Trolley Problem. A runaway trolley is racing down a railroad track, and you are standing at a switch that can change the track it is on. If you do not do a thing, five people will be killed. If you switch the lever, one person will be killed. What is the right thing to do?

The Moral Machine experiment is the name of an online project where different variations of the Trolley Problem were presented to people from all over the world. It asked questions to determine whether saving humans should be prioritized over animals (including pets), passengers over pedestrians, more lives over fewer, men over women, young over old, etc. It even asked whether healthy and fit people should be prioritized over sick ones, people with a high social status over people with a low social status, or law-abiding citizens over ones with criminal records. Rather than posing the questions directly, the survey would typically present people with combined options: kill three elderly pedestrians or three youthful passengers?

Overall, the experiment gathered 40 million decisions in 10 languages from millions of people in 233 countries and territories. Surprisingly, the results tended to vary greatly from country to country, from culture to culture and along lines of economics. “For example, participants from collectivist cultures like China and Japan are less likely to spare the young over the old—perhaps, the researchers hypothesized, because of a greater emphasis on respecting the elderly. Similarly, participants from poorer countries with weaker institutions are more tolerant of jaywalkers versus pedestrians who cross legally. And participants from countries with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.” (Karen Hao, in Technology Review)

In general, people agreed across the world that sparing the lives of humans over animals should take priority, and that many people should be saved rather than few. In most countries, people also thought the young should be preserved over the elderly, but as mentioned above, that was not the case in the Far East.

Now, this of course raises some serious questions. Who is going to make those decisions and what will they be choosing, considering these different choices people suggested? Are we going to have different priorities depending on whether we are using e.g. Japanese or German self-driving cars? Or will the car makers have the car make different choices based on where the car is driving? And what if more lives can be spared if we sacrifice the driver?

When it comes to sacrificing the driver, one car manufacturer, Mercedes, has already made it clear that this will never be an option. The justification they give is that self-driving cars will lead to far fewer accidents and fatalities, and that those occasions where pedestrians are sacrificed for drivers will be cases of acceptable collateral damage. But is that the right choice, and is it really up to the car maker to make that choice?

An ethicist identified four chief concerns that must be addressed when we look for solutions with regard to ethical AI:

  1. Whose moral standards should be used?
  2. Can machines converse about moral issues? (What if e.g. multiple self-driving vehicles are involved? Will they communicate with each other to choose the best scenario?)
  3. Can algorithms take context into account?
  4. Who should be accountable?

Based on these considerations, some principles can be established to regulate the use of AI. In a previous article, we already mentioned the principles the EU and the OECD suggest. In 2018, the World Economic Forum had already suggested five core principles to keep AI ethical:

  • AI must be a force for good and diversity
  • Intelligibility and fairness
  • Data protection
  • Flourishing alongside AI
  • Confronting the power to destroy

An initiative that involves several tech companies also identified seven critical points:

  1. Invite ethics experts that reflect the diversity of the world
  2. Include people who might be negatively impacted by AI
  3. Get board-level involvement
  4. Recruit an employee representative
  5. Select an external leader
  6. Schedule enough time to meet and deliberate
  7. Commit to transparency

A deeper question, however, is whether the regulation of AI should really be left to the industry. Shouldn’t these decisions rather be made by governments? The people behind the Moral Machine experiment think so, as do many scientists and experts in ethics. Thus far, however, not much has been done when it comes to legal solutions. At present, there are no legal frameworks in place. The best we have are the guidelines that the EU and the OECD have put in place for their members, but those are merely guidelines and are not enforceable. And that is not enough. A watchdog organization in the UK warned that AI is progressing so fast that we are already having difficulties catching up. We cannot afford to postpone addressing these issues any longer.

Sources:

A Chatbot For Your Law Firm

We have talked about chatbots on several occasions in the past. Most of those targeted legal consumers. Today, we’ll have a look at how a chatbot could benefit your law firm.

Let’s start by defining what a chatbot is. A chatbot is a computer program designed to mimic human conversation. It is typically powered by rules or by more advanced Artificial Intelligence technologies like Machine Learning. Most chatbots are text-based, but more advanced ones like Siri or Alexa, are voice-based. In law firms, they are often used for simple tasks like increasing lead generation, client intake, booking an appointment or accepting payments. More advanced legal chatbots can generate, review and analyse legal documents.
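To make the distinction concrete, here is a minimal sketch of a purely rules-based, text-based intake bot; the questions and wording are illustrative. Chatbots that use Machine Learning layer natural language understanding on top of this kind of scripted flow.

```python
# A minimal rules-based, text-based chatbot for client intake.
# The questions are illustrative; ML-powered bots add natural language
# understanding on top of this kind of scripted flow.
def intake_bot():
    print("Hello! I am the firm's intake assistant (a bot, not a lawyer).")
    answers = {}
    questions = [
        ("name", "What is your name?"),
        ("matter", "In one sentence, what is your legal matter about?"),
        ("contact", "What is the best email address or phone number to reach you?"),
    ]
    for key, question in questions:
        answers[key] = input(question + " ")
    print(f"Thank you, {answers['name']}. A lawyer will contact you at {answers['contact']}.")
    return answers

if __name__ == "__main__":
    intake_bot()
```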

You may think a chatbot is not for your law firm, but you’d be mistaken. There are many benefits, both for your clients and prospective clients, as well as for your law firm.

What makes chatbots attractive to the legal consumers?

  • First, there is the unprecedented popularity of messaging apps. One of the reasons chatbots can be found everywhere is that they became popular on messaging apps. The first chatbots appeared on Facebook Messenger and soon after were offered on other platforms like Skype, WeChat, Telegram, Slack, Kik, Line, and SMS.
  • People love their mobile devices, and chatbots are typically designed for mobile first.
  • People love to text. Did you know text messages boast a 98% open rate? Chatbots benefit from this.
  • People love interaction, and chatbots are interactive. They increase engagement.
  • Chatbots are available 24/7.
  • Our online culture is an instant gratification culture. Chatbots can give instantaneous responses. Research shows that 70% of consumers prefer a chatbot to interacting with a human being, if it means they’ll get an instantaneous response.
  • Chatbots can mimic lawyers for several tasks, which means the legal consumers who need those services can get their needs met faster, and typically at a lower or no cost.

What are the benefits for your law firm?

  • Because consumers love interaction, conversational marketing has become a key part of promotion for any business, including law firms.
  • Chatbots can perform repetitive tasks that lawyers do. They have proven useful in:
    • Client acquisition and intake, as well as lead generation.
    • Answering FAQs, so you don’t have to email back and forth answering questions you are frequently asked.
    • Document generation and review.
  • Using chatbots to take care of repetitive tasks therefore leaves you more time for more productive and profitable endeavours.

So how do you get started? Once you know what you want your chatbot to do, there are plenty of tools available. In his article, “5 Often-Overlooked Steps to Building a Useful Chatbot for Your Law Practice”, Tom Martin from Lawdroid explains the best way to proceed. He outlines five steps.

Step 1 is to determine what your chatbot’s purpose is. Do you, e.g., want to use it to allow new clients to enter their details into your system and book an appointment? Or do you want a more advanced bot that, e.g., can generate or analyse legal documents? Be as specific as possible.

Step 2 is to determine where your bot lives. Will you offer your chatbot on your website, on Facebook, on WhatsApp, etc.?

In step 3, you choose your bot’s personality: its name, visual style, backstory, and conversational tone. (People enjoy a bit of humour.) Make sure you also tell people they are dealing with a chatbot.

Step 4 is to determine your chatbot’s conversation structure. Martin breaks this down into six components. First, you do some preparation, looking at essential questions like who your target audience is (e.g. existing or new clients), what they are trying to do, and what they need for that. Next, you diagram your dialog tree, mapping how the conversation can unfold. Let your chatbot start the conversation with a greeting, and make sure you manage the users’ expectations: explain what the bot can and cannot do. Martin calls the next component the “Glide Path to Goal”: the conversation should lead the user to a goal, and to reach that goal as effectively as possible, open-ended questions should be avoided. So, it’s good to suggest possible answers the user can choose from. Once the conversation has ended and the goal is achieved, it’s good to thank the user and to provide him or her with a deliverable or a specific call to action. Last but not least, make sure you pay sufficient attention to error handling.
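
To illustrate what such a conversation structure might look like under the hood, here is a small, hypothetical sketch in Python of a dialog tree with a greeting, suggested answers instead of open questions, a thank-you at the end of the glide path, and basic error handling. The node names and wording are invented for illustration; bot-building platforms typically let you design this structure visually instead of in code.

# Hypothetical sketch of a dialog tree (illustrative only; node names and wording are made up).
DIALOG_TREE = {
    "start": {
        "text": "Hello! I'm a chatbot, not a lawyer. I can book a consultation or answer "
                "a few frequently asked questions. What would you like to do?",
        "options": {"Book a consultation": "book", "Ask a question": "faq"},
    },
    "book": {
        "text": "Great. Which day suits you best?",
        "options": {"Monday": "thanks", "Wednesday": "thanks", "Friday": "thanks"},
    },
    "faq": {
        "text": "Which topic would you like to know more about?",
        "options": {"Fees": "thanks", "Opening hours": "thanks"},
    },
    "thanks": {
        "text": "Thank you! We will follow up by email with the details.",
        "options": {},  # end of the glide path: the goal is reached
    },
}

def run_dialog(tree, node="start"):
    """Walk the tree, always suggesting answers rather than asking open-ended questions."""
    while True:
        step = tree[node]
        print(step["text"])
        if not step["options"]:
            return
        choices = list(step["options"])
        for i, choice in enumerate(choices, 1):
            print(f"  {i}. {choice}")
        reply = input("> ").strip()
        if reply.isdigit() and 1 <= int(reply) <= len(choices):
            node = step["options"][choices[int(reply) - 1]]
        else:
            # basic error handling: re-ask until the user picks a listed option
            print("Sorry, please pick one of the numbered options.")

if __name__ == "__main__":
    run_dialog(DIALOG_TREE)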

In the fifth and last step, you choose what tools you will use to build your bot. Martin’s article includes a checklist and a list of available platforms.

The checklist includes the following items:

  • Is creating the chatbot free or paid?
  • Is any coding required?
  • What are the publishing platforms for the chatbot?
  • Does it use or need Artificial Intelligence?
  • Which third-party integrations with apps like Gmail, MailChimp, Office 365, etc. does it offer?
  • What are the supported languages?
  • What is the recommended use? (You don’t need a bot that uses Machine Learning, e.g., if you only want your new clients to fill out their details.)

If you want to build your legal chatbot, the following platforms are currently available:

(In his article, Martin goes over the checklist items for each of these platforms).

Let’s leave it at that for now. We’ve only been able to scratch the surface of this topic. The articles listed below can help you further.


Sources:


Legal AI is still biased in 2019

In October 2017, we published an article on how legal Artificial Intelligence systems had turned out to be as biased as we are. One of the cases that had made headlines was the COMPAS system, which is risk assessment software used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias, one in favour of white defendants, and one against black defendants.

To this day, the problems persist. By now, other cases have come to light. Similar to the problems with the COMPAS system, algorithms used in Kentucky for cash bail applications, e.g., consistently preferred white defendants. The situation is similar in the UK, where a committee concluded that bias and inaccuracy render artificial intelligence (AI) algorithmic criminal justice tools unsuitable for assessing risk when deciding whether to imprison people or release them. Algorithmic bias was also discovered in systems used to rank teachers and in natural language processing, where a racial bias was found with regard to hate speech, as well as a more general gender bias.

To research and address the problems with Artificial Intelligence, the ‘AI Now Institute’ was created. Bias is one of the four areas they specifically focus on. They found that bias may exist in all sorts of services and products. A key challenge we face in addressing the problems is that “crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren’t transparent about how they operate.”

So, what is algorithmic bias? Wikipedia defines it as “systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge due to many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm.”

The AI Now Institute clarifies that artificial intelligence systems learn from data sets, and that those data sets reflect the social, historical and political conditions in which they were created. As such, they reflect existing biases.

It may be useful to make a distinction between different types of algorithmic bias. Eight different types have been identified thus far:

  1. Sample bias is the most common form of bias. It is when the samples used for the data sets are themselves contaminated with existing biases. The examples given above are all cases of sample bias.
  2. Prejudice bias is one of the causes of sample bias. Prejudice occurs as a result of cultural stereotypes in the people involved in the process. A good example of this is the New York Police Department’s stop-and-frisk practices. In approximately 83 percent of the cases, the person who was stopped was either African American or Hispanic, while both groups combined make up just over half of the population. An AI system that learns from a data set like that will inherit the human racial bias that considers people more likely to be suspicious if they’re African American or Hispanic. So, because of prejudice, factors like social class, race, nationality, religion, and gender can creep into the model and completely skew the results.
  3. Confirmation bias is another possible cause of sample bias. Confirmation bias is the tendency to give preference to information that confirms one’s existing beliefs. If AI systems are used to confirm certain hypotheses, the people selecting the data may – even subconsciously – be inclined to select the data to fit the hypothesis they’re trying to prove.
  4. Group Attribution Bias is the type of bias where the data set contains an asymmetric view of a certain group. An example of this was Amazon’s AI assistant for its Human Resources department. Because Amazon had far more male engineers working for them than female engineers, the system concluded that male engineers had to be given preference over female engineers.
  5. The Square Peg Bias has to do with selecting a data set that is not representative and is chosen because it just happens to be available. It is also known as the availability bias.
  6. The Bias-variance Trade-off. This is a bias that is introduced into the system by mathematically over-correcting for variance. (An example to clarify: say you have a data set where 30% of the people involved are female, so females are effectively underrepresented in your data set. To compensate, you use mathematical formulas to ‘correct’ the results; see the sketch after this list.) This mathematical correction can introduce new biases, especially in more complex data sets, where the corrections could lead to missing certain complexities.
  7. Measurement Bias has to do with technical flaws that contaminate the data set. Say you want to weigh people and use scales, but they’re not measuring correctly.
  8. Stereotype Bias. The example given above with Amazon also qualifies as a gender stereotype bias. There are more male engineers than female engineers. That may lead systems to favour male engineers, and/or to preserve the ratio existing in the data set.
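
To make the reweighting example under point 6 concrete, here is a small worked sketch in Python, with entirely made-up numbers, of how ‘correcting’ for an underrepresented group changes a statistic.

# Worked sketch of the reweighting example under point 6 (made-up numbers, illustration only).
# 30% of the records are female; we "correct" by making each group contribute equally.
records = (
    [{"gender": "F", "score": s} for s in [60, 70, 80]]                     # 3 records = 30%
    + [{"gender": "M", "score": s} for s in [50, 55, 60, 65, 70, 75, 80]]   # 7 records = 70%
)

def weighted_mean(rows, weights):
    total = sum(weights[r["gender"]] * r["score"] for r in rows)
    norm = sum(weights[r["gender"]] for r in rows)
    return total / norm

raw = weighted_mean(records, {"F": 1.0, "M": 1.0})
# Reweight so both groups contribute equally: weight = 0.5 / group share
corrected = weighted_mean(records, {"F": 0.5 / 0.3, "M": 0.5 / 0.7})

print(f"Unweighted mean score: {raw:.1f}")        # 66.5
print(f"Reweighted mean score: {corrected:.1f}")  # 67.5

In this toy example the shift is small and intended, but in more complex data sets the same kind of correction can interact with other variables and introduce distortions of its own, which is exactly the trade-off described above.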

The good news is that as we are getting better at understanding and identifying the different types of algorithmic bias, we also are getting better at finding solutions to counteract them.


Sources:


Legal Bots in 2019

It has been two years since we published our article with an overview of legal bots. Since then, a lot has happened, and on several occasions legal bots made headlines: we have dedicated articles, e.g., to legal bots beating lawyers at specific tasks, and to the rise of robot clerks, prosecutors and judges. Overall, we have witnessed an unprecedented proliferation of digital assistants that are transforming public service and legal service delivery. We now have bots that offer services for legal consumers, as well as for the various legal professions: lawyers, prosecutors, judges, notaries, and paralegals.

By now, there are so many different legal bots that it is no longer possible to mention all of them within the scope of one blog article. In fact, it would probably be possible to dedicate entire articles to each individual bot. So, we will have a look at how the ones we discussed two years ago are doing, what new players have followed their examples, and at some of the more interesting recent arrivals on the scene.

Back in July 2017, DoNotPay already was the most impressive legal bot. What started as a simple bot to appeal traffic tickets evolved into a system that assists legal consumers in the UK, the US, and Canada on a wide range of topics, including seeking asylum, claiming damages from airlines, filing harassment claims at work, etc. Since then it has increased the services it offers, and now also assists, e.g., with divorces. More importantly, DoNotPay has become a platform that can assist you in creating your own legal bots. In early July 2019, Joshua Browder announced DoNotPay had raised 4.6 million USD in seed funding. So, we can expect it to continue being an important player in the market.

Lawdroid started off as an intelligent legal chatbot that assisted entrepreneurs in the US in incorporating their business. Soon after, Lawdroid became a platform to create bots, as it began to create legal chatbots on behalf of lawyers. Since then, it has further expanded its services, and, e.g., now also offers its own divorce bot, called Larissa.

The examples of DoNotPay and Lawdroid were followed by others who now, too, are offering a platform to create legal bots. Worth mentioning are Josef and Automio, and even Facebook. Any lawyer can create a legal chatbot on Facebook Messenger. Getting started is as easy as buying and customizing commercial templates that are available from as little as 50 USD.

Billybot was the first legal clerk bot that helped people find a lawyer near them. Its example was widely followed. In a previous article, we mentioned Victor, the clerk the Flemish Order of Bar Associations has created.

In the last two years, Lawbot in the UK first changed its name to Elixirr and then to CaseCrunch. They expanded the range of bots they offer, as well as the countries in which those bots are available. They made headlines when their Case Cruncher Alpha competed with over 100 lawyers in predicting the outcomes of cases and won. Similarly, LawGeex was better at evaluating Non-Disclosure Agreements than its human counterparts. By now, more and more bots are available that try to predict the outcomes of cases. One of them, Procezeus, e.g., focuses on landlord-tenant disputes.

Lawbot probably also was the first to offer a divorce bot. That example, too, found many followers. We already mentioned that both Lawdroid and DoNotPay now also offer divorce bots. Two other ones worth mentioning are the divorce bot on Reddit, and Hello Divorce by Erin Levine, which streamlines and automates the process of divorces in California to the point that in most cases no intervention from lawyers is needed.

Lawbot also offered a legal research assistant, called Denninx. By now, many legal research assistants are available. Best known are IBM’s Ross and Eve. Most legal publishers, too, are providing digital assistants to help with legal research.

Below follows a random selection of other bots that were discussed in the literature.

  • In the US, Coralie is a virtual assistant that helps survivors of military sexual trauma connect with services and resources. It won the Tech for Justice hackathon during the American Bar Association’s Techshow.
  • Docubot is a chatbot that can be integrated in lawyers’ websites to help consumers generate legal documents. It also assists the lawyers with client intake through their website.
  • Another bot using the name LawBot comes from the Indian company LawRato. It helps users get answers to legal questions and recommendations of a lawyer.
  • Legalibot in Spain helps users compose legal documents and contracts through Facebook Messenger.
  • In Australia, Lexi made headlines. This bot can be used to generate free privacy policy documents or non-disclosure agreements. It asks questions and uses the responses to give general information and create a document with the relevant details.
  • Also in Australia, Speak with Scout is a chatbot that works through Facebook Messenger to provide legal guidance as well as references to a lawyer.
  • Still in Australia, Parker is a chatbot that uses natural language processing and IBM’s Watson platform to answer users’ questions about data breaches and privacy law.
  • In the UK, RentersUnion is a chatbot that provides legal advice on housing issues for residents of London. The bot analyses a user’s tenancy agreement and then helps generate letters or recommends appropriate action.
  • In the US, Visabot is a legal chatbot that can assist with multiple immigration issues.
  • Also in the US, and more specifically in Utah, Solosuit is a chatbot/expert system that handles debt law. It asks for all the relevant information it needs, and then fills out the appropriate legal document.


Also worth mentioning is that several bar associations are considering officially recognizing or approving certain bots that offer legal services. That way, legal consumers can have some reassurance that the advice they are getting is trustworthy.


Sources:


International Guidelines for Ethical AI

In the last two months, i.e. in April and May 2019, both the EU Commission and the OECD published guidelines for trustworthy and ethical Artificial Intelligence (AI). In both cases, these are only guidelines and, as such, are not legally binding. Both sets of guidelines were compiled by experts in the field. Let’s have a closer look.

“Why do we need guidelines for trustworthy, ethical AI?” you may ask. Over the last few years, there have been multiple calls from experts, researchers, lawmakers and the judiciary to develop some kind of legal framework or guidelines for ethical AI. Several cases have been in the news where the ethics of AI systems came into question. One of the problem areas is bias with regard to gender or race, etc. There was, e.g., the case of COMPAS, which is risk assessment software used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias, one in favour of white defendants, and one against black defendants. More recently, Amazon shelved its AI HR assistant because it systematically favoured male applicants. Another problem area is privacy, where there are concerns about deep learning / machine learning and about technologies like facial recognition.

In the case of the EU guidelines, another factor is at play as well. Both the US and China have a substantial lead over the EU when it comes to AI technologies. The EU saw its niche in trustworthy and ethical AI.

EU Guidelines

The EU guidelines were published by the EU Commission on 8 April 2019. (Before that, in December 2018, the European Parliament had already published a report in which it asked for a legal framework or guidelines for AI. The EU Parliament suggested AI systems should be broadly designed in accordance with The Three Laws of Robotics). The Commission stated that trustworthy AI should be:

  • lawful, i.e. respecting all applicable laws and regulations,
  • ethical, i.e. respecting ethical principles and values, and
  • robust, both from a technical perspective and taking into account its social environment.

To that end, the guidelines put forward a set of 7 key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical Robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm, too, can be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should consider the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

A pilot project will be launched later this year, involving the main stakeholders. It will review the proposal more thoroughly and provide feedback, upon which the guidelines can be fine-tuned. The EU also invites interested businesses to join the European AI Alliance.

OECD

The OECD consists of 36 members, approximately half of which are EU members. Non-EU members include the US, Japan, Australia, New Zealand, South Korea, Mexico and others. On 22 May 2019, the OECD Member Countries adopted the OECD Council Recommendation on Artificial Intelligence. As is the case with the EU guidelines, these are recommendations that are not legally binding.

The OECD Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these value-based principles, the OECD also provides five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

As you can see, many of the fundamental principles are similar in both sets of guidelines. And, as mentioned before, these EU and OECD guidelines are merely recommendations that are not legally binding. As far as the EU is concerned, at some point in the future, it may push through actual legislation that is based on these principles. The US has already announced it will adhere to the OECD recommendations.


Sources: