
International Guidelines for Ethical AI

In the last two months, i.e. in April and May 2019, both the EU Commission and the OECD published guidelines for trustworthy and ethical Artificial Intelligence (AI). In both cases, these are only guidelines and, as such, are not legally binding. Both sets of guidelines were compiled by experts in the field. Let’s have a closer look.

“Why do we need guidelines for trustworthy, ethical AI?” you may ask. Over the last few years, there have been multiple calls from experts, researchers, lawmakers and the judiciary to develop some kind of legal framework or guidelines for ethical AI. Several cases have been in the news where the ethics of AI systems came into question. One of the problem areas is bias with regard to gender, race, etc. There was, e.g., the case of COMPAS, risk assessment software that is used to predict the likelihood of somebody being a repeat offender. It turned out the system had a double racial bias: one in favour of white defendants, and one against black defendants. More recently, Amazon shelved its AI HR assistant because it systematically favoured male applicants. Another problem area is privacy, where there are concerns about deep learning / machine learning and about technologies like facial recognition.

In the case of the EU guidelines, another factor is at play as well. Both the US and China have a substantial lead over the EU when it comes to AI technologies. The EU saw its niche in trustworthy and ethical AI.

EU Guidelines

The EU guidelines were published by the EU Commission on 8 April 2019. (Before that, in December 2018, the European Parliament had already published a report in which it asked for a legal framework or guidelines for AI. The EU Parliament suggested AI systems should be broadly designed in accordance with The Three Laws of Robotics). The Commission stated that trustworthy AI should be:

  • lawful, i.e. respecting all applicable laws and regulations,
  • ethical, i.e. respecting ethical principles and values, and
  • robust, both from a technical perspective and with regard to its social environment.

To that end, the guidelines put forward a set of 7 key requirements:

  • Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that unintentional harm, too, can be minimized and prevented.
  • Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
  • Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
  • Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should consider the environment, including other living beings, and their social and societal impact should be carefully considered.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

A pilot project will be launched later this year, involving the main stakeholders. It will review the proposal more thoroughly and provide feedback, upon which the guidelines can be finetuned. The EU also invites interested businesses to join the European AI Alliance.

OECD

The OECD consists of 36 members, approximately half of which are EU members. Non-EU members include the US, Japan, Australia, New Zealand, South Korea, Mexico and others. On 22 May 2019, the OECD member countries adopted the OECD Council Recommendation on Artificial Intelligence. As is the case with the EU guidelines, these are recommendations that are not legally binding.

The OECD Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  4. AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  5. Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these values-based principles, the OECD also provides five recommendations to governments:

  1. Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  2. Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  3. Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  4. Empower people with the skills for AI and support workers for a fair transition.
  5. Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

As you can see, many of the fundamental principles are similar in both sets of guidelines. And, as mentioned before, these EU and OECD guidelines are merely recommendations that are not legally binding. As far as the EU is concerned, at some point in the future, it may push through actual legislation that is based on these principles. The US has already announced it will adhere to the OECD recommendations.

 


A chatbot, a robot prosecutor and a robot judge

No, this is not the first line of a joke about three robots that walked into a bar. It refers to three items that were in the news recently. We were already familiar with chatbots and robot lawyers. Now the Order of Flemish Bar Associations has launched its own chatbot; San Francisco is running a pilot project with a robot district attorney; and Estonia plans a robot judge to handle small damages claims. Let’s have a closer look at each.

The chatbot of the ‘Orde van Vlaamse Balies’ (Order of Flemish Bar Associations)

On 10 April 2019, the ‘Orde van Vlaamse Balies’ announced the launch of its new chatbot, called Victor. The initiative was taken by some bar associations, and the chatbot is meant to facilitate access to legal assistance. It does this in two ways. On the one hand, like its British counterpart Billybot, Victor helps you find a lawyer. He asks some questions to determine what area of practice your legal issue relates to. He then suggests some nearby specialist lawyers, based on the topic and the region you live in.

But Victor does more than that. The chatbot can also check whether you are eligible for a pro bono lawyer or for other types of legal assistance like reduced fees. He will ask the relevant questions, and if you are eligible, he will let you know what documents are required. If you have further questions he can’t answer, Victor will give you the contact details of the bar association that can provide you with additional answers.

Victor can be found at www.advocaat.be, as well as on the sites of the bar associations that were involved in its development: www.baliewestvlaanderen.be, www.balieprovincieantwerpen.be, and www.balielimburg.be. Victor is only available in Dutch.

The Robot District Attorney in San Francisco

About a year ago, in May 2018, the office of the District Attorney in San Francisco decided to launch a pilot project to clear convictions using algorithmic justice. Let’s give some background information first. In November 2016, recreational use of marijuana was legalized in California. For decades before the legalization of marijuana, thousands of people had received convictions for marijuana use. And now that it had become legal, the idea was to clear those preexisting convictions, and to use an algorithm to determine which cases were eligible for record clearance. As such, the algorithm is a triage algorithm. Once it determines a case is eligible, it automatically fills out the required forms. The San Francisco District Attorney then files the motion with the court.
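The triage step described here is, at its core, a set of eligibility rules applied to conviction records in bulk, followed by automatic form filling for the records that qualify. The sketch below is a minimal, hypothetical illustration of that idea in Python; the record fields, offence codes and eligibility criteria are invented for the example and are not the actual rules or software used by the San Francisco District Attorney’s office.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Conviction:
    case_id: str
    offence_code: str       # hypothetical code identifying the offence
    conviction_date: date
    is_felony: bool

# Hypothetical list of marijuana-related offence codes eligible for clearance.
ELIGIBLE_CODES = {"HS11357", "HS11359", "HS11360"}

def is_eligible(c: Conviction) -> bool:
    """Simplified eligibility rule: the offence must be on the marijuana-related
    list and the conviction must predate legalization (November 2016)."""
    return c.offence_code in ELIGIBLE_CODES and c.conviction_date < date(2016, 11, 9)

def triage(records: list[Conviction]) -> list[Conviction]:
    """Return the subset of conviction records that qualifies for clearance;
    these would then feed into automatic form filling and a motion to the court."""
    return [c for c in records if is_eligible(c)]
```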

Since the pilot project started, it has reviewed 43 years of eligible convictions. This has led to 3 038 marijuana misdemeanors being dismissed and sealed, and to recalling and re-sentencing up to 4 940 other felony marijuana convictions.

Given the success of the project, the plan is now to expand it, to eventually clear around 250 000 convictions.

The Robot Judge in Estonia

Finally, inspired by the success of the DoNotPay chatbot that offers free legal assistance in 1 000 legal areas, the Estonian government decided some weeks ago to create its own robot judge. The robot judge is meant to adjudicate small claims disputes of less than €7 000. Officials hope that the system would help clear a backlog of cases for judges and court clerks. At present the project is still in the earliest stages, but a pilot project that deals with contract disputes is scheduled for launch later this year. Parties are expected to upload the relevant information and documents, which the system will then analyse in order to come to a verdict. Parties will be given the option to appeal to a human judge. AI systems have been used before to assist in the triage of cases and to assist judges in their decision-making process. An autonomous robot judge, however, is a first.

So, we now have online courts, robot lawyers, prosecutors and judges. The idea that we might one day have cases handled without intervention of human lawyers suddenly has become a lot more real.

 


An introduction to smart contracts

In a previous article, we wrote about Artificial Intelligence (AI) and contracts. AI is having an impact in three areas when it comes to contracts: 1. contract review, 2. contract management and automation, and 3. smart contracts. While smart contracts are automated contracts, what sets them apart from other automated contracts is the use of Blockchain technology.

What are smart contracts? We’ll combine elements from the definitions of TechRepublic and Investopedia to explain. A smart contract is a software-based contract between a buyer and a seller. The software automates the business processes and the conditions of fulfilment contained within the contract. The code programmed into the contract makes it self-executing: it takes action whenever a specific condition within the contract is triggered. The code and the agreements contained therein exist across a distributed, decentralized Blockchain network. Smart contracts permit trusted transactions and agreements to be carried out among disparate, anonymous parties without the need for a central authority, legal system, or external enforcement mechanism. They render transactions traceable, transparent, and irreversible. Because a smart contract is software that automates business processes and contract fulfilment, it eliminates the need for supervision by managers and middlemen.

Let’s give an example: A is a supplier of products for B. Every month, B places an order with A. It makes sense to automate this process. The smart contract is a piece of software that, e.g., would contain the code that says if an order is received by A from B, and B is not in arrears, then that order must be executed. Now, with smart contracts these transactions are typically registered in a distributed, decentralized Blockchain network of ledgers. In a previous article we explained that Blockchain is a technology that registers transactions in a ledger, where everybody in the network has a copy of that ledger. Transactions are secured by using a verification code that is calculated based on all previous transactions in the ledger. In essence, to forge a transaction, one would therefore have to forge all registrations of all transactions in all ledgers.
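To make the example more concrete, here is a minimal, hypothetical sketch in Python of the two ingredients described above: a contract condition that only executes an order if the buyer is not in arrears, and a toy ledger in which every entry is chained to the previous one by a hash, so that tampering with any past transaction breaks the chain. Real smart contracts are deployed as code on a blockchain platform (for instance in Solidity on Ethereum); this sketch only illustrates the logic.

```python
import hashlib
import json

class Ledger:
    """A toy append-only ledger: each entry stores a hash computed over the
    previous entry's hash plus the new transaction, forming a chain."""
    def __init__(self):
        self.entries = []

    def add(self, transaction: dict) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(transaction, sort_keys=True) + prev_hash
        entry_hash = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"tx": transaction, "hash": entry_hash})

def execute_order(ledger: Ledger, buyer: str, amount: int, in_arrears: bool) -> bool:
    """The contract condition: execute and record the order only if the buyer
    is not in arrears."""
    if in_arrears:
        return False
    ledger.add({"type": "order", "buyer": buyer, "amount": amount})
    return True

ledger = Ledger()
print(execute_order(ledger, buyer="B", amount=100, in_arrears=False))  # True: recorded
print(execute_order(ledger, buyer="B", amount=50, in_arrears=True))    # False: rejected
```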

The benefits of smart contracts are clear: the whole process of transactions between parties can be automated, and by using Blockchain technology one has virtually irrefutable proof of the transactions. Add to this that programming code tends to be less ambiguous than the generic legalese of traditional contracts, so the chances of disputes about the interpretation of smart contracts are smaller.

The usage of smart contracts is expected to grow fast. A survey published in Forbes Magazine predicts that by 2022, 25% of all companies will be using them. Basically, in any market where Blockchain technology is useful, one can expect smart contracts to be useful, too. Smart contracts can also be the perfect complement to EDI (Electronic Data Interchange). At present, smart contract applications are already being used in – or developed for – supply chains and logistics, finance and securities, real estate, management and operations, healthcare, insurance, etc.

Still, one has to be aware of the limitations of smart contracts, as there are a number of legal issues to take into account. The name ‘smart contracts’ is misleading in that they aren’t really contracts but software. As such, there are legal concerns with regard to:

  • Offer and acceptance: is there even a binding contract, if there is no human interaction or supervision, and the transaction is completely executed automatically?
  • The evidentiary value: smart contracts are not written evidence of agreed rights and obligations, because they encapsulate only the portion of the rights and obligations that relates to contractual performance.
  • Jurisdiction: is the area of jurisdiction clearly defined in case of a conflict or dispute?
  • Dispute Resolution: are there any dispute resolution mechanisms in place?

When considering working with smart contracts, it is therefore a good idea to first come to a framework agreement in which these issues are addressed. And those will preferably still be written by lawyers.

 


 

Machine Learning Applications for Lawyers

The first legal applications of Artificial Intelligence already appeared several decades ago, but they never really took off. That has changed over the last few years. A lot of the recent progress is thanks to advancements in Machine Learning (ML), Deep Learning (DL), and Legal Analytics (LA). As many lawyers are not familiar with these terms, we will first explain the concepts in this article. Then we will focus on some applications, and finish with some general considerations.

Let us start with the three terms Artificial Intelligence, Machine Learning and Deep Learning, and how they relate to each other. The first thing to know is that Artificial Intelligence is the broadest term. Machine Learning is a subset of Artificial Intelligence, and Deep Learning in turn is a subset of Machine Learning.

Techopedia defines Artificial Intelligence (AI) as “an area of computer science that emphasizes the creation of intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include: Speech recognition, Learning, Planning, Problem solving.” Examples of legal AI applications that are not based on machine learning include expert systems, decision tables, certain types of process automation (focused on repetitive tasks), and simple legal chatbots that focus on one or more specific tasks.

Machine Learning (ML) is one branch of AI. It is based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. It is a method of data analysis that automates analytical model building. To this end, it uses statistical techniques that give computer systems the ability to “learn” (i.e., progressively improve their performance on a specific task) from the data, without being explicitly programmed.

In an article on TechRepublic, Hope Reese explains that Deep Learning (DL) “uses some ML techniques to solve real-world problems by tapping into neural networks that simulate human decision-making. Deep learning can be expensive, and requires massive datasets to train itself on. That’s because there are a huge number of parameters that need to be understood by a learning algorithm, which can initially produce a lot of false-positives.”

The process of learning in both Machine Learning and Deep Learning can be supervised, semi-supervised or unsupervised.

When applied to legal data, Machine Learning is often referred to as Legal Analytics. It “is the application of data analysis methods and technologies within the field of law to improve efficiency, gain insight and realize greater value from available data.” (TechTarget)
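As a small illustration of what “learning from data without being explicitly programmed” looks like in a legal setting, the sketch below trains a text classifier to label contract clauses by type using scikit-learn. The training sentences and labels are invented for the example; a real legal analytics system would be trained on far larger annotated datasets.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: clause texts with their clause-type labels.
clauses = [
    "Each party shall keep the other party's information strictly confidential.",
    "The receiving party shall not disclose confidential information to third parties.",
    "Any dispute arising out of this agreement shall be settled by arbitration.",
    "Disputes shall be finally resolved under the rules of the arbitration institute.",
    "This agreement shall be governed by the laws of England and Wales.",
    "The governing law of this contract is the law of the State of New York.",
]
labels = ["confidentiality", "confidentiality",
          "arbitration", "arbitration",
          "governing_law", "governing_law"]

# Pipeline: turn text into TF-IDF features, then fit a logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(clauses, labels)

# No explicit rules were programmed; the model inferred patterns from the data.
print(model.predict(["All information exchanged shall remain confidential."]))
```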

Let us have a look at some of the applications of machine learning in the legal field. The applications that are available are not just for lawyers, but also, e.g., for courts and law enforcement.

In a previous article, we already mentioned Legal Research, eDiscovery and Triage Services. Legal databases are increasingly using AI to present you with the relevant laws, statutes, case law, etc. There are eDiscovery services for lawyers as well as for law enforcement that focus on finding relevant digital evidence. Both typically use triage services to rank the results in order of relevance.

Legal analytics is also being used for due diligence (where the system creates and uses intelligent checklists) and for document review, including contract review. In some cases, the system can even go a step further and assist with the writing of documents and contracts (intelligent document assembly). Some more advanced examples of process automation, e.g. for divorce cases where the whole procedure is largely automated, also rely on ML algorithms.

One of the fields where legal analytics has been making headlines is predictive analysis: using statistical models, the system makes predictions. Predictive analysis is not just used by lawyers, but in the broader legal field: there are also applications for courts and for law enforcement, for example. There are systems, e.g., for the following purposes (a simplified sketch of such a prediction model follows the list):

  • Crime prediction and prevention, predicting future crime hot spots.
  • Pretrial release and parole, and crime recidivism prediction.
  • Judicial analytics and litigation analytics, predicting the chances of success or the anticipated outcome in certain cases. These systems can, e.g., be specific enough to take previous rulings by the presiding judge into account.
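As announced above, here is a deliberately simplified, hypothetical sketch of what litigation outcome prediction boils down to: a model trained on features of past cases that outputs a probability of success for a new case. The features and data are invented; real systems use far richer inputs and much larger datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented features for past cases:
# [claim amount, presiding judge's historical grant rate,
#  written contract present (0/1), number of prior similar wins]
X_train = np.array([
    [5000, 0.62, 1, 3],
    [20000, 0.40, 0, 0],
    [750, 0.70, 1, 5],
    [15000, 0.35, 0, 1],
    [3000, 0.55, 1, 2],
    [25000, 0.30, 0, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])  # 1 = claimant won, 0 = claimant lost

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Estimated probability of success for a new, hypothetical case.
new_case = np.array([[8000, 0.58, 1, 2]])
print(model.predict_proba(new_case)[0][1])
```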

ML is also successfully being used in crime detection. There are AI systems that monitor what cameras are registering, or that use a network of microphones to detect shots being fired. In the news recently was a story about how facial recognition software was used to scan people attending a concert, which led to several arrests being made.

These are just some examples. An article that was recently published in Tech Emergence (“AI in Law and Legal Practice – A Comprehensive View of 35 Current Applications”) gives an overview of 35 applications.

So, a lot of progress has been made in recent years in the fields of legal analytics and legal machine learning. Still, there are certain issues and limitations to take into account when it comes to the legal field. A first issue has to do with privacy and confidentiality. Law firms that want to use their client data may need consent from those clients, and will have to anonymize the data. They also have to remain GDPR compliant. A second issue has to do with bias: in a previous article we mentioned how these AI systems inherit our biases. A third issue has to do with transparency: most neural networks present a conclusion without explaining how they came to that conclusion. If such a system is used in criminal cases, this can constitute a violation of the rights of the defence. In civil cases, too, judges have to explain their decisions, and merely referring to the decision an AI system made is not sufficient. Lastly, there is also a cognitive aspect to the work lawyers do, and at present the cognitive abilities of legal ML systems are (still) extremely limited. They do not, e.g., know how to appreciate or emulate common sense.

 


 

Robot Law

A few months ago, in January 2017, the European Parliament’s Legal Affairs Committee approved a report that outlines a possible legal framework to regulate the interactions between a) humans, and b) robots and Artificial Intelligence systems. The report is quite revolutionary. It proposes, e.g., giving certain types of robots and AI systems personhood, as “electronic persons”: these electronic persons would have rights and obligations, and the report suggests that they should obey Isaac Asimov’s Laws of Robotics. The report also advises that the manufacturers of robots and AI systems should build in a ‘kill switch’ to be able to deactivate them. Another recommendation is that a European Agency for Robotics and AI be established that would be capable of responding to new opportunities and challenges arising from technological advancements in robotics.

The EU is not alone in its desire to regulate AI: similar (though less far reaching) reports were published in Japan and in the UK. These different initiatives are in effect the first attempts at creating Robot Law.

So, what is Robot Law? On the blog of the Michalsons Law Firm, Emma Smith describes Robot Law as covering “a whole variety of issues regarding robots including robotics, AI, driverless cars and drones. It also impacts on many human rights including dignity and privacy.” It deals with the rights and obligations of AI systems, manufacturers, consumers, and the public at large in their relationship to AI and how it is being developed and used. As such, it is different from, and far broader than, Asimov’s Laws of Robotics, which only cover rules that robots themselves have to obey.

Why would we need Robot Law? For a number of reasons. AI has become an important contributing factor to the transformation of society, and that transformation is happening extremely fast. The AI Revolution is often compared to the Industrial Revolution, but that comparison is partly flawed, because of the speed, scale and pervasiveness of the AI Revolution. Some reports claim that the AI Revolution is happening up to 300 times faster than the Industrial Revolution. This partly has to do with the fact that AI is already being used everywhere, and that pervasiveness is only expected to increase rapidly. Think, e.g., of the Internet of Things, where everything is connected to the Internet, and massive amounts of data are being mined.

The usage of AI already raises legal issues of control, privacy, and liability. Further down the line we will be confronted with issues of personhood and Laws of Robotics. But AI also has wide-reaching societal effects. Think, e.g., of the job market and the skill sets that are in demand. These will change dramatically. In the US alone, for example, driverless cars and trucks are expected to cost at least 3 million drivers their jobs. So, yes, there is a need for Robot Law.

Separate from the question of whether we need Robot Law is the question of whether we already need legislation now, and/or how much should be regulated at this stage. When trying to answer that question, we are met with almost diametrically opposing views.

The naysayers claim that it is still too soon to start thinking about Robot Law. The argument is that AI and Robotics are still in their infancy, and that at this stage there is a need to first explore and develop them further. Not only are there still too many unanswered questions, but in their view regulation at this stage could stifle the progress of AI. All we would have to do is adapt existing laws. In that context, Roger Bickerstaff, e.g., speaks of:

  • Facilitative changes – these are changes to law that are needed to enable the use of AI.
  • Controlling changes – these are changes to law and new laws that may be needed to manage the introduction and scope of operation of robotics and artificial intelligence.
  • Speculative changes – these are the changes to the legal environment that may be needed as robotics and AI start to approach the same level of capacity and capability as human intelligence – what is often referred to as the singularity point.

Others, like the authors of the aforementioned reports, disagree. They argue that there already are issues of privacy, control, and liability. There also is the problem of transparency: how do neural networks come to their conclusions, e.g., when they recommend whether somebody is eligible for parole or a loan, or when they assess risks, e.g., for insurance? How does one effectively appeal against such decisions if it is not known how the AI system reaches its conclusions? Furthermore, the speed, scale and pervasiveness of the AI Revolution and its societal effects demand a proactive approach. If we don’t act now, we will soon be faced with problems that we know will arise.

Finally, in his paper, Ryan Calo points out, perhaps surprisingly, that there already is over half a century of case law with regard to robots. These cases deal with both robots as objects and robots as subjects. He rightfully points out that “robots tend to blur the lines between person and instrument”. A second, and more alarming, insight of his study was “that judges may have a problematically narrow conception of what a robot is”. For that reason alone, it would already be worthwhile to start thinking about Robot Law.

 


When robot lawyers and lawyers compete

‘Man versus Machine’ contests are always popular. We have already seen such contests, in which Artificial Intelligence (AI) systems defeated human experts at chess, Jeopardy, and Go. And in recent months, robot lawyers have been giving human lawyers a run for their money, too. In the headlines this week is a story of an Artificial Intelligence system outperforming human lawyers in reviewing non-disclosure agreements. Some months ago, in October 2017, there was a similar story about an AI system that was better at predicting the outcome of cases about Payment Protection Insurance than human lawyers were. Let’s start with the latter.

Case Cruncher Alpha

One of the first occasions where robot lawyers beat their human counterparts happened in October 2017, in the UK. The Artificial Intelligence system taking on the human lawyers was Case Cruncher Alpha, developed by Casecrunch. Casecrunch is the current name for the company that previously had developed LawBot, Lawbot-X, and Divorcebot. It was started by a group of Cambridge University Law students. (See our article on Legal Chatbots). Case Cruncher Alpha is designed to study existing legal cases in order to predict the outcome of similar cases.

In the contest, Case Cruncher Alpha competed with 112 lawyers. All of them were commercial lawyers from London. None of them were experts in the specific matter the cases dealt with.

The purpose of the contest was to predict the outcome (success or failure) of cases dealing with Payment Protection Insurance (PPI). The topic is well known in the UK, where several banks were ordered to pay damages to customers to whom they had sold insurance products that they didn’t require. The contestants were given real cases that had been handled by the UK Financial Ombudsman Service. The lawyers were permitted to use all available resources.

The result was a staggering defeat for the lawyers: they achieved an average accuracy score of 62%, while Case Cruncher Alpha scored 86.6%. (No data were provided on how individual lawyers did, or on whether any of them scored better than the AI system.)

Richard Tromans from artificiallawyer.com rightfully points out that evaluating the results of this contest is tricky. They do not mean that machines are generally better at predicting outcomes than lawyers. What they do show is that if the question is defined precisely enough, machines can compete with, and sometimes outperform human lawyers. At the same time, this experiment also suggests that there may be factors other than legal factors that contributed to the outcome of the cases. But what the contest undoubtedly made clear was that legal prediction systems can solve legal bottlenecks.

Ian Dodd was one of the judges. He believes AI may replace some of the grunt work done by junior lawyers and paralegals, but that no machine can talk to a client or argue in front of a High Court judge. In his view, “The knowledge jobs will go, the wisdom jobs will stay.”

LawGeex

Another occasion where robot lawyers got the upper hand was in a contest in February 2018, in the US. In this case, the Artificial Intelligence system was developed by LawGeex, the company that was started in 2014 by commercial lawyer Noory Bechor (cf. our previous article on AI and contracts). Noory Bechor had come to the realization that 80% of his work consisted of reviewing contracts and was highly repetitive. He believed it could be done faster, cheaper and more effectively by a computer, and that’s why he started LawGeex.

In this challenge, the AI system competed with 20 lawyers in reviewing Non-Disclosure Agreements. All 20 lawyers were experts in the field. The LawGeex report explains: “The study asked each lawyer to annotate five NDAs according to a set of Clause Definitions. Each lawyer was given four hours to find the relevant issues in all five NDAs.” In that time, they had to identify 30 legal issues, including arbitration, confidentiality of relationship, and indemnifications. They were given scores reflecting how accurately they identified each issue.

Once again, the AI system did better than the human lawyers. It achieved an accuracy rate of 94%, whereas the lawyers achieved an average of 85%. There were, however, considerable differences in how well the lawyers did. The lowest performing lawyer, e.g., scored only 67%, while the two best performing lawyers matched or beat the AI system, achieving, depending on the source, either 94% or 95% accuracy. For one specific NDA, one lawyer identified only 55% of the relevant issues, while for another NDA the AI system reached a score of 100%, against 97% for the best human lawyers.

The human lawyers were no competition for the AI system when it came to speed. The AI system cleared the job in 26 seconds, while the lawyers took 92 minutes on average. The longest time one lawyer took was 151 minutes, while the shortest time was 51 minutes.

Gillian K. Hadfield is a Professor of Law and Economics at the University of Southern California who advised on the test. She says, “This research shows technology can help solve two problems: both making contracts faster and more reliable, and freeing up resources so legal departments can focus on building the quality of their human legal teams.” In other words, the use of AI can actually help lawyers expedite their work, and free them up to focus on tasks that still require human skills.

 


Robots, Liability and Personhood

There were some interesting stories in the news lately about Artificial Intelligence and robots. There was the story, e.g., of Sophia, the first robot to officially become a citizen of Saudi Arabia. There was also a bar exam in the States that included the question of whether, if a robot kills somebody, we are dealing with murder or with product liability. Several articles were published on companies that build driverless cars, and on how they have to contemplate ethical and legal issues when deciding what to do when an accident is unavoidable. If a mother with a child on her arm, e.g., unexpectedly steps in front of the car, what does the car do? Does it hit the mother and child? Does it swerve around the mother and hit the motorcycle in the middle lane that is waiting to turn? Or does it go into the lane of oncoming traffic, with the risk of killing the people in the driverless car? All these stories raise some thought-provoking legal issues. In this article we’ll have a cursory glance at personhood and liability with regard to Artificial Intelligence systems.

Let’s start with Sophia. She is a robot designed by Hong Kong-based AI robotics company Hanson Robotics. Sophia is programmed to learn and mimic human behaviour and emotions, and functions autonomously. When she has a conversation with someone, her answers are not pre-programmed. She made world headlines when in October 2017, she attended a UN meeting on artificial intelligence and sustainable development and had a brief conversation with UN Deputy Secretary-General Amina J. Mohammed.

Shortly afterwards, when attending a similar event in Saudi Arabia, she was granted citizenship of that country. This was rightfully labelled a historical event, as it was the first time ever that a robot or AI system was granted such a distinction. This instantly raises many legal questions. What legal rights and obligations can artificial intelligence systems have? What should the criteria be for personhood for robots and artificial intelligence systems? And what about liability?

Speaking of liability: a bar exam recently included the question: “if a robot kills somebody, is it murder or product liability?” The question was inspired by an article in Slate Magazine by Ryan Calo, which discussed Paolo Bacigalupi’s short story Mika Model. The story is about an artificial girl, the Mika Model, which strives to copy human behaviour and emotions, and is designed to create its own individuality. In this particular case, the model seems to develop a mind of her own, and she ends up killing somebody who was torturing her. So Bacigalupi’s protagonist, Detective Rivera, finds himself asking a canonical legal question: when a robot kills, is it murder or product liability?

At present, the rule would still be that the manufacturer is liable. But that may change soon. AI systems can make their own decisions, and are becoming more and more autonomous. What if intent can be proven? What if, as in Bacigalupi’s story, the actions of the robot are acts of self-preservation? Can we say that the Mika Model acted in self-defence? Or, coming back to Sophia: what if she, as a Saudi Arabian citizen, causes damage? Or commits blasphemy? Who is liable, the system or its manufacturer?

At a panel discussion in the UK, a third option was suggested with regard to the liability issue. One expert compared robots and AI systems to pets, and the manufacturers to breeders. In his view, if a robot causes damage, the owner is liable, unless he can prove it was faulty, in which case the manufacturer could be held liable.

The discussion is not an academic one, as we can expect such cases to be handled by courts in the near future.  In January 2017, the European Parliament’s legal affairs committee approved a wide-ranging report that outlines a possible framework under which humans would interact with AI and robots. These items stand out in the report:

  • The report states there is a need to create a specific legal status for robots, one that designates them as “electronic persons,” which effectively gives robots certain rights, and obligations. (This effectively would create a third type of personhood, apart from natural and legal persons).
  • Fearing a robot revolution, the report also wants to create an obligation for designers of AI systems to incorporate a “kill switch” into their designs.
  • As another safety mechanism the authors of the report suggest that Isaac Asimov’s three laws of robotics should be programmed into AI systems, as well. (1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.)
  • Finally, the report also calls for the creation of a European agency for robotics and AI that would be capable of responding to new opportunities and challenges arising from technological advancements in robotics.

It won’t be too long before Robot Law becomes part of the regular legal curriculum.


 

Legal AI and Bias

Justice is blind, but legal AI may be biased.

Like many advanced technologies, artificial intelligence (AI) comes with its advantages and disadvantages. Some of the potentially negative aspects of AI regularly make headlines. There is a fear that humans could be replaced by AI, and that AI might take our jobs. (As pointed out in a previous article, lawyers are less at risk of such a scenario: AI would perform certain tasks, but not take jobs, as only 23% of the work lawyers do can be automated at present). Others, like Elon Musk, predict doomsday scenarios if we start using AI in weapons or warfare. And there could indeed be a problem there: what if armed robotic soldiers are hacked, or have bad code and go rogue? Some predict that superintelligence (where AI systems become vastly more intelligent than human beings) and the singularity (i.e. the moment when AI systems become self-aware) are inevitable. The combination of both would lead to humans being the inferior species, and possibly being wiped out.

John Giannandrea, who leads AI at Google, does not believe these are the real problems with AI. He sees another problem, and it happens to be one that is very relevant to lawyers. He is worried about intelligent systems learning human prejudices. “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said.

The case that comes to mind is COMPAS, risk assessment software that is used to predict the likelihood of somebody being a repeat offender. It is often used in criminal cases in the US by judges and parole boards. ProPublica, a Pulitzer Prize-winning non-profit news organization, decided to analyse how accurate COMPAS was in its predictions. It discovered that COMPAS’ algorithms correctly predicted recidivism for black and white defendants at roughly the same rate. But when the algorithms were wrong, they were wrong in different ways for each race. African American defendants were almost twice as likely to be labelled higher risk even though they did not actually re-offend. And for Caucasian defendants the opposite mistake was made: they were more likely to be labelled lower risk by the software, while in reality they did re-offend. In other words, ProPublica discovered a double bias in COMPAS: one in favour of white defendants, and one against black defendants. (Note that the maker of COMPAS disputes those findings and argues the data were misinterpreted.)
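The “double bias” is easiest to see when prediction errors are broken down per group: the false positive rate (labelled higher risk but did not re-offend) and the false negative rate (labelled lower risk but did re-offend). The sketch below computes those two rates from a toy, invented dataset; it does not reproduce ProPublica’s actual data or methodology.

```python
def error_rates(records):
    """records: list of (predicted_high_risk: bool, reoffended: bool) pairs."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    did_not_reoffend = sum(1 for _, actual in records if not actual)
    did_reoffend = sum(1 for _, actual in records if actual)
    return fp / did_not_reoffend, fn / did_reoffend

# Toy, invented data: (predicted high risk, actually re-offended)
group_a = [(True, False), (True, False), (True, True), (False, False), (False, True)]
group_b = [(False, True), (False, True), (True, True), (False, False), (True, False)]

for name, group in [("group A", group_a), ("group B", group_b)]:
    fpr, fnr = error_rates(group)
    print(f"{name}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")
# Equal overall accuracy can hide very different error patterns per group.
```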

The problem of bias in AI is real. AI is being used in more and more industries, like housing, education, employment, medicine and law. Some experts are warning that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it. “It’s important that we be transparent about the training data that we are using, and are looking for hidden biases in it, otherwise we are building biased systems,” Giannandrea added.

Giannandrea correctly points out that the underlying problem is a problem of lack of transparency in the algorithms that are being used. “Many of the most powerful emerging machine-learning techniques are so complex and opaque in their workings that they defy careful examination.”

Apart from all the ethical implications, the fact that it is unclear how the algorithms come to a specific conclusion could also have legal implications. The U.S. Supreme Court might soon take up the case of a Wisconsin convict who claims his right to due process was violated when the judge who sentenced him consulted COMPAS. The argument used by the defence is that the workings of the system were opaque to the defendant, making it impossible to know what a defence had to be built against.

To address these problems, a new institute, the AI Now Institute (ainowinstitute.org) was founded. It produces interdisciplinary research on the social implications of artificial intelligence and acts as a hub for the emerging field focused on these issues. Their main mission consists of “Researching the social implications of artificial intelligence now to ensure a more equitable future.” They want to make sure that AI systems are sensitive and responsive to the complex social domains in which they are applied. To that end, we will need to develop new ways to measure, audit, analyse, and improve them.


Legal Chatbots

One year ago, we wrote about the world’s first robot lawyer. Donotpay.co.uk was created by Joshua Browder. It is a website with a chatbot that started off with a single and free legal service: helping to appeal unfair parking tickets. When the article was published, the service was available in the UK, and in New York and Seattle. At the time, it had helped overturn traffic tickets to the value of 4 million dollars. Apart from appealing parking tickets, the website could also already assist you in claiming compensation if your flight was delayed. Since then, a lot has happened. By now, DoNotPay has successfully appealed traffic tickets to the amount of 10 million dollars. But, more importantly, its activities have expanded considerably. And in the last year, several other legal chatbots have seen the light of day as well.

Let us start with DoNotPay. A first important expansion came in March 2017, when it started helping refugees claim asylum. Using its chatbot interface, DoNotPay can offer free legal aid to refugees seeking asylum in the US and Canada, and assists with asylum support in the UK.

A second, and far more massive expansion followed only days ago, on 12 July 2017, when DoNotPay started covering a much broader range of legal issues. Its new version can offer free assistance in 1,000 legal areas, and does so across all 50 US states, as well as in the UK. It can now, e.g., assist you in reporting harassment in the workplace, or to make a complaint about a landlord; or it can help you ask for more parental leave, dispute nuisance calls, fight a fraudulent purchase on your credit card… The new DoNotPay covers consumer and workplace rights, and a host of other issues.

Browder didn’t stop there. Because he wants to address the issues of ‘information asymmetry’ and ‘inequality of arms’, as of 14 July 2017, DoNotPay is opening up so that anyone can create legal bots for free, with no technical knowledge. If you want to create your own free legal chatbot, all you have to do is fill in this downloadable form, and send it to automation@donotpay.co.uk.

Another interesting legal chatbot is Lawbot, which was created by a team of Cambridge University law students consisting of Ludwig Bull, Rebecca Agliolo, Nadia Abdul and Jozef Maruscak. When Lawbot was launched, it only dealt with aspects of criminal law in the UK. More specifically, the bot wanted to inform people who had been the victim of a crime about their legal rights. What had motivated the creators was the observation that most advice from lawyers on the legal rights of the victims of a crime felt like it was written mainly for the use of other lawyers, rather than to inform the general public, who were in fact the people most in need of the information. The first version of Lawbot guided its users through a series of questions and answers that helped them to assess what, from a legal perspective, may have happened to them and what they should do next, such as formally report a crime to the police.

A second Lawbot initiative was Divorce Bot. It asks its users questions via an internet-based interface to guide them through the early days of a divorce. The chatbot explores different scenarios with them, and helps clarify their exact legal position. It also explains legal terms that are commonly used in divorce, such as ‘irretrievable breakdown’ and ‘decree nisi’, and provides a comprehensive breakdown of the divorce process. It gives a breakdown of the costs and forms needed, too. This way, people (in the UK) know exactly what to expect, even before they talk to a lawyer.

One of Law Bot’s co-founders also launched an AI-driven case law search engine, called DenninX. The free application’s aim is to help lawyers and law students conduct legal research on English case law by making use of AI technology, such as natural language pre-processing and machine learning.

24 July 2017 is the launch date of a new, expanded version of Lawbot, called Lawbot-X. Lawbot-X will cover seven countries: Great Britain, the US, Canada, Hong Kong, Singapore, Australia and New Zealand. It will also be available in Chinese, for markets such as Hong Kong. The new bot further adds a case outcome prediction capability to assess the chance of winning a legal claim that the bot has analysed. The free legal bot will also operate from a new platform and will be hosted on Facebook Messenger.

[Update 25 November 2017: in October 2017, Lawbot changed its name to Casecrunch].

Another useful chatbot for legal consumers is Billy Bot. Unlike the DoNotPay and Law bot chatbots, Billy Bot does not offer legal assistance, but helps you find a lawyer, barrister or solicitor, in the UK. Billy Bot was created by Stephen Ward, a career barristers’ clerk, and founder of clerk-oriented technology company Clerksroom. Billy Bot can interface with members of the public about some of the same preliminary legal questions that barristers’ clerks often handle. It can currently refer users to appropriate legal resources and pull information from the 350 barristers’ offices. Ward intends to give it access to other systems, including scheduling and case management capabilities. It currently answers questions on LinkedIn.

Next, we have Lawdroid, which was created by Tom Martin. Lawdroid is an intelligent legal chatbot that can help entrepreneurs in the US get started by incorporating their business on a smartphone for free. No lawyer is required. Lawdroid is available on Facebook Messenger. Lawdroid, too, has expanded its services, and the company that created the bot now also makes legal chatbots for lawyers. Referring to the important rise of chatbots, they point out that there are already over 100,000 of them on Facebook.

[Update 25 November 2017: corrected an item with regard to Lawdroid].


 

AI and Contracts

Artificial Intelligence (AI) is changing the way law is being practiced. One of the areas where AI, and more specifically Machine Learning (ML) has been making great strides recently is contract review. The progress is not even limited to reviewing contracts: automated contract generation, negotiation, e-signing and management are fast becoming a reality.

Using AI for contracts is the result of an ongoing evolution. Ever since lawyers started using word processors, they have tried to automate the process of creating contracts. Using advanced macros allowed them to turn word processors into document generators that used smart checklists to fill out templates and add or remove certain clauses. But now the available technology is sufficiently advanced to take all of this a few steps further.

Some years ago, commercial lawyer Noory Bechor came to the realization that 80 percent of his work was spent reviewing contracts. As a lot of the work involved in reviewing contracts is fairly repetitive in nature, he figured the service could be done much cheaper, faster, and more accurately by a computer. So, in 2014, he started LawGeex, which was probably the first platform for automated contract review. Users can upload a contract to LawGeex.com and, within a reasonably short period of time (an hour on average), they receive a report that states which clauses do not meet common legal standards. The report also warns if any vital clauses could be missing, and where existing clauses might require further attention. All of this is done automatically, by algorithms.
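Automated review of this kind typically combines clause classification with checks against a playbook of clauses that ought to be present. The fragment below is a heavily simplified, hypothetical illustration of the second part: checking an uploaded contract against a list of ‘vital’ clause types using naive keyword cues. It is not LawGeex’s actual approach, which relies on machine learning models trained on large numbers of annotated contracts.

```python
# Hypothetical playbook: vital clause types with naive keyword cues.
VITAL_CLAUSES = {
    "confidentiality": ["confidential", "non-disclosure"],
    "governing law": ["governed by the laws", "governing law"],
    "limitation of liability": ["limitation of liability", "shall not be liable"],
    "termination": ["terminate", "termination"],
}

def review(contract_text: str) -> dict[str, bool]:
    """Return, per vital clause type, whether the contract appears to contain it."""
    text = contract_text.lower()
    return {clause: any(cue in text for cue in cues)
            for clause, cues in VITAL_CLAUSES.items()}

sample = """This Agreement shall be governed by the laws of Belgium.
Each party shall keep all Confidential Information secret.
Either party may terminate this Agreement upon 30 days' notice."""

missing = [clause for clause, present in review(sample).items() if not present]
print("Possibly missing vital clauses:", missing)  # ['limitation of liability']
```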

By now, there are other players on the contract review market as well, and the technology is evolving further. At present, AI technology is able to scan contracts and decipher meaning behind the text, as well as identify problem areas that might require human intervention. This technology can scan millions of documents in a fraction of the time it would take humans (think ‘hours’ as opposed to ‘days’ or ‘weeks’). As a result, AI contract review has reached a point where it can already do 80% of the work a lawyer used to do. For the remaining 20%, it can, at present, not reach the level of skill and comprehension of a human attorney. AI contract review, therefore, focuses attorneys’ efforts on higher-level, nonstandard clauses and concerns, and away from more manual contract review obligations.

The progress made in Machine Learning algorithms means the usage of AI is not limited to contract review. Juro is a company that tries to automate the whole contracts workflow. It has developed an integrated workflow system that allows companies to save time on contracts through automated contract generation, negotiation, e-signing and management of contracts. For this, it relies on machine learning algorithms that try to understand the data within contracts and learn from it. This can be done, e.g., by analyzing all the contracts in a company’s ‘vault’ of historical contracts. Based on these contract analytics, Juro can also provide so-called ‘negotiation heatmaps,’ where customers can see at a glance which of their contract terms are being most hotly negotiated. Knowing what other customers have negotiated can help you (based on data) decide what the contract terms should be and what you should agree to in negotiations.

Another interesting evolution is the idea of ‘smart contracts’. Stephen Wolfram, the founder of Wolfram Alpha, believes contracts should be computable, and that a hybrid code/legalese language should be developed. One of the main advantages of such language would be that it would leave less room for ambiguity, especially when it comes to the implications of certain clauses. Computable contract language becomes more valuable to the legal sector, once we start using ‘smart contracts’ that are self-executing. There is also already some interesting work in this area, namely by Legalese.com based in Singapore. If law is going to be made computable then the world needs two things: lawyers who can code and a legal computer language that is an improvement on today’s legalese.

The next step would then be to move from ‘smart’ contracts to ‘intelligent’ contracts. Smart contracts resemble computer code more than typical legal documents, relying on programming to create, facilitate, or execute contracts, with the contracts and conditions stored on a blockchain, i.e. a distributed, relatively unhackable ledger. Intelligent contracts would not just be smart, but would also rely on artificial intelligence (hence ‘intelligent’ contracts). In the words of Kevin Gidney, intelligent contracts would use an AI system that “is taught to continually and consistently recognize and extract key information from contracts, with active learning based on users’ responses, both positive and negative, to the extractions and predictions made”.

 
