
Robots, Liability and Personhood

There have been some interesting stories in the news lately about artificial intelligence and robots. There was, for example, the story of Sophia, the first robot to officially become a citizen of Saudi Arabia. A bar exam in the States included the question of whether we are dealing with murder or product liability if a robot kills somebody. Several articles were published on how companies that build driverless cars have to contemplate ethical and legal issues when deciding what the car should do when an accident is unavoidable. If, for example, a mother carrying a child unexpectedly steps in front of the car, what does the car do? Does it hit the mother and child? Does it swerve around them and hit the motorcycle in the middle lane that is waiting to turn? Or does it veer into the lane of oncoming traffic, at the risk of killing the people in the driverless car? All these stories raise thought-provoking legal issues. In this article we’ll take a cursory look at personhood and liability with regard to artificial intelligence systems.

Let’s start with Sophia. She is a robot designed by the Hong Kong-based AI robotics company Hanson Robotics. Sophia is programmed to learn and mimic human behaviour and emotions, and she functions autonomously: when she has a conversation with someone, her answers are not pre-programmed. She made world headlines when, in October 2017, she attended a UN meeting on artificial intelligence and sustainable development and had a brief conversation with UN Deputy Secretary-General Amina J. Mohammed.

Shortly after, while attending a similar event in Saudi Arabia, she was granted citizenship of the country. This was rightfully labelled a historic event, as it was the first time ever that a robot or AI system was granted such a distinction. It instantly raises many legal questions: what legal rights and obligations can artificial intelligence systems have? What should the criteria for personhood be for robots and AI systems? And what about liability?

Speaking of liability: a bar exam recently included the question, “If a robot kills somebody, is it murder or product liability?” The question was inspired by an essay in Slate by Ryan Calo, which discussed Paolo Bacigalupi’s short story “Mika Model.” The story is about an artificial girl, the Mika Model, who strives to copy human behaviour and emotions and is designed to develop her own individuality. In this particular case, the model seems to develop a mind of her own, and she ends up killing somebody who was torturing her. So Bacigalupi’s protagonist, Detective Rivera, finds himself asking a canonical legal question: when a robot kills, is it murder or product liability?

At present, the rule would still be that the manufacturer is liable, but that may soon change. AI systems can make their own decisions and are becoming more and more autonomous. What if intent can be proven? What if, as in Bacigalupi’s story, the actions of the robot are acts of self-preservation? Can we say that the Mika Model acted in self-defence? Or, coming back to Sophia: what if she, as a Saudi Arabian citizen, causes damage? Or commits blasphemy? Who is liable, the system or its manufacturer?

At a panel discussion in the UK, a third option was suggested with regard to the liability issue. One expert compared robots and AI systems to pets, and their manufacturers to breeders. In his view, if a robot causes damage, the owner is liable, unless the owner can prove the robot was faulty, in which case the manufacturer could be held liable.

The discussion is not merely academic, as we can expect courts to handle such cases in the near future. In January 2017, the European Parliament’s legal affairs committee approved a wide-ranging report that outlines a possible framework under which humans would interact with AI and robots. The following items stand out in the report:

  • The report states there is a need to create a specific legal status for robots, one that designates them as “electronic persons” and gives them certain rights and obligations. (This would effectively create a third type of personhood, alongside natural and legal persons.)
  • Fearing a robot revolution, the report also proposes an obligation for designers of AI systems to incorporate a “kill switch” into their designs.
  • As another safety mechanism, the authors of the report suggest that Isaac Asimov’s three laws of robotics should be programmed into AI systems as well. (1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.)
  • Finally, the report also calls for the creation of a European agency for robotics and AI that would be capable of responding to new opportunities and challenges arising from technological advancements in robotics.

It won’t be too long before Robot Law becomes part of the regular legal curriculum.
