A few months ago, in January 2017, the European Parliament’s Legal Affairs Committee approved a report that outlines a possible legal framework to regulate the interactions between humans on the one hand, and robots and Artificial Intelligence systems on the other. The report is quite revolutionary. It proposes, e.g., granting certain types of robots and AI systems legal personhood as “electronic persons”: these electronic persons would have rights and obligations, and the report suggests that they should obey Isaac Asimov’s Laws of Robotics. The report also advises that manufacturers of robots and AI systems build in a ‘kill switch’ so that the systems can be deactivated. Another recommendation is that a European Agency for Robotics and AI be established, capable of responding to the new opportunities and challenges arising from technological advancements in robotics.
The EU is not alone in its desire to regulate AI: similar (though less far-reaching) reports have been published in Japan and in the UK. These different initiatives are, in effect, the first attempts at creating Robot Law.
So, what is Robot Law? On the blog of the Michalsons Law Firm, Emma Smith describes Robot Law as covering “a whole variety of issues regarding robots including robotics, AI, driverless cars and drones. It also impacts on many human rights including dignity and privacy.” It deals with the rights and obligations of AI systems, manufacturers, consumers, and the public at large in their relationship to AI and to how it is being developed and used. As such, it is different from, and far broader than, Asimov’s Laws of Robotics, which only concern the rules robots themselves have to obey.
Why would we need Robot Law? For a number of reasons. AI has become an important contributing factor to the transformation of society, and that transformation is happening extremely fast. The AI Revolution is often compared to the Industrial Revolution, but that comparison is partly flawed, because of the speed, scale, and pervasiveness of the AI Revolution. Some reports claim that the AI Revolution is happening up to 300 times faster than the Industrial Revolution. This partly has to do with the fact that AI is already being used everywhere, and that pervasiveness is only expected to increase rapidly. Think, e.g., of the Internet of Things, where everything is connected to the Internet and massive amounts of data are being mined.
The use of AI already raises legal issues of control, privacy, and liability. Further down the line, we will be confronted with issues of personhood and Laws of Robotics. But AI also has wide-reaching societal effects. Think, e.g., of the job market and the skill sets that are in demand: these will change dramatically. In the US alone, driverless cars and trucks are expected to cost at least 3 million drivers their jobs. So, yes, there is a need for Robot Law.
Separate from the question of whether we need Robot Law is the question of whether we already need legislation now, and how much should be regulated at this stage. When trying to answer that question, we are met with almost diametrically opposing views.
The naysayers claim that it is still too soon to start thinking about Robot Law. The argument is that AI and robotics are still in their infancy, and that at this stage there is a need first to explore and develop them further. Not only are there still too many unanswered questions, but in their view regulation at this stage could stifle the progress of AI. All we would have to do is adapt existing laws. In that context, Roger Bickerstaff, e.g., speaks of:
- Facilitative changes – these are changes to law that are needed to enable the use of AI.
- Controlling changes – these are changes to law and new laws that may be needed to manage the introduction and scope of operation of robotics and artificial intelligence.
- Speculative changes – these are the changes to the legal environment that may be needed as robotics and AI start to approach the same level of capacity and capability as human intelligence – what is often referred to as the singularity point.
Others, like the authors of the aforementioned reports, disagree. They argue that there already are issues of privacy, control, and liability. There also is the problem of transparency: how do Neural Networks come to their conclusions, e.g., when they recommend whether somebody is eligible for parole or a loan, or when they assess risks, e.g., for insurance purposes? How does one effectively appeal against such decisions if it is not known how the AI system reaches its conclusions? Furthermore, the speed, scale, and pervasiveness of the AI Revolution and its societal effects demand a proactive approach. If we don’t act now, we will soon be faced with problems that we already know will arise.
Finally, in his paper, Ryan Calo points out, perhaps surprisingly, that there already is over half a century of case law with regard to robots. These cases deal with robots both as objects and as subjects. He rightfully points out that “robots tend to blur the lines between person and instrument”. A second, and more alarming, insight of his study was “that judges may have a problematically narrow conception of what a robot is”. For that reason alone, it would already be worthwhile to start thinking about Robot Law.
Sources:
- www.michalsons.com/blog/laws-of-robotics-v-robot-law/18476
- www.theatlantic.com/technology/archive/2016/03/a-brief-history-of-robot-law/474156/
- papers.ssrn.com/sol3/papers.cfm?abstract_id=2737598
- www.lawgazette.co.uk/law/call-for-legislation-to-govern-ai/5061524.article
- www.ibtimes.com/artificial-intelligence-eu-debate-robots-legal-rights-after-committee-calls-mandatory-2475055
- www.zdnet.com/article/artificial-intelligence-legal-ethical-and-policy-issues/
- digitalbusiness.law/2017/02/do-we-need-robot-law/
- www.mondaq.com/x/565682/new+technology/Legal+Aspects+of+Artificial+Intelligence
- www.huffingtonpost.com/entry/should-artificial-intelligence-be-regulated-experts_us_59a7a0e7e4b00ed1aec9a5c2