Every day, artificial intelligence (AI) becomes more entrenched in our lives. Even cheap smartphones have cameras that use AI to optimize the pictures we take. Try getting online assistance for a problem you are facing, and you are likely to be met first by a chatbot rather than a person. We have self-driving cars, trucks, buses, taxis, and trains. AI can be a force for good, but it can also be a force for bad. Cybercriminals are using AI to steal identities and corporate secrets, to gain illegal access to systems, to transfer funds, and to evade police detection. AI is also being weaponized and militarized. All of this raises ethical concerns, and the possibility that legal frameworks will have to be implemented to address those concerns.
Let us first touch upon some of the ethical problems we are already confronted with. The facial recognition software being deployed in airports and big cities raises both privacy and security concerns. The same concerns apply to the use of big data for machine learning. In previous articles, we already discussed the problem of bias in AI, where algorithms inherit our biases because those biases are reflected in the data sets they are trained on. One of the areas where the ethical issues of AI really come to the forefront is self-driving vehicles. Let us explore that example in more depth.
Sometimes traffic accidents cannot be avoided, and they may lead to fatalities. Imagine the brakes of your car stop working while you are driving down a street. Ahead of you, some children are getting out of a car that has stopped; a truck is approaching in the lane for oncoming traffic; and on the far side of the road, some people are standing on the pavement talking. What do you do? And what is a self-driving car supposed to do? With self-driving cars, the car maker may have to make that decision for you.
In ethics, this problem is usually referred to as the Trolley Problem. A runaway trolley is racing down a railroad track, and you are standing at a switch that can divert it onto another track. If you do nothing, five people will be killed. If you pull the lever, one person will be killed. What is the right thing to do?
The Moral Machine experiment is an online project that presented variations of the Trolley Problem to people from all over the world. It asked questions to determine whether saving humans should be prioritized over animals (including pets), passengers over pedestrians, more lives over fewer, men over women, young over old, and so on. It even asked whether healthy and fit people should be prioritized over sick ones, people with a high social status over people with a low social status, or law-abiding citizens over ones with criminal records. Rather than posing these questions directly, the survey would typically present people with combined options: should the car kill three elderly pedestrians or three youthful passengers?
Overall, the experiment gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Surprisingly, the results varied greatly from country to country, from culture to culture, and along economic lines. “For example, participants from collectivist cultures like China and Japan are less likely to spare the young over the old—perhaps, the researchers hypothesized, because of a greater emphasis on respecting the elderly. Similarly, participants from poorer countries with weaker institutions are more tolerant of jaywalkers versus pedestrians who cross legally. And participants from countries with a high level of economic inequality show greater gaps between the treatment of individuals with high and low social status.” (Karen Hao, MIT Technology Review)
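To make the methodology concrete, here is a minimal sketch, in Python, of how such a pairwise dilemma might be represented and how millions of responses could be tallied per country. This is not the Moral Machine's actual code; all structures and names here are hypothetical.

```python
# Hypothetical sketch of a Moral Machine-style pairwise dilemma and
# a per-country tally of one preference (sparing the young).
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class Outcome:
    """One side of a dilemma: the group that dies if this option is chosen."""
    count: int       # number of characters in the group
    age_group: str   # e.g. "young" or "elderly"
    role: str        # e.g. "pedestrian" or "passenger"

# The example dilemma from the article: kill three elderly pedestrians
# or three youthful passengers?
option_a = Outcome(count=3, age_group="elderly", role="pedestrian")
option_b = Outcome(count=3, age_group="young", role="passenger")

# country -> [times the young were spared, total young-vs-old dilemmas]
spare_young = defaultdict(lambda: [0, 0])

def record_choice(country: str, killed: Outcome, spared: Outcome) -> None:
    """Register one respondent's decision in the per-country tally."""
    if {killed.age_group, spared.age_group} == {"young", "elderly"}:
        spare_young[country][1] += 1
        if spared.age_group == "young":
            spare_young[country][0] += 1

# A respondent chooses to sacrifice the elderly pedestrians:
record_choice("Germany", killed=option_a, spared=option_b)

for country, (young, total) in spare_young.items():
    print(f"{country}: spared the young in {young}/{total} dilemmas")
```

Aggregated this way across countries, tallies like these are what allow the researchers to compare, say, how strongly different cultures prefer sparing the young.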
In general, people across the world agreed that sparing the lives of humans should take priority over sparing animals, and that many people should be saved rather than few. In most countries, people also thought the young should be spared over the elderly, but as mentioned above, that was not the case in East Asia.
Now, this of course raises some serious questions. Who is going to make these decisions, and what will they choose, given how much people's preferences differ? Are we going to have different priorities depending on whether we are driving, say, a Japanese or a German self-driving car? Or will car makers have the car make different choices based on where it is driving? And what if more lives can be spared by sacrificing the driver?
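To illustrate what region-dependent priorities could even look like in practice, here is a deliberately simplistic sketch. No manufacturer is known to ship anything like this; the regions, weights, and function names are invented purely for illustration.

```python
# Purely hypothetical: a priority table keyed by region, echoing the
# survey's finding that preferences differ across cultures.
PRIORITY_WEIGHTS = {
    # region: weight given to sparing the young over the elderly
    "western_europe": 0.8,
    "east_asia": 0.4,  # reflecting the weaker young-over-old preference
}

def spare_young_weight(region: str) -> float:
    """Return the (invented) young-over-old weight for a region."""
    return PRIORITY_WEIGHTS.get(region, 0.5)  # neutral default

print(spare_young_weight("east_asia"))  # 0.4
```

Even this toy version makes the problem visible: someone has to choose those numbers, and the choice changes depending on whose values are encoded.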
When it comes to sacrificing the driver, one car manufacturer, Mercedes, has already made clear that this will never be an option. The justification it gives is that self-driving cars will lead to far fewer accidents and fatalities, and that the occasions where pedestrians are sacrificed to save drivers will be acceptable collateral damage. But is that the right choice, and is it really up to the car maker to make it?
An ethicist has identified four chief concerns that must be addressed in the search for ethical AI:
- Whose moral standards should be used?
- Can machines converse about moral issues? (What if, e.g., multiple self-driving vehicles are involved? Will they communicate with each other to choose the best scenario? See the sketch after this list.)
- Can algorithms take context into account?
- Who should be accountable?
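The second point, about vehicles communicating, could in principle look something like the following sketch: each vehicle shares a harm estimate for each maneuver it can make, and the group picks the combination with the lowest total harm. The message format and the harm scores are entirely hypothetical, and real coordination would be far harder (harm is not simply additive across vehicles).

```python
# Hypothetical sketch: vehicles exchange harm estimates per maneuver
# and jointly select the least-harmful combination.
from itertools import product

# Each vehicle reports an estimated harm score for each maneuver.
estimates = {
    "car_1": {"brake": 3.0, "swerve_left": 1.0},
    "car_2": {"brake": 2.0, "swerve_right": 4.0},
}

def joint_plan(estimates: dict) -> dict:
    """Choose the combination of maneuvers with the lowest total harm."""
    vehicles = list(estimates)
    best = min(
        product(*(estimates[v] for v in vehicles)),
        key=lambda combo: sum(estimates[v][m] for v, m in zip(vehicles, combo)),
    )
    return dict(zip(vehicles, best))

print(joint_plan(estimates))  # {'car_1': 'swerve_left', 'car_2': 'brake'}
```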
Based on these considerations, some principles can be established to regulate the use of AI. In a previous article, we already mentioned the principles the EU and the OECD suggest. In 2018, the World Economic Forum also suggested five core principles to keep AI ethical:
- AI must be a force for good and diversity
- Intelligibility and fairness
- Data protection
- Flourishing alongside AI
- Confronting the power to destroy
An initiative that involves several tech companies also identified seven critical points:
- Invite ethics experts that reflect the diversity of the world
- Include people who might be negatively impacted by AI
- Get board-level involvement
- Recruit an employee representative
- Select an external leader
- Schedule enough time to meet and deliberate
- Commit to transparency
A deeper question, however, is whether the regulation of AI should really be left to the industry. Shouldn't these decisions rather be made by governments? The people behind the Moral Machine experiment think so, as do many scientists and experts in ethics. Thus far, however, not much has been done in terms of legal solutions. At present, there are no legal frameworks in place. The best we have are the guidelines the EU and the OECD have issued to their members, but those are merely guidelines and are not enforceable. And that is not enough. A watchdog organization in the UK has warned that AI is progressing so fast that we are already struggling to catch up. We cannot afford to postpone addressing these issues any longer.
Sources:
- www.fastcompany.com/3064539/self-driving-mercedes-will-be-programmed-to-sacrifice-pedestrians-to-save-the-driver
- www.fastcompany.com/3054675/in-an-accident-who-will-a-driverless-car-be-programmed-to-kill
- www.zdnet.com/article/mit-reveals-who-self-driving-cars-should-kill-the-cat-the-elderly-or-the-baby/
- www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/
- www.zdnet.com/article/we-are-still-playing-catch-up-with-ai-and-its-a-dangerous-game/
- www.zdnet.com/article/europe-unveils-its-big-ai-strategy-others-are-abusing-ai-that-mustnt-happen-here/
- www.nextgov.com/emerging-tech/2018/03/government-can-tackle-ethics-artificial-intelligence-better-industry-experts-say/146349/
- www.macleans.ca/opinion/ensuring-that-artificial-intelligence-is-ethical-thats-everyones-responsibility/
- www.businessinsider.com/an-ethicist-explains-his-4-chief-concerns-about-artificial-intelligence-2017-8
- www.techrepublic.com/article/7-signs-of-substantive-google-ai-governance/
- www.forbes.com/sites/workday/2019/05/15/our-six-principles-for-ethically-developing-machine-learning/
- www.weforum.org/agenda/2018/04/keep-calm-and-make-ai-ethical/
- www.zdnet.com/article/the-ethical-challenges-of-artificial-intelligence/
- www.wired.co.uk/article/artificial-intelligence-ethical-framework
- www.nature.com/articles/s41586-018-0637-6
- www.zdnet.com/article/ai-and-ethics-the-debate-that-needs-to-be-had/