In previous articles, we discussed how artificial intelligence is biased and why this problem of biased artificial intelligence persists. As artificial intelligence (AI) becomes ever more prevalent, this poses many ethical problems. The question was raised whether the industry could be trusted to regulate itself or whether legal frameworks would be necessary. In this article, we explore current initiatives for artificial intelligence regulation. We look at initiatives within the industry to regulate artificial intelligence as well as at attempts to create legal frameworks for it. But first, we investigate why regulation is necessary.
Why is Artificial Intelligence Regulation necessary?
Last year, the Council of Europe published a paper in which it concluded that a legal framework was needed because there were substantive and procedural gaps. UNESCO, too, identified key issues in its Recommendation on the Ethics of Artificial Intelligence. Similarly, in its White Paper on Trustworthy AI, the Mozilla Foundation identifies a series of key challenges that need to be addressed and that make regulation desirable. These are:
- Monopoly and centralization: Large-scale AI requires vast resources, which at present only a handful of tech giants possess. This has a stifling effect on innovation and competition.
- Data privacy and governance: Developing complex AI systems necessitates vast amounts of data. Many AI systems that are currently being developed by large tech companies harvest people’s personal data through invasive techniques, and often without their knowledge or explicit consent.
- Bias and discrimination: As was discussed in previous articles, AI relies on computational models, data, and frameworks that reflect existing biases. This in turn results in biased or discriminatory outcomes.
- Accountability and transparency: Many AI systems just present an outcome without being able to explain how that result was reached. This can be the product of the algorithms and machine learning techniques that are being used, or it may be by design to maintain corporate secrecy. Transparency is needed for accountability and to allow third-party validation.
- Industry norms: Tech companies tend to build and deploy tech rapidly. As a result, many AI systems are embedded with values and assumptions that are not questioned in the development cycle.
- Exploitation of workers: Research shows that tech workers who perform the invisible maintenance of AI are vulnerable to exploitation and overwork.
- Exploitation of the environment: The amount of energy needed for AI data mining makes it very unfriendly to the environment. The development of large AI systems intensifies energy consumption and speeds up the extraction of natural resources.
- Safety and security: Cybercriminals have embraced AI. They are able to carry out increasingly sophisticated attacks by exploiting AI systems.
For all these reasons, the regulation of AI is necessary. Many large tech companies still promote the idea that the industry should be allowed to regulate itself. Many countries, as well as the EU, on the other hand, believe the time is ripe for governments to impose a legal framework to regulate AI.
Initiatives within the industry to regulate Artificial Intelligence
Firefox and the Mozilla Foundation
The Mozilla Foundation is one of the leaders in the field when it comes to promoting trustworthy AI. They have already launched several initiatives, including advocacy campaigns, responsible computer science challenges, research, funds, and fellowships. The Foundation also points out that “developing a trustworthy AI ecosystem will require a major shift in the norms that underpin our current computing environment and society. The changes we want to see are ambitious, but they are possible.” They are convinced that the “best way to make this happen is to work like a movement: collaborating with citizens, companies, technologists, governments, and organizations around the world.”
IBM
IBM, too, promotes ethical and trustworthy AI and has created its own AI ethics board. It believes AI should be built on the following principles:
- The purpose of AI is to augment human intelligence
- Data and insights belong to their creator
- Technology must be transparent and explainable
To that end, it identified five pillars:
- Explainability: Good design does not sacrifice transparency in creating a seamless experience.
- Fairness: Properly calibrated, AI can assist humans in making fairer choices.
- Robustness: As systems are employed to make crucial decisions, AI must be secure and robust.
- Transparency: Transparency reinforces trust, and the best way to promote transparency is through disclosure.
- Privacy: AI systems must prioritize and safeguard consumers’ privacy and data rights.
Google
Google says it “aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good.” It believes that AI should:
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles.
It also made it clear that it “will not design or deploy AI in the following application areas:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.”
It adds that this list may evolve.
Still, Google seems to have a troubled relationship with ethical AI. It notoriously dissolved its entire AI ethics board in 2019, relying instead on its team of ethical AI researchers. When two of those researchers were subsequently fired, on separate occasions, it again made headlines.
Facebook / Meta
Whereas others talk about trustworthy and ethical AI, Meta (the parent company of Facebook) has different priorities and talks about responsible AI. It, too, identifies five pillars, each of which pairs two concerns:
- Privacy & Security
- Fairness & Inclusion
- Robustness & Safety
- Transparency & Control
- Accountability & Governance
Legal frameworks for Artificial Intelligence
Apart from those initiatives within the industry, there are proposals for legal frameworks as well. Best known is the EU AI Act. Others are following suit.
The EU AI Act
The EU describes its AI Act as “a proposed European law on artificial intelligence (AI) – the first law on AI by a major regulator anywhere. The law assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.”
The text can be misleading because, effectively, the proposal distinguishes not three but four levels of risk for AI applications: 1) unacceptable-risk applications, which are banned; 2) high-risk applications, which are subject to specific legal requirements; 3) low-risk applications, which most of the time will not need to be regulated; and 4) no-risk applications, which do not have to be regulated at all.
By including an ‘unacceptable risk’ category, the proposal introduces the idea that certain types of AI applications should be forbidden because they violate basic human rights. All applications that manipulate human behaviour to deprive users of their free will, as well as systems that allow social scoring, fall into this category. Exceptions are allowed for military and law enforcement purposes.
High-risk systems “include biometric identification, management of critical infrastructure (water, energy etc), AI systems intended for assignment in educational institutions or for human resources management, and AI applications for access to essential services (bank credits, public services, social benefits, justice, etc.), use for police missions as well as migration management and border control.” Again, there are exceptions, many of which concern cases where biometric identification is allowed, e.g., the search for missing children and for suspects of terrorism, trafficking, and child pornography. The EU wants to create a database that keeps track of all high-risk applications.
Limited-risk or low-risk applications include the various bots that companies use to interact with their customers. The idea here is that transparency is required: users must know, e.g., that they are interacting with a chatbot and what information the chatbot has access to.
All AI systems that do not pose any risk to citizens’ rights are considered no-risk applications, for which no regulation is necessary. These applications include games, spam filters, etc.
Who does the EU AI Act apply to? As is the case with the GDPR, the EU AI Act does not apply exclusively to EU-based organizations and citizens. It also applies to anybody outside of the EU who offers an AI application (a product or service) within the EU, or whose AI system uses information about EU citizens or organizations. Furthermore, it also applies to providers and users outside of the EU when the output generated by their AI systems is used within the EU.
A work in progress: the EU AI Act is still very much a work in progress. The Commission has made its proposal, and the legislators can now give feedback. At present, more than a thousand amendments have been submitted. Some factions think the framework goes too far, while others claim it does not go far enough. Much of the discussion deals with how to define and how to categorize AI systems.
Other noteworthy initiatives
Apart from the European AI Act, there are some other noteworthy initiatives.
Council of Europe: The Council of Europe (responsible for the European Convention on Human Rights) created its own Ad Hoc Committee on Artificial Intelligence. In 2021, this Ad Hoc Committee published a paper called A Legal Framework for AI Systems. The paper was a feasibility study that explored why a legal framework on the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law, is needed. It identified several substantive and procedural gaps and concluded that a comprehensive legal framework is needed, combining both binding and non-binding instruments.
UNESCO: UNESCO published its Recommendation on the Ethics of Artificial Intelligence, which was endorsed by 193 member states in November 2021.
US: On 4 October 2022, the White House released the Blueprint for an AI Bill of Rights, a framework meant to protect people from the negative effects of AI.
UK: No government initiatives exist yet in the UK, but on 16 September 2022 a paper on A Legal Framework for Artificial Intelligence Fairness Reporting was published in the Cambridge Law Journal.
Sources:
- https://thecritic.co.uk/issues/june-2022/good-and-evil-on-the-new-frontier/
- https://cepa.org/europes-artificial-intelligence-debate-heats-up/
- https://www.ibm.com/artificial-intelligence/ethics
- https://foundation.mozilla.org/en/blog/trustworthy-ai-abridged-version/
- https://ai.google/principles
- https://www.bbc.com/news/technology-47825833
- https://ai.facebook.com/blog/facebooks-five-pillars-of-responsible-ai/
- https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
- https://artificialintelligenceact.eu
- https://www.caidp.org/resources/eu-ai-act/
- https://www.zdnet.com/article/the-eu-ai-act-what-you-need-to-know/
- https://edoc.coe.int/en/artificial-intelligence/9648-a-legal-framework-for-ai-systems.html
- https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
- https://www.zdnet.com/article/the-white-house-passes-an-ai-bill-of-rights-that-attempts-to-put-your-privacy-concerns-at-ease/
- https://www.cambridge.org/core/journals/cambridge-law-journal/article/legal-framework-for-artificial-intelligence-fairness-reporting/C2D73FBE9BB74E5D41DDA6BDCA208424