In previous articles, we discussed the dangers of AI and the need for AI regulation. On 13 March 2024, the European Parliament approved the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.” The Act originated in a proposal by the European Commission of 21 April 2021 and is usually referred to by its short name, the Artificial Intelligence Act or the EU AI Act. In this article we look at the following questions: what is the EU AI Act, and what is its philosophy? We then discuss limited-risk and high-risk applications. Finally, we look at the EU AI Act’s entry into force and the penalties for non-compliance.
What is the EU AI Act?
As the full title suggests, it is a regulation that lays down harmonised rules on artificial intelligence across the EU. Rather than focusing on specific applications, it deals with the risks that applications pose, and categorises them accordingly. The Act imposes stringent requirements for high-risk categories to ensure safety and fundamental rights are upheld. The Act’s recent endorsement follows a political agreement reached in December 2023, reflecting the EU’s commitment to a balanced approach that fosters innovation while addressing ethical concerns.
Philosophy of the EU AI Act
The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular for small and medium-sized enterprises (SMEs). The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.
The AI Act ensures that Europeans can trust what AI has to offer. Because AI applications and frameworks can change rapidly, the EU chose to address the risks that applications pose. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes. The Act distinguishes four levels of risk:
- Unacceptable risk: applications posing an unacceptable risk are never allowed. All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys that use voice assistance to encourage dangerous behaviour.
- High risk: to be allowed, high-risk applications must meet stringent requirements that ensure safety and fundamental rights are upheld. We take a closer look at these below.
- Limited risk: applications are considered to pose limited risk when they lack transparency: users may not know what their data are used for or what that use implies. Limited-risk applications can be allowed if they comply with specific transparency obligations.
- Minimal or no risk: the AI Act allows the free use of minimal-risk and no-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
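To make this tiering concrete, here is a minimal sketch of how a compliance tool might encode the four risk levels. The `RiskTier` enum, the example mappings, and the `is_permitted` helper are purely our own illustration of the Act’s logic, not anything the regulation itself prescribes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels distinguished by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring by governments)
    HIGH = "high"                  # allowed only under stringent requirements
    LIMITED = "limited"            # allowed subject to transparency obligations
    MINIMAL = "minimal"            # free use (e.g. spam filters, AI-enabled video games)

# Illustrative mapping of example use cases to tiers (our reading, not legal advice).
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "traffic control AI": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def is_permitted(tier: RiskTier) -> bool:
    """Everything except the unacceptable tier may be placed on the EU market."""
    return tier is not RiskTier.UNACCEPTABLE
```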
Let us have a closer look at the limited and high-risk applications.
Limited Risk Applications
As mentioned, limited risk refers to the risks associated with a lack of transparency. The AI Act introduces specific obligations to ensure that humans are sufficiently informed when necessary. For example, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine, so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published to inform the public on matters of public interest must be labelled as artificially generated. The same applies to audio and video content constituting deep fakes.
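As a rough illustration of how a provider might meet these disclosure and labelling obligations in practice, here is a minimal sketch. The `LabelledOutput` structure and the `generate_reply` stub are our own assumptions for the example, not a format prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class LabelledOutput:
    """AI-generated content bundled with the disclosure the Act calls for."""
    text: str
    ai_generated: bool = True
    disclosure: str = "This content was generated by an AI system."

def generate_reply(user_message: str) -> str:
    """Stand-in for the provider's actual model call (hypothetical)."""
    return f"Echo: {user_message}"

def chatbot_reply(user_message: str) -> LabelledOutput:
    # Bundle the generated text with a machine-readable flag and a
    # human-readable disclosure, so downstream UIs can make clear to
    # users that they are interacting with a machine.
    return LabelledOutput(text=generate_reply(user_message))
```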
High Risk Applications
Under the EU AI Act, high-risk AI systems are subject to strict regulatory obligations due to their potential impact on safety and fundamental rights.
What are high risk applications?
High-risk AI systems are those that can negatively affect people’s safety or fundamental rights. They are classified into three main categories: a) those covered by EU harmonisation legislation, b) those that are safety components of certain products, and c) those listed as involving high-risk uses. Specifically, high-risk AI includes applications in critical infrastructure, such as traffic control and utilities management, as well as biometric and emotion recognition systems. It also covers AI used for decision-making in education and employment.
What are the requirements for high-risk applications?
High-risk AI systems under the EU AI Act are subject to a comprehensive set of requirements designed to ensure their safety, transparency, and compliance with EU standards. These systems must have a risk management system in place to assess and mitigate potential risks throughout the AI system’s lifecycle. Data governance and management practices are crucial to ensure the quality and integrity of the data used by the AI, including provisions for data protection and privacy. Providers must also create detailed technical documentation that covers all aspects of the AI system, from its design to deployment and maintenance.
Furthermore, high-risk AI systems require robust record-keeping mechanisms to trace the AI’s decision-making process. This is essential for accountability and auditing purposes. Transparency is another key requirement, necessitating clear and accessible information to be provided to users and ensuring they understand the AI system’s capabilities and limitations. Human oversight is mandated to ensure that AI systems do not operate autonomously without human intervention, particularly in critical decision-making processes. Lastly, these systems must demonstrate a high level of accuracy, robustness, and cybersecurity to prevent errors and protect against threats.
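To make the record-keeping and human-oversight ideas concrete, here is a minimal sketch, assuming a scoring model whose low-confidence decisions are referred to a human reviewer. The threshold, field names, and logging setup are all illustrative assumptions, not requirements spelled out in the Act.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

# Illustrative cut-off; a real system would derive this from its risk management process.
CONFIDENCE_THRESHOLD = 0.8

def record_decision(system_id: str, inputs: dict, score: float, outcome: str) -> None:
    """Write an audit record so each automated decision can be traced later."""
    log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "score": score,
        "outcome": outcome,
    }))

def decide(system_id: str, inputs: dict, score: float) -> str:
    # Human oversight: low-confidence cases are deferred to a person
    # rather than decided autonomously.
    outcome = "auto_approve" if score >= CONFIDENCE_THRESHOLD else "refer_to_human"
    record_decision(system_id, inputs, score, outcome)
    return outcome
```

In this sketch, every call to `decide` leaves a structured log entry behind, which is the kind of traceable decision trail the record-keeping requirement aims at.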
The EU AI Act’s entry into force
The Act enters into force twenty days after its publication in the Official Journal of the European Union, and most of its provisions become applicable two years after that, with shorter transition periods for some parts, such as the bans on prohibited practices. This gives member states the opportunity to implement compliant legislation and gives providers a window to adapt to the regulation.
The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation with the member states.
Penalties
The EU AI Act provides for a tiered penalty system to ensure compliance with its regulations. For the most severe violations, particularly those involving prohibited AI systems, fines can reach up to €35 million or 7% of the company’s worldwide annual turnover, whichever is higher. Lesser offences, such as supplying incorrect, incomplete, or misleading information to authorities, may result in penalties of up to €7.5 million or 1% of total worldwide annual turnover. These fines are designed to be proportionate to the nature of the infringement and the size of the entity, reflecting the seriousness of non-compliance in the AI sector.
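Since the caps follow a simple “whichever is higher” rule, the arithmetic is easy to sketch. The helper below is our own illustration using the two tiers mentioned above; it is not an official calculation method.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Maximum fine: a fixed cap or a share of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Prohibited-AI violations: up to EUR 35 million or 7% of turnover.
print(fine_cap(1_000_000_000, 35_000_000, 0.07))  # 70000000.0 -> the 7% share dominates
# Misleading information to authorities: up to EUR 7.5 million or 1% of turnover.
print(fine_cap(200_000_000, 7_500_000, 0.01))     # 7500000.0 -> the fixed cap dominates
```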
Conclusion
The EU AI Act represents a significant step in the regulation of artificial intelligence within the EU. It sets a precedent as the first comprehensive legal framework on AI worldwide. The Act mandates a high level of diligence, including risk assessment, data quality, transparency, human oversight, and accuracy for high-risk systems. Providers and deployers must also adhere to strict requirements regarding registration, quality management, monitoring, record-keeping, and incident reporting. This framework aims to ensure that AI systems operate safely, ethically, and transparently within the EU. Through these efforts, the EU strives to position Europe as a leader in the ethical and sustainable development of AI technologies.
Sources:
- https://artificialintelligenceact.eu/ai-act-explorer
- https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
- https://eur-lex.europa.eu/legal-content/NL/TXT/HTML/?uri=CELEX:52021PC0206
- https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- https://cuccibu.com/blogs/de-ai-act-de-leidende-rol-van-de-europese-unie-op-artificiele-intelligentie/
- https://en.wikipedia.org/wiki/Artificial_Intelligence_Act
- https://nl.wikipedia.org/wiki/Verordening_Kunstmatige_Intelligentie
- https://www.reuters.com/technology/eu-clinches-deal-landmark-ai-act-2023-12-09/
- https://www.bbc.com/news/technology-68546450
- https://www.law.com/legaltechnews/2024/03/13/european-parliament-passes-landmark-ai-act-worlds-first-comprehensive-law-regulating-artificial-intelligence-397-83402/