In the past, we have discussed the need for artificial intelligence regulations. The first important initiative was the OECD establishing a series of non-binding guidelines in 2019. Another milestone was the EU AI Act of March 2024. Several other important regulatory frameworks were introduced in 2024 as well. In this article, we will have a look at the Council of Europe Framework Convention on AI, at the United Nations’ AI Resolution, and at other artificial intelligence regulations and initiatives, including the Responsible AI in the Military Domain (REAIM) summit in Seoul.
Council of Europe Framework Convention on AI
The full name of the Council of Europe’s Framework Convention on AI is the Council of Europe Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. It is the first legally binding international treaty that specifically focuses on regulating artificial intelligence (AI) in line with fundamental rights and values. It is meant to ensure that AI systems respect human rights, support democratic principles, and adhere to the rule of law.
The convention is an initiative of the Council of Europe, an international organization founded in 1949. Its goals are similar to those of the UN’s Universal Declaration of Human Rights. It has 46 member states and focuses on promoting human rights, democracy, and the rule of law in Europe.
The history of the framework convention on AI began in 2020, when the Council recognized the need for a legal framework for AI. In 2021, it launched discussions among member states and experts, with the aim of drafting a convention that would safeguard fundamental rights while fostering innovation. In 2023, the Council presented a draft of the convention. The framework was officially adopted on 17 May 2024 and opened for signature on 5 September 2024 to countries both within and outside Europe, making it a globally significant agreement. Notably, apart from the 46 Council of Europe member states, another 11 countries – including the US – have signed it as well. More may follow.
Much like the EU AI Act, the convention introduces a risk-based approach, addressing the design, deployment, and decommissioning of AI systems. It emphasizes transparency, accountability, and fairness while encouraging responsible innovation. High-risk AI applications, such as those with the potential to harm human rights, are subject to strict oversight. The treaty also allows flexibility for private actors to comply through alternative methods and includes exemptions for research and national security purposes.
This framework is crucial as it provides a common international standard for managing AI’s potential benefits and risks. It promotes trust in AI technologies by ensuring safeguards against misuse and unintended consequences while fostering innovation. The convention aligns closely with the European Union’s AI Act, reinforcing a shared commitment to ethical AI governance on a global scale.
The treaty is also important because of its ability to shape how AI is integrated into societies, balancing innovation with protecting democratic values. It seeks to protect individuals’ rights. AI systems can make decisions that affect people’s lives, such as in job recruitment or law enforcement, so ensuring that these systems are fair and transparent is crucial. The convention also promotes accountability: it requires AI developers and users to take responsibility for their systems, which helps build trust between the public and technology. Furthermore, the convention supports democracy. It emphasizes the need for public participation in discussions about AI, ensuring that diverse voices are heard in shaping policies. Finally, it sets a precedent and standard for other countries. If Europe leads in AI regulation, other regions may follow. This can create a global framework for responsible AI use.
The United Nations’ AI Resolution
On March 21, 2024, the United Nations General Assembly adopted its first-ever, non-binding resolution on artificial intelligence (AI). This resolution promotes the development of “safe, secure, and trustworthy” AI systems. It is another significant step in creating global norms for managing AI, aiming to ensure the technology benefits humanity while addressing its risks. The resolution was led by the United States and co-sponsored by 123 countries, receiving unanimous support from all 193 UN member states.
Here, too, the history of the resolution traces back to the rapid growth of AI technology. As AI started to impact various sectors, concerns about its effects on society grew: issues like privacy, bias, and the potential for misuse all became prominent. In response, the UN began discussions about how to address these challenges. In 2023, member states began drafting the resolution. After extensive negotiations, it was adopted in March 2024.
The resolution recognizes the transformative potential of AI in addressing global challenges, such as achieving the United Nations’ Sustainable Development Goals. It encourages international cooperation to bridge digital divides, especially between developed and developing countries. One of its goals is to ensure equitable access to AI technologies. Member states are urged to regulate AI systems to protect human rights and privacy, avoid risks, and promote innovation.
Like other regulatory initiatives, this one also underscores the need for global collaboration in governing AI. There is a growing consensus that international regulation is critical to harnessing its benefits responsibly. The resolution aligns with similar efforts, like the European Union’s AI Act and the Council of Europe’s Framework Convention. It emphasizes the importance of ethical, human-centric AI development. It aims to prevent harm while promoting trust in AI systems globally.
This resolution’s importance lies in its acknowledgment of AI’s dual potential: as a tool for progress and a source of risks if left unchecked. It, too, provides a foundation for international frameworks to guide AI use in a way that supports sustainable development and safeguards fundamental rights.
Other artificial intelligence regulations and initiatives
The Global AI Safety Summit
The Global AI Safety Summit is a recurring international conference that discusses the safety and regulation of artificial intelligence (AI). The first Global AI Safety Summit was held on November 1–2, 2023 at Bletchley Park in Milton Keynes, United Kingdom. The summit’s goals included:
- Developing a shared understanding of the risks of frontier AI
- Establishing areas for collaboration on AI safety research
- Launching the UK Artificial Intelligence Safety Institute
- Testing frontier AI systems against potential harms
The summit concluded with the Bletchley Declaration, which was signed by 28 nations.
The second Global AI Safety Summit, the AI Seoul Summit, was co-hosted by Britain and South Korea in May 2024. (A separate event on military AI, the Responsible AI in the Military Domain summit, was also held in Seoul in September 2024; see below.) The third Global AI Safety Summit will be held in February 2025 in France.
The Global AI Safety Summit brings together international governments, leading AI companies, civil society groups, and experts in research. The summit aims to a) consider the risks of AI, especially at the frontier of development, b) discuss how to mitigate those risks through internationally coordinated action, and c) understand and mitigate the risks of emerging AI while seizing its opportunities. The overall goal is the prevention and mitigation of harms from AI, whether deliberate or accidental. These harms could be physical, psychological, or economic.
The Responsible AI in the Military Domain (REAIM) summit in Seoul
The Responsible AI in the Military Domain (REAIM) summit was held in Seoul on 10 September 2024. About 60 countries, including the United States, endorsed a “blueprint for action” to govern the responsible use of artificial intelligence (AI) in the military. Notably, China did not endorse this blueprint.
The summit was a follow-up to one held in The Hague in 2023, where countries agreed upon a non-binding call to action on the topic.
The US AI Safety Summit
A separate global AI safety summit was planned by the Biden administration in the US. The idea was similar: to bring the leading stakeholders together to identify key issues and to suggest ideas for a regulatory framework. President-elect Trump, however, indicated that he would undo any such framework, so the plans were put on hold.
Sources:
- https://rm.coe.int/1680afae3c – text of the CoE Framework Convention on AI.
- https://fpf.org/blog/the-worlds-first-binding-treaty-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law-regulation-of-ai-in-broad-strokes
- https://fpf.org/wp-content/uploads/2024/06/Council-of-Europe-Framework-Convention-on-AI.pdf – two page fact sheet.
- https://www.reuters.com/technology/artificial-intelligence/us-britain-eu-sign-agreement-ai-standards-ft-reports-2024-09-05/
- https://www.coe.int/en/web/portal/-/council-of-europe-adopts-first-international-treaty-on-artificial-intelligence
- https://oecd.ai/en/ai-principles
- https://www.lexology.com/library/detail.aspx?g=1ebddf13-09de-43a0-88bb-1d420a9d31f2
- https://aimagazine.com/articles/what-does-the-worlds-first-international-ai-treaty-include
- https://www.yalejournal.org/publications/global-ai-governance-and-the-united-nations
- https://www.zdnet.com/article/uns-ai-resolution-is-non-binding-but-still-a-big-deal-heres-why/
- https://news.un.org/en/story/2024/03/1147831
- https://www.reuters.com/technology/artificial-intelligence/us-convene-global-ai-safety-summit-november-2024-09-18/
- https://time.com/7178133/international-network-ai-safety-institutes-convening-gina-raimondo-national-security/
- https://www.reuters.com/technology/artificial-intelligence/south-korea-summit-announces-blueprint-using-ai-military-2024-09-10/