The EU Artificial Intelligence Act
On 13 March 2024, the European Parliament passed the groundbreaking EU Artificial Intelligence Act (‘AI Act’) with 523 votes in favour. The AI Act is the world’s first horizontal, standalone law governing AI. It establishes rules for AI across all sectors, taking a risk-based approach focused on the risks these new technologies are expected to pose.
The AI Act introduces a framework that categorises AI systems according to their potential risks and impacts, establishing four levels of risk: unacceptable, high, limited, and minimal. Different rules apply depending on the level of risk a system poses to fundamental rights.
The European Commission describes the risk categories as follows:
1. Unacceptable risk
Any AI system that poses a clear threat to people’s safety, livelihoods, or rights will be banned. This includes government social scoring and toys with voice assistance that encourage dangerous behaviour.
2. High risk
AI systems identified as high-risk include AI technology used in:
- critical infrastructures (eg transport) that could put the life and health of citizens at risk
- educational or vocational training, which may determine access to education and the professional course of someone’s life (eg scoring of exams)
- safety components of products (eg AI application in robot-assisted surgery)
- employment, management of workers and access to self-employment (eg CV-sorting software for recruitment procedures)
- essential private and public services (eg credit scoring denying citizens the opportunity to obtain a loan)
- law enforcement that may interfere with people’s fundamental rights (eg evaluation of the reliability of evidence)
- migration, asylum and border control management (eg verification of the authenticity of travel documents)
- administration of justice and democratic processes (eg applying the law to a concrete set of facts)
High-risk AI systems will be subject to strict obligations before they can be put on the market, including:
- adequate risk assessment and mitigation systems
- high quality of the datasets feeding the system to minimise risks and discriminatory outcomes
- logging of activity to ensure traceability of results
- appropriate human oversight measures to minimise risk
- high level of robustness, security and accuracy
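In practice, a provider will need to evidence each of these obligations before market entry. As a rough illustration only (the obligation names below are paraphrased from the list above, and the checklist structure is an assumption of this sketch, not anything prescribed by the Act), a compliance review could be tracked like this:

```python
from dataclasses import dataclass, field

# Paraphrased from the high-risk obligations listed above; illustrative only.
OBLIGATIONS = [
    "risk assessment and mitigation system",
    "high-quality training datasets",
    "activity logging for traceability",
    "human oversight measures",
    "robustness, security and accuracy",
]

@dataclass
class HighRiskReview:
    """Tracks which obligations have been evidenced for one AI system."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, obligation: str) -> None:
        if obligation not in OBLIGATIONS:
            raise ValueError(f"unknown obligation: {obligation}")
        self.completed.add(obligation)

    def outstanding(self) -> list:
        # Obligations still to be evidenced before market entry.
        return [o for o in OBLIGATIONS if o not in self.completed]
```

A legal or compliance team would of course work from the Act's actual text; the value of a structure like this is simply that nothing on the list can be silently skipped.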
3. Limited risk
Limited risk refers to AI systems with specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine. This means they can make an informed decision to continue or step back.
4. Minimal or no risk
Free use of minimal-risk AI is allowed. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
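For organisations cataloguing their AI systems, the four tiers can be modelled as a simple lookup. The sketch below is purely illustrative: the tier names mirror the Act, but the example mapping and the `classify` helper are hypothetical conveniences, not a legal determination:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (eg social scoring)
    HIGH = "high"                  # strict pre-market obligations
    LIMITED = "limited"            # transparency obligations (eg chatbots)
    MINIMAL = "minimal"            # free use (eg spam filters, games)

# Illustrative mapping of example use cases from the text to tiers.
EXAMPLE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv-sorting recruitment software": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case; default to MINIMAL when unlisted."""
    return EXAMPLE_TIERS.get(use_case.lower(), RiskTier.MINIMAL)
```

Defaulting unlisted systems to minimal risk is a simplification for the sketch; a real inventory exercise would flag unknown systems for legal review rather than assume the lightest tier.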
What’s next?
On 21 May 2024, the EU Council formally adopted the AI Act, paving the way for it to enter into force shortly, followed by a phased, two-year implementation period.
The AI Act will have implications for organisations around the world: it will apply not only to EU AI providers and developers but also to those located in other jurisdictions – such as the UK and the US – whose AI systems are marketed or intended for use in the EU. This has led some to compare the AI Act to the GDPR in its likely impact.
Multinational firms will have to decide whether to adopt AI Act standards globally or to build EU-specific AI systems. They should begin by assessing which of their current and planned AI systems are likely to fall within the Act’s definition of AI and, of those, which are high-risk or prohibited.