Paper: https://arxiv.org/abs/2408.08318
Introduction
The European Union (EU) has recently adopted a uniform legal framework applicable to artificial intelligence systems (AI systems) within its internal market. Regulation (EU) 2024/1689, known as the “Artificial Intelligence Act” (AI Act), is a landmark piece of legislation that sets binding regulatory requirements for key operators, including public authorities, across the global value chain of high-risk AI systems marketed or used within the EU. This blog provides a detailed analysis of the AI Act, focusing on its scope, its compliance regimes, and its multi-level governance structure.
I) The Scope of the AI Act
A) The Taxonomy of AI Systems and Models
The AI Act establishes a complex taxonomy of AI systems, which includes prohibited practices, high-risk AI systems, and a special category for general-purpose AI models (GPAIMs).
1) Prohibited AI Practices
Article 5, §1 of the AI Act lists eight prohibited AI practices: subliminal or manipulative techniques that distort behavior, exploitation of vulnerabilities linked to age, disability, or social or economic situation, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace and in education, biometric categorization inferring sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. These practices are prohibited because they are incompatible with EU values, including human dignity and freedom. The prohibitions are nonetheless subject to numerous exceptions, particularly in the area of law enforcement.
2) High-Risk AI Systems
Article 6 of the AI Act identifies two categories of high-risk AI systems. The first covers AI systems that are products, or safety components of products, governed by the EU harmonization legislation listed in Annex I and subject to third-party conformity assessment. The second, detailed in Annex III, covers eight domains comprising 25 use cases, such as biometric identification, critical infrastructure, education, employment, and law enforcement.
3) General-Purpose AI Models (GPAIMs)
GPAIMs, introduced late in the negotiations, are defined by their ability to perform a wide range of distinct tasks and to be integrated into a variety of downstream systems. Models such as OpenAI’s GPT-3 and GPT-4 can pose systemic risks because of their significant impact on society; such high-impact capabilities are presumed when the cumulative compute used for training exceeds 10^25 floating-point operations. The AI Act requires providers of GPAIMs presenting systemic risks to notify the European Commission and to implement risk management policies.
B) The Market for AI Systems and Models
1) Material Scope
The AI Act comprises three main normative bodies: harmonized rules for marketing and using AI systems, a public enforcement framework, and measures to support innovation, such as regulatory sandboxes and support for SMEs and startups. Certain AI systems fall outside the AI Act altogether, such as those used exclusively for national security, defense, and military purposes.
2) Spatial Scope
The AI Act has an extraterritorial reach: it applies to providers and deployers established outside the EU when their AI systems are placed on the EU market or when the output those systems produce is used within the EU. This ensures a level playing field and protects the rights and freedoms of persons in the EU. Non-EU providers must appoint an EU-based authorized representative.
II) Compliance Regimes for AI Systems and Models
A) Compliance Regime for High-Risk AI Systems
1) Essential Requirements
High-risk AI systems must meet essential requirements codified in Chapter III, Section 2 of the AI Act, including risk management, data governance, traceability, transparency, human oversight, accuracy, robustness, and cybersecurity. These requirements will be translated into technical specifications through “harmonized standards,” compliance with which confers a presumption of conformity.
2) Obligations of Operators
The AI Act imposes obligations on all operators in the value chain of high-risk AI systems: providers, deployers, authorized representatives, importers, and distributors. Providers must ensure compliance throughout the system’s lifecycle, maintain risk management and quality management systems, and cooperate with market surveillance authorities. Deployers must respect transparency and human oversight requirements and, in certain cases, conduct fundamental rights impact assessments.
B) Compliance Regime for GPAIMs
GPAIMs are subject to a multi-level compliance regime, depending on whether they are released under a free and open-source license and whether they present systemic risks. All providers must document their models and make information available to downstream providers; providers of models presenting systemic risks must, in addition, notify the European Commission and implement risk management policies.
III) Multi-Level Governance of AI Systems
A) Decentralized National Governance Based on Market Surveillance
Each EU Member State must designate at least one notifying authority and at least one market surveillance authority. Notifying authorities evaluate, designate, notify, and monitor conformity assessment bodies, while market surveillance authorities police the domestic market for high-risk AI systems.
B) European Governance Framework for AI
The AI Act establishes a European governance framework comprising the AI Office within the European Commission, the European Artificial Intelligence Board, an advisory forum, and a scientific panel of independent experts. The AI Office supervises GPAIMs and coordinates the implementation of the AI Act. The Board supports the Commission and the Member States, promoting the consistent application of the Act and international cooperation.
Conclusion
The effective implementation of the AI Act requires a balanced and coherent interpretation of its provisions, considering the sometimes conflicting interests of innovation, freedom of enterprise, protection of rights and freedoms, strategic autonomy, and internal security. The EU must uphold its regulatory leadership guided by its foundational values.
In sum, the AI Act represents a significant step towards establishing a global standard for trustworthy AI, balancing innovation with the protection of fundamental rights and societal values.