Table of contents

Section 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Section 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Section 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 19: Automatically Generated Logs

Article 20: Corrective Actions and Duty of Information

Article 21: Cooperation with Competent Authorities

Article 22: Authorised Representatives of Providers of High-Risk AI Systems

Article 23: Obligations of Importers

Article 24: Obligations of Distributors

Article 25: Responsibilities Along the AI Value Chain

Article 26: Obligations of Deployers of High-Risk AI Systems

Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems

Section 4: Notifying Authorities and Notified Bodies

Article 28: Notifying Authorities

Article 29: Application of a Conformity Assessment Body for Notification

Article 30: Notification Procedure

Article 31: Requirements Relating to Notified Bodies

Article 32: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 33: Subsidiaries of and Subcontracting by Notified Bodies

Article 34: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Section 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 45: Information Obligations of Notified Bodies

Article 46: Derogation from Conformity Assessment Procedure

Article 47: EU Declaration of Conformity

Article 48: CE Marking

Article 49: Registration

Section 1: Post-Market Monitoring

Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Section 2: Sharing of Information on Serious Incidents

Article 73: Reporting of Serious Incidents

Section 3: Enforcement

Article 74: Market Surveillance and Control of AI Systems in the Union Market

Article 75: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 76: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 77: Powers of Authorities Protecting Fundamental Rights

Article 78: Confidentiality

Article 79: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 80: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 81: Union Safeguard Procedure

Article 82: Compliant AI Systems Which Present a Risk

Article 83: Formal Non-Compliance

Article 84: Union AI Testing Support Structures

Section 4: Remedies

Article 85: Right to Lodge a Complaint with a Market Surveillance Authority

Article 86: A Right to Explanation of Individual Decision-Making

Article 87: Reporting of Breaches and Protection of Reporting Persons

Section 5: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General Purpose AI Models

Article 88: Enforcement of Obligations on Providers of General Purpose AI Models

Article 89: Monitoring Actions

Article 90: Alerts of Systemic Risks by the Scientific Panel

Article 91: Power to Request Documentation and Information

Article 92: Power to Conduct Evaluations

Article 93: Power to Request Measures

Article 94: Procedural Rights of Economic Operators of the General Purpose AI Model

Annexes

Article 6: Classification Rules for High-Risk AI Systems

1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third party conformity assessment, with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

2a. By derogation from paragraph 2, AI systems shall not be considered high-risk if they do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This shall be the case if one or more of the following criteria are fulfilled:

(a) the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

Notwithstanding the first subparagraph of this paragraph, an AI system shall always be considered high-risk if the AI system performs profiling of natural persons.

2b. A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation set out in Article 49(1a). Upon request of national competent authorities, the provider shall provide the documentation of the assessment.

2c. The Commission shall, after consulting the AI Board, and no later than [18 months] after the entry into force of this Regulation, provide guidelines specifying the practical implementation of this Article, accompanied by a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems pursuant to Article 82b.

2d. The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a. The Commission may adopt delegated acts adding new criteria to those laid down in points (a) to (d) of the first subparagraph of paragraph 2a, or modifying them, only where there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III but that do not pose a significant risk of harm to the health, safety and fundamental rights. The Commission shall adopt delegated acts deleting any of the criteria laid down in the first subparagraph of paragraph 2a where there is concrete and reliable evidence that this is necessary for the purpose of maintaining the level of protection of health, safety and fundamental rights in the Union. Any amendment to the criteria laid down in points (a) to (d) of the first subparagraph of paragraph 2a shall not decrease the overall level of protection of health, safety and fundamental rights in the Union. When adopting the delegated acts, the Commission shall ensure consistency with the delegated acts adopted pursuant to Article 7(1) and shall take account of market and technological developments.
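Read together, paragraphs 1, 2 and 2a of Article 6 form a decision procedure: a system is high-risk if it satisfies the two conditions of paragraph 1, or if it falls under Annex III and is not saved by a paragraph 2a derogation criterion, with profiling of natural persons always overriding the derogation. The following Python sketch is purely illustrative of that logic; the class, field and function names are our own, not terms from the Act, and it deliberately omits the procedural obligations of paragraphs 2b to 2d.

```python
# Illustrative sketch of the Article 6 classification logic (paras 1, 2, 2a).
# All identifiers are invented for clarity; they do not appear in the Act.
from dataclasses import dataclass


@dataclass
class AISystem:
    safety_component_of_annex_i_product: bool  # Art. 6(1)(a)
    third_party_assessment_required: bool      # Art. 6(1)(b)
    listed_in_annex_iii: bool                  # Art. 6(2)
    performs_profiling: bool                   # Art. 6(2a), last subparagraph
    derogation_criterion_met: bool             # any of Art. 6(2a)(a)-(d)


def is_high_risk(s: AISystem) -> bool:
    # Paragraph 1: both conditions (a) and (b) must be fulfilled.
    if s.safety_component_of_annex_i_product and s.third_party_assessment_required:
        return True
    # Paragraph 2: Annex III systems are high-risk...
    if s.listed_in_annex_iii:
        # ...but profiling of natural persons always means high-risk,
        # regardless of any paragraph 2a derogation criterion.
        if s.performs_profiling:
            return True
        # ...otherwise a fulfilled 2a criterion removes the classification.
        return not s.derogation_criterion_met
    return False
```

For example, an Annex III system performing only a narrow procedural task (criterion (a) of paragraph 2a) would evaluate as not high-risk, while the same system would flip back to high-risk the moment it profiles natural persons.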

The text used in this tool is the ‘Artificial Intelligence Act, Text of the provisional agreement, 2 February 2024’, which was officially announced in this press release. Interinstitutional File: 2021/0106(COD)

Where new Articles have been added in later versions, and important fixes have been made, these changes have been replicated in the AI Act Explorer too. Also, in the 'Texts Adopted' from March 2024, many items were re-named or re-numbered, and items in the AI Act Explorer were updated accordingly. This means the content on our site no longer matches exactly with any official AI Act draft – it is a hybrid of multiple drafts. If you find an important update in a later draft that has not been updated in our tool, please let us know.

When a final draft is available, all of the text in the AI Act Explorer will be updated to reflect it.