Table of contents

Chapter 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Chapter 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Chapter 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 20: Automatically Generated Logs

Article 21: Corrective Actions and Duty of Information

Article 23: Cooperation with Competent Authorities

Article 25: Authorised Representatives

Article 26: Obligations of Importers

Article 27: Obligations of Distributors

Article 28: Responsibilities Along the AI Value Chain

Article 29: Obligations of Deployers of High-Risk AI Systems

Chapter 4: Notifying Authorities and Notified Bodies

Article 30: Notifying Authorities

Article 31: Application of a Conformity Assessment Body for Notification

Article 32: Notification Procedure

Article 33: Requirements Relating to Notified Bodies

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

Article 34a: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Chapter 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 46: Information Obligations of Notified Bodies

Article 47: Derogation from Conformity Assessment Procedure

Article 48: EU Declaration of Conformity

Article 49: CE Marking of Conformity

Article 51: Registration

Chapter 1: Post-Market Monitoring

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Chapter 2: Sharing of Information on Serious Incidents

Article 62: Reporting of Serious Incidents

Chapter 3: Enforcement

Article 63: Market Surveillance and Control of AI Systems in the Union Market

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 64: Powers of Authorities Protecting Fundamental Rights

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Non-High-Risk in Application of Annex III

Article 66: Union Safeguard Procedure

Article 67: Compliant AI Systems Which Present a Risk

Article 68: Formal Non-Compliance

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

Chapter 3b: Remedies

Article 68a(1): Right to Lodge a Complaint with a Market Surveillance Authority

Article 68c: Right to Explanation of Individual Decision-Making

Article 68d: Amendment to Directive (EU) 2020/1828

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Chapter 3c: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General Purpose AI Models

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

Article 68g: Monitoring Actions

Article 68h: Alerts of Systemic Risks by the Scientific Panel

Article 68i: Power to Request Documentation and Information

Article 68j: Power to Conduct Evaluations

Article 68k: Power to Request Measures

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Article 9: Risk Management System

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall be understood as a continuous iterative process planned and run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following steps:

(a) identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;

(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;

(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;

(d) adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to point (a) of this paragraph in accordance with the provisions of the following paragraphs.

2a. The risks referred to in paragraph 2 shall concern only those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information.
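The iterative cycle in paragraph 2, scoped by paragraph 2a, can be sketched as a simple risk-register loop. This is purely an illustrative aid, not part of the Regulation; the class names, the 1-to-5 severity and likelihood scales, and the acceptability threshold are all hypothetical assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Source(Enum):
    INTENDED_USE = "intended use"              # paragraph 2, point (a)
    FORESEEABLE_MISUSE = "foreseeable misuse"  # paragraph 2, point (b)
    POST_MARKET = "post-market monitoring"     # paragraph 2, point (c)

@dataclass
class Risk:
    description: str
    source: Source
    severity: int                       # assumed scale: 1 (low) .. 5 (high)
    likelihood: int                     # assumed scale: 1 .. 5
    mitigable_by_design: bool = True    # paragraph 2a scoping criterion
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

def risk_management_cycle(risks, acceptable_score=6):
    """One iteration of the paragraph-2 cycle: keep only risks in scope
    under paragraph 2a, rank them, and attach mitigation measures until
    the residual risk is judged acceptable (cf. paragraph 4)."""
    in_scope = [r for r in risks if r.mitigable_by_design]
    residual = []
    for r in sorted(in_scope, key=lambda r: r.score, reverse=True):
        # Assumption for illustration: each design control lowers likelihood by one step.
        while r.score > acceptable_score and r.likelihood > 1:
            r.mitigations.append(f"design control #{len(r.mitigations) + 1}")
            r.likelihood -= 1
        residual.append(r)
    return residual
```

Because the process is "continuous" and "iterative", such a cycle would be re-run whenever post-market monitoring data (point (c)) surfaces new risks, not executed once.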

3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interaction resulting from the combined application of the requirements set out in this Chapter 2, with a view to minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those requirements.

4. The risk management measures referred to in paragraph 2, point (d) shall be such that relevant residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged to be acceptable. In identifying the most appropriate risk management measures, the following shall be ensured:

(a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2, in as far as technically feasible, through adequate design and development of the high-risk AI system;

(b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be eliminated;

(c) provision of the required information pursuant to Article 13, referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to deployers. With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education and training to be expected of the deployer and the presumable context in which the system is intended to be used.

5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate and targeted risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in compliance with the requirements set out in this Chapter.

6. Testing procedures may include testing in real world conditions in accordance with Article 54a.

7. The testing of high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be performed against previously defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.

8. When implementing the risk management system described in paragraphs 1 to 6, providers shall give consideration to whether in view of its intended purpose the high-risk AI system is likely to adversely impact persons under the age of 18 and, as appropriate, other vulnerable groups of people.

9. For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes under relevant sectoral Union law, the aspects described in paragraphs 1 to 8 may be part of, or combined with, the risk management procedures established pursuant to that law.