Table of contents

Chapter 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Chapter 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Chapter 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 20: Automatically Generated Logs

Article 21: Corrective Actions and Duty of Information

Article 23: Cooperation with Competent Authorities

Article 25: Authorised Representatives

Article 26: Obligations of Importers

Article 27: Obligations of Distributors

Article 28: Responsibilities Along the AI Value Chain

Article 29: Obligations of Deployers of High-Risk AI Systems

Chapter 4: Notifying Authorities and Notified Bodies

Article 30: Notifying Authorities

Article 31: Application of a Conformity Assessment Body for Notification

Article 32: Notification Procedure

Article 33: Requirements Relating to Notified Bodies

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

Article 34a: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Chapter 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 46: Information Obligations of Notified Bodies

Article 47: Derogation from Conformity Assessment Procedure

Article 48: EU Declaration of Conformity

Article 49: CE Marking of Conformity

Article 51: Registration

Chapter 1: Post-Market Monitoring

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Chapter 2: Sharing of Information on Serious Incidents

Article 62: Reporting of Serious Incidents

Chapter 3: Enforcement

Article 63: Market Surveillance and Control of AI Systems in the Union Market

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 64: Powers of Authorities Protecting Fundamental Rights

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 66: Union Safeguard Procedure

Article 67: Compliant AI Systems Which Present a Risk

Article 68: Formal Non-Compliance

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

Chapter 3b: Remedies

Article 68b: Right to Lodge a Complaint with a Market Surveillance Authority

Article 68c: Right to Explanation of Individual Decision-Making

Article 68d: Amendment to Directive (EU) 2020/1828

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Chapter 3c: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General Purpose AI Models

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

Article 68g: Monitoring Actions

Article 68h: Alerts of Systemic Risks by the Scientific Panel

Article 68i: Power to Request Documentation and Information

Article 68j: Power to Conduct Evaluations

Article 68k: Power to Request Measures

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Article 17: Quality Management System

1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:

(a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

(b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

(c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, or do not cover all of the relevant requirements set out in Chapter II of this Title, the means to be used to ensure that the high-risk AI system complies with those requirements;

(f) systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;

(g) the risk management system referred to in Article 9;

(h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 61;

(i) procedures related to the reporting of a serious incident in accordance with Article 62;

(j) the handling of communication with national competent authorities, other relevant authorities, including those providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;

(k) systems and procedures for record keeping of all relevant documentation and information;

(l) resource management, including security of supply related measures;

(m) an accountability framework setting out the responsibilities of the management and other staff with regard to all aspects listed in this paragraph.

2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s organisation. Providers shall in any event respect the degree of rigour and the level of protection required to ensure compliance of their AI systems with this Regulation.

2a. For providers of high-risk AI systems that are subject to obligations regarding quality management systems or their equivalent function under relevant sectoral Union law, the aspects described in paragraph 1 may be part of the quality management systems pursuant to that law.

3. For providers that are financial institutions subject to requirements regarding their internal governance, arrangements or processes under Union financial services legislation, the obligation to put in place a quality management system with the exception of paragraph 1, points (g), (h) and (i) shall be deemed to be fulfilled by complying with the rules on internal governance arrangements or processes pursuant to the relevant Union financial services legislation. In that context, any harmonised standards referred to in Article 40 of this Regulation shall be taken into account.