Table of contents

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 19: deleted

Article 20: Automatically Generated Logs

Article 21: Corrective Actions and Duty of Information

Article 22: deleted

Article 23: Cooperation with Competent Authorities

Article 25: Authorised Representatives

Article 26: Obligations of Importers

Article 27: Obligations of Distributors

Article 28: Responsibilities Along the AI Value Chain

Article 29: Obligations of Deployers of High-Risk AI Systems

Article 29a: Fundamental Rights Impact Assessment for High-Risk AI Systems

Article 30: Notifying Authorities

Article 31: Application of a Conformity Assessment Body for Notification

Article 32: Notification Procedure

Article 33: Requirements Relating to Notified Bodies

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

Article 34a: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 46: Information Obligations of Notified Bodies

Article 47: Derogation from Conformity Assessment Procedure

Article 48: EU Declaration of Conformity

Article 49: CE Marking of Conformity

Article 50: Moved to Article 18

Article 51: Registration

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Article 62: Reporting of Serious Incidents

Article 63: Market Surveillance and Control of AI Systems in the Union Market

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 64: Powers of Authorities Protecting Fundamental Rights

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 66: Union Safeguard Procedure

Article 67: Compliant AI Systems Which Present a Risk

Article 68: Formal Non-Compliance

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

Article 68a(1): Right to Lodge a Complaint with a Market Surveillance Authority

Article 68c: A Right to Explanation of Individual Decision-Making

Article 68d: Amendment to Directive (EU) 2020/1828

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

Article 68g: Monitoring Actions

Article 68h: Alerts of Systemic Risks by the Scientific Panel

Article 68i: Power to Request Documentation and Information

Article 68j: Power to Conduct Evaluations

Article 68k: Power to Request Measures

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Annex IV: Technical Documentation Referred to in Article 11(1)

The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:

1. A general description of the AI system including:

(a) its intended purpose, the name of the provider and the version of the system reflecting its relation to previous versions;

(b) how the AI system interacts or can be used to interact with hardware or software, including other AI systems, that are not part of the AI system itself, where applicable;

(c) the versions of relevant software or firmware and any requirement related to version update;

(d) the description of all forms in which the AI system is placed on the market or put into service (e.g. software package embedded into hardware, downloadable, API, etc.);

(e) the description of hardware on which the AI system is intended to run;

(f) where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products;

(fa) a basic description of the user-interface provided to the deployer;

(g) instructions for use for the deployer and a basic description of the user-interface provided to the deployer, where applicable.

2. A detailed description of the elements of the AI system and of the process for its development, including:

(a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;

(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the description of the expected output and output quality of the system; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;

(c) the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;

(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including a general description of these data sets, information about their provenance, scope and main characteristics; how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outlier detection);

(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in accordance with Article 13(3)(d);

(f) where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2;

(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to predetermined changes as referred to under point (f);

(ga) cybersecurity measures put in place.
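Point 2(d) above cites outlier detection as an example of a data cleaning methodology to be documented. Purely as an illustration (the Regulation prescribes no particular method), a provider might document an interquartile-range rule such as:

```python
# Illustrative sketch only: one common data-cleaning step a provider might
# describe under Annex IV point 2(d). The 1.5 * IQR threshold is a
# conventional choice, not one mandated by the Regulation.
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4)  # sample quartiles
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]

print(iqr_outliers([1, 2, 2, 3, 3, 3, 4, 4, 5, 100]))  # prints [100]
```

The documentation would record the rule, its parameters, and how flagged records were handled, alongside the provenance information the point requires.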

3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to:

(a) its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose;

(b) the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system;

(c) the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the deployers;

(d) specifications on input data, as appropriate.
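The first item of point 3 calls for degrees of accuracy for specific persons or groups of persons alongside overall accuracy. As a hedged illustration of how such disaggregated figures might be produced (the group labels and data below are invented for the example):

```python
# Illustrative sketch: disaggregated accuracy reporting of the kind point 3
# asks providers to document. Data and group names are hypothetical.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy and per-group accuracy as fractions."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += (t == p)
    overall = sum(correct.values()) / sum(total.values())
    return overall, {g: correct[g] / total[g] for g in total}

overall, per_group = accuracy_by_group(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

Reporting both the overall figure and the per-group breakdown is what lets the documentation show whether accuracy is materially lower for a particular group of intended subjects.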

4. A description of the appropriateness of the performance metrics for the specific AI system.

5. A detailed description of the risk management system in accordance with Article 9.

6. A description of relevant changes made by the provider to the system through its lifecycle.

7. A list of the harmonised standards applied in full or in part, the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied.

8. A copy of the EU declaration of conformity.

9. A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).
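Taken together, the top-level points of Annex IV amount to a checklist. A provider might track completeness internally with a machine-readable skeleton along these lines (the section names and format are this sketch's own; the Regulation prescribes neither):

```python
# Hypothetical sketch: an internal skeleton mirroring the top-level points
# of Annex IV, used to track completeness of the technical documentation.
# Section names paraphrase the Annex; no file format is mandated.
ANNEX_IV_SECTIONS = {
    "general_description": None,
    "development_process": None,
    "monitoring_functioning_control": None,
    "performance_metrics_appropriateness": None,
    "risk_management_system": None,        # Article 9
    "lifecycle_changes": None,
    "standards_and_specifications": None,
    "eu_declaration_of_conformity": None,
    "post_market_monitoring": None,        # Article 61
}

def missing_sections(doc):
    """Return the Annex IV sections that are still empty."""
    return [name for name, content in doc.items() if not content]

doc = dict(ANNEX_IV_SECTIONS, general_description="drafted")
print(missing_sections(doc))  # the eight sections still to be completed
```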