Table of contents

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 19: deleted

Article 20: Automatically Generated Logs

Article 21: Corrective Actions and Duty of Information

Article 22: deleted

Article 23: Cooperation with Competent Authorities

Article 25: Authorised Representatives

Article 26: Obligations of Importers

Article 27: Obligations of Distributors

Article 28: Responsibilities Along the AI Value Chain

Article 29: Obligations of Deployers of High-Risk AI Systems

Article 29a: Fundamental Rights Impact Assessment for High-Risk AI Systems

Article 30: Notifying Authorities

Article 31: Application of a Conformity Assessment Body for Notification

Article 32: Notification Procedure

Article 33: Requirements Relating to Notified Bodies

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

Article 34a: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 46: Information Obligations of Notified Bodies

Article 47: Derogation from Conformity Assessment Procedure

Article 48: EU Declaration of Conformity

Article 49: CE Marking of Conformity

Article 50: Moved to Article 18

Article 51: Registration

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Article 62: Reporting of Serious Incidents

Article 63: Market Surveillance and Control of AI Systems in the Union Market

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 64: Powers of Authorities Protecting Fundamental Rights

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 66: Union Safeguard Procedure

Article 67: Compliant AI Systems Which Present a Risk

Article 68: Formal Non-Compliance

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

Article 68b: Right to Lodge a Complaint with a Market Surveillance Authority

Article 68c: A Right to Explanation of Individual Decision-Making

Article 68d: Amendment to Directive (EU) 2020/1828

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

Article 68g: Monitoring Actions

Article 68h: Alerts of Systemic Risks by the Scientific Panel

Article 68i: Power to Request Documentation and Information

Article 68j: Power to Conduct Evaluations

Article 68k: Power to Request Measures

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Article 54a: Testing of High-Risk AI Systems in Real World Conditions Outside AI Regulatory Sandboxes

1. Testing of AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with the provisions of this Article and the real world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5. The detailed elements of the real world testing plan shall be specified in implementing acts adopted by the Commission in accordance with the examination procedure referred to in Article 74(2). This provision shall be without prejudice to Union or national law for the testing in real world conditions of high-risk AI systems related to products covered by legislation listed in Annex II.

2. Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world conditions at any time before the placing on the market or putting into service of the AI system on their own or in partnership with one or more prospective deployers.

3. The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to ethical review that may be required by national or Union law.

4. Providers or prospective providers may conduct the testing in real world conditions only where all of the following conditions are met:

(a) the provider or prospective provider has drawn up a real world testing plan and submitted it to the market surveillance authority in the Member State(s) where the testing in real world conditions is to be conducted;

(b) the market surveillance authority in the Member State(s) where the testing in real world conditions is to be conducted has approved the testing in real world conditions and the real world testing plan. Where the market surveillance authority in that Member State has not provided an answer within 30 days, the testing in real world conditions and the real world testing plan shall be understood as approved. Where national law does not provide for a tacit approval, the testing in real world conditions shall be subject to an authorisation;

(c) the provider or prospective provider, with the exception of high-risk AI systems referred to in Annex III, points 1, 6 and 7, in the areas of law enforcement, migration, asylum and border control management, and of high-risk AI systems referred to in Annex III, point 2, has registered the testing in real world conditions in the non-public part of the EU database referred to in Article 60(3), with a Union-wide unique single identification number and the information specified in Annex VIIIa;

(d) the provider or prospective provider conducting the testing in real world conditions is established in the Union or it has appointed a legal representative who is established in the Union;

(e) data collected and processed for the purpose of the testing in real world conditions shall be transferred to third countries outside the Union only where appropriate and applicable safeguards under Union law are implemented;

(f) the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not longer than 6 months, which may be extended for an additional period of 6 months, subject to prior notification by the provider to the market surveillance authority, accompanied by an explanation of the need for such an extension;

(g) persons belonging to vulnerable groups due to their age, physical or mental disability are appropriately protected;

(h) where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their decision to participate, and given the relevant instructions on how to use the AI system referred to in Article 13; the provider or prospective provider and the deployer(s) shall conclude an agreement specifying their roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions under this Regulation and other applicable Union and Member States legislation;

(i) the subjects of the testing in real world conditions have given informed consent in accordance with Article 54b, or in the case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the subject and his or her personal data shall be deleted after the test is performed;

(j) the testing in real world conditions is effectively overseen by the provider or prospective provider and deployer(s) with persons who are suitably qualified in the relevant field and have the necessary capacity, training and authority to perform their tasks;

(k) the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded.

5. Any subject of the testing in real world conditions, or his or her legally designated representative, as appropriate, may, without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by revoking his or her informed consent and request the immediate and permanent deletion of their personal data. The withdrawal of the informed consent shall not affect the activities already carried out.

5a. In accordance with Article 63a, Member States shall confer on their market surveillance authorities the powers to require information from providers and prospective providers, to carry out unannounced remote or on-site inspections, and to perform checks on the conduct of the testing in real world conditions and on the related products. Market surveillance authorities shall use these powers to ensure the safe development of such testing.

6. Any serious incident identified in the course of the testing in real world conditions shall be reported to the national market surveillance authority in accordance with Article 62 of this Regulation. The provider or prospective provider shall adopt immediate mitigation measures or, failing that, suspend the testing in real world conditions until such mitigation takes place or otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI system upon such termination of the testing in real world conditions.

7. Providers or prospective providers shall notify the national market surveillance authority in the Member State(s) where the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world conditions and the final outcomes.

8. The provider and prospective provider shall be liable under applicable Union and Member States liability legislation for any damage caused in the course of their participation in the testing in real world conditions.

Article 54b: Informed Consent to Participate in Testing in Real World Conditions Outside AI Regulatory Sandboxes

1. For the purpose of testing in real world conditions under Article 54a, informed consent shall be freely given by the subject of testing prior to his or her participation in such testing and after having been duly informed with concise, clear, relevant, and understandable information regarding:

(i) the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to his or her participation;

(ii) the conditions under which the testing in real world conditions is to be conducted, including the expected duration of the subject’s participation;

(iii) the subject’s rights and guarantees regarding participation, in particular his or her right to refuse to participate in, and the right to withdraw from, the testing in real world conditions at any time, without any resulting detriment and without having to provide any justification;

(iv) the modalities for requesting the reversal or the disregard of the predictions, recommendations or decisions of the AI system;

(v) the Union-wide unique single identification number of the testing in real world conditions in accordance with Article 54a(4), point (c), and the contact details of the provider or its legal representative from whom further information can be obtained.

2. The informed consent shall be dated and documented and a copy shall be given to the subject or his or her legal representative.