Table of contents

Chapter 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Chapter 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Chapter 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 20: Automatically Generated Logs

Article 21: Corrective Actions and Duty of Information

Article 23: Cooperation with Competent Authorities

Article 25: Authorised Representatives

Article 26: Obligations of Importers

Article 27: Obligations of Distributors

Article 28: Responsibilities Along the AI Value Chain

Article 29: Obligations of Deployers of High-Risk AI Systems

Chapter 4: Notifying Authorities and Notified Bodies

Article 30: Notifying Authorities

Article 31: Application of a Conformity Assessment Body for Notification

Article 32: Notification Procedure

Article 33: Requirements Relating to Notified Bodies

Article 33a: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 34: Subsidiaries of and Subcontracting by Notified Bodies

Article 34a: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Chapter 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 46: Information Obligations of Notified Bodies

Article 47: Derogation from Conformity Assessment Procedure

Article 48: EU Declaration of Conformity

Article 49: CE Marking of Conformity

Article 51: Registration

Chapter 1: Post-Market Monitoring

Article 61: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Chapter 2: Sharing of Information on Serious Incidents

Article 62: Reporting of Serious Incidents

Chapter 3: Enforcement

Article 63: Market Surveillance and Control of AI Systems in the Union Market

Article 63a: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 63b: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 64: Powers of Authorities Protecting Fundamental Rights

Article 65: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 65a: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 66: Union Safeguard Procedure

Article 67: Compliant AI Systems Which Present a Risk

Article 68: Formal Non-Compliance

Article 68a: EU AI Testing Support Structures in the Area of Artificial Intelligence

Chapter 3b: Remedies

Article 68a(1): Right to Lodge a Complaint with a Market Surveillance Authority

Article 68c: A Right to Explanation of Individual Decision-Making

Article 68d: Amendment to Directive (EU) 2020/1828

Article 68e: Reporting of Breaches and Protection of Reporting Persons

Chapter 3c: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General Purpose AI Models

Article 68f: Enforcement of Obligations on Providers of General Purpose AI Models

Article 68g: Monitoring Actions

Article 68h: Alerts of Systemic Risks by the Scientific Panel

Article 68i: Power to Request Documentation and Information

Article 68j: Power to Conduct Evaluations

Article 68k: Power to Request Measures

Article 68m: Procedural Rights of Economic Operators of the General Purpose AI Model

Article 52: Transparency Obligations for Providers and Users of Certain AI Systems and GPAI Models

1. Providers shall ensure that AI systems intended to directly interact with natural persons are designed and developed in such a way that the concerned natural persons are informed that they are interacting with an AI system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties unless those systems are available for the public to report a criminal offence.

1a. Providers of AI systems, including GPAI systems, generating synthetic audio, image, video or text content, shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and reliable as far as this is technically feasible, taking into account specificities and limitations of different types of content, costs of implementation and the generally acknowledged state-of-the-art, as may be reflected in relevant technical standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to detect, prevent, investigate and prosecute criminal offences.
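By way of illustration only, a "machine-readable and detectable" marking might take the form of a structured provenance record bound to the generated content. The sketch below is a hypothetical, simplified example in Python (the record schema and function names are assumptions, not prescribed by the Regulation; real deployments would follow recognised technical standards for content provenance, such as C2PA content credentials):

```python
import hashlib
import json
from datetime import datetime, timezone

def mark_synthetic(content: bytes, generator: str) -> dict:
    """Build a machine-readable provenance record for AI-generated content.

    The schema here is purely illustrative; the Regulation leaves the
    concrete technical solution to providers and to technical standards.
    """
    return {
        "ai_generated": True,  # the detectable "artificially generated" flag
        "generator": generator,  # identifies the producing system (illustrative)
        "content_sha256": hashlib.sha256(content).hexdigest(),  # binds record to content
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

# Example: mark a synthetic output and serialise the record for embedding
# alongside the content (e.g. in file metadata or a sidecar file).
record = mark_synthetic(b"synthetic image bytes", "example-model-v1")
print(json.dumps(record, indent=2))
```

In practice the record would be embedded in the content itself (e.g. as an invisible watermark or metadata field) rather than printed, so that it survives distribution and remains detectable by downstream tools.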

2. Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation and emotion recognition which are permitted by law to detect, prevent and investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in compliance with Union law.

3. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences. Where the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not hamper the display or enjoyment of the work. Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences, or where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content.

3a. The information referred to in paragraphs 1 to 3 shall be provided to the concerned natural persons in a clear and distinguishable manner at the latest at the time of the first interaction or exposure. The information shall respect the applicable accessibility requirements.

4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation and shall be without prejudice to other transparency obligations for users of AI systems laid down in Union or national law.

4a. The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. The Commission is empowered to adopt implementing acts to approve those codes of practice in accordance with the procedure laid down in Article 52e paragraphs 6 to 8. If it deems a code of practice not adequate, the Commission is empowered to adopt an implementing act specifying common rules for the implementation of those obligations in accordance with the examination procedure laid down in Article 73 paragraph 2.