Table of contents

Chapter III: High-Risk AI Systems

Section 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Section 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Section 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 19: Automatically Generated Logs

Article 20: Corrective Actions and Duty of Information

Article 21: Cooperation with Competent Authorities

Article 22: Authorised Representatives of Providers of High-Risk AI Systems

Article 23: Obligations of Importers

Article 24: Obligations of Distributors

Article 25: Responsibilities Along the AI Value Chain

Article 26: Obligations of Deployers of High-Risk AI Systems

Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems

Section 4: Notifying Authorities and Notified Bodies

Article 28: Notifying Authorities

Article 29: Application of a Conformity Assessment Body for Notification

Article 30: Notification Procedure

Article 31: Requirements Relating to Notified Bodies

Article 32: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 33: Subsidiaries of Notified Bodies and Subcontracting

Article 34: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Section 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 45: Information Obligations of Notified Bodies

Article 46: Derogation from Conformity Assessment Procedure

Article 47: EU Declaration of Conformity

Article 48: CE Marking

Article 49: Registration

Chapter IX: Post-Market Monitoring, Information Sharing and Market Surveillance

Section 1: Post-Market Monitoring

Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Section 2: Sharing of Information on Serious Incidents

Article 73: Reporting of Serious Incidents

Section 3: Enforcement

Article 74: Market Surveillance and Control of AI Systems in the Union Market

Article 75: Mutual Assistance, Market Surveillance and Control of General-Purpose AI Systems

Article 76: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 77: Powers of Authorities Protecting Fundamental Rights

Article 78: Confidentiality

Article 79: Procedure at National Level for Dealing with AI Systems Presenting a Risk

Article 80: Procedure for Dealing with AI Systems Classified by the Provider as Non-High-Risk in Application of Annex III

Article 81: Union Safeguard Procedure

Article 82: Compliant AI Systems Which Present a Risk

Article 83: Formal Non-Compliance

Article 84: Union AI Testing Support Structures

Section 4: Remedies

Article 85: Right to Lodge a Complaint with a Market Surveillance Authority

Article 86: Right to Explanation of Individual Decision-Making

Article 87: Reporting of Infringements and Protection of Reporting Persons

Section 5: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General-Purpose AI Models

Article 88: Enforcement of the Obligations of Providers of General-Purpose AI Models

Article 89: Monitoring Actions

Article 90: Alerts of Systemic Risks by the Scientific Panel

Article 91: Power to Request Documentation and Information

Article 92: Power to Conduct Evaluations

Article 93: Power to Request Measures

Article 94: Procedural Rights of Economic Operators of the General-Purpose AI Model

Recitals

Annexes

Annex XI: Technical Documentation Referred to in Article 53(1), Point (a) – Technical Documentation for Providers of General-Purpose AI Models

Summary

Article 53 requires providers of general-purpose AI models to draw up detailed technical documentation. This Annex specifies what that documentation must contain: a general description of the model, including its intended tasks, acceptable use policies, release date, distribution methods, architecture, and licence, together with details of the model's design, training process, training data, and energy consumption. For models with systemic risk, additional information is required, including evaluation strategies, adversarial testing measures, and a description of the system architecture. These requirements are intended to ensure transparency and accountability in the development and use of general-purpose AI models.


Section 1 – Information to be provided by all providers of general-purpose AI models

The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as appropriate to the size and risk profile of the model:

1. A general description of the general-purpose AI model including:

(a) the tasks that the model is intended to perform and the type and nature of AI systems in which it can be integrated;

(b) the acceptable use policies applicable;

(c) the date of release and methods of distribution;

(d) the architecture and number of parameters;

(e) the modality (e.g. text, image) and format of inputs and outputs;

(f) the licence.

2. A detailed description of the elements of the model referred to in point 1, and relevant information on the process of its development, including the following elements:

(a) the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to be integrated in AI systems;

(b) the design specifications of the model and training process, including training methodologies and techniques, the key design choices including the rationale and assumptions made; what the model is designed to optimise for and the relevance of the different parameters, as applicable;

(c) information on the data used for training, testing and validation, where applicable, including the type and provenance of data and curation methodologies (e.g. cleaning, filtering etc.), the number of data points, their scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the unsuitability of data sources and methods to detect identifiable biases, where applicable;

(d) the computational resources used to train the model (e.g. number of floating point operations), training time, and other relevant details related to the training;

(e) known or estimated energy consumption of the model.

With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be based on information about computational resources used.
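For illustration, the information required by Section 1 above could be assembled into a machine-readable record alongside the written documentation. The following is a minimal sketch in Python; the class and field names (e.g. GeneralDescription, parameter_count) are hypothetical conventions of this example, not terms prescribed by the Annex.

from dataclasses import dataclass
from typing import Optional

# Hypothetical record of the Section 1 information; the Annex prescribes
# the content of the documentation, not this (or any) machine-readable format.
@dataclass
class GeneralDescription:
    intended_tasks: list[str]            # point 1(a)
    integrable_system_types: list[str]   # point 1(a): AI systems it can be integrated into
    acceptable_use_policies: str         # point 1(b)
    release_date: str                    # point 1(c)
    distribution_methods: list[str]      # point 1(c)
    architecture: str                    # point 1(d)
    parameter_count: int                 # point 1(d)
    input_modalities: list[str]          # point 1(e), e.g. ["text", "image"]
    output_modalities: list[str]         # point 1(e)
    licence: str                         # point 1(f)

@dataclass
class DevelopmentProcess:
    integration_requirements: str        # point 2(a): instructions, infrastructure, tools
    design_specifications: str           # point 2(b): methodologies, key design choices
    training_data_description: str       # point 2(c): provenance, curation, bias detection
    training_compute_flops: float        # point 2(d)
    training_time_hours: float           # point 2(d)
    energy_consumption_kwh: Optional[float] = None  # point 2(e); may be estimated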
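Point 2(e) allows the energy figure to be estimated from the computational resources reported under point 2(d) when it was not measured directly. One common back-of-the-envelope conversion, shown below as a hedged sketch, turns total training FLOPs into accelerator-hours and then into kilowatt-hours; every constant (peak throughput, utilisation, board power, PUE) is an illustrative assumption, not a value given in the Act.

def estimate_training_energy_kwh(
    total_flops: float,
    gpu_peak_flops: float = 3.12e14,  # assumed accelerator peak, ~312 TFLOP/s
    utilisation: float = 0.4,         # assumed fraction of peak throughput achieved
    gpu_power_kw: float = 0.5,        # assumed average power draw per accelerator
    pue: float = 1.2,                 # assumed data-centre power usage effectiveness
) -> float:
    """Rough training-energy estimate from compute, in the spirit of point 2(e).
    All default constants are illustrative assumptions, not regulatory values."""
    accelerator_seconds = total_flops / (gpu_peak_flops * utilisation)
    accelerator_hours = accelerator_seconds / 3600.0
    return accelerator_hours * gpu_power_kw * pue

# Example: a model trained with 1e24 floating point operations
print(f"Estimated energy: {estimate_training_energy_kwh(1e24):,.0f} kWh")

With these assumed constants, a 1e24-FLOP training run works out to roughly 1.3 GWh; substituting measured utilisation and power figures for the assumptions would of course change the result.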

Section 2 – Additional information to be provided by providers of general-purpose AI models with systemic risk

1. A detailed description of the evaluation strategies, including evaluation results, on the basis of available public evaluation protocols and tools or, otherwise, of other evaluation methodologies. Evaluation strategies shall include evaluation criteria, metrics and the methodology on the identification of limitations.

2. Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or external adversarial testing (e.g. red teaming), model adaptations, including alignment and fine-tuning.

3. Where applicable, a detailed description of the system architecture explaining how software components build or feed into each other and integrate into the overall processing.
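The three Section 2 items can likewise be tracked in a structured form for internal record-keeping. A minimal sketch, assuming the same hypothetical documentation conventions as the Section 1 example above:

from dataclasses import dataclass

# Hypothetical structure for the Section 2 items; names are illustrative only.
@dataclass
class EvaluationRecord:
    criteria: str                    # point 1: evaluation criteria
    metrics: dict[str, float]        # point 1: metric name -> result
    methodology: str                 # point 1: incl. how limitations were identified
    protocol_source: str             # public protocol/tool used, or "internal"

@dataclass
class SystemicRiskDocumentation:
    evaluations: list[EvaluationRecord]   # point 1: strategies and results
    adversarial_testing_measures: str     # point 2: e.g. internal/external red teaming
    model_adaptations: str                # point 2: alignment and fine-tuning measures
    system_architecture_description: str  # point 3: how components integrate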


The text used in this tool is the ‘Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal version of 13 June 2024’. Interinstitutional File: 2021/0106(COD)