
Article 25: Responsibilities Along the AI Value Chain

Date of entry into force: 2 August 2026 (according to Article 113)


Summary

This article sets out when a distributor, importer, deployer, or other third party is treated as the provider of a high-risk AI system and must therefore meet the provider obligations in Article 16: when they put their name or trademark on a system already on the market, when they substantially modify a system that remains high-risk, or when they change a system's intended purpose so that it becomes high-risk. In those cases the original provider is no longer considered the provider of that system, but must cooperate closely with the new provider and supply the information, technical access, and assistance needed for compliance, unless it clearly specified that its system was not to be changed into a high-risk system. Where a high-risk AI system is a safety component of a product, the product manufacturer is considered the provider if the system is placed on the market or put into service under the manufacturer's name or trademark. Finally, the provider of a high-risk AI system and any third party supplying tools, services, components, or processes used in it must agree in writing on the necessary information, capabilities, and technical access; this does not apply to third parties offering such tools or components (other than general-purpose AI models) under a free and open-source licence. The AI Office may develop voluntary model contract terms for these situations, and the article preserves the protection of intellectual property rights, confidential business information, and trade secrets.



1. Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:

(a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated;

(b) they make a substantial modification to a high-risk AI system that has already been placed on the market or has already been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;

(c) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned becomes a high-risk AI system in accordance with Article 6.

Related: Recital 85

Related: Recital 84
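For readers modelling compliance workflows, the three triggering circumstances in paragraph 1 can be sketched as a simple disjunctive check. This is an illustrative simplification only, not legal advice: the field names below are our own, and the actual legal tests (for example, what counts as a "substantial modification", a term defined in Article 3 of the Regulation) are more nuanced.

```python
from dataclasses import dataclass

# Illustrative sketch of Article 25(1); field names are assumptions,
# not terminology from the Regulation itself.

@dataclass
class DownstreamActor:
    puts_name_or_trademark_on_system: bool   # point (a)
    substantially_modifies_high_risk: bool   # point (b)
    repurposes_system_to_high_risk: bool     # point (c)

def becomes_provider(actor: DownstreamActor) -> bool:
    """Return True if any Article 25(1) circumstance applies, in which
    case the actor is treated as the provider of the high-risk AI system
    and assumes the provider obligations under Article 16."""
    return (
        actor.puts_name_or_trademark_on_system
        or actor.substantially_modifies_high_risk
        or actor.repurposes_system_to_high_risk
    )
```

On this reading, a distributor that merely resells the system unchanged (all three flags false) stays outside Article 16, while rebranding the system under its own name or trademark (point (a) alone) is enough to trigger the full set of provider obligations.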

2. Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the documentation.

Related: Recital 86

3. In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:

(a) the high-risk AI system is placed on the market together with the product under the name or trademark of the product manufacturer;

(b) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product has been placed on the market.

Related: Recital 87

4. The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall not apply to third parties making accessible to the public tools, services, processes, or components, other than general-purpose AI models, under a free and open-source licence.

The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available free of charge in an easily usable electronic format.

Related: Recitals 88, 89, and 90

5. Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential business information and trade secrets in accordance with Union and national law.


The text used in this tool is the ‘Artificial Intelligence Act (Regulation (EU) 2024/1689), Official Journal version of 13 June 2024’. Interinstitutional File: 2021/0106(COD)