Table of contents

Section 1: Classification of AI Systems as High-Risk

Article 6: Classification Rules for High-Risk AI Systems

Article 7: Amendments to Annex III

Section 2: Requirements for High-Risk AI Systems

Article 8: Compliance with the Requirements

Article 9: Risk Management System

Article 10: Data and Data Governance

Article 11: Technical Documentation

Article 12: Record-Keeping

Article 13: Transparency and Provision of Information to Deployers

Article 14: Human Oversight

Article 15: Accuracy, Robustness and Cybersecurity

Section 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties

Article 16: Obligations of Providers of High-Risk AI Systems

Article 17: Quality Management System

Article 18: Documentation Keeping

Article 19: Automatically Generated Logs

Article 20: Corrective Actions and Duty of Information

Article 21: Cooperation with Competent Authorities

Article 22: Authorised Representatives of Providers of High-Risk AI Systems

Article 23: Obligations of Importers

Article 24: Obligations of Distributors

Article 25: Responsibilities Along the AI Value Chain

Article 26: Obligations of Deployers of High-Risk AI Systems

Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems

Section 4: Notifying Authorities and Notified Bodies

Article 28: Notifying Authorities

Article 29: Application of a Conformity Assessment Body for Notification

Article 30: Notification Procedure

Article 31: Requirements Relating to Notified Bodies

Article 32: Presumption of Conformity with Requirements Relating to Notified Bodies

Article 33: Subsidiaries of and Subcontracting by Notified Bodies

Article 34: Operational Obligations of Notified Bodies

Article 35: Identification Numbers and Lists of Notified Bodies Designated Under this Regulation

Article 36: Changes to Notifications

Article 37: Challenge to the Competence of Notified Bodies

Article 38: Coordination of Notified Bodies

Article 39: Conformity Assessment Bodies of Third Countries

Section 5: Standards, Conformity Assessment, Certificates, Registration

Article 40: Harmonised Standards and Standardisation Deliverables

Article 41: Common Specifications

Article 42: Presumption of Conformity with Certain Requirements

Article 43: Conformity Assessment

Article 44: Certificates

Article 45: Information Obligations of Notified Bodies

Article 46: Derogation from Conformity Assessment Procedure

Article 47: EU Declaration of Conformity

Article 48: CE Marking

Article 49: Registration

Section 1: Post-Market Monitoring

Article 72: Post-Market Monitoring by Providers and Post-Market Monitoring Plan for High-Risk AI Systems

Section 2: Sharing of Information on Serious Incidents

Article 73: Reporting of Serious Incidents

Section 3: Enforcement

Article 74: Market Surveillance and Control of AI Systems in the Union Market

Article 75: Mutual Assistance, Market Surveillance and Control of General Purpose AI Systems

Article 76: Supervision of Testing in Real World Conditions by Market Surveillance Authorities

Article 77: Powers of Authorities Protecting Fundamental Rights

Article 78: Confidentiality

Article 79: Procedure for Dealing with AI Systems Presenting a Risk at National Level

Article 80: Procedure for Dealing with AI Systems Classified by the Provider as Not High-Risk in Application of Annex III

Article 81: Union Safeguard Procedure

Article 82: Compliant AI Systems Which Present a Risk

Article 83: Formal Non-Compliance

Article 84: Union AI Testing Support Structures

Section 4: Remedies

Article 85: Right to Lodge a Complaint with a Market Surveillance Authority

Article 86: A Right to Explanation of Individual Decision-Making

Article 87: Reporting of Breaches and Protection of Reporting Persons

Section 5: Supervision, Investigation, Enforcement and Monitoring in Respect of Providers of General Purpose AI Models

Article 88: Enforcement of Obligations on Providers of General Purpose AI Models

Article 89: Monitoring Actions

Article 90: Alerts of Systemic Risks by the Scientific Panel

Article 91: Power to Request Documentation and Information

Article 92: Power to Conduct Evaluations

Article 93: Power to Request Measures

Article 94: Procedural Rights of Economic Operators of the General Purpose AI Model

Annexes

Recital 60

AI systems used in migration, asylum and border control management affect persons who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under relevant Union and national law, AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools, for assessing certain risks posed by natural persons entering the territory of a Member State or applying for a visa or asylum, for assisting competent public authorities in the examination, including the related assessment of the reliability of evidence, of applications for asylum, visas and residence permits and associated complaints with regard to the objective of establishing the eligibility of the natural persons applying for a status, and for the purpose of detecting, recognising or identifying natural persons in the context of migration, asylum and border control management, with the exception of the verification of travel documents. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Regulation (EC) No 810/2009 of the European Parliament and of the Council[32], Directive 2013/32/EU of the European Parliament and of the Council[33], and other relevant Union law.
AI systems in migration, asylum and border control management should in no circumstances be used by Member States or Union institutions, bodies, offices or agencies as a means to circumvent their international obligations under the UN Convention relating to the Status of Refugees done at Geneva on 28 July 1951, as amended by the Protocol of 31 January 1967. Nor should they be used in any way to infringe on the principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including the right to international protection.

[32] Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).

[33] Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and withdrawing international protection (OJ L 180, 29.6.2013, p. 60).

The text used in this tool is the ‘Artificial Intelligence Act, Corrigendum, 19 April 2024’. Interinstitutional File: 2021/0106(COD)