In this article we provide an outline of the key dates relevant to the implementation of the AI Act. We also list the secondary legislation the Commission may adopt to supplement the AI Act, and some guidelines it may publish to support compliance efforts.
Compliance deadlines
By 6 months after entry into force:
- Prohibitions on unacceptable risk AI. (Article 113)
By 9 months after entry into force:
- Codes of practice for General Purpose AI (GPAI) must be finalised. (Article 56)
By 12 months after entry into force:
- GPAI rules apply. (Article 113)
- Appointment of Member State competent authorities. (Article 70)
- Annual Commission review and possible amendments on prohibitions. (Article 112)
By 18 months after entry into force:
- Commission issues implementing acts creating a template for high-risk AI providers’ post-market monitoring plans. (Article 72)
By 24 months after entry into force:
- Obligations apply to the high-risk AI systems specifically listed in Annex III, which covers AI systems used in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, migration and the administration of justice. (Article 113)
- Member states to have implemented rules on penalties, including administrative fines. (Article 99)
- Member state authorities to have established at least one operational AI regulatory sandbox. (Article 57)
- Commission review, and possible amendment of, the list of high-risk AI systems. (Article 112)
By 36 months after entry into force:
- Obligations on Annex I high-risk AI systems apply: that is, high-risk AI systems not listed in Annex III that are intended to be used as a safety component of a product (or that are themselves products) covered by the existing EU laws listed in Annex I and required to undergo a third-party conformity assessment under those laws, for example toys, radio equipment, in vitro diagnostic medical devices, civil aviation security and agricultural vehicles. (Article 113)
By the end of 2030:
- Obligations go into effect for certain AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security and justice, such as the Schengen Information System. (Article 111)
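The deadlines above are expressed relative to entry into force; since the AI Act entered into force on 1 August 2024, each one resolves to a fixed calendar date (the Act pins its application dates to the 2nd of the month, e.g. 2 February 2025 for the prohibitions). The date arithmetic can be sketched as follows; the milestone labels are shorthand for the bullets above, and the 2 August 2024 base date is an assumption derived from the Act's own stated dates:

```python
from datetime import date

# Assumption: the Act's application dates count whole months from
# 2 August 2024 (it entered into force on 1 August 2024).
APPLICATION_BASE = date(2024, 8, 2)

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, keeping the day of month."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Shorthand labels for the milestones listed above.
milestones = {
    "Prohibitions on unacceptable risk AI": 6,
    "GPAI codes of practice finalised": 9,
    "GPAI rules apply": 12,
    "Post-market monitoring plan template": 18,
    "Annex III high-risk obligations": 24,
    "Annex I high-risk obligations": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(APPLICATION_BASE, months)}")
```

Running this reproduces the dates in the Act itself, e.g. 2 February 2025 for the prohibitions and 2 August 2026 for the Annex III obligations.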
Secondary legislation
The Commission can introduce delegated acts on:
- Definition of AI system. (Article 96)
- Criteria that exempt AI systems from high risk rules. (Article 6)
- High risk AI use cases. (Article 7)
- Thresholds for classifying General Purpose AI models as having systemic risk. (Article 51)
- Technical documentation requirements for high risk AI systems and GPAI. (Article 11)
- Conformity assessments. (Article 43)
- EU declaration of conformity. (Article 47)
The Commission’s power to issue delegated acts lasts for an initial and extendable period of five years. (Article 97)
The AI Office is to draw up codes of practice covering, but not necessarily limited to, the obligations of providers of general purpose AI models. Codes of practice should be ready at the latest nine months after entry into force and should provide at least a three-month period before taking effect. (Article 56)
The Commission can introduce implementing acts on:
- Approving codes of practice for GPAI and generative AI watermarking. (Article 56)
- Establishing the scientific panel of independent experts. (Article 68)
- Conditions for AI Office evaluations of GPAI compliance. (Article 92)
- Operational rules for AI regulatory sandboxes. (Article 58)
- Information in real world testing plans. (Article 60)
- Common specifications (where standards do not cover rules). (Article 41)
Commission guidelines
The Commission can provide guidance on:
- By 12 months after entry into force: High risk AI serious incident reporting. (Article 73)
- By 18 months after entry into force: Practical guidance on determining whether an AI system is high risk, with a list of practical examples of high-risk and non-high-risk use cases. (Article 6)
- With no specific timeline, the Commission will provide guidelines on: (Article 96)
- The application of the definition of an AI system.
- High risk AI provider requirements.
- Prohibitions.
- Substantial modifications.
- Transparency disclosures to end-users.
- Detailed information on the relationship between the AI Act and other EU laws.
The Commission is to report on its delegated powers no later than nine months before the end of the five-year delegation period. (Article 97)