An introduction to the Code of Practice for the AI Act

03 Jul, 2024

Updated: 22 April 2025. This blog post will be updated as new information becomes available.

This summary, detailing the Code of Practice for general-purpose AI (GPAI) model providers, was put together by Jimmy Farrell, EU AI policy co-lead at Pour Demain, and Tekla Emborg, Policy Researcher at Future of Life Institute. For further questions, please reach out to jimmy.farrell@pourdemain.eu


As AI Act implementation gradually unfolds, it is important to understand the different mechanisms of enforcement included in the Regulation. One of the most important is the general-purpose AI (GPAI) Code of Practice, which is currently being developed by the AI Office and a wide range of stakeholders.

A quick summary of the Code of Practice:

  • Purpose: The AI Act Code of Practice (introduced in Article 56) is a set of guidelines for compliance with the AI Act. It will be a crucial tool for ensuring compliance with EU AI Act obligations, especially in the interim period between the general-purpose AI (GPAI) model provider obligations taking effect (August 2025) and the adoption of standards (August 2027 or later). Though the Code is not legally binding, GPAI model providers can adhere to it to demonstrate compliance with their obligations until European standards come into effect.
  • Process: The Code is being developed through a multi-stakeholder process, involving academic and independent experts, GPAI model providers, downstream deployers, members of civil society, and more.
  • Content: The Code has three sections. The first two, Transparency and Copyright, apply to all GPAI model providers. The third, Safety and Security, applies only to providers of GPAI models with systemic risk. For each section, the Code lays down commitments and corresponding measures for how providers can live up to them.
  • Implementation: The AI Office will assess the adequacy of the Code, and the Commission may approve it through an implementing act. If a Code of Practice cannot be finalised, the Commission may provide common rules for implementation.

Introduction

This blog post explains the concept, process and importance of the Code of Practice, an AI Act tool to bridge the interim period between obligations for general-purpose AI (GPAI) model providers coming into force and the eventual adoption of harmonised European GPAI model standards. Following the publication of the third draft of the Code of Practice on 11 March 2025, this post summarises the most important and up-to-date information.

Background: Standards and the need for a Code of Practice 

Following the entry into force of the AI Act on 1 August 2024, obligations within the Regulation will be phased in gradually, as detailed in our overview of the implementation timeline, with provisions on prohibited AI systems already in effect since February 2025. In the meantime, the complex process of developing harmonised European standards that operationalise AI Act obligations has begun. Whilst an official standardisation request has been adopted by the Commission and approved by CEN-CENELEC regarding standards for AI systems,[1] an equivalent request on GPAI model standards is yet to be drafted. When such a standardisation request will be issued depends largely on how effectively the Code of Practice implements the relevant obligations under the AI Act. The standardisation process is detailed in a previous blog post.

Under the AI Act, obligations for GPAI models, detailed in Articles 50–55, are enforceable twelve months[2] after the Act enters into force (2 August 2025). However, the European standardisation process, involving mostly[3] the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), often takes up to three years.[4] This process can last even longer for more technical standards, such as those for GPAI, and when drafting is coordinated with international standards,[5] as prescribed in the AI Act.[6] The multi-stakeholder engagement and consensus-building approaches characteristic of standards setting further lengthen the timeline. Thus, GPAI model provider obligations are not likely to be operationalised as technical standards any time soon.

Legal requirements for the Code of Practice

Article 56 of the AI Act outlines the Code of Practice as a placeholder mode of compliance to bridge the gap between GPAI model provider obligations coming into effect (twelve months) and the adoption of standards (three years or more). While the Code is not legally binding, GPAI model providers can rely on it to demonstrate compliance with the obligations in Articles 53 and 55 until standards are developed. These obligations include:[7]

  • Provision of technical documentation to the AI Office and National Competent Authorities
  • Provision of relevant information to downstream providers that seek to integrate the model into their AI system or GPAI system (e.g. capabilities and limitations)
  • Summaries of training data used
  • Policies for complying with existing Union copyright law

For GPAI models with systemic risk (models trained above the 10^25 FLOP threshold), further obligations include:[8]

  • State of the art model evaluations
  • Risk assessment and mitigation
  • Serious incident reporting, including corrective measures
  • Adequate cybersecurity protection

Providers who do not adhere to the Code of Practice will have to demonstrate compliance with the above obligations to the Commission by alternative, possibly more burdensome and time-consuming, means.[9]

The Code of Practice is likely to form the basis for the GPAI standards. The contents of the Code are therefore intended to reflect the intentions of the AI Act faithfully, including with regard to health, safety and fundamental rights concerns.

What? Code of Practice content (third draft)

Since the drafting process began in October 2024, three draft versions of the Code of Practice have been published. While the final version is expected in early May 2025, the third draft, published on 11 March 2025, gives a good indication of the structure and content of the expected final version. The drafting Chairs and Vice-Chairs have set up an interactive webpage with the full text, an FAQ, and summaries.

Overall, the Code has three sections. The first two, Transparency and Copyright, apply to all GPAI model providers. The third, Safety and Security, applies only to providers of GPAI models with systemic risk (above the 10^25 FLOP threshold), currently a very small group of 5-15 companies worldwide.[10] For each section, the Code lays down commitments and corresponding measures for how providers can live up to them.

Transparency Section (all GPAI model providers)

Under this section, signatories commit to drawing up and keeping up to date model documentation; providing relevant information to downstream providers and to the AI Office upon request; and ensuring the quality, security and integrity of the documented information. The Code includes a Model Documentation Form that will be fully interactive. This section does not apply to open-source AI models unless they qualify as GPAI models with systemic risk.
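While the official Model Documentation Form is defined in the Code itself, the following is a minimal, purely hypothetical sketch of what a machine-readable documentation record could look like. The field names are illustrative only, loosely inspired by the kinds of information referenced in the Act's Annexes XI and XII, and are not the official form:

```python
# Hypothetical sketch of a model documentation record. Field names are
# illustrative and loosely follow the categories of information referenced
# in Annexes XI/XII; this is NOT the official Model Documentation Form.
from dataclasses import dataclass

@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str
    distribution: list[str]        # e.g. API, downloadable weights
    capabilities: list[str]        # tasks the model is intended to perform
    limitations: list[str]         # known failure modes and restrictions
    architecture: str
    parameter_count: int
    training_data_summary: str     # public summary of training content
    licence: str

# Hypothetical model and provider, for illustration only.
doc = ModelDocumentation(
    model_name="ExampleModel-1",
    provider="Example AI Ltd",
    release_date="2025-08-02",
    distribution=["API"],
    capabilities=["text generation", "summarisation"],
    limitations=["may produce factually incorrect output"],
    architecture="dense transformer",
    parameter_count=70_000_000_000,
    training_data_summary="Publicly available web text; see published summary.",
    licence="proprietary",
)
```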

Copyright Section (all GPAI model providers)

Signatories commit to drawing up, keeping up-to-date, and implementing a copyright policy. According to the measures in the Code, this policy shall ensure that signatories only reproduce and extract lawfully accessible copyright-protected content when crawling the World Wide Web; comply with rights reservations; mitigate the risk of production of copyright-infringing output; designate a point of contact; and allow for the submission of complaints concerning non-compliance. 
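The rights-reservation measures build on the text and data mining opt-out in Article 4(3) of the EU Copyright Directive, for which robots.txt is the most widely used machine-readable mechanism. As a purely illustrative sketch (the crawler name and URL below are hypothetical), a provider's crawler could check a page against a site's robots.txt with Python's standard library before extracting any content:

```python
# Minimal sketch of a robots.txt compliance check before crawling a page.
# "ExampleGPTBot" is a hypothetical crawler identifier, not a real product.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

CRAWLER_USER_AGENT = "ExampleGPTBot"  # hypothetical crawler name

def may_crawl(url: str) -> bool:
    """Return True only if the site's robots.txt permits fetching this URL."""
    parsed = urlparse(url)
    parser = RobotFileParser()
    parser.set_url(f"{parsed.scheme}://{parsed.netloc}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, url)

if __name__ == "__main__":
    page = "https://example.com/articles/page.html"
    print(f"Allowed to crawl {page}: {may_crawl(page)}")
```

A real crawler would also need to honour other reservation mechanisms (e.g. metadata-based opt-outs) that the Code's measures contemplate; robots.txt is only the most common case.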

Safety and Security Section (providers of GPAI models with systemic risk only)

This section only concerns providers of GPAI models with systemic risk. Signatories commit to adopting and implementing a Safety and Security Framework, detailing the risk assessment, mitigation, and governance measures intended to keep systemic risks within acceptable levels. Signatories will assess and mitigate risks along the entire model lifecycle, identify systemic risks, analyse their severity and probability, and lay down risk acceptance criteria. This includes implementing appropriate state-of-the-art technical safety and security mitigations to prevent risks from unauthorised access to unreleased model weights, including from insider threats. Signatories will report their implementation of the Code through Safety and Security Model Reports for each of their GPAI models with systemic risk. Further, signatories will allocate responsibility internally; carry out independent external assessments unless exceptions apply; monitor serious incidents; adopt non-retaliation protections; notify the AI Office at specified milestones; and document and publish information relevant to the public understanding of systemic risks stemming from GPAI models.

Who? Scope of the GPAI model provider definition

The first part of the Code is relevant to all providers of GPAI models. ‘General-purpose AI models’ are defined under the AI Act as models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.[11] The model’s release mode (open weights, API, etc.) does not matter for the sake of this definition, except when the model is used for research, development or prototyping activities prior to market placement.[12] GPAI model providers are natural or legal persons, public authorities, agencies or other bodies that develop a GPAI model, or have one developed, and place it on the market under their own name or trademark, whether for payment or free of charge.[13]

Further, the second part of the Code is relevant to entities providing ‘GPAI models with systemic risk’. ‘Systemic risk’ is defined as risk specific to the high-impact capabilities of GPAI models, having a significant impact on the Union market. This could be due to the models’ reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain.[14] A model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training exceeds 10^25 floating point operations (FLOPs). This presumption is rebuttable; crossing the threshold does not automatically qualify a model. Currently, an estimated 11 providers worldwide offer models that surpass this threshold.
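To give a sense of the scale of the 10^25 FLOP presumption, a common rule of thumb from the scaling-laws literature estimates dense-transformer training compute as roughly 6 × parameters × training tokens. The sketch below applies this approximation to hypothetical numbers; the heuristic itself is not part of the AI Act, which counts actual cumulative training compute:

```python
# Back-of-the-envelope check against the 10^25 FLOP presumption using the
# common "6 * N * D" approximation (N = parameters, D = training tokens).
# The model size and token count below are hypothetical examples.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # the AI Act's presumption threshold

def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOP per parameter per token."""
    return 6.0 * n_params * n_tokens

compute = estimate_training_flop(n_params=1e12, n_tokens=15e12)
print(f"Estimated training compute: {compute:.1e} FLOP")                  # 9.0e+25
print(f"Presumed high-impact: {compute > SYSTEMIC_RISK_THRESHOLD_FLOP}")  # True
```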

While the AI Act lays down these definitions, the Commission will issue further guidance on what they mean in practice. Importantly, this includes guidelines clarifying obligations regarding model modifications, such as fine-tuning.

How? Drafting process

Due to the twelve-month timeline for GPAI model provider obligations coming into force, the Code of Practice must be finalised by 2 May 2025. This is intended to give the Commission, following a review by the AI Office and the AI Board,[15] time to approve or reject the Code with an implementing act.[16] If the Code is rejected, the Commission can develop alternative methods of enforcement.

The drafting process has been ongoing since October 2024. Following an open call for expressions of interest, more than a thousand stakeholders have provided written input on three successive drafts of the Code. Participants included GPAI model providers, downstream providers, trade associations, academics, independent experts, and civil society organisations. The multi-stakeholder process was led by thirteen independent Chairs and Vice-Chairs.

Working group structure

The Chairs divided the drafting into four different content categories in accordance with the GPAI model section of the AI Act:

  • Working Group 1: Transparency and copyright-related rules

Detailing documentation to downstream providers and the AI Office on the basis of Annexes XI and XII to the AI Act, policies to be put in place to comply with Union law on copyright and related rights, and making publicly available a summary about the training content.

  • Working Group 2: Risk identification and assessment measures for systemic risks

Detailing the risk taxonomy based on a proposal by the AI Office and identifying and detailing relevant technical risk assessment measures, including model evaluation and adversarial testing.

  • Working Group 3: Risk mitigation measures for systemic risks

Identifying and detailing relevant technical risk mitigation measures, including cybersecurity protection for the general-purpose AI model and the physical infrastructure of the model.

  • Working Group 4: Internal risk management and governance for general-purpose AI model providers

Identifying and detailing policies and procedures to operationalise risk management in internal governance of general-purpose AI model providers, including keeping track of, documenting, and reporting serious incidents and possible corrective measures.

Source: European Commission

Plenary

After each new draft iteration, the Chairs hosted plenary sessions to answer questions and allow for stakeholder presentations. The Plenaries were divided into four working groups, based on the content categories outlined above, with only one representative per organisation allowed in each working group. The “kick-off” plenary took place on 30 September 2024 as a virtual meeting featuring nearly a thousand attendees[17] from industry, rightsholders, civil society and academia.[18] Since then, three further plenaries have taken place for all working groups. In parallel to the four working groups, dedicated workshops between GPAI model providers and the working group Chairs and Vice-Chairs took place to inform each iterative drafting round, as providers are seen as the main addressees of the Code. There has also been a separate workshop for civil society organisations.

The drafting process has generally adhered to the timeline as initially set out, with the exception of a two-week delay to the release of the third draft and the accompanying plenary round. Following the three drafting rounds, plenary workshops and provider-only workshops, the final version of the Code of Practice is expected to be shared in a Closing Plenary in May 2025.

Figure 1: CoP drafting process – Source: European Commission

Chairs and Vice-Chairs

The Chairs and Vice-Chairs have been a crucial component of the Code of Practice drafting process. They were designated based on demonstrated expertise in relevant areas; the ability to fulfil the role (time commitment and operational experience); and independence, meaning “no financial interest or other interest, which could affect their independence, impartiality and objectivity”. They have been the “pen holders” responsible for collating the input from all stakeholders into one succinct Code of Practice. The Chairs and Vice-Chairs, and their respective backgrounds, are listed below:

Working Group 1: Transparency and copyright-related rules

  • Nuria Oliver (Co-Chair, Spain) – Director of the ELLIS Alicante Foundation
  • Alexander Peukert (Co-Chair, Germany) – Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt am Main
  • Rishi Bommasani (Vice-Chair, US) – Society Lead at the Stanford Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered AI
  • Céline Castets-Renard (Vice-Chair, France) – Full Law Professor at the Civil Law Faculty, University of Ottawa, and Research Chair Holder, Accountable AI in a Global Context

Working Group 2: Risk identification and assessment, including evaluations

  • Matthias Samwald (Chair, Austria) – Associate Professor at the Institute of Artificial Intelligence at the Medical University of Vienna
  • Marta Ziosi (Vice-Chair, Italy) – Postdoctoral Researcher at the Oxford Martin AI Governance Initiative
  • Alexander Zacherl (Vice-Chair, Germany) – Independent Systems Designer; previously at the UK AI Safety Institute and DeepMind

Working Group 3: Technical risk mitigation

  • Yoshua Bengio (Chair, Canada) – Full Professor at Université de Montréal, and Founder and Scientific Director of Mila – Quebec AI Institute (Turing Award winner)
  • Daniel Privitera (Vice-Chair, Italy and Germany) – Founder and Executive Director of the KIRA Center
  • Nitarshan Rajkumar (Vice-Chair, Canada) – PhD candidate researching AI at the University of Cambridge

Working Group 4: Internal risk management and governance of General-purpose AI providers

  • Marietje Schaake (Chair, Netherlands) – Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centred AI
  • Markus Anderljung (Vice-Chair, Sweden) – Director of Policy and Research at the Centre for the Governance of AI
  • Anka Reuel (Vice-Chair, Germany) – Computer Science PhD candidate at Stanford University

The time commitment for these consequential positions has been significant. However, for reasons of financial independence, the Chair and Vice-Chair positions are all unpaid (as is participation in the Plenary); the Chairs are instead supported by external contractors, namely a consortium of consultancies including the French consultancy Wavestone.

When? Next steps

The Chairs and Vice-Chairs are expected to present the final Code of Practice at a Closing Plenary in May 2025. Subsequently, the AI Office and the AI Board will publish their assessment of the Code. The Commission may then decide to approve the Code by way of an implementing act, which would give it general validity within the Union.[19] The AI Act specifies that a Code of Practice shall be ready at the latest by 2 May 2025.[20] Given the slight delay to the third draft, it remains to be seen whether the Chairs and the AI Office will manage to adhere to this timeline.

If by 2 August 2025 a Code of Practice cannot be finalised, the Commission is empowered to provide common rules for the implementation of the obligations for GPAI model providers through implementing acts. The Commission has emphasised in the AI Continent Action Plan that implementing measures will be in place in time for the entry into application.

The Code of Practice addresses ‘signatories’ throughout, i.e. entities with obligations under the AI Act who sign up to use the Code as a means of demonstrating compliance with those obligations. The exact meaning and implications of being a signatory, as well as the process by which entities can sign up, are unclear at this stage – for example, whether companies count as signatories if they use parts of the Code to demonstrate compliance with certain obligations but demonstrate compliance with others by different means. It is also unclear whether there will be particular consequences for signatories failing to comply with the Code, compared to non-signatories failing to comply with the AI Act.


Notes and references

  1. Standardisation request for AI systems
  2. Article 113(b)
  3. The European Telecommunications Standards Institute (ETSI) may also be involved.
  4. CEN-CENELEC
  5. ISO
  6. Article 40(3)
  7. Article 53
  8. Article 55
  9. Articles 53(4) and 55(2)
  10. Safety and Security FAQ
  11. Article 3(63)
  12. However, note that the release mode does matter for the applicable obligations, as providers of GPAI models released under a free and open-source licence are exempt from the obligations under Article 53(1)(a) and (b) (see Article 53(2)).
  13. Article 3(3)
  14. Article 3(65)
  15. European Commission: Artificial Intelligence – Questions and Answers
  16. Article 56(9)
  17. Number disclosed in Commission press release
  18. Euractiv article discussing diversity of participants
  19. Article 56(6)
  20. Article 56(9)
