Last updated: 14 August 2025.
As AI Act implementation gradually unfolds, it is important to understand the different mechanisms of enforcement included in the Regulation. One of the most important is the general-purpose AI Code of Practice, which was developed by the AI Office and a wide range of stakeholders.
This summary, detailing the Code of Practice for general-purpose AI model providers, was put together by Jimmy Farrell, EU AI policy co-lead at Pour Demain, and Tekla Emborg, Policy Researcher at Future of Life Institute. For further questions, please reach out to jimmy.farrell@pourdemain.eu.
Coming up in this post:
- Introduction
- Standards and the Need for a Code of Practice
- Brief Summary of Code of Practice Content
- Scope of the GPAI Model Provider Definition
- Signatures: Who Signed and What It Means
- Enforcement of the Code of Practice
- Backstory: Drafting Process Paving the Road to the Code of Practice
A quick summary of the Code of Practice:
- Purpose: The AI Act Code of Practice (introduced in Article 56) is a set of guidelines for compliance with the AI Act. It is a crucial tool for demonstrating compliance with EU AI Act obligations, especially in the interim period between General Purpose AI (GPAI) model provider obligations coming into effect (August 2025) and the adoption of standards (August 2027 or later). Though the Code is not legally binding, GPAI model providers can adhere to it to demonstrate compliance with their obligations until European standards come into effect.
- Process: The Code was developed through a multi-stakeholder process, involving academic and independent experts, GPAI model providers, downstream deployers, members of civil society, and more.
- Content: The Code has three chapters. The first two, Transparency and Copyright, apply to all GPAI model providers. The third, the Safety and Security chapter, applies only to providers of GPAI models with systemic risk. For each chapter, the Code lays down certain commitments and corresponding measures for how providers can live up to them.
- Implementation: The Commission and the EU AI Board have confirmed that the GPAI Code is an adequate voluntary tool for providers of GPAI models to demonstrate compliance with the AI Act. Namely, the Code adequately covers the obligations provided for in Articles 53 and 55 of the AI Act relevant to providers of GPAI models and GPAI models with systemic risk.
Introduction
This blog post explains the concept, process and significance of the Code of Practice, an AI Act tool to bridge the interim period between obligations for general-purpose AI (GPAI) model providers coming into force and the eventual adoption of harmonised European GPAI model standards. Following the publication of the final Code of Practice on 10 July 2025, and the confirmation by the Commission and the AI Board of the adequacy of the final Code, this post summarises the most important and up-to-date information. A comprehensive summary of the content of the Code is provided in another blog post.
Standards and the Need for a Code of Practice
Following the entry into force of the AI Act on 1 August 2024, obligations within the Regulation are phased in gradually, as detailed in an earlier blog post, with provisions on prohibited AI systems in effect since February 2025 and provisions relating to GPAI models in effect since 2 August 2025. In the meantime, the complex process of developing harmonised European standards that operationalise AI Act obligations has begun. Whilst an official standardisation request has been adopted by the Commission and approved by CEN-CENELEC regarding standards for AI systems[1], an equivalent request on GPAI model standards is yet to be drafted. When such a standardisation request will be issued depends largely on how effectively the Code of Practice implements the relevant obligations under the AI Act. The standardisation process is detailed in a separate blog post.
Under the AI Act, obligations for GPAI models, detailed in Articles 53-55, are enforceable twelve months[2] after the Act enters into force (2 August 2025). However, the European standardisation process, involving mostly[3] the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), often takes up to three years[4]. This process can last even longer for more technical standards, such as those for GPAI, and if drafted in coordination with international standards[5], as prescribed in the AI Act[6]. The multi-stakeholder engagement and consensus-building approaches characteristic of standards setting further prolong the timeline. Thus, GPAI model provider obligations are not likely to be operationalised as technical standards any time soon.
Legal requirements for the Code of Practice
Article 56 of the AI Act outlines the Code of Practice as a placeholder mode of compliance to bridge the gap between GPAI model provider obligations coming into effect (twelve months after entry into force) and the adoption of standards (three years or more). While the Code is not legally binding, GPAI model providers can rely on it to demonstrate compliance with the GPAI model provider obligations in Articles 53 and 55 until standards are developed. These obligations include[7]:
- Provision of technical documentation to the AI Office and National Competent Authorities
- Provision of relevant information to providers downstream that seek to integrate models into their AI or GPAI system (e.g. capabilities and limitations)
- Summaries of training data used
- Policies for complying with existing Union copyright law
For GPAI models with systemic risk (models trained above the threshold of 10^25 floating point operations, or FLOP), further obligations include[8]:
- State of the art model evaluations
- Risk assessment and mitigation
- Serious incident reporting, including corrective measures
- Adequate cybersecurity protection
Providers who do not demonstrate compliance with the Code of Practice will have to prove compliance with the above obligations to the Commission by alternative, possibly more burdensome and time-consuming means.[9]
Brief Summary of Code of Practice Content
Since the drafting process began in October 2024, three draft versions of the Code of Practice were published before the final version in July 2025. The drafting Chairs and Vice-Chairs have set up an interactive webpage with the full text, FAQ, and summaries. A comprehensive overview of the content of the Code is also provided in another blog post. Below is a short summary.
Overall, the Code has three chapters. The first two, Transparency and Copyright, apply to all GPAI model providers. The third, the Safety and Security chapter, applies only to providers of GPAI models with systemic risk (above the 10^25 FLOP threshold), currently a small group of 5-15 companies worldwide[10]. The Code lays down a total of 12 commitments – one for each of the first two chapters and 10 for the Safety and Security chapter – and corresponding measures for how providers can live up to the commitments.
Transparency chapter (all GPAI model providers)
Under this chapter, signatories commit to maintaining up-to-date, comprehensive documentation for every GPAI model they distribute within the EU (except for models that are free, open-source, and pose no systemic risk). This documentation must follow a standardised Model Documentation Form, detailing licensing, technical specifications, use cases, datasets, compute and energy usage, and more. The documentation should be securely stored for at least ten years and made available, upon request, to the AI Office and downstream users. Public release of this information is encouraged to promote transparency.
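To make the shape of this documentation duty concrete, here is a minimal sketch of how a provider might represent such a record internally. All field names are illustrative; the authoritative fields are those of the Model Documentation Form itself.

```python
# Illustrative sketch of a model documentation record. Field names are
# hypothetical; the authoritative fields are defined by the Model
# Documentation Form published with the Code of Practice.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    licence: str                     # licensing terms
    release_date: date
    architecture: str                # technical specification
    parameter_count: int
    intended_uses: list[str]         # capabilities and use cases
    training_data_summary: str       # high-level dataset description
    training_compute_flop: float     # compute used for training
    energy_consumption_kwh: float    # energy used for training
    last_updated: date = field(default_factory=date.today)
```

Records of this kind would then need to be retained for at least ten years and produced on request.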
Copyright chapter (all GPAI model providers)
Signatories commit to developing and regularly updating a robust copyright policy that clearly defines internal responsibilities and complies with legal standards. They must ensure that data collected via web crawling is lawfully accessible, respect machine-readable rights signals like robots.txt, and avoid accessing websites flagged for copyright infringement. Technical safeguards should minimise the generation of infringing content, and terms of service must clearly prohibit unauthorised use. A designated contact point must be provided for copyright holders to submit complaints, with efficient and fair processes for handling them.
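As one concrete illustration of respecting machine-readable rights signals, the sketch below checks a site's robots.txt before fetching a page, using only Python's standard library; the bot name and URL are placeholders.

```python
# A minimal sketch of honouring robots.txt before crawling a page for
# training data. "ExampleGPAIBot" and the URL are placeholders.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser


def may_crawl(page_url: str, user_agent: str = "ExampleGPAIBot") -> bool:
    """Return True only if the site's robots.txt permits fetching page_url."""
    parts = urlsplit(page_url)
    parser = RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()  # fetch and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)


if __name__ == "__main__":
    print(may_crawl("https://example.com/articles/some-page"))
```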
Safety and Security chapter (providers of GPAI models with systemic risk only)
This chapter only concerns providers of GPAI models with systemic risk. Signatories must develop a state-of-the-art Safety and Security Framework before model release, outlining evaluation triggers, risk categories, mitigation strategies, forecasting methods, and organisational responsibilities. Systemic risks are to be identified through structured processes such as inventories, scenario analysis, and consultation with internal and external experts. Before progressing with development or deployment, signatories must evaluate whether identified risks are acceptable, applying defined risk-tier frameworks with built-in safety margins. A mandatory Safety and Security Model Report must be submitted prior to release and updated as risks evolve. To ensure organisational accountability, signatories must clearly assign oversight, ownership, monitoring, and assurance roles within their governance structures and ensure adequate resources, a strong risk culture, and protections for whistleblowers. Serious incidents must be promptly tracked, documented, and reported to regulators within tight deadlines that depend on severity. Finally, signatories are required to retain detailed records of safety and risk management activities for a minimum of ten years.
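To illustrate the kind of risk-tier check with safety margins described above, here is a deliberately simplified sketch; the tier names, thresholds and margin are invented for exposition and do not appear in the Code, which leaves the concrete framework to each provider.

```python
# Purely illustrative risk-tier acceptance check with a built-in safety
# margin. Tier names, thresholds and the margin are invented for
# exposition; they are not taken from the Code of Practice.
RISK_TIERS = {        # upper bound of an (invented) risk score per tier
    "low": 0.2,
    "moderate": 0.5,
    "critical": 1.0,
}
SAFETY_MARGIN = 0.1   # buffer subtracted from each tier boundary


def acceptable_to_proceed(estimated_risk: float, max_tier: str) -> bool:
    """May development or deployment proceed, given the highest acceptable tier?"""
    boundary = RISK_TIERS[max_tier] - SAFETY_MARGIN
    return estimated_risk <= boundary


# A model with an estimated risk score of 0.45 would not clear the
# "moderate" tier once the safety margin is applied (0.45 > 0.4).
assert not acceptable_to_proceed(0.45, "moderate")
assert acceptable_to_proceed(0.35, "moderate")
```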
Scope of the GPAI Model Provider Definition
GPAI Model Providers
The Transparency and Copyright chapters of the Code are relevant to all providers of GPAI models. ‘General-purpose AI models’ are defined under the AI Act as models that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.[11] As an indicative criterion, the GPAI Guidelines suggest that models trained on more than 10^23 FLOP that can generate language, text-to-image or text-to-video are to be considered GPAI.[12] The model release mode (open weights, API, etc.) does not matter for the purposes of the GPAI model definition, except when the model is used for research, development or prototyping activities prior to market placement.[13] However, the release mode does matter for the applicable obligations, as providers of GPAI models released under a free and open-source licence are exempt from the obligations under Article 53(1)(a) and (b).[14] GPAI model providers are natural or legal persons, public authorities, agencies or other bodies that develop a GPAI model, or have a GPAI model developed, and place it on the market under their own name or trademark, whether for payment or for free.[15]
It is possible for a downstream modifier to become the provider of a GPAI model first provided by an upstream actor. The Commission guidelines suggest that this will only be the case if the modification leads to a ‘significant change in the model’s generality, capabilities, or systemic risk’.[16] Such a change is presumed to happen if the compute the downstream modifier uses for the modification exceeds one third of the compute used to train the original model.[17] If this value is unknown to the downstream provider, the threshold is replaced by one third of the threshold at which a model is presumed to be a GPAI model, currently 10^23 FLOP.[18]
The GPAI Guidelines provide examples to clarify the scope. For instance, a model trained using more than 10^24 FLOP for the narrow task of increasing the resolution of images is not considered a GPAI model.[19] While the model in the example satisfies the compute threshold, it can only competently perform a narrow set of tasks, so it is not considered general-purpose.
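Putting the indicative compute criterion and the narrow-task carve-out together, a rough decision helper might look like the following sketch; this is a simplification for illustration, not a legal test, and the function name is ours.

```python
# Rough sketch of the indicative GPAI criterion from the GPAI Guidelines:
# more than 10^23 FLOP of training compute plus a generative modality,
# unless the model only performs a narrow set of tasks.
GPAI_INDICATIVE_THRESHOLD_FLOP = 1e23
GENERATIVE_MODALITIES = {"language", "text-to-image", "text-to-video"}


def indicatively_gpai(training_flop: float, modalities: set[str],
                      narrow_task_only: bool) -> bool:
    if narrow_task_only:
        # the Guidelines' example: an image-upscaling model trained on
        # more than 10^24 FLOP is still not GPAI
        return False
    return (training_flop > GPAI_INDICATIVE_THRESHOLD_FLOP
            and bool(modalities & GENERATIVE_MODALITIES))


assert not indicatively_gpai(2e24, {"image-upscaling"}, narrow_task_only=True)
assert indicatively_gpai(5e23, {"language"}, narrow_task_only=False)
```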
GPAI Model with Systemic Risk Providers
The Safety and Security chapter of the Code is relevant to entities providing GPAI models with systemic risk. ‘Systemic risk’ is defined as a risk specific to the high-impact capabilities of GPAI models, having a significant impact on the Union market.[20] This could be due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain. Models are presumed to have high-impact capabilities when the cumulative amount of computation used for their training is greater than 10^25 FLOP. This is a rebuttable presumption. Currently, an estimated 11 providers worldwide offer models that surpass this threshold.
It is possible for downstream modifiers to become providers of a new GPAI model with systemic risk, based on similar considerations as outlined above for GPAI downstream modifiers: if the compute used for the modification exceeds one third of the training compute of the original GPAI model with systemic risk, the downstream modifier becomes the provider.[21] If the original amount of training compute is unknown to the downstream provider, the threshold is replaced by one third of the threshold at which a model is presumed to be a GPAI model with systemic risk, currently 10^25 FLOP.
Providers of GPAI models with systemic risk can contest the classification by demonstrating that, despite surpassing the compute threshold, their model does not possess ‘high-impact capabilities’ that match or exceed those of the most advanced models.[22]
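The compute presumptions discussed in this section reduce to a few simple comparisons, sketched below; the thresholds come from the AI Act and the GPAI Guidelines, while the function names and structure are illustrative.

```python
# Sketch of the compute presumptions for GPAI classification and for
# downstream modifiers becoming providers. Thresholds are from the
# AI Act / GPAI Guidelines; everything else is illustrative.
GPAI_PRESUMPTION_FLOP = 1e23           # presumed GPAI model
SYSTEMIC_RISK_PRESUMPTION_FLOP = 1e25  # presumed systemic risk (rebuttable)


def presumed_systemic_risk(training_flop: float) -> bool:
    return training_flop > SYSTEMIC_RISK_PRESUMPTION_FLOP


def modifier_becomes_provider(modification_flop: float,
                              original_flop: float | None,
                              systemic_risk: bool) -> bool:
    """Is the downstream modifier presumed to become the provider?"""
    if original_flop is not None:
        # more than one third of the original model's training compute
        return modification_flop > original_flop / 3
    # original compute unknown: fall back to one third of the relevant
    # presumption threshold
    threshold = (SYSTEMIC_RISK_PRESUMPTION_FLOP if systemic_risk
                 else GPAI_PRESUMPTION_FLOP)
    return modification_flop > threshold / 3
```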
Signatures: Who Signed and What It Means
The Commission has published the signatory form, process description and a list of signatories on its website. The signatories include the majority of companies developing the most advanced GPAI models, with the notable exceptions of Meta and companies based in China. As of the date of this update, the following companies have signed the Code of Practice:
- Accexible
- AI Alignment Solutions
- Aleph Alpha
- Almawave
- Amazon
- Anthropic
- Bria AI
- Cohere
- Cyber Institute
- Domyn
- Dweve
- Euc Inovação Portugal
- Fastweb
- Humane Technology
- IBM
- Lawise
- Microsoft
- Mistral AI
- Open Hippo
- OpenAI
- Pleias
- re-inventa
- ServiceNow
- Virtuo Turing
- WRITER
The GPAI Guidelines specify that providers of GPAI models (with systemic risk) can demonstrate compliance with their obligations under the AI Act by adhering to the Code of Practice, now that the AI Board and the AI Office have published their confirming assessments of the Code. For signatories, the Commission will focus its enforcement activities on monitoring adherence to the Code, show them increased trust, and may take adherence to the Code into account as a mitigating factor when setting the amount of fines.[23] Non-signatories, on the other hand, are expected to demonstrate compliance via other adequate means.[24] They may receive a larger number of requests for information, and will typically need to provide more detailed information.
Enforcement of the Code of Practice
The GPAI rules took effect on 2 August 2025, meaning all new models released from that date must comply. However, the Commission’s enforcement actions – such as requests for information, access to models, or model recalls – will only begin a year later, on 2 August 2026. This grace period gives providers time to work with the AI Office to ensure they meet their obligations. For models released before 2 August 2025, providers have until 2 August 2027 to bring them into compliance.
According to the GPAI Guidelines, the Commission will take a collaborative, staged and proportionate approach to its supervision, investigation, enforcement and monitoring of the GPAI provisions.[25]
The Commission may decide to approve the Code by way of an implementing act, which would give it ‘general validity’ within the Union.[26] It is unclear whether the Commission will choose to do so and what the legal implications of such an implementing act would be.
Academics and civil society organisations, as well as independent experts involved in drafting the Code, have pointed out that adequate enforcement would require substantially more resources and staff than currently allocated – in particular, an almost threefold increase, compared to mid-2025 levels, in the number of staff in the Regulation and Compliance and AI Safety units of the AI Office. It is not yet clear whether such resources will be allocated.
Backstory: Drafting Process Paving the Road to the Code of Practice
The drafting process began in October 2024 and included more than a thousand stakeholders, following an open call for expressions of interest. These stakeholders provided written input on three different Code drafts. Participants ranged from GPAI model providers and downstream providers to trade associations, academics, independent experts, and civil society organisations. This multi-stakeholder process was led by thirteen independent Chairs and Vice-Chairs.
Working group structure
The Chairs divided the drafting into four different content categories in accordance with the GPAI model section of the AI Act:
- Working Group 1: Transparency and copyright-related rules
Detailing documentation to downstream providers and the AI Office on the basis of Annexes XI and XII to the AI Act, policies to be put in place to comply with Union law on copyright and related rights, and making publicly available a summary about the training content.
- Working Group 2: Risk identification and assessment measures for systemic risk
Detailing the risk taxonomy based on a proposal by the AI Office and identifying and detailing relevant technical risk assessment measures, including model evaluation and adversarial testing.
- Working Group 3: Risk mitigation measures for systemic risk
Identifying and detailing relevant technical risk mitigation measures, including cybersecurity protection for the general-purpose AI model and the physical infrastructure of the model.
- Working Group 4: Internal risk management and governance for general-purpose AI model providers
Identifying and detailing policies and procedures to operationalise risk management in internal governance of general-purpose AI model providers, including keeping track of, documenting, and reporting serious incidents and possible corrective measures.
Source: European Commission
Plenary
After each new draft iteration, the Chairs hosted plenary sessions to answer questions and allow for stakeholder presentations. The Plenaries were divided into four working groups, based on the different content categories outlined above, with only one representative allowed from each organisation in every working group. The “kick-off” plenary took place on 30 September 2024 – a virtual meeting featuring nearly a thousand attendees[27] across industry, rightsholders, civil society, and academia.[28] Three further plenaries took place for all working groups. In parallel to the four working groups, there were dedicated workshops bringing together GPAI model providers and the working group Chairs and Vice-Chairs to inform each iterative drafting round, as these stakeholders were seen as the main addressees of the Code of Practice. Further, there was a separate workshop for civil society organisations.
The early phases of the drafting process stuck to the timeline as initially set out. In the latter phase, there was a two-week delay to the release of the third draft and plenary round, and a one-month delay in the publication of the final Code, which was presented in a Closing Plenary on 3 July 2025 and published on 10 July.
Figure 1: CoP drafting process – Source: European Commission
Chairs and Vice-Chairs
The Chairs and Vice-Chairs were a crucial component of the Code of Practice drafting process. They were designated based on demonstrated expertise in relevant areas, ability to fulfil the role (time commitment and operational experience) and independence, meaning “no financial interest or other interest, which could affect their independence, impartiality and objectivity”. They were the “pen holders” responsible for collating the input from all stakeholders into one succinct Code of Practice. The Chairs and Vice-Chairs, and their respective backgrounds, are listed below:
Working Group 1: Transparency and copyright-related rules
| Name | Role | Expertise | Country |
| --- | --- | --- | --- |
| Nuria Oliver | Co-Chair | Director of the ELLIS Alicante Foundation | Spain |
| Alexander Peukert | Co-Chair | Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt am Main | Germany |
| Rishi Bommasani | Vice-Chair | Society Lead at the Stanford Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered AI | US |
| Céline Castets-Renard | Vice-Chair | Full Law Professor at the Civil Law Faculty, University of Ottawa, and Research Chair Holder in Accountable AI in a Global Context | France |
Working Group 2: Risk identification and assessment, including evaluations
| Name | Role | Expertise | Country |
| --- | --- | --- | --- |
| Matthias Samwald | Chair | Associate Professor at the Institute of Artificial Intelligence at the Medical University of Vienna | Austria |
| Marta Ziosi | Vice-Chair | Postdoctoral Researcher at the Oxford Martin AI Governance Initiative | Italy |
| Alexander Zacherl | Vice-Chair | Independent Systems Designer; previously at the UK AI Safety Institute and DeepMind | Germany |
Working Group 3: Technical risk mitigation
| Name | Role | Expertise | Country |
| --- | --- | --- | --- |
| Yoshua Bengio | Chair | Full Professor at Université de Montréal, and Founder and Scientific Director of Mila – Quebec AI Institute (Turing Award winner) | Canada |
| Daniel Privitera | Vice-Chair | Founder and Executive Director of the KIRA Center | Italy and Germany |
| Nitarshan Rajkumar | Vice-Chair | PhD candidate researching AI at the University of Cambridge | Canada |
Working Group 4: Internal risk management and governance of General-purpose AI providers
| Name | Role | Expertise | Country |
| --- | --- | --- | --- |
| Marietje Schaake | Chair | Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centred AI | Netherlands |
| Markus Anderljung | Vice-Chair | Director of Policy and Research at the Centre for the Governance of AI | Sweden |
| Anka Reuel | Vice-Chair | Computer Science PhD candidate at Stanford University | Germany |
The time commitment for these consequential positions was significant. For reasons of financial independence, however, the Chair and Vice-Chair positions were all unpaid (as was Plenary participation), though the Chairs were supported by external contractors, namely a consortium of consultancies including the French consultancy Wavestone.
Notes and references
1. Standardisation request for AI systems
2. Article 113(b)
3. The European Telecommunications Standards Institute (ETSI) may also be involved.
4. CEN-CENELEC
5. ISO
6. Article 40(3)
7. Article 53
8. Article 55
9. Articles 53(4) and 55(2)
10. Safety and Security FAQ
11. Article 3(63)
12. Commission Guidelines, paragraph 17
13. However, note that the release mode does matter for the applicable obligations, as providers of GPAI models released under a free and open-source licence are exempt from the obligations under Article 53(1)(a) and (b) (see Article 53(2) and GPAI Guidelines, chapter 4).
14. Article 53(2) and GPAI Guidelines, chapter 4
15. Article 3(3)
16. Commission Guidelines, paragraph 62
17. Commission Guidelines, paragraph 63
18. Commission Guidelines, paragraph 64
19. Commission Guidelines, paragraph 20
20. Article 3(65)
21. Commission Guidelines, paragraph 63
22. Article 52(2)
23. Commission Guidelines, paragraph 94
24. Commission Guidelines, paragraph 95
25. Commission Guidelines, paragraph 102
26. Article 56(6)
27. Number disclosed in Commission press release
28. Euractiv article discussing diversity of participants