Updated: 30 October 2024. This blog post will be updated as new information becomes available.
This summary, detailing the Code of Practice for General Purpose AI model providers, was put together by Jimmy Farrell, EU AI policy lead at Pour Demain, and Pour Demain Fellow Afek Shamir. For further questions, please reach out to jimmy.farrell@pourdemain.eu.
As AI Act implementation gets underway, it is important to understand the different enforcement mechanisms included in the regulation. Among the most important are the Codes of Practice, which are currently being developed by the AI Office and a wide range of stakeholders.
A quick summary on the Codes of Practice:
- Purpose: The AI Act Codes of Practice (introduced in Article 56) are a set of guidelines for compliance with the AI Act. They will be a crucial tool for ensuring compliance with the EU AI Act’s obligations, especially in the interim period between when General Purpose AI (GPAI) model provider obligations come into effect (August 2025) and the adoption of standards (August 2027 or later). Though they are not legally binding, compliance with these guidelines will serve as a ‘presumption of conformity’ with GPAI model provider obligations until more robust standards come into effect.
- Process: The codes are being developed through a multi-stakeholder process, involving working groups, academic and independent experts, AI model providers, members of civil society, and more.
- Content: Though the structure has been outlined, the content of the Code of Practice is yet to be determined. It will contain objectives, measures, and key performance indicators (KPIs).
- Implementation: The EU AI Office will assess the adequacy of the codes and may approve them through an implementing act. If a code of practice cannot be finalized, the Commission may provide common rules for implementation.
Introduction
This blog post explains the concept, process and importance of the Code of Practice, an AI Act tool to bridge the interim period between obligations for General Purpose AI (GPAI) model providers coming into force and the eventual adoption of harmonised European GPAI model standards. It summarises the most important and up-to-date information, following the EU AI Office's detailed announcement on 30 July 2024 of how this process will unfold.
Standards and the need for a Code of Practice
Following the final approval of the AI Act by the Council of the EU on 21 May 2024 [1], the publication of the AI Act in the Official Journal of the EU on 12 July 2024 [2], and its entry into force on 1 August 2024, obligations within the regulation will be phased in gradually over the next three years, as detailed in our implementation timeline. In the meantime, the complex process of developing harmonised European standards to operationalise the Act's obligations has begun. While an official standardisation request covering AI systems has been adopted by the Commission and approved by CEN-CENELEC [3], a similar request on GPAI model standards is yet to be drafted. When such a standardisation request will be issued depends largely on how effectively the Code of Practice can implement the relevant obligations under the AI Act.
The standardisation process is also detailed in another blog post.
Under the AI Act, obligations for General Purpose AI models, detailed in Articles 51–55, become enforceable twelve months [4] after the Act enters into force (August 2025). However, the European standardisation process, involving mainly the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) [5], can often take up to three years [6]. The process can take even longer for more technical standards, such as those for GPAI, and when drafted in coordination with international standards [7], as prescribed in the AI Act [8]. The need for multi-stakeholder engagement and consensus building adds yet more time. GPAI model provider obligations are therefore unlikely to be operationalised as technical standards any time soon.
Code of Practice
Article 56 of the AI Act outlines the Code of Practice as a placeholder mode of compliance, bridging the gap between GPAI model provider obligations coming into effect (twelve months after entry into force) and the adoption of standards (three years or more after entry into force). While not legally binding, compliance with the measures set out in the Codes of Practice will serve as a presumption of conformity with the GPAI model provider obligations in Articles 53 and 55 until standards come into effect. These obligations include [9]:
- Provision of technical documentation to the AI Office and National Competent Authorities
- Provision of relevant information to downstream providers that seek to integrate the model into their AI or GPAI system (e.g. capabilities and limitations)
- Summaries of training data used
- Policies for complying with existing Union copyright law
For GPAI models with systemic risk (models trained using more than 10^25 floating-point operations (FLOPs); a back-of-the-envelope check of this threshold is sketched after this list), further obligations include [10]:
- State of the art model evaluations
- Risk assessment and mitigation
- Serious incident reporting, including corrective measures
- Adequate cybersecurity protection
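To make the compute threshold concrete, here is a minimal Python sketch. It uses the common "6 × parameters × training tokens" approximation from the scaling literature to estimate training compute; that approximation and the example figures are illustrative assumptions, not part of the AI Act itself.

```python
# Illustrative sketch: checking a model against the AI Act's 10^25 FLOP
# threshold for presumed systemic risk. The 6 * parameters * tokens rule
# of thumb is a common approximation from the scaling literature, not a
# method prescribed by the Act. All figures below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # threshold for presumed systemic risk

def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimated training compute exceeds the Act's threshold."""
    return estimate_training_flop(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOP

# A hypothetical 70B-parameter model trained on 15T tokens:
flop = estimate_training_flop(70e9, 15e12)  # ~6.3e24 FLOP, below the threshold
print(f"{flop:.1e} FLOP; systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```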
Providers who fail to comply with the Code of Practice will have to prove compliance with the above obligations to the Commission by alternative means [11], likely more burdensome and time-consuming. A parallel case is currently playing out in EU digital policy, with X (formerly Twitter) pulling out of the Strengthened Code of Practice on Disinformation [12], which was recently integrated as a code of conduct under the co-regulatory framework of the Digital Services Act [13].
The Code of Practice is likely to form the basis of the eventual GPAI standards. Its contents are therefore intended to faithfully reflect the intentions of the AI Act, including with regard to health, safety and fundamental rights.
Drafting process
Given the twelve-month deadline for GPAI model provider obligations coming into force, the Code of Practice must be drafted within nine months of the AI Act entering into force (by 1 May 2025). This is intended to give the Commission, following a review by the AI Office and AI Board [14], time to approve or reject the Code via implementing act [15], and, in the latter case, to develop alternative methods of enforcement.
In a recent communication, the Commission outlined the structure of the drafting process, including a call for expression of interest for a diverse range of stakeholders to participate in the Code of Practice Plenary, the opening of a multi-stakeholder consultation, and parallel dedicated "workshops" for providers and Plenary working group (WG) Chairs and Vice-Chairs. The call for expression of interest and the first multi-stakeholder consultation have since closed, with the Commission informing stakeholders of the status of their participation over the course of September 2024.
The following table provides further information on each opportunity: its deadline, the participants involved, and the eligibility criteria for each participant group:
| Opportunity | Deadline | Participants* | Eligibility criteria |
|---|---|---|---|
| Participation in CoP drafting process (Plenary) | 25 August 2024 (deadline passed) | Chairs or Vice-Chairs of WGs (academic or other independent experts) | Demonstrated expertise in relevant areas, ability to fulfil the role (time commitments and operational experience), and independence** |
| | | GPAI model providers | Existing or planned operations in the EU |
| | | Downstream providers and other industry organisations | Existing or planned operations in the EU, and demonstrated legitimate interest |
| | | Academia, independent experts and related organisations | Demonstrated relevant expertise; can also volunteer for Chair/Vice-Chair of WGs |
| | | Other stakeholder organisations (including civil society organisations representing a specific stakeholder group or organisations representing rightsholders) | Presence in the EU, demonstrated legitimate interest, and demonstrated representation of a relevant stakeholder group |
| Multi-stakeholder consultation | 18 September 2024 (deadline passed) | All stakeholders | N/A |
| Provider workshops* | N/A | GPAI model providers and Chairs/Vice-Chairs of WGs | N/A |
** Independence of the Chairs or Vice-Chairs refers to “no financial interest or other interest, which could affect their independence, impartiality and objectivity”.
Plenary
The Plenary is divided into the following four working groups, based on the different content categories outlined in the GPAI model section of the AI Act mentioned above:
- Working Group 1: Transparency and copyright-related rules
  Detailing documentation for downstream providers and the AI Office on the basis of Annexes XI and XII to the AI Act, policies to be put in place to comply with Union law on copyright and related rights, and making publicly available a summary of the training content.
- Working Group 2: Risk identification and assessment measures for systemic risks
  Detailing the risk taxonomy based on a proposal by the AI Office, and identifying and detailing relevant technical risk assessment measures, including model evaluation and adversarial testing.
- Working Group 3: Risk mitigation measures for systemic risks
  Identifying and detailing relevant technical risk mitigation measures, including cybersecurity protection for the general-purpose AI model and the physical infrastructure of the model.
- Working Group 4: Internal risk management and governance for general-purpose AI model providers
  Identifying and detailing policies and procedures to operationalise risk management in the internal governance of general-purpose AI model providers, including keeping track of, documenting, and reporting serious incidents and possible corrective measures.
Source: European Commission
After eligible participants were selected and placed into each WG (participants may join multiple WGs, but only one representative may be present from each organisation), the Plenary came together for a "Kick-off" on 30 September 2024. This virtual meeting, featuring nearly 1,000 attendees [16], informed participants about the drafting procedure and the topics to be discussed in each Working Group. The Commission reportedly received diverse input, with industry, rightsholders, civil society and academia all represented [17].
One week prior to the first Plenary meeting in November, the preliminary draft of the Code of Practice will be shared with participating stakeholders. It will, in large part, be based on the almost 430 submissions the Commission received in the multi-stakeholder consultation [18], which uncovered a wide array of stances among GPAI providers and other stakeholders [19]. Receipt of the first draft will initiate an iterative process of three remote WG drafting rounds over the course of the nine-month drawing-up period.
In parallel, dedicated workshops between GPAI model providers and WG Chairs/Vice-Chairs will be held to inform each iterative drafting round. The Commission justifies this extra level of involvement on the grounds that GPAI model providers are the "main addressees" of the Code of Practice, and thus the most consequential stakeholder group. To ensure the transparency of these additional workshops, the AI Office has committed to sharing meeting minutes with all Plenary participants throughout the process.
Following the three drafting rounds and provider workshops, the final version of the Code of Practice will be presented at a Closing Plenary at the end of the nine-month period, in April 2025.
Chairs and Vice-Chairs
A crucial component of the Code of Practice drafting process is the role of the Chairs and Vice-Chairs of each WG, along with a coordinating Chair who will ensure coherence across all four WGs. They are primarily responsible for collating the input from all stakeholders into one succinct Code of Practice, a role often referred to as "the pen holder". The Commission has now selected and published these appointments, made in light of the candidates' technical expertise, time availability, operational experience, and independence from conflicting financial or other interests.
Working group Chairs directory
The Chairs and Vice-Chairs, with their respective backgrounds, are listed below:
Working Group 1: Transparency and copyright-related rules
Name | Role | Background | Country |
---|---|---|---|
Nuria Oliver | Co-chair | Director of the ELLIS Alicante Foundation | Spain |
Alexander Peukert | Co-chair | Professor of Civil, Commercial, and Information Law at Goethe University Frankfurt am Main | Germany |
Rishi Bommasani | Vice Chair | Society Lead at the Stanford Center for Research on Foundation Models, part of the Stanford Institute for Human-Centered AI | US |
Céline Castets-Renard | Vice Chair | Full Law Professor at the Civil Law Faculty, University of Ottawa, and Research Chair Holder Accountable AI in a Global Context | France |
Working Group 2: Risk identification and assessment, including evaluations
Name | Role | Background | Country |
---|---|---|---|
Matthias Samwald | Chair | Associate Professor at the Institute of Artificial Intelligence at the Medical University of Vienna | Austria |
Marta Ziosi | Vice Chair | Postdoctoral Researcher at the Oxford Martin AI Governance Initiative | Italy |
Alexander Zacherl | Vice Chair | Independent Systems Designer. Previously at UK AI Safety Institute and DeepMind | Germany |
Working Group 3: Technical risk mitigation
Name | Role | Background | Country |
---|---|---|---|
Yoshua Bengio | Chair | Full Professor at Université de Montréal, and the Founder and Scientific Director of Mila – Quebec AI Institute (Turing Award Winner) | Canada |
Daniel Privitera | Vice Chair | Founder and Executive Director of the KIRA Center | Italy and Germany |
Nitarshan Rajkumar | Vice Chair | PhD candidate researching AI at the University of Cambridge | Canada |
Working Group 4: Internal risk management and governance of General-purpose AI providers
Name | Role | Background | Country |
---|---|---|---|
Marietje Schaake | Chair | Fellow at Stanford’s Cyber Policy Center and at the Institute for Human-Centred AI | Netherlands |
Markus Anderljung | Vice Chair | Director of Policy and Research at the Centre for the Governance of AI | Sweden |
Anka Reuel | Vice Chair | Computer Science Ph.D. candidate at Stanford University | Germany |
Other tasks for the Chairs and Vice-Chairs include:
- Synthesising submissions by other Plenary participants
- Moderating, scheduling and facilitating WG discussions
- Facilitating the delivery of efficient, high-quality drafting
- Setting and carrying out agendas
- Participating in separate workshops, monitoring progress and regularly reporting on relevant activities
Unsurprisingly, for such consequential positions, the time commitment necessary is significant. Although Chairs and Vice-Chairs will be logistically and administratively supported by external contractors (a consortium of consultancies including French consultancy Wavestone [20]) as well as the AI Office, the roles will require weekly commitments and intensive working periods throughout the nine-month period from September 2024 to April 2025.
In addition, to preserve financial independence, the Chair and Vice-Chair positions are unpaid, as is participation in the Plenary. Nevertheless, these positions are likely to be hugely significant in producing a final Code of Practice that is both technically feasible and protective of EU citizens' health, safety and fundamental rights.
Code of Practice content
Although the broad structure of the drafting process has been announced, the actual content of the Code remains an open question that will be answered as the upcoming drafts emerge.
Nevertheless, the Code will contain objectives, measures, and key performance indicators (KPIs), in a similar (but not identical) fashion to the Strengthened Code of Practice on Disinformation. This approach, which sorts a wide range of topics into specific WGs, involves all relevant stakeholders, and features guidelines ranging from highly general to highly specific, is intended to result in a comprehensive and complete Code of Practice. A purely illustrative sketch of this structure follows.
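By way of illustration only, the objectives, measures and KPIs hierarchy could take a shape like the Python sketch below. Every entry is an invented placeholder, not actual Code content.

```python
# Purely illustrative: one possible shape for the Code's objectives,
# measures and KPIs, loosely mirroring the structure of the Strengthened
# Code of Practice on Disinformation. All entries are invented placeholders.
example_commitment = {
    "objective": "Transparency towards downstream providers",  # hypothetical
    "measures": [
        {
            "description": "Document model capabilities and limitations",
            "kpis": [
                "Share of required documentation fields completed",
                "Days between model release and documentation delivery",
            ],
        },
    ],
}

for measure in example_commitment["measures"]:
    print(measure["description"], "->", ", ".join(measure["kpis"]))
```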
Finally, nothing has yet been announced regarding processes for monitoring and updating the Code after its adoption. Speculatively, a permanent task force may be set up, featuring signatories of the Code and other oversight bodies, to review and adapt the Code in light of technological, societal, market and legislative developments. This would be in line with the AI Act itself, which tasks the AI Office with facilitating the "frequent review and adaptation of the codes of practice" [21].
Next steps
The two stakeholder participation windows have closed, the Chairs and Vice-Chairs have been chosen, and the Kick-off Plenary has taken place. Over the coming months, participants in the Plenary and the provider workshops will work through an iterative process, in tandem with the AI Office, to arrive at mutually agreeable guidelines for General Purpose AI models and systems.
Notes & References
1. European Council approves AI Act
2. EUR-Lex
3. Standardisation request for AI systems
4. Article 113(b)
5. The European Telecommunications Standards Institute (ETSI) may also be involved.
6. CEN-CENELEC
7. ISO
8. Article 40(3)
9. Article 53
10. Article 55
11. Articles 53(4) and 55(2)
12. Internal Market Commissioner Thierry Breton insisting on pursuing X's compliance via alternative means.
13. Strengthened Code of Practice on disinformation
14. European Commission: Artificial Intelligence – Questions and Answers
15. Article 56(9)
16. Number disclosed in Commission press release
17. Euractiv article discussing diversity of participants
18. Commission press release
19. Euractiv article mentioning disagreements in the Plenary
20. MLex article discussing the role of external consultancies
21. Article 56(8)