An introduction to Codes of Practice for the AI Act

03 Jul, 2024

Updated: 21 August 2024. This blog post will be updated as new information becomes available.

As AI Act implementation gets underway, it is important to understand the different mechanisms of enforcement included in the regulation.

This summary, detailing the Code of Practice for General Purpose AI model providers, was put together by Jimmy Farrell, EU AI policy lead at Pour Demain. For further questions please reach out to him at jimmy.farrell@pourdemain.eu

Introduction

This blog post explains the concept, process and importance of the Code of Practice, an AI Act tool to bridge the interim period between obligations for General Purpose AI (GPAI) model providers coming into force and the eventual adoption of harmonised European GPAI model standards. Following the EU AI Office's detailed 30 July announcement of how this process will unfold, this post summarises the most important information.

Standards and the need for a Code of Practice 

Following the final approval of the AI Act by the European Council on 21 May [1], the publication of the AI Act in the Official Journal of the EU on 12 July [2], and its entry into force on 1 August, obligations within the regulation will be phased in gradually over the next three years, as detailed in an earlier blog post. In the meantime, the complex process of developing harmonised European standards to operationalise the Act's obligations has begun. Whilst an official standardisation request has been adopted by the Commission and approved by CEN-CENELEC regarding standards for AI systems [3], a similar request on GPAI model standards is yet to be drafted. When such a standardisation request will be issued depends largely on how effectively the Code of Practice can implement the relevant obligations under the AI Act. The standardisation process is also detailed in a previous blog post.

Under the AI Act, obligations for General Purpose AI models, detailed in Articles 50-55, become enforceable twelve months [4] after the Act enters into force (August 2025). However, the European standardisation process, involving mostly [5] the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), can often take up to three years [6]. It can take even longer for more technical standards, such as those for GPAI, and when drafting is coordinated with international standards [7], as prescribed in the AI Act [8]. The need for multi-stakeholder engagement and consensus building adds yet more time. Thus, GPAI model provider obligations are not likely to be operationalised as technical standards any time soon.

Code of Practice

Article 56 of the AI Act outlines the Code of Practice as a placeholder mode of compliance, bridging the gap between GPAI model provider obligations coming into effect (twelve months) and the adoption of standards (three years or more). While not legally binding, compliance by GPAI model providers with the measures set out in the Code of Practice will serve as a presumption of conformity with the obligations in Articles 53 and 55 until standards come into effect. These obligations include [9]:

  • Provision of technical documentation to the AI Office and National Competent Authorities
  • Provision of relevant information to downstream providers that seek to integrate their model into their AI or GPAI system (eg. capabilities and limitations)
  • Summaries of training data used
  • Policies for complying with existing Union copyright law

For GPAI models with systemic risk (models trained above the 10^25 FLOP threshold), further obligations include [10]:

  • State of the art model evaluations
  • Risk assessment and mitigation
  • Serious incident reporting, including corrective measures
  • Adequate cybersecurity protection
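The 10^25 FLOP threshold refers to cumulative training compute. As a rough, illustrative aid only: training compute is often approximated as 6 × parameters × training tokens (a common rule of thumb, not part of the AI Act), and the model sizes below are hypothetical:

```python
# Illustrative only: estimate cumulative training compute against the
# AI Act's 10^25 FLOP systemic-risk threshold. The "6 * N * D" formula
# is a widely used approximation, not something prescribed by the Act.

THRESHOLD_FLOPS = 1e25  # cumulative training compute threshold in the Act


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6ND rule of thumb."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 2 trillion tokens
# lands around 8.4e23 FLOPs, well below the threshold:
print(estimated_training_flops(70e9, 2e12))
print(presumed_systemic_risk(70e9, 2e12))
```

Note that the Act also allows the Commission to designate a model as posing systemic risk on other grounds, so a compute estimate alone is not decisive.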

Providers who fail to comply with the Code of Practice will have to prove compliance with the above obligations to the Commission by alternative means [11], which are likely to be more burdensome and time-consuming. A parallel case is currently playing out in EU digital policy, with X (formerly Twitter) pulling out of the Strengthened Code of Practice on Disinformation [12], which was recently integrated as a code of conduct under the co-regulatory framework of the Digital Services Act [13].

The Code of Practice is likely to form the basis for the GPAI standards. The contents of the Code therefore are intended to reflect the intentions of the AI Act faithfully, including with regard to health, safety and fundamental rights concerns.

Drafting process

Due to the twelve-month deadline for GPAI model provider obligations coming into force, the Code of Practice must be drafted within nine months of the AI Act entering into force (by 1 May 2025). This is intended to give the Commission time to approve or reject it via implementing act [14] and, if the latter, to develop alternative methods of enforcement.

In a recent communication, the Commission outlined the structure of the drafting process, including a call for expression of interest for a diverse range of stakeholders to participate in the Code of Practice Plenary, the opening of a multi-stakeholder consultation, and parallel dedicated "workshops" for providers and Plenary working group (WG) Chairs and Vice-Chairs.

The following overview details each opportunity, its deadline, the participants involved and the eligibility criteria for each type of participant:

Participation in the CoP drafting process (Plenary). Deadline: Sunday, 25 August 2024, 18:00 CET. Participants* and eligibility criteria:

  • Chairs or Vice-Chairs of WGs (academic or other independent experts): demonstrated expertise in relevant areas; ability to fulfil the role (time commitments and operational experience); and independence**
  • GPAI model providers: existing or planned operations in the EU
  • Downstream providers and other industry organisations: existing or planned operations in the EU, and demonstrated legitimate interest
  • Academia, independent experts and related organisations: demonstrated relevant expertise (may also volunteer as Chair/Vice-Chair of a WG)
  • Other stakeholder organisations (including civil society organisations representing a specific stakeholder group or organisations representing rightsholders): presence in the EU, demonstrated legitimate interest, and demonstrated representation of a relevant stakeholder group

Multi-stakeholder consultation. Deadline: Tuesday, 18 September 2024, 18:00 CET. Open to all stakeholders; no eligibility criteria.

Provider workshops***. No deadline. Participants: GPAI model providers and Chairs/Vice-Chairs of WGs.

* The Commission’s announcement also states that Member States’ representatives will be granted close involvement in the CoP drafting process.

** Independence of the Chairs or Vice-Chairs refers to “no financial interest or other interest, which could affect their independence, impartiality and objectivity”.

***The announcement also mentions “dedicated workshops” to accompany WGs (presumably with all WG members) to deeply explore specific topics.

Plenary

The Plenary will be divided into the following four working groups, based on the different content categories outlined in the GPAI model section of the AI Act mentioned above:

  • Working Group 1: Transparency and copyright-related rules
    Detailing out documentation to downstream providers and the AI Office on the basis of Annexes XI and XII to the AI Act, policies to be put in place to comply with Union law on copyright and related rights, and making publicly available a summary about the training content.
  • Working Group 2: Risk identification and assessment measures for systemic risks
    Detailing the risk taxonomy based on a proposal by the AI Office and identifying and detailing relevant technical risk assessment measures, including model evaluation and adversarial testing.
  • Working Group 3: Risk mitigation measures for systemic risks
    Identifying and detailing relevant technical risk mitigation measures, including cybersecurity protection for the general-purpose AI model and the physical infrastructure of the model.
  • Working Group 4: Internal risk management and governance for general-purpose AI model providers
    Identifying and detailing policies and procedures to operationalise risk management in internal governance of general-purpose AI model providers, including keeping track of, documenting, and reporting serious incidents and possible corrective measures.

Source: European Commission

Once eligible participants have been selected and placed into each WG (participants may join multiple WGs), the Plenary will convene for a "kick-off", expected in September 2024. There, the details of the drafting process will be further ironed out, and a first draft of the Code of Practice, based on inputs from the multi-stakeholder consultation, will be shared. Following this, three remote WG drafting rounds will take place over the course of the nine-month drawing-up period.

Figure 1: CoP drafting process – Source: European Commission

Following the three drafting rounds, at the end of the nine-month period in April 2025, the final version of the Code of Practice will be shared in a Closing Plenary.

In parallel, dedicated workshops featuring GPAI model providers and WG Chairs/Vice-Chairs will meet to inform each iterative drafting round. The Commission justifies this extra level of involvement on the grounds that GPAI model providers are the "main addressees" of the Code of Practice, and thus the most consequential stakeholders. To ensure the transparency of these additional workshops, the AI Office has committed to sharing meeting minutes with all Plenary participants throughout the process.

Figure 2: CoP drafting timeline – Source: European Commission

Chairs and Vice-Chairs

A crucial component of the Code of Practice drafting process will be the roles of Chair and Vice-Chair of each WG, as well as a coordinating Chair who will ensure coherence across all four WGs. As mentioned above, the Chairs may be academics or other independent experts, and must fulfil a rigorous set of criteria, including technical expertise, time availability, operational experience, and independence from conflicting financial or other interests. They will be primarily responsible for collating the input from all stakeholders into one succinct Code of Practice, a role often referred to as "the pen holder".

Other tasks for the Chair and Vice-Chairs include:

  • Synthesising submissions by other Plenary participants 
  • Moderating, scheduling and facilitating WG discussions
  • Facilitating efficient, high-quality drafting
  • Setting and carrying out agendas
  • Participating in separate workshops, monitoring progress and regularly reporting on relevant activities

Unsurprisingly for such consequential positions, the necessary time commitment is significant. Although Chairs and Vice-Chairs will be logistically and administratively supported by external contractors (likely a consortium of consultancies) as well as the AI Office, the roles will require weekly commitments and intensive working periods throughout the nine-month period from September 2024 to April 2025.

In addition, to preserve financial independence, the Chair and Vice-Chair positions are unpaid, as are all Plenary roles. Nevertheless, these positions are likely to be hugely significant in producing a final Code of Practice that is both technically feasible and protective of EU citizens' health, safety and fundamental rights. Expressions of interest are to be made via the Plenary application by Sunday 25 August.

Code of Practice content

Although the broad structure of the drafting process has been announced, the actual content of the Code, and how it may be structured, is still an open question to be answered at the Plenary kick-off. Thus, the following description is speculation informed by similar previous processes.

The content produced by each working group may be structured similarly to the Strengthened Code of Practice on Disinformation, where relevant signatories [15] agree to high-level commitments containing sub-level measures, which can be further detailed as Qualitative Reporting Elements (QREs) or Service Level Indicators (SLIs), or alternatively as Key Performance Indicators (KPIs) encompassing both [16]. This approach, which incorporates a wide range of topics into specific WGs, involves all relevant stakeholders, and features guidelines ranging from highly general to highly specific, is intended to result in a comprehensive and complete Code of Practice.

Additional tools used in previous Codes of Practice include Structural Indicators [17] (used in parallel with KPIs), which are designed to mitigate systemic risks by measuring outputs. Examples of Structural Indicators could include "frequency and duration of GPAI model failures or outages" or "frequency of cyberattacks involving GPAI-generated content (e.g. phishing, malware)". Finally, nothing has yet been announced regarding processes for monitoring and updating the Code of Practice after its adoption. Following the first iteration, a permanent task force may be set up, featuring signatories of the Code and other oversight bodies, to review and adapt the Code in light of technological, societal, market and legislative developments. This is in line with the AI Act itself, which tasks the AI Office with facilitating the "frequent review and adaptation of the codes of practice" [18].
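To illustrate how a Structural Indicator of this kind might be operationalised, here is a minimal sketch. The incident records, field names and 90-day window are invented for illustration and are not drawn from the Act or the Commission's announcement:

```python
from datetime import date, timedelta

# Hypothetical serious-incident log for a GPAI model (illustrative data only)
incidents = [
    {"date": date(2025, 1, 3), "type": "outage", "duration_hours": 2.5},
    {"date": date(2025, 2, 14), "type": "outage", "duration_hours": 0.5},
    {"date": date(2025, 2, 20), "type": "cyberattack", "duration_hours": 0.0},
]


def structural_indicator(log, incident_type, window_days=90):
    """Frequency and total duration of one incident type over a time window.

    Mirrors indicators like "frequency and duration of GPAI model failures
    or outages" by reducing a log to quantifiable outputs.
    """
    cutoff = max(i["date"] for i in log) - timedelta(days=window_days)
    relevant = [i for i in log
                if i["type"] == incident_type and i["date"] >= cutoff]
    return {
        "count": len(relevant),
        "total_duration_hours": sum(i["duration_hours"] for i in relevant),
    }


print(structural_indicator(incidents, "outage"))
```

The point of such an indicator is that it is computed from observed outputs rather than self-reported process descriptions, which is what distinguishes it from a QRE.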

Next steps

With the two important stakeholder participation windows, the Plenary and the multi-stakeholder consultation, now open, all interested parties have the opportunity to feed into this highly consequential drafting process. In the meantime, prospective Chairs and Vice-Chairs must also express their interest and ensure their practical arrangements are in order.

Notes & References

  1. European Council approves AI Act
  2. EUR-Lex
  3. Standardisation request for AI systems
  4. Article 113(b)
  5. The European Telecommunications Standards Institute (ETSI) may also be involved.
  6. CEN-CENELEC
  7. ISO
  8. Article 40(3)
  9. Article 53
  10. Article 55
  11. Articles 53(4) and 55(2)
  12. Internal Market Commissioner Thierry Breton insisted on pursuing X's compliance via alternative means.
  13. Strengthened Code of Practice on Disinformation
  14. Article 56(9)
  15. Companies, organisations and associations involved in the drafting process and listed in an annex of the final Codes.
  16. QREs are qualitative reporting standards, whilst SLIs are quantifiable metrics. QREs and SLIs are used in the Codes of Practice on Disinformation. KPIs are specifically mentioned in Article 56 of the AI Act.
  17. Strengthened Codes of Practice on Disinformation
  18. Article 56(8)
