An introduction to Codes of Practice for the AI Act

03 Jul, 2024

As AI Act implementation gets underway, it is important to lift the veil on the different enforcement mechanisms included in the regulation.

This summary, detailing the Codes of Practice for General Purpose AI model providers, was put together by Jimmy Farrell, EU AI policy lead at Pour Demain. For further questions please reach out to him at jimmy.farrell@pourdemain.eu

Introduction

This blog post explains the concept, process and importance of the Codes of Practice, an AI Act tool to bridge the interim period between obligations for General Purpose AI (GPAI) model providers coming into force and the eventual adoption of harmonised European GPAI model standards. Little has yet been officially communicated about the Codes of Practice under the AI Act. As such, this blog post incorporates publicly available information and reflections on how the process is likely to play out, based on past experiences with similar policy frameworks.

Standards and the need for Codes of Practice 

Following the final approval of the AI Act by the European Council on May 21st [1], the Act is set to be published in the EU’s Official Journal in July. Twenty days later it will come into force, likely in August, with obligations then phased in gradually over three years, as detailed in an earlier blog post. In the meantime, the complex process of developing harmonised European standards to operationalise the Act’s obligations has begun. Whilst an official standardisation request regarding standards for AI systems has been adopted by the Commission and approved by CEN-CENELEC [2], a similar request on GPAI model standards is yet to be drafted. When such a standardisation request will be issued depends largely on how effectively the Codes of Practice can implement the relevant obligations under the AI Act. The standardisation process is also detailed in a previous blog post.

Under the AI Act, obligations for General Purpose AI models, detailed in Articles 50–55, become enforceable twelve months [3] after the Act enters into force (roughly August 2025). However, the European standardisation process, involving mostly [4] the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), can often take up to three years [5]. It can take even longer for more technical standards, such as those for GPAI, and when drafting is coordinated with international standards [6], as prescribed in the AI Act [7]. The need for multi-stakeholder engagement and consensus building takes up yet more time. Thus, GPAI model provider obligations are not likely to be operationalised as technical standards any time soon.

Codes of Practice

Article 56 of the AI Act outlines the Codes of Practice as a placeholder mode of compliance, bridging the gap between GPAI model provider obligations coming into effect (twelve months) and the adoption of standards (three years or more). While not legally binding, compliance by GPAI model providers with the measures set out in the Codes of Practice will serve as a presumption of conformity with the GPAI model provider obligations in Articles 53 and 55 until standards come into effect. These obligations include [8]:

  • Provision of technical documentation to the AI Office and National Competent Authorities
  • Provision of relevant information to downstream providers that seek to integrate their model into their AI or GPAI system (e.g. capabilities and limitations)
  • Summaries of training data used
  • Policies for complying with existing Union copyright law

For GPAI models with systemic risk (models trained using more than 10²⁵ FLOPs of compute), further obligations include [9]:

  • State of the art model evaluations
  • Risk assessment and mitigation
  • Serious incident reporting, including corrective measures
  • Adequate cybersecurity protection
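To give a sense of scale, the 10²⁵ FLOP threshold refers to cumulative training compute. As a rough illustration only (the model sizes below are invented, and the 6 × parameters × tokens rule is a common community heuristic, not the Act's official measurement methodology), a provider could sanity-check a model against the threshold like this:

```python
# Rough sanity check against the AI Act's systemic-risk compute threshold.
# Uses the common "6 * N * D" heuristic (about 6 FLOPs per parameter per
# training token); an approximation, not the Act's official method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # compute level presuming systemic risk


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute in FLOPs."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical examples (sizes invented for illustration):
# a 70B-parameter model on 15T tokens lands around 6.3e24 FLOPs, below
# the threshold; a 400B-parameter model on the same data lands around
# 3.6e25 FLOPs, above it.
```

In practice the classification also depends on the Commission's designation powers and any future updates to the threshold, so a calculation like this is only a first-pass estimate.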

Providers who fail to comply with the Codes of Practice will have to prove compliance with the above obligations to the Commission by alternative means [10], likely to be more burdensome and time-consuming. This situation is currently playing out in a parallel case in EU digital policy, with X (formerly Twitter) pulling out of the Strengthened Codes of Practice on Disinformation [11], which was recently integrated as a code of conduct under the co-regulatory framework of the Digital Services Act [12].

The Codes of Practice are likely to form the basis for the GPAI standards. The contents of the Codes are therefore intended to reflect the intentions of the AI Act faithfully, including with regard to health, safety and fundamental rights concerns, in order to prevent insufficient measures from becoming entrenched in standards.

Drafting process

Due to the twelve-month timeline for GPAI model provider obligations coming into force, the Codes of Practice must be drafted within nine months of the AI Act entering into force. This is intended to give the Commission ample time to approve or reject them via implementing act [13]. The AI Act also lays out various options that the AI Office can pursue on behalf of the Commission with regard to the actors responsible for drafting the Codes of Practice [14]:

“The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent authorities, to participate in the drawing-up of codes of practice.”  

And:

“Civil society organisations, industry, academia and other relevant stakeholders, such as downstream providers and independent experts, may support the process.”

While little information has been made public about the organisation and set-up of the Codes of Practice, similar frameworks in the past [15] have involved the formation of multi-stakeholder working groups (WGs) divided by topic, based on the relevant obligations. This set of Codes of Practice would therefore include WGs based on the obligations in Articles 53 and 55 (listed above). A hypothetical example structure could include WGs on:

  • Copyright and training data 
  • Technical documentation and downstream information sharing
  • Content provenance and labelling
  • Model evaluations and adversarial testing
  • Systemic risk assessment and mitigation 
  • Systemic risk taxonomy
  • Serious incident monitoring
  • Cybersecurity
  • Ex-post implementation and monitoring

Within WGs, certain participants (e.g. providers, national competent authorities, civil society organisations, industry, academia, downstream providers, independent experts) may act as chairs or co-chairs, as drafters, or simply as monitors. The content produced by each working group is likely to be structured similarly to the Strengthened Codes of Practice on Disinformation, where relevant signatories [16] agree to high-level commitments containing sub-level measures, which can be further detailed as Qualitative Reporting Elements (QREs) or Service Level Indicators (SLIs), or alternatively as Key Performance Indicators (KPIs) that would include both [17]. By incorporating a wide range of topics into specific WGs, involving all relevant stakeholders, and featuring guidelines ranging from the highly general to the highly specific, this approach is intended to produce a comprehensive set of Codes of Practice.

Additional tools used in previous Codes of Practice include Structural Indicators [18] (used in parallel with KPIs), which are designed to mitigate systemic risks by measuring outputs. Examples of Structural Indicators could include “frequency and duration of GPAI model failures or outages” or “frequency of cyberattacks involving GPAI-generated content (e.g. phishing, malware)”. Finally, a permanent task force may be set up, featuring signatories of the Codes and other oversight bodies, to review and adapt the Codes in light of technological, societal, market and legislative developments.

Next steps

With the Act soon to be published, more information can be expected before long regarding the drafting process, which actors will play what roles, and the relevant timelines. As mentioned earlier, the Codes of Practice must be completed within nine months of the Act coming into force, giving the Commission an extra three months (thus twelve in total) to approve or reject the Codes via implementing act, based on advice from the AI Office and the Board [19]. The AI Act also directs the AI Office to facilitate “frequent review and adaptation of the codes of practice” [20]. In light of a standardisation process that will take longer than the AI Act’s timelines permit, the Codes of Practice for GPAI model providers are a useful tool to ensure the faithful implementation of the regulation.
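The nine- and twelve-month deadlines above can be sketched with simple date arithmetic. A minimal illustration, assuming (as the post suggests is likely) entry into force on 1 August 2024; the actual date depends on when publication in the Official Journal occurs:

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day of month preserved)."""
    years_over, new_month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years_over, new_month_index + 1, d.day)


# Assumed entry-into-force date; illustrative only.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Codes of Practice drafting must be completed within nine months...
codes_of_practice_deadline = add_months(ENTRY_INTO_FORCE, 9)

# ...giving the Commission until the twelve-month mark to approve or
# reject them via implementing act, when the GPAI model provider
# obligations (Articles 50-55) also become enforceable.
commission_decision_deadline = add_months(ENTRY_INTO_FORCE, 12)
gpai_obligations_apply = add_months(ENTRY_INTO_FORCE, 12)
```

Under this assumption the Codes would need to be finalised by May 2025, with the Commission's decision and the obligations themselves landing in August 2025, consistent with the "roughly August 2025" estimate earlier in the post.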

This blog post will be updated as new information becomes available.

Notes & References

  1. European Council approves AI Act ↩︎
  2. Standardisation request for AI systems ↩︎
  3. Article 113(b) ↩︎
  4. The European Telecommunications Standards Institute (ETSI) may also be involved. ↩︎
  5. CEN-CENELEC ↩︎
  6. ISO ↩︎
  7. Article 40(3) ↩︎
  8. Article 53 ↩︎
  9. Article 55 ↩︎
  10. Article 53(4) and 55(2) ↩︎
  11. Internal Market Commissioner Thierry Breton insisting on pursuing X’s compliance via alternative means. ↩︎
  12. Strengthened Codes of Practice on disinformation ↩︎
  13. Article 56(9) ↩︎
  14. Article 56(3) ↩︎
  15. Codes of Practice on disinformation ↩︎
  16. Companies, organisations and associations involved in the drafting process and listed in an Annex of the final Codes. ↩︎
  17. QREs are qualitative reporting standards, whilst SLIs are quantifiable metrics. QREs and SLIs are used in the Codes of Practice on disinformation. KPIs are specifically mentioned in Article 56 of the AI Act. ↩︎
  18. Strengthened Codes of Practice on disinformation ↩︎
  19. The AI Board will be made up of all EU Member State representatives. Article 56(6) ↩︎
  20. Article 56(8) ↩︎
