Providers of General-Purpose AI Models — What We Know About Who Will Qualify

25 Apr, 2025

On 22 April 2025, the AI Office published preliminary guidelines clarifying the scope of the obligations for providers of GPAI models. These outline seven topics that are expected to be covered in the final guidelines, along with some preliminary answers. The Commission also opened a consultation inviting stakeholder input on the guidelines.

This post gives an overview of why the category of GPAI model provider matters and summarises the content of the preliminary guidelines as of April 2025.

The value chain of general-purpose AI is notoriously complex and defining appropriate categories for entities across the value chain was no easy feat during the AI Act drafting process.1 The AI Act lays down different obligations for providers of general-purpose AI models (GPAI models) and providers of AI systems including general-purpose AI systems (GPAI systems). Therefore, it will be important for actors developing, building on or integrating GPAI models to identify if and when they may qualify as a provider of such a model, as opposed to a downstream provider or deployer of an AI system.

Preliminary Guidelines (PDF) | European Commission, 22 April 2025


Quick summary: the preliminary guidelines for GPAI models

  • Context: On 22 April 2025, the AI Office published a set of preliminary guidelines clarifying the scope of the obligations for providers of GPAI models.
  • Content: Seven topics are expected to be covered in the final guidelines:
  1. What is a GPAI model?
  2. Who are providers of GPAI models, and when is a downstream modifier a provider?
  3. Clarifying ‘placing on the market of GPAI models’ and the open-source exemptions
  4. Estimating training compute
  5. Transitional rules, grandfathering, and retroactive compliance
  6. Effects of adherence to and signature of the Code of Practice
  7. Supervision and enforcement of the GPAI model rules
  • Consultation deadline: All interested stakeholders can provide feedback on the preliminary guidelines through an open consultation until 22 May 2025.

Why the GPAI model provider category matters

The AI Act lays down particular obligations for providers of general-purpose AI models (GPAI models) in Article 53. These include keeping up-to-date model documentation and implementing a copyright policy. Providers of so-called GPAI models with systemic risk (GPAISR models) must comply with Article 53 as well as Article 55. The latter entails conducting model evaluations, adversarial testing, tracking and reporting serious incidents, and ensuring cybersecurity protections. How actors can comply with these obligations in practice is being detailed in the Code of Practice, as explained in another blog post. In contrast, downstream actors such as system providers, deployers, and importers are not subject to these obligations. Rather, these actors should consider whether the risk-based obligations under Articles 5, 16-27 and 50 apply.

1) What is a GPAI model?

A ‘general-purpose AI model’ is defined in Article 3(63) as an AI model that displays significant generality, is capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications.2 The model’s release mode (open weights, API, etc.) does not matter for the purposes of this definition, except if the model is used solely for research, development or prototyping activities prior to market placement.3 Recital 97 emphasises that while AI models may be essential parts of AI systems, they do not constitute AI systems in themselves, as models require further components, such as a user interface, to become AI systems.

The preliminary approach of the AI Office is to set a threshold in terms of the computational resources used to train or modify a model (training compute). In particular, the AI Office proposes to combine the number of parameters and the amount of training data into a single compute figure. If a model that can generate text and/or images was trained using more than 10^22 floating-point operations (FLOP), the AI Office would presume that it is a GPAI model.

The AI Office sees the pre-training run as the beginning of a lifecycle for GPAI models. Later in the lifecycle, GPAI models can be ‘modified’, including through ‘fine-tuning’. Such modifications can be carried out by the same entity that provided the original GPAI model or by ‘downstream modifiers’. In Recital 97, the AI Act states that GPAI models ‘may be further modified or fine-tuned into new models’. This raises the question: what kinds of modifications will result in a new GPAI model? The preliminary guidelines give us answers, both with regard to the same entity and with regard to downstream modifiers (see section 2).

Modifications by the same entity providing the original GPAI model are considered to lead to a distinct model if they use more than one third of the compute required for a model to be presumed to be a GPAI model. With the current thresholds, that would be 3×10^21 FLOP.
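To make the arithmetic concrete, here is a minimal sketch of these two presumptions in Python. The thresholds are those stated in the preliminary guidelines; the function names and the simplified boolean input are hypothetical, and both presumptions are rebuttable rather than mechanical tests.

```python
# Minimal sketch of the presumption thresholds in the preliminary guidelines.
# Names and simplified inputs are hypothetical; both presumptions are rebuttable.

GPAI_PRESUMPTION_FLOP = 1e22   # training compute above which a text/image-
                               # generating model is presumed to be a GPAI model
DISTINCT_MODEL_FLOP = 3e21     # one third of the GPAI threshold, as rounded
                               # in the preliminary guidelines

def is_presumed_gpai(training_flop: float, generates_text_or_images: bool) -> bool:
    """Rebuttable presumption that a model is a GPAI model."""
    return generates_text_or_images and training_flop > GPAI_PRESUMPTION_FLOP

def same_entity_modification_is_distinct_model(modification_flop: float) -> bool:
    """Modification by the original provider presumed to yield a distinct model."""
    return modification_flop > DISTINCT_MODEL_FLOP
```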

2) Who is the provider of a general-purpose AI model, and when is a downstream modifier a provider?

Under Article 3(3), ‘providers’ are natural or legal persons, public authorities, agencies, or other bodies that develop an AI system or a GPAI model, or have such a system or model developed, and place it on the market under their own name or trademark, whether for payment or free of charge.4 Thus, under the AI Act, a provider can provide two kinds of products: AI systems or GPAI models. The obligations of the provider hinge on whether their product qualifies as a system or a model. The preliminary guidelines are only concerned with providers of GPAI models, including GPAI models with systemic risk, and not with providers of AI systems.

Modifications by a downstream modifier: The AI Office suggests that only those modifications that have a significant bearing on the rationales behind the obligations for GPAI models should lead to the downstream modifier being considered a provider of a new GPAI model. For example, for GPAI models with systemic risk, only modifications leading to a significant change in systemic risk should make downstream modifiers providers of GPAI models with systemic risk. To provide concrete guidance and increase legal certainty, the AI Office proposes computational thresholds with related presumptions of provider status.

GPAI models: a downstream modifier is presumed to be the provider of a new GPAI model if the modification uses more than one third of the compute required for a model to be presumed to be a GPAI model. With the current thresholds, that would be 3×10^21 FLOP. This mirrors the threshold for modifications by the entity providing the original model. However, in line with Recital 109 of the AI Act, the provider obligations are limited to the modification conducted.

GPAI models with systemic risk:5 here the AI Office proposes two alternative conditions, of which only one has to be met (the sketch after the list below combines these rules).

  1. When the original model is presumed to be a GPAI model with systemic risk, a downstream modifier is presumed to become a provider if the amount of compute used for the modification is greater than one third of the compute threshold for a model to be presumed to be a GPAI model with systemic risk. With the current threshold, that is 3×10^24 FLOP.
  2. When the original model was not a GPAI model with systemic risk, a downstream modifier is presumed to become a provider if the cumulative amount of compute from the original model and the modification exceeds the threshold for presuming that a model is a GPAI model with systemic risk. The current threshold is 10^25 FLOP.
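Under the same caveats as before, the sketch below combines the downstream-modifier presumptions for GPAI models and GPAI models with systemic risk. The thresholds come from the preliminary guidelines; the function names and inputs are hypothetical.

```python
# Illustrative sketch of the downstream-modifier presumptions described above.
# Thresholds are from the preliminary guidelines; names are hypothetical.

GPAI_THIRD_FLOP = 3e21     # one third of the 1e22 FLOP GPAI presumption threshold
GPAISR_FLOP = 1e25         # systemic-risk presumption threshold
GPAISR_THIRD_FLOP = 3e24   # one third of the systemic-risk threshold

def modifier_is_gpai_provider(modification_flop: float) -> bool:
    """Downstream modifier presumed provider of a new GPAI model;
    obligations limited to the modification (Recital 109)."""
    return modification_flop > GPAI_THIRD_FLOP

def modifier_is_gpaisr_provider(original_flop: float,
                                original_is_gpaisr: bool,
                                modification_flop: float) -> bool:
    """Downstream modifier presumed provider of a GPAI model with systemic
    risk if either condition holds; obligations then cover the full model."""
    condition_1 = original_is_gpaisr and modification_flop > GPAISR_THIRD_FLOP
    condition_2 = (not original_is_gpaisr
                   and original_flop + modification_flop > GPAISR_FLOP)
    return condition_1 or condition_2
```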

As of today, the AI Office assumes that no downstream modification significantly changes systemic risk, and that few or no modifications meet the specified threshold. The threshold in point 1 is thus forward-looking, in line with the risk-based approach of the AI Act.

Note that if a downstream modifier of a GPAI model becomes the provider of a GPAI model with systemic risk based on one of the above criteria, then its obligations are not limited to the modification conducted.

3) What constitutes a placing on the market of a general-purpose AI model, and when do the open-source exemptions apply?

The AI Office provides a list of examples of what qualifies as placing a GPAI model on the market, including making it available through software libraries, application programming interfaces (APIs), cloud computing services, and similar distribution channels.

The preliminary guidelines also clarify when the exemptions for certain open-source releases apply by proposing definitions for the central concepts of ‘access’, ‘usage’, ‘modification’, ‘distribution’, and ‘free and open-source’. The document highlights that GPAI models with systemic risk are not exempt regardless of their release method.

4) Estimating the computational resources used to train or modify a model

Several of the above topics rely heavily on estimates of the computational resources used to train or modify a model (i.e., compute). It is therefore important that the preliminary guidelines include a section on estimating compute. The AI Office proposes two approaches: a hardware-based approach and an architecture-based approach. These are explained with formulas and concrete industry examples in Annex A.1.
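For intuition, the sketch below shows the approximations most commonly used in the technical literature for each approach: roughly 6 FLOP per parameter per training token for dense transformers (architecture-based), and chips × peak throughput × utilisation × wall-clock time (hardware-based). These are illustrative stand-ins rather than the guidelines' own formulas, which are set out in Annex A.1; all numbers in the example are hypothetical.

```python
# Two common approximations for estimating training compute, shown for
# illustration only; the guidelines' own formulas appear in Annex A.1.

def architecture_based_estimate(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate training FLOP for a dense transformer: ~6 FLOP per
    parameter per training token (forward and backward passes)."""
    return 6 * n_parameters * n_training_tokens

def hardware_based_estimate(n_chips: int, peak_flop_per_second: float,
                            utilisation: float, training_seconds: float) -> float:
    """Approximate training FLOP from hardware: chips x peak throughput
    x achieved utilisation x wall-clock training time."""
    return n_chips * peak_flop_per_second * utilisation * training_seconds

# Hypothetical example: a 70B-parameter model trained on 15T tokens lands at
# about 6.3e24 FLOP, above the 1e22 FLOP GPAI presumption threshold but below
# the 1e25 FLOP systemic-risk threshold.
print(architecture_based_estimate(7e10, 1.5e13))  # 6.3e+24
```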

What should be counted: the preliminary guidelines propose that cumulative training compute be restricted to activities and methods carried out as part of the training of the model or directly feeding into it. This excludes activities that precede the large pre-training run or that improve the model’s capabilities at inference time.

When should compute be counted: the AI Office expects providers to estimate the amount of pre-training compute ahead of commencing their large pre-training run and notify the Commission accordingly within two weeks of the estimate.

5) Transitional rules, grandfathering, and retroactive compliance

The AI Office recognises that the GPAI obligations in the AI Act can present challenges for companies that want to comply, especially in the early stages. The preliminary approach of the AI Office is to encourage GPAI model providers to enter into dialogue with, and get support from, the AI Office early if they foresee difficulties in complying. For example, when companies place a GPAI model with systemic risk on the European market for the first time, the AI Office will give special consideration to their situation, including when setting any deadlines.

6) Effects of adherence to and signature of the Code of Practice

The preliminary approach of the AI Office indicates that signatories to the Code of Practice will benefit from increased trust by the Commission. Further, commitments under the Code of Practice may be taken into account as a mitigating factor when fixing the amount of fines. 

Non-signatories must demonstrate how they comply with their obligations under the AI Act via other adequate, effective, and proportionate means. They may also be subject to more requests for information and access to conduct model evaluations, since there will be less clarity regarding how they ensure compliance.

7) Supervision and enforcement of the general-purpose AI rules

The AI Office is in charge of supervising and enforcing the obligations of providers of GPAI models. The preliminary guidelines emphasise a collaborative and proportionate approach. This includes close informal cooperation with providers during training to streamline compliance and ensure market placement without delays, especially for providers of GPAI models with systemic risk. The AI Office expects providers of GPAI models with systemic risk to report proactively, without awaiting requests.

The road ahead to increased clarity

The preliminary guidelines are an important first step, confirming that downstream industry is out of scope for the obligations relating to GPAI models with systemic risk under the current thresholds. No publicly available model meets the modification threshold of one third of the GPAI model classification thresholds. With this targeted approach to GPAI models, the Commission removes central uncertainties and assures European downstream companies that they are unlikely to be in scope for the Code of Practice.

The guidelines are expected to evolve over time and will be updated as necessary, in particular in light of technological developments. As the obligations for GPAI model providers begin to apply from 2 August 2025, pragmatic and cooperative enforcement of Articles 53 and 55 by the AI Office will bring more clarity. Ultimately, the authoritative interpretation of the AI Act can only be given by the Court of Justice of the European Union.


Notes & References

  1. The categories changed several times during the drafting. For example, the notion of ‘user’ was included in the original Commission proposal but was replaced with the notion of ‘deployer’ in the final AI Act, and the notion of ‘small-scale provider’ was removed altogether. Further, ‘downstream provider’ did not appear in any of the proposals from the three EU institutions and only appeared in the final version.
  2. Article 3(63)
  3. Recital 97
  4. Article 3(3)
  5. Under Article 3(65), ‘systemic risk’ is defined as a risk specific to the high-impact capabilities of GPAI models, having a significant impact on the Union market. This could be due to their reach or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale across the value chain. Models are presumed to have high-impact capabilities when the cumulative amount of computation used for their training is greater than 10^25 floating point operations (FLOP). This is a rebuttable presumption that does not automatically qualify models. Currently, it is estimated that 11 providers worldwide provide models that surpass this threshold.
