Small Businesses’ Guide to the AI Act

19 Feb, 2025

Everything you need to know about the AI Act, for small and medium-sized enterprises (SMEs) in the EU and beyond.

The AI Act has a particular focus on small and medium-sized enterprises (SMEs). This group of stakeholders is mentioned 38 times in the Act compared to 7 mentions of ‘industry’ and 11 mentions of ‘civil society’. More importantly, the EU AI Act has a range of measures that are specifically designed to support and simplify SME compliance with the product safety rules of the AI Act.

Quick summary of provisions tailored to SMEs

  • Regulatory sandboxes: frameworks for testing AI products and services outside normal regulatory structures, with exemptions from administrative fees. Testing may also be facilitated in real world conditions. SMEs will have priority access to sandboxes free of charge, and the procedures shall be simple and clear.
  • Reducing compliance costs and fees: assessment fees shall be proportional to the size of SMEs and the Commission will regularly assess and work to lower compliance costs.
  • Standard setting and governance: the Commission and Member States shall facilitate participation of SMEs in standard setting and in the AI advisory forum.
  • Simplified documentation and training: the Commission will develop simplified SME technical documentation forms that are accepted by national authorities for conformity assessments and provide training activities tailored to SMEs to support compliance.
  • Dedicated communication: guidance and response to queries through dedicated channels to support SMEs in complying with the AI Act.
  • Proportionality: obligations for providers of general-purpose AI models should be commensurate and proportionate to the type of model provider. For example, there will be separate Key Performance Indicators for SMEs under the Code of Practice.

We expand upon each of these provisions in the sections below.

The category of ‘SMEs’ under EU law

Under EU law, SMEs are an overarching category of enterprises consisting of three subcategories. Medium-sized enterprises have fewer than 250 employees and an annual turnover not exceeding €50 million and/or an annual balance sheet total not exceeding €43 million. Small enterprises employ fewer than 50 persons and have an annual turnover and/or balance sheet total not exceeding €10 million. Microenterprises employ fewer than 10 persons and have an annual turnover and/or balance sheet total not exceeding €2 million. Note that the AI Act explicitly mentions start-ups as part of SMEs throughout the Act, even though there is currently no separate or single definition of a start-up under EU law.
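These thresholds can be expressed compactly. Below is a minimal, illustrative sketch based on the ceilings in Commission Recommendation 2003/361/EC; it deliberately ignores the Recommendation's rules on linked and partner enterprises, and the function name and structure are our own, not anything prescribed by EU law:

```python
def sme_category(headcount: int, turnover_m: float, balance_sheet_m: float) -> str:
    """Classify an enterprise using the headcount and financial ceilings
    of Commission Recommendation 2003/361/EC (figures in EUR millions).

    The headcount ceiling is mandatory; an enterprise may exceed ONE of
    the two financial ceilings (turnover or balance sheet total) and
    still qualify -- hence the `or` between them.
    """
    bands = [
        ("microenterprise", 10, 2, 2),
        ("small enterprise", 50, 10, 10),
        ("medium-sized enterprise", 250, 50, 43),
    ]
    for name, staff_cap, turnover_cap, balance_cap in bands:
        if headcount < staff_cap and (
            turnover_m <= turnover_cap or balance_sheet_m <= balance_cap
        ):
            return name
    return "large enterprise (not an SME)"
```

For example, an enterprise with 200 employees, a €45 million turnover and a €60 million balance sheet total would still count as medium-sized, because only one of the two financial ceilings needs to be met.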


AI Act provisions tailored to SMEs

Regulatory sandboxes

Each Member State must establish at least one national regulatory sandbox. Regulatory sandboxes are used to test innovative products, technologies and services for a limited time under regulatory supervision, outside normal regulatory structures. The concept is used in a range of industries, including fintech, transport, energy, telecoms and health, and in many jurisdictions, including the UK, Japan and Singapore. Under the AI Act, a regulatory sandbox is a framework that lets providers of AI systems lawfully develop, train, validate and test novel AI systems by following a sandbox plan agreed between the provider and the supervising authority. These sandboxes can be physical, digital or hybrid. Testing in real-world conditions may also be facilitated through the framework of AI regulatory sandboxes. The sandboxes are designed to support innovation by providing a controlled experimentation environment in which to demonstrate compliance, increasing legal certainty for both innovators and authorities, and removing barriers to market access for SMEs.

The documentation from participating in a sandbox can be used to demonstrate compliance with the AI Act. Further, if the prospective providers observe the sandbox plan and terms and conditions and follow in good faith the guidance of the national competent authority, they will not face administrative fines for infringements of the Act. Note that providers in the AI regulatory sandboxes are not exempt from liability for damages to third parties caused by experimentation with AI systems in a sandbox.

SMEs will have priority access to sandboxes. Moreover, these sandboxes shall be free of charge for SMEs and the procedures for application, selection, participation, and exiting the sandboxes shall be simple, easy to understand and communicated in a clear way. 

Examples of sandboxes: Several EU countries have already established AI sandboxes, including Luxembourg, Spain and Lithuania. While these sandboxes are nascent, lessons from other fields indicate some of the potential positive impacts of sandboxes. For example, companies that completed successful testing in the UK FCA sandbox received 6.6 times higher fintech investment. Further, the UK FCA sandbox reduced the average time to market authorisation by 40% compared with the regulator’s standard authorisation process.

Reducing compliance costs and fees

The AI Act is focused on limiting compliance costs for small actors, for example by requiring that national conformity assessment fees take into account the needs of SME providers and be proportionate to their size, market size and other relevant factors. The European Commission will also regularly assess compliance costs for SMEs and collaborate with Member States to lower them. For example, with regard to translation costs for mandatory documentation, Member States should endeavour to accept documentation and communication in languages broadly understood by the largest possible number of cross-border deployers.

In relation to fines, the Act caps fines at whichever is higher of a fixed amount or a fixed percentage of total worldwide annual turnover. For SMEs, however, the cap is whichever of the two is lower.
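As an illustration of how the SME rule changes the cap, here is a small sketch. The figures used (€35 million / 7% of worldwide annual turnover, the ceiling for prohibited practices under Art. 99 AIA) are for illustration only, and the function itself is our own construction, not anything prescribed by the Act:

```python
def fine_cap(worldwide_turnover_eur: float, fixed_cap_eur: float,
             pct_cap: float, is_sme: bool) -> float:
    """Upper bound of an administrative fine under the AI Act.

    The (fixed_cap_eur, pct_cap) pair differs by infringement type;
    EUR 35m / 7% is the ceiling for prohibited practices (Art. 99 AIA).
    Non-SMEs: the HIGHER of the two amounts. SMEs: the LOWER.
    """
    pct_amount = worldwide_turnover_eur * pct_cap
    return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

# Illustrative: prohibited-practice ceiling (EUR 35m / 7%)
large_cap = fine_cap(500e6, 35e6, 0.07, is_sme=False)  # the higher of the two
sme_cap = fine_cap(20e6, 35e6, 0.07, is_sme=True)      # the lower of the two
```

For an SME with €20 million worldwide turnover, 7% comes to €1.4 million, well below the €35 million fixed amount, so the SME cap is €1.4 million rather than €35 million.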

Participation in standard setting and governance

Standards are an important part of any product safety legislation in the EU, and the AI Act is no exception. To ensure that the perspectives of SMEs are duly weighed in the standard setting process, the Commission and Member States must facilitate the participation of SMEs in the standardisation development process. 

The AI Act also ensures representation of SMEs in the AI Act implementation. For example, SMEs must be represented in the advisory forum, a body which advises and provides technical expertise to the European AI Board and the Commission.

Simplified documentation and targeted training

To simplify the technical documentation of high-risk AI systems for SMEs, the Commission will develop special, simplified technical documentation forms for the needs of small and microenterprises. These will be accepted by national authorities for the purposes of conformity assessments. With regard to microenterprises, certain elements of quality management systems for high-risk AI systems may be complied with in a simplified manner. Further, Member States must organise awareness raising and training activities tailored to SMEs regarding the application of the AI Act to support SMEs in understanding and complying with the AI Act.

Dedicated SME communication

Member States shall ensure dedicated communications channels for SMEs and other relevant actors, like local public authorities, to support SMEs throughout their development path. This support includes providing guidance and responding to queries about the implementation of the AI Act, ensuring synergies and homogeneity in the guidance to SMEs. Several Member States have already established relevant information channels, for example the Austrian Service Desk for AI.

Proportional obligations for SME providers of general-purpose AI models

Another aspect of the AI Act designed to support SMEs is the principle of proportionality. For providers of general-purpose AI models, the obligations should be “commensurate and proportionate to the type of model provider”. General-purpose AI models show significant generality, are capable of competently performing a range of different tasks, and can be integrated into a range of downstream systems or applications (Art. 3(63) AIA). The way these models are released on the market (open weights, proprietary, etc.) does not affect the categorisation.

A small subset of the most advanced general-purpose AI models are the so-called ‘general-purpose AI models with systemic risk’: models trained using enormous amounts of computational power (more than 10^25 floating-point operations, or FLOP) whose high-impact capabilities have a significant impact on the Union market due to their reach or negative effects on public health, safety, public security, fundamental rights or society as a whole (Art. 3(65) AIA). According to Epoch, only 15 models globally surpass the compute threshold of 10^25 FLOP as of February 2025. These include models like GPT-4o, Mistral Large 2, Aramco Metabrain AI, Doubao Pro and Gemini 1.0 Ultra. Examples of smaller general-purpose AI models that would likely not qualify as having systemic risk include GPT-3.5, the models developed by Silo AI, Aleph Alpha’s Pharia-1-LLM-7B and DeepSeek-V3.
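For a rough sense of scale, training compute is often estimated with the common 6 × N × D heuristic (about 6 FLOP per model parameter per training token). This heuristic is our assumption for illustration, not part of the Act, which refers to the cumulative compute used for training:

```python
def training_flop_estimate(params: float, tokens: float) -> float:
    """Back-of-the-envelope training-compute estimate using the common
    6*N*D heuristic (roughly 6 FLOP per parameter per training token).
    This heuristic is NOT part of the AI Act; the Act sets the
    systemic-risk presumption at 10**25 FLOP of cumulative training
    compute."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption threshold in the AI Act

# Illustrative: a 70-billion-parameter model trained on 15 trillion tokens
flop = training_flop_estimate(70e9, 15e12)  # about 6.3e24
crosses = flop > SYSTEMIC_RISK_THRESHOLD    # False
```

Under this heuristic, even a 70-billion-parameter model trained on 15 trillion tokens lands at roughly 6.3 × 10^24 FLOP, below the 10^25 FLOP presumption threshold.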

The obligations of providers of general-purpose AI models and of general-purpose AI models with systemic risk are laid down in Articles 53 and 55 of the AI Act respectively, and fleshed out in the Code of Practice. Providers of general-purpose AI models have certain transparency obligations. Providers of general-purpose AI models with systemic risk have additional obligations to evaluate and test models, assess and mitigate possible systemic risks, carry out incident reporting and ensure adequate levels of cybersecurity. The Code is currently being drafted in an extensive multi-stakeholder process, so its final shape is yet to be determined. Because of the principle of proportionality, the Code should take due account of the size of the general-purpose AI model provider. This is recognised, for example, in the current second draft as one of the seven high-level principles, and is reflected in separate Key Performance Indicators for SMEs compared to larger companies.

Important note: For the purpose of compliance by downstream providers and deployers who are building applications or otherwise integrating general-purpose AI models into AI systems, the distinction between general-purpose AI models and general-purpose AI models with systemic risk does not matter. Here, the only thing that matters is the intended use of their AI system and whether this use falls under the scope of any of the risk categories in the AI Act: prohibited systems, high-risk systems, or systems with special transparency obligations. This will be the case for the vast majority of SMEs in the EU.

It all depends on implementation

Ultimately, the effects and ease of compliance for SMEs depend as much on the implementation of the AI Act as on the text itself. There are different resources that can help readers track implementation, including:

