Browse the full text of the AI Act online using our AI Act Explorer

EU AI Act Compliance Checker

The EU AI Act will soon introduce new obligations. Use our interactive tool to determine whether your AI system will be subject to them.

If you would like to stay updated about your obligations under the EU AI Act, we recommend subscribing to the EU AI Act Newsletter.

For further clarity, we recommend that you seek professional legal advice and follow national guidance. More information about EU AI Act enforcement in your country will likely be provided in 2024.

Please note: The EU AI Act is still under negotiation. This tool is therefore only our best estimate of the legal obligations you may have under the final legislation. We will continue to update the tool to reflect any changes to the proposed Act.

Feedback – We are working to improve this tool. Please send feedback to Risto Uuk at risto@futureoflife.org

11 March 2024 – Updated to reflect the ‘Final draft’ of the AI Act.

Changelog

26 March 2024

  • Fixed a bug that prevented Deployers from receiving their obligations under Article 29.
  • Removed the following notice in the ‘High-risk Obligations’ result: “An additional exclusion for systems that are “purely accessory in respect of the relevant action or decision to be taken” will be defined no later than one year after the AI Act is introduced.” This exclusion is no longer present in Article 7 of the final draft.

11 March 2024

  • Updated the Compliance Checker to reflect the ‘Final draft’ of the AI Act.

Data Privacy: We are required to store your inputs from this form, including your email address, in order to email your results to you. If you do not use the email function, none of your data will be stored. We do not store any other data that could be used to identify you or your device. If you wish to remain anonymous, please use an email address that does not reveal your identity. We do not share any of your data with any other parties. If you would like your data to be deleted from our servers, please contact taylor@futureoflife.org

Frequently Asked Questions

Who developed this tool, and why?

This tool was developed by the team at the Future of Life Institute. We are in no way affiliated with the European Union. We developed this tool voluntarily to aid the effective implementation of the AI Act, because we believe the Act supports our mission of ensuring that AI technology remains beneficial for life and avoids extreme large-scale risks.

Can I integrate this tool in my own website?

Our tool is built in a way that makes it difficult to embed in other sites, and we do not have an API available. Therefore, in almost all cases, we suggest making the tool available to your website’s users by linking to this webpage – you are welcome to use these mockup images to feature the tool on your site.

When does the EU AI Act come into effect?

The European Commission is now supporting the Council of the European Union and the European Parliament in concluding inter-institutional negotiations (trilogue) – the last phase of negotiations before the EU AI Act is passed. This is expected to be finished by the end of 2023 or early 2024. Once the regulation is officially passed, there will be an implementation period of two to three years depending on how the negotiations between the EU institutions unfold. This means that the AI Act will likely be enforced in 2026 or later. In addition, during the implementation period, the European standards bodies are expected to develop standards for the AI Act.

What are the categories of risk defined by the EU AI Act?

The Act’s regulatory framework defines four levels of risk for AI systems: unacceptable, high, limited, and minimal or no risk. Systems posing unacceptable risks – those threatening people’s safety, livelihoods, and rights, from social scoring by governments to toys using voice assistance – will be prohibited. High-risk systems, such as those used in critical infrastructure or law enforcement, will face strict requirements covering risk assessment, data quality, documentation, transparency, human oversight, and accuracy. Systems posing limited risks, like chatbots, must meet transparency obligations so that users know they are interacting with a machine. Minimal-risk systems, like games and spam filters, can be used freely.

What share of AI systems will fall into the high-risk category?

It is uncertain what share of AI systems will fall into the high-risk category, as both the field of AI and the law are still evolving. In an impact study, the European Commission estimated that only 5–15% of applications would be subject to stricter rules. A study by appliedAI of 106 enterprise AI systems found that 18% were high-risk, 42% low-risk, and 40% unclear whether high- or low-risk. The systems with unclear risk classification in this study were mainly in the areas of critical infrastructure, employment, law enforcement, and product safety. In a related survey of 113 EU AI startups, 33% of respondents believed that their AI systems would be classified as high-risk, compared to the 5–15% estimated by the European Commission.


What are the penalties under the EU AI Act?

Based on the European Parliament’s adopted position, using prohibited AI practices outlined in Article 5 can result in fines of up to €40 million, or 7% of worldwide annual turnover, whichever is higher. Non-compliance with the data governance requirements (under Article 10) or the requirements for transparency and provision of information (under Article 13) can lead to fines of up to €20 million, or 4% of worldwide annual turnover, whichever is higher. Non-compliance with any other requirement or obligation can result in fines of up to €10 million, or 2% of worldwide annual turnover, again whichever is higher. Fines are thus tiered by the provision violated, with prohibited practices attracting the highest maximum penalties.
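The tiered caps above follow a single pattern: the maximum fine is the higher of a fixed amount and a percentage of worldwide annual turnover, with the tier determined by the provision violated. The sketch below illustrates this calculation using the Parliament-position figures quoted above; the tier names and function are our own illustrative shorthand, not official terminology.

```python
# Illustrative sketch of the tiered fine caps described above
# (figures from the European Parliament's adopted position).
# Tier labels are our own shorthand, not official terminology.

TIERS = {
    "prohibited_practices": (40_000_000, 0.07),   # Article 5 violations
    "data_and_transparency": (20_000_000, 0.04),  # Articles 10 and 13
    "other_obligations": (10_000_000, 0.02),      # all other requirements
}

def max_fine(tier: str, worldwide_turnover: float) -> float:
    """Return the maximum fine for a tier: the higher of the fixed
    cap and the percentage of worldwide annual turnover."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * worldwide_turnover)

# Example: a company with €2 billion turnover violating Article 5
# faces up to max(€40m, 7% of €2bn) = €140m.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

Note that for small companies the fixed cap dominates, while for large companies the turnover percentage becomes the binding ceiling.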


What measures does the EU AI Act implement for SMEs specifically?

The EU AI Act aims to provide support for SMEs and startups, as specified in Article 55. Specific measures include granting priority access to regulatory sandboxes for SMEs and EU-based startups that meet the eligibility criteria. Additionally, tailored awareness-raising and digital skills development activities will be organised to address the needs of smaller organisations. Dedicated communication channels will be set up to offer guidance and respond to queries from SMEs and startups, and participation of SMEs and other stakeholders in the standards development process will be encouraged. To reduce the financial burden of compliance, conformity assessment fees will be lowered for SMEs and startups based on factors like development stage, size, and market demand. The Commission will regularly review certification and compliance costs for SMEs and startups, with input from transparent consultations, and work with Member States to reduce these costs where possible.


Can I voluntarily comply with the EU AI Act even if my system is not in scope?

Yes! The Commission, the AI Office, and/or Member States will encourage voluntary codes of conduct for requirements under Title III, Chapter 2 (e.g. risk management, data governance, and human oversight) for AI systems not deemed to be high-risk. These codes of conduct will provide technical solutions for how an AI system can meet the requirements, according to the system’s intended purpose. Other objectives like environmental sustainability, accessibility, stakeholder participation, and diversity of development teams will be considered. Small businesses and startups will be taken into account when encouraging codes of conduct. See Article 69 on Codes of Conduct for Voluntary Application of Specific Requirements.