EU AI Act Compliance Checker

The EU AI Act might soon place restrictions on providers and deployers of particular kinds of AI systems. Use our interactive tool below to identify whether your system will be subject to certain requirements under the EU AI Act.

Please note: Because the EU AI Act is still under negotiation, this tool can only provide an indication of the legal obligations your system may be subject to. The guidance below is based on our best estimate of the compromise the negotiators may reach. We will continue to update the tool to reflect any changes to the Act.

Feedback – We are working to improve this tool. Please provide feedback to Risto Uuk at risto@futureoflife.org


The EU AI Act text

You can view the draft EU AI Act text in full on this website at the links below:


Frequently Asked Questions

When does the EU AI Act come into effect?

The European Commission now supports the Council of the European Union and the European Parliament in concluding inter-institutional negotiations (trilogue) – the last phase of negotiations before the EU AI Act is passed. This is expected to be finished by the end of 2023 or early 2024. Once the law is officially passed, there will be an implementation period of two to three years depending on how the negotiations between the EU institutions unfold. This means that the AI Act will likely be enforced in 2026 or later. In addition, during the implementation period, the European standards bodies are expected to develop standards for the AI Act.

What are the categories of risk defined by the EU AI Act?

The Act’s regulatory framework defines four levels of risk for AI systems: unacceptable, high, limited, and minimal or no risk. Each level merits a different response. Unacceptable risk systems that threaten safety, livelihood and rights – from social scoring by governments to toys using voice assistance – will be prohibited. High risk systems, such as those used in critical infrastructure or law enforcement, will face strict requirements around risk assessment, data quality, documentation, transparency, human oversight, accuracy, etc. Limited risk systems like chatbots require transparency so users know they are not humans. Minimal risk systems like games and spam filters can be freely used. Most AI systems currently in use within the EU fall into the minimal or no risk category.
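The four-tier structure above can be sketched as a small lookup. This is purely illustrative: the level names and example systems below are our own labels, not terms defined in the Act.

```python
from enum import Enum

# Illustrative sketch of the Act's four risk tiers; the values summarise the
# regulatory response described above and are not the Act's own wording.
class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Hypothetical examples drawn from the paragraph above.
EXAMPLES = {
    "government social scoring": RiskLevel.UNACCEPTABLE,
    "critical-infrastructure safety component": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}
```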

What share of AI systems will fall into the high-risk category?

It is uncertain what share of AI systems will be in the high-risk category, as both the field of AI and the law are still evolving. The European Commission estimated in an impact study that only 5-15% of applications would be subject to stricter rules. A study by appliedAI of 106 enterprise AI systems found that 18% were high-risk, 42% low-risk, and 40% unclear if high or low risk. The systems with unclear risk classification in this study were mainly within the areas of critical infrastructure, employment, law enforcement, and product safety. In a related survey of 113 EU AI startups, 33% of the startups surveyed believed that their AI systems would be classified as high-risk, compared to the 5-15% assessed by the European Commission.

What are the penalties under the EU AI Act?

Based on the European Parliament’s adopted position, using prohibited AI practices outlined in Article 5 can result in fines of up to €40 million, or 7% of worldwide annual turnover – whichever is higher. Not complying with the data governance requirements (under Article 10) and the requirements for transparency and provision of information (under Article 13) can lead to fines up to €20 million, or 4% of worldwide turnover if that is higher. Non-compliance with any other requirements or obligations can result in fines of up to €10 million or 2% of worldwide turnover – again, whichever is higher. Fines are tiered depending on the provision violated, with prohibited practices receiving the highest penalties and other violations receiving lower maximum fines.
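The tiered fine structure can be expressed as a simple "cap or turnover share, whichever is higher" calculation. The sketch below uses the Parliament's figures; the tier names and function are our own illustrative labels, not terms from the Act.

```python
# Illustrative fine caps from the Parliament's adopted position:
# (fixed cap in EUR, percentage of worldwide annual turnover).
# Tier names are our own labels, not the Act's terminology.
FINE_TIERS = {
    "prohibited_practice": (40_000_000, 7),   # Article 5 violations
    "data_or_transparency": (20_000_000, 4),  # Articles 10 and 13
    "other": (10_000_000, 2),                 # any other obligation
}

def max_fine(tier: str, worldwide_turnover: int) -> int:
    """Return the maximum fine: the fixed cap or the turnover share,
    whichever is higher (integer euros to avoid float rounding)."""
    cap, pct = FINE_TIERS[tier]
    return max(cap, worldwide_turnover * pct // 100)
```

For example, a company with €1 billion worldwide turnover using a prohibited practice faces up to max(€40M, 7% of €1B = €70M) = €70 million.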

What measures does the EU AI Act implement for SMEs specifically?

The EU AI Act aims to provide support for SMEs and startups as specified in Article 55. Specific measures include granting priority access for SMEs and EU-based startups to regulatory sandboxes if eligibility criteria are met. Additionally, tailored awareness raising and digital skills development activities will be organised to address the needs of smaller organisations. Moreover, dedicated communication channels will be set up to offer guidance and respond to queries from SMEs and startups. Participation of SMEs and other stakeholders in the standards development process will also be encouraged. To reduce financial burden, conformity assessment fees will be lowered for SMEs and startups based on factors like development stage, size, and market demand. The Commission will regularly review certification and compliance costs for SMEs/startups with input from transparent consultations, and work with Member States to reduce these costs where possible.

Can I voluntarily comply with the EU AI Act even if my system is not in scope?

Yes! See Article 69 on Codes of conduct in the EU AI Act. The Commission, the AI Office, and/or Member States will encourage voluntary codes of conduct for requirements in Title III, Chapter 2 (e.g. risk management, data governance, human oversight) for AI systems not deemed to be high-risk. The codes of conduct will provide technical solutions for how an AI system can meet the requirements, appropriate to the system’s intended purpose. Other objectives like environmental sustainability, accessibility, stakeholder participation, and diversity of development teams will be considered. Small businesses and startups will be taken into account when encouraging codes of conduct.


Appendixes

Obligations of Importers

Based on the European Parliament’s text from Article 26:

  1. Before placing a high-risk AI system on the market, importers of such a system shall ensure that such a system is in conformity with this Regulation by ensuring that:
    • (a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of that AI system;
    • (b) the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;
    • (c) the system bears the required conformity marking and is accompanied by the required documentation and instructions of use.
      • (ca) where applicable, the provider has appointed an authorised representative in accordance with Article 25(1).
  2. Where an importer considers or has reason to consider that a high-risk AI system is not in conformity with this Regulation, or is counterfeit, or accompanied by falsified documentation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.
  3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable.
  4. Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Chapter 2 of this Title.
  5. Importers shall provide national competent authorities, upon a reasoned request, with all the necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by them, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider in accordance with Article 20.
  5a. Importers shall cooperate with national competent authorities on any action those authorities take to reduce and mitigate the risks posed by the high-risk AI system.

Obligations of Distributors

Based on the European Parliament’s text from Article 27:

  1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions of use, and that the provider and the importer of the system, as applicable, have complied with their obligations set out in this Regulation in Articles 16 and 26 respectively.
  2. Where a distributor considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, and the relevant national competent authority, as applicable, to that effect.
  3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Chapter 2 of this Title.
  4. A distributor that considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the provider or importer of the system and the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
  5. Upon a reasoned request from a national competent authority, distributors of the high-risk AI system shall provide that authority with all the information and documentation in their possession or available to them, in accordance with the obligations of distributors as outlined in paragraph 1, that are necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title.
  5a. Distributors shall cooperate with national competent authorities on any action those authorities take to reduce and mitigate the risks posed by the high-risk AI system.

Obligations of Authorised Representatives

Based on the European Parliament’s text from Article 25:

  • 1. Prior to making their systems available on the Union market, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
    • 1a. The authorised representative shall reside or be established in one of the Member States where the activities pursuant to Article 2, paragraphs 1(cb) are taking place.
    • 1b. The provider shall provide its authorised representative with the necessary powers and resources to comply with its tasks under this Regulation.
  • 2. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institutions of the Union determined by the national competent authority. For the purpose of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:
    • (a) ensure that the EU declaration of conformity and the technical documentation have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider;
      • (aa) keep at the disposal of the national competent authorities and national authorities referred to in Article 63(7), a copy of the EU declaration of conformity, the technical documentation and, if applicable, the certificate issued by the notified body;
    • (b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider;
    • (c) cooperate with national supervisory authorities, upon a reasoned request, on any action the authority takes to reduce and mitigate the risks posed by the high-risk AI system;
      • (ca) where applicable, comply with the registration obligations referred to in Article 51, or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Annex VIII is correct.
  • 2a. The authorised representative shall be mandated to be addressed, in addition to or instead of the provider, by, in particular, the national supervisory authority or the national competent authorities, on all issues related to ensuring compliance with this Regulation.
  • 2b. The authorised representative shall terminate the mandate if it considers or has reason to consider that the provider acts contrary to its obligations under this Regulation. In such a case, it shall also immediately inform the national supervisory authority of the Member State in which it is established, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons thereof.

Obligations of Foundation Model Providers

You may find this article from Stanford on compliance for Foundation models useful.

Based on the European Parliament’s text from Article 28b:

  1. A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licences, as a service, as well as other distribution channels.
  2. For the purpose of paragraph 1, the provider of a foundation model shall:
    • (a) demonstrate through appropriate design, testing and analysis the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior to and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development;
    • (b) process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation;
    • (c) design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;
    • (d) design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system, without prejudice to relevant existing Union and national law. This obligation shall not apply before the standards referred to in Article 40 are published. Foundation models shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle;
    • (e) draw up extensive technical documentation and intelligible instructions for use, in order to enable the downstream providers to comply with their obligations pursuant to Articles 16 and 28(1);
    • (f) establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement;
    • (g) register that foundation model in the EU database referred to in Article 60, in accordance with the instructions outlined in Annex VIII point C.
    • When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications, as well as the latest assessment and measurement methods, reflected in particular in benchmarking guidance and capabilities referred to in Article 58a;
  3. Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 2(e) at the disposal of the national competent authorities.
  4. Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialise a foundation model into a generative AI system, shall in addition:
    • a) comply with the transparency obligations outlined in Article 52 (1),
    • b) train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally-acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression,
    • c) without prejudice to Union or national legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.

Prohibited Systems

Article 5

  • Subliminal techniques with the objective to or effect of materially distorting a person’s or a group of persons’ behaviour, causing them significant harm;
  • Exploits the vulnerabilities of a person or a specific group of persons, with the objective to or effect of materially distorting the person’s or persons’ behaviour in a manner causing or likely to cause that person or another person significant harm;
  • Biometric categorisation systems that categorise natural persons according to actual or inferred sensitive or protected attributes or characteristics;
  • Social scoring, evaluation or classification of natural persons or groups based on their social behaviour or personality characteristics leading to detrimental or unfavourable treatment;
  • Risk assessing a natural person’s or group’s risk of (re-)offending, profiling to predict (re-)occurrence of actual or potential criminal or administrative offence;
  • Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
  • Inferring emotions of a natural person in the areas of law enforcement, border management, or in workplace and education institutions;
  • Analysing recorded footage of public spaces through post remote biometric identification unless subject to a pre-judicial authorisation and strictly necessary for the targeted search connected to a specific serious criminal offence.

High-risk systems

Annex III

  1. Biometric identification and categorisation of natural persons:
    • (a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  2. Management and operation of critical infrastructure:
    • (a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
  3. Education and vocational training:
    • (a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
    • (b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
  4. Employment, workers management and access to self-employment:
    • (a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
    • (b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
  5. Access to and enjoyment of essential private services and public services and benefits:
    • (a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
    • (b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
    • (c) AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
  6. Law enforcement:
    • (a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
    • (b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
    • (c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in Article 52(3);
    • (d) AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
    • (e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
    • (f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
    • (g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
  7. Migration, asylum and border control management:
    • (a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
    • (b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
    • (c) AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
    • (d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
  8. Administration of justice and democratic processes:
    • (a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

Technicalities on high-risk systems

  • AI systems referred to in Annex III shall be considered high-risk.
  • AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment.
  • AI systems referred to in Annex III shall be considered high-risk unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights. In order to ensure uniform conditions for the implementation of this Regulation, the Commission shall, no later than one year after the entry into force of this Regulation, adopt implementing acts to specify the circumstances where the output of AI systems referred to in Annex III would be purely accessory in respect of the relevant action or decision to be taken. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74, paragraph 2.
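Taken together, the formulations above amount to a small decision rule. The sketch below is our own reading of the Parliament text; the function name and flags are hypothetical, and the Annex II product-safety route to high-risk status is not modelled.

```python
def is_high_risk(in_annex_iii: bool,
                 significant_risk: bool,
                 purely_accessory: bool) -> bool:
    """Illustrative reading of the classification rules above: an Annex III
    system is high-risk if it poses a significant risk of harm, unless its
    output is purely accessory to the action or decision being taken."""
    if not in_annex_iii:
        return False  # systems outside Annex III are out of scope here
    if purely_accessory:
        return False  # carve-out pending the Commission's implementing acts
    return significant_risk
```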

Evaluation and Review obligations

In summary: Article 84 states that the Commission will assess the need to amend the lists in Annex III (high-risk uses), Article 5 (prohibited practices), and Article 52 (transparency) annually after consulting the AI Office, and report findings to the Parliament and Council. Every two years from the regulation’s application, the Commission and AI Office will submit a report evaluating the regulation to the Parliament and Council.

Based on the European Parliament’s text from Article 84:

  1. After consulting the AI Office, the Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings in that Annex, the list of prohibited AI practices in Article 5, and the list of AI systems requiring additional transparency measures in Article 52 once a year following the entry into force of this Regulation and following a recommendation of the Office. The Commission shall submit the findings of that assessment to the European Parliament and the Council.
  2. By two years after the date of application of this Regulation referred to in Article 85(2) and every two years thereafter, the Commission, together with the AI Office, shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
  3. The reports referred to in paragraph 2 shall devote specific attention to the following:
    • (a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
    • (b) the state of penalties, and notably administrative fines as referred to in Article 71(1), applied by Member States to infringements of the provisions of this Regulation;
      • (ba) the level of the development of harmonised standards and common specifications for Artificial Intelligence;
      • (bb) the levels of investments in research, development and application of AI systems throughout the Union;
      • (bc) the competitiveness of the aggregated European AI sector compared to AI sectors in third countries;
      • (bd) the impact of the Regulation with regards to the resource and energy use, as well as waste production and other environmental impact;
      • (be) the implementation of the coordinated plan on AI, taking into account the different level of progress among Member States and identifying existing barriers to innovation in AI;
      • (bf) the update of the specific requirements regarding the sustainability of AI systems and foundation models, building on the reporting and documentation requirement in Annex IV and in Article 28b;
      • (bg) the legal regime governing foundation models;
      • (bh) the list of unfair contractual terms within Article 28a taking into account new business practices if necessary;
  4. By two years after the date of entry into application of this Regulation referred to in Article 85(2) the Commission shall evaluate the functioning of the AI office, whether the office has been given sufficient powers and competences to fulfil its tasks and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the Office and its enforcement competences and to increase its resources. The Commission shall submit this evaluation report to the European Parliament and to the Council.

Transparency obligations

Based on the European Parliament’s text from Article 52:

  1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. Where appropriate and relevant, this information shall also include which functions are AI enabled, if there is human oversight, and who is responsible for the decision-making process, as well as the existing rights and processes that, according to Union and national law, allow natural persons or their representatives to object against the application of such systems to them and to seek judicial redress against decisions taken by or harm caused by AI systems, including their right to seek an explanation. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
  2. Users of an emotion recognition system or a biometric categorisation system which is not prohibited pursuant to Article 5 shall inform in a timely, clear and intelligible manner of the operation of the system the natural persons exposed thereto and obtain their consent prior to the processing of their biometric and other personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
  3. Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications. Paragraph 3 shall not apply where the use of an AI system that generates or manipulates text, audio or visual content is authorised by law or if it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties. Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, the transparency obligations set out in paragraph 3 are limited to disclosing the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant. It shall also not prevent law enforcement authorities from using AI systems intended to detect deep fakes and prevent, investigate and prosecute criminal offences linked with their use.
  3b. The information referred to in paragraphs 1 to 3 shall be provided to the natural persons at the latest at the time of the first interaction or exposure. It shall be accessible to vulnerable persons, such as persons with disabilities or children, complete, where relevant and appropriate, with intervention or flagging procedures for the exposed natural person taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
  4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.