The EU Artificial Intelligence Act may soon introduce new obligations. Use the interactive tool below to determine whether your company will be subject to these requirements.
Note: The EU AI Act is still being negotiated. This tool therefore reflects only our best estimate of the legal obligations you may be subject to under the final legislation. We will keep updating the tool to reflect any changes to the proposed Act.
Feedback: We are working to improve this tool. Please send your feedback to Risto Uuk at risto@futureoflife.org.
Text of the EU AI Act
You can view the full text of the proposed EU AI Act on this website by following the links below:
Frequently asked questions
When will the EU AI Act enter into force?
The European Commission is now supporting the Council of the European Union and the European Parliament in concluding the interinstitutional negotiations (trilogue), the final stage of negotiations before the EU AI Act is adopted. These negotiations are expected to conclude in late 2023 or early 2024. Once the Act is formally adopted, there will be an implementation period of two to three years, depending on how negotiations between the EU institutions unfold. In other words, the AI Act is likely to apply from 2026 or later. In addition, European standardisation bodies are expected to draft standards for the AI Act during the implementation period.
What risk categories does the EU AI Act define?
The Act's regulatory framework defines four risk levels for AI systems: unacceptable, high, limited, and minimal or none. Systems posing unacceptable risks, meaning those that threaten people's safety, livelihoods and rights, from government social scoring of citizens to toys that use voice assistants, will be banned. High-risk systems, such as those used in critical infrastructure or law enforcement, will be subject to strict requirements, including on risk assessment, data quality, documentation, transparency, human oversight and accuracy. Systems posing limited risks, such as chatbots, must comply with transparency obligations so that users know they are not interacting with human beings. Minimal-risk systems, such as games and spam filters, can be used freely.

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
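As a compact, purely illustrative restatement of the four tiers described above (our own condensation, not normative text from the Act), the mapping might look like this:

```python
# Illustrative summary of the four risk tiers (our own condensation,
# not normative text from the AI Act).
RISK_TIERS = {
    "unacceptable": {
        "examples": ["government social scoring", "voice-assistant toys"],
        "consequence": "prohibited outright",
    },
    "high": {
        "examples": ["critical infrastructure", "law enforcement"],
        "consequence": "strict requirements: risk assessment, data quality, "
                       "documentation, transparency, human oversight, accuracy",
    },
    "limited": {
        "examples": ["chatbots"],
        "consequence": "transparency obligations (users must know it is AI)",
    },
    "minimal": {
        "examples": ["games", "spam filters"],
        "consequence": "free use",
    },
}

print(RISK_TIERS["limited"]["consequence"])
```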
What percentage of AI systems will fall into the high-risk category?
It is unclear what percentage of AI systems will fall into the high-risk category, since both the field of AI and the Act itself are still evolving. In an impact assessment, the European Commission estimated that only 5-15% of applications would be subject to the stricter rules. A study by appliedAI of 106 enterprise AI systems found that 18% were high-risk, 42% low-risk, and for 40% it was unclear whether they were high- or low-risk. The systems with an unclear risk classification in that study fell mainly within the areas of critical infrastructure, employment, law enforcement and product safety. In a related survey of 113 EU AI startups, 33% of respondents believed their AI systems would be classified as high-risk, compared with the 5-15% estimated by the European Commission.
What penalties does the EU AI Act provide for?
Under the position adopted by the European Parliament, engaging in the prohibited AI practices described in Article 5 can lead to fines of up to EUR 40 million or 7% of worldwide annual turnover, whichever is higher. Infringing the data governance requirements (under Article 10) or the transparency and provision-of-information requirements (under Article 13) can lead to fines of up to EUR 20 million or 4% of worldwide turnover, whichever is higher. Non-compliance with any other requirement or obligation can lead to fines of up to EUR 10 million or 2% of worldwide turnover, again whichever is higher. Fines are thus tiered by the provision infringed: prohibited practices carry the highest fines, while other infringements carry lower maximums. A worked sketch of the "whichever is higher" logic follows.
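A minimal sketch, assuming the Parliament's tiers above; the tier names and the function itself are our own illustration, and nothing here is legal advice:

```python
# Illustrative only: the maximum fine tiers in the European Parliament's
# position, keyed by kind of infringement. A sketch, not legal advice.
TIERS = {
    "prohibited_practice": (40_000_000, 0.07),   # Article 5 practices
    "data_or_transparency": (20_000_000, 0.04),  # Articles 10 and 13
    "other": (10_000_000, 0.02),                 # any other requirement
}

def max_fine(infringement: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable maximum fine: the fixed cap or the turnover
    percentage, whichever is higher."""
    cap, pct = TIERS[infringement]
    return max(cap, pct * worldwide_annual_turnover_eur)

# A company with EUR 1 billion turnover using a prohibited practice:
# max(40_000_000, 0.07 * 1_000_000_000) = EUR 70 million.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```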
What measures does the EU AI Act provide specifically for SMEs?
The EU AI Act aims to support SMEs and startups, as set out in Article 55. Specific measures include giving EU-based SMEs and startups priority access to regulatory sandboxes if they meet the eligibility criteria. Tailored awareness-raising and digital-skills activities will be organised to meet the needs of smaller organisations. Dedicated communication channels will also be established to provide guidance and answer queries from SMEs and startups. Participation of SMEs and other stakeholders in standards development will likewise be encouraged. To reduce the financial burden of compliance, conformity assessment fees will be reduced for SMEs and startups based on factors such as development stage, size and market demand. The Commission will regularly review the certification and compliance costs faced by SMEs and startups (with input from transparent consultations) and work with Member States to lower these costs where possible.
Can I voluntarily comply with the EU AI Act even if my system is not in scope?
Yes! The Commission, the AI Office or the Member States will encourage voluntary codes of conduct covering the requirements of Title III, Chapter 2 (for example, risk management, data governance and human oversight) for AI systems that are not considered high-risk. These codes will offer technical solutions for how an AI system can meet the requirements appropriate to its intended purpose. Other objectives, such as environmental sustainability, accessibility, stakeholder involvement and diversity of development teams, will also be considered. Small enterprises and startups will be taken into account when encouraging these codes of conduct. See Article 69 on codes of conduct for the voluntary application of specific requirements.
Appendices
Obligations of providers
Based on the European Parliament text of Articles 16, 20, 21, 22 and 23:
Article 16
Obligations of providers of high-risk AI systems
Providers of high-risk AI systems shall:
(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;
(b) have a quality management system in place which complies with Article 17;
(c) draw up the technical documentation of the high-risk AI system;
(d) when under their control, keep the logs automatically generated by their high-risk AI systems;
(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service;
(f) comply with the registration obligations referred to in Article 51;
(g) take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;
(h) inform the national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken;
(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49;
(j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
Article 20
Automatically generated logs
- Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.
- Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation under Article 74 of that Directive.
Article 21
Corrective actions
Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.
Article 22
Duty of information
Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.
Article 23
Cooperation with competent authorities
Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.
Obligations of deployers
Based on the European Parliament text of Article 29:
1. Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5 of this Article.
1a. To the extent deployers exercise control over the high-risk AI system, they shall:
- (i) implement human oversight according to the requirements laid down in this Regulation
- (ii) ensure that the natural persons assigned to ensure human oversight of the high-risk AI systems are competent, properly qualified and trained, and have the necessary resources in order to ensure the effective supervision of the AI system in accordance with Article 14
- (iii) ensure that relevant and appropriate robustness and cybersecurity measures are regularly monitored for effectiveness and are regularly adjusted or updated.
2. The obligations in paragraphs 1 and 1a are without prejudice to other deployer obligations under Union or national law and to the deployer’s discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.
3. Without prejudice to paragraph 1 and 1a, to the extent the deployer exercises control over the input data, that deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
4. Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions of use and when relevant, inform providers in accordance with Article 61. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1) they shall, without undue delay, inform the provider or distributor and relevant national supervisory authorities and suspend the use of the system. They shall also immediately inform first the provider, and then the importer or distributor and relevant national supervisory authorities when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. If the deployer is not able to reach the provider, Article 62 shall apply mutatis mutandis. For deployers that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
5. Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent that such logs are under their control and are required for ensuring and demonstrating compliance with this Regulation, for ex-post audits of any reasonably foreseeable malfunction, incidents or misuses of the system, or for ensuring and monitoring for the proper functioning of the system throughout its lifecycle. Without prejudice to applicable Union or national law, the logs shall be kept for a period of at least six months. The retention period shall be in accordance with industry standards and appropriate to the intended purpose of the high-risk AI system. Deployers that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs as part of the documentation concerning internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
- (a) Prior to putting into service or use a high-risk AI system at the workplace, deployers shall consult workers' representatives with a view to reaching an agreement in accordance with Directive 2002/14/EC and inform the affected employees that they will be subject to the system.
- (b) Deployers of high-risk AI systems that are public authorities or Union institutions, bodies, offices and agencies or undertakings referred to in Article 51(1a)(b) shall comply with the registration obligations referred to in Article 51.
6. Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, a summary of which shall be published, having regard to the specific use and the specific context in which the AI system is intended to operate. Deployers may revert in part to those data protection impact assessments for fulfilling some of the obligations set out in this article, insofar as the data protection impact assessment fulfils those obligations.
6a. Without prejudice to Article 52, deployers of high-risk AI systems referred to in Annex III, which make decisions or assist in making decisions related to natural persons, shall inform the natural persons that they are subject to the use of the high-risk AI system. This information shall include the intended purpose and the type of decisions it makes. The deployer shall also inform the natural person about its right to an explanation referred to in Article 68c.
6b. Deployers shall cooperate with the relevant national competent authorities on any action those authorities take in relation with the high-risk system in order to implement this Regulation.
Obligations of distributors
Based on the European Parliament text of Article 27:
- Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instruction of use, and that the provider and the importer of the system, as applicable, have complied with their obligations set out in this Regulation in Articles 16 and 26 respectively.
- Where a distributor considers or has reason to consider, on the basis of the information in its possession that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, the relevant national competent authority, as applicable, to that effect.
- Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Chapter 2 of this Title.
- A distributor that considers or has reason to consider, on the basis of the information in its possession, that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the provider or importer of the system and the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.
- Upon a reasoned request from a national competent authority, distributors of the high-risk AI system shall provide that authority with all the information and documentation in their possession or available to them, in accordance with the obligations of distributors as outlined in paragraph 1, that are necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title.
- 5a. Distributors shall cooperate with national competent authorities on any action those authorities take to reduce and mitigate the risks posed by the high-risk AI system.
Obligations of importers
Based on the European Parliament text of Article 26:
- Before placing a high-risk AI system on the market, importers of such system shall ensure that such a system is in conformity with this Regulation by ensuring that:
- (a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of that AI system;
- (b) the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;
- (c) the system bears the required conformity marking and is accompanied by the required documentation and instructions of use.
- (ca) where applicable, the provider has appointed an authorised representative in accordance with Article 25(1).
- Where an importer considers or has reason to consider that a high-risk AI system is not in conformity with this Regulation, or is counterfeit, or accompanied by falsified documentation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.
- Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable.
- Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Chapter 2 of this Title.
- Importers shall provide national competent authorities, upon a reasoned request, with all the necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by them, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider in accordance with Article 20.
- 5a. Importers shall cooperate with national competent authorities on any action those authorities take to reduce and mitigate the risks posed by the high-risk AI system.
Obligations of authorised representatives
Based on the European Parliament text of Article 25:
- 1. Prior to making their systems available on the Union market, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.
- 1a. The authorised representative shall reside or be established in one of the Member States where the activities pursuant to Article 2, paragraphs 1(cb) are taking place.
- 1b. The provider shall provide its authorised representative with the necessary powers and resources to comply with its tasks under this Regulation.
- 2. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the institution of the Union determined by the national competent authority. For the purpose of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:
- (a) ensure that the EU declaration of conformity and the technical documentation have been drawn up and that an appropriate conformity assessment procedure has been carried out by the provider;
- (aa) keep at the disposal of the national competent authorities and national authorities referred to in Article 63(7), a copy of the EU declaration of conformity, the technical documentation and, if applicable, the certificate issued by the notified body;
- (b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider;
- (c) cooperate with national supervisory authorities, upon a reasoned request, on any action the authority takes to reduce and mitigate the risks posed by the high-risk AI system;
- (ca) where applicable, comply with the registration obligations referred in Article 51, or, if the registration is carried out by the provider itself, ensure that the information referred to in point 3 of Annex VIII is correct.
- 2a. The authorised representative shall be mandated to be addressed, in addition to or instead of the provider, by, in particular, the national supervisory authority or the national competent authorities, on all issues related to ensuring compliance with this Regulation.
- 2b. The authorised representative shall terminate the mandate if it considers or has reason to consider that the provider acts contrary to its obligations under this Regulation. In such a case, it shall also immediately inform the national supervisory authority of the Member State in which it is established, as well as, where applicable, the relevant notified body, about the termination of the mandate and the reasons thereof.
Obligations of foundation model providers
You may find this Stanford article on foundation model compliance helpful.
Based on the European Parliament text of Article 28b:
- A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licences, as a service, as well as other distribution channels.
- For the purpose of paragraph 1, the provider of a foundation model shall:
- (a) demonstrate through appropriate design, testing and analysis the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development
- (b) process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation;
- (c) design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;
- (d) design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system, without prejudice to relevant existing Union and national law. This obligation shall not apply before the standards referred to in Article 40 are published. Foundation models shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle;
- (e) draw up extensive technical documentation and intelligible instructions for use, in order to enable the downstream providers to comply with their obligations pursuant to Articles 16 and 28(1);
- (f) establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement
- (g) register that foundation model in the EU database referred to in Article 60, in accordance with the instructions outlined in Annex VIII point C.
- When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications, as well as the latest assessment and measurement methods, reflected in particular in benchmarking guidance and capabilities referred to in Article 58a;
- Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 2(e) at the disposal of the national competent authorities
- Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialise a foundation model into a generative AI system, shall in addition:
- a) comply with the transparency obligations outlined in Article 52 (1),
- b) train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally-acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression,
- c) without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
Prohibited systems
Based on the European Parliament text of Article 5:
- Subliminal techniques with the objective to or effect of materially distorting a person or group of persons’ behaviour causing them significant harm;
- Exploiting the vulnerabilities of a person or a specific group of persons, with the objective to or effect of materially distorting that person's or persons' behaviour in a manner causing or likely to cause that person or another person significant harm;
- Biometric categorisation systems that categorise natural persons according to actual or inferred sensitive or protected attributes or characteristics;
- Social scoring, evaluation or classification of natural persons or groups based on their social behaviour or personality characteristics leading to detrimental or unfavourable treatment;
- Risk assessing a natural person's or group's risk of (re-)offending, profiling to predict (re-)occurrence of actual or potential criminal or administrative offence;
- Creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
- Inferring emotions of a natural person in the areas of law enforcement, border management, or in workplace and education institutions;
- Analysing recorded footage of public spaces through post remote biometric identification unless subject to a pre-judicial authorisation and strictly necessary for the targeted search connected to a specific serious criminal offence.
High-risk systems
Annex III
Based on the European Parliament text of Annex III:
- Biometric identification and categorisation of natural persons:
- (a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;
- Management and operation of critical infrastructure:
- (a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.
- Education and vocational training:
- (a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
- (b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.
- Employment, workers management and access to self-employment:
- (a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;
- (b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships.
- Access to and enjoyment of essential private services and public services and benefits:
- (a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;
- (b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;
- (c) AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.
- Law enforcement:
- (a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;
- (b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
- (c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3);
- (d) AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;
- (e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;
- (f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;
- (g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.
- Migration, asylum and border control management:
- (a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;
- (b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;
- (c) AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;
- (d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.
- Administration of justice and democratic processes:
- (a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
Technical aspects of high-risk systems
Based on the European Parliament text of Article 6:
- AI systems falling under one or more of the critical areas and use cases referred to in Annex III shall be considered high-risk if they pose a significant risk of harm to the health, safety or fundamental rights of natural persons. Where an AI system falls under Annex III point 2, it shall be considered to be high-risk if it poses a significant risk of harm to the environment.
- AI systems referred to in Annex III shall be considered high-risk unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights. In order to ensure uniform conditions for the implementation of this Regulation, the Commission shall, no later than one year after the entry into force of this Regulation, adopt implementing acts to specify the circumstances where the output of AI systems referred to in Annex III would be purely accessory in respect of the relevant action or decision to be taken. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74, paragraph 2.
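The two paragraphs above amount to a rough decision rule. The sketch below is our own simplification for illustration only: whether a risk is "significant" or an output "purely accessory" is a legal judgment to be clarified by implementing acts, not something code can determine.

```python
# A deliberately naive sketch of the Article 6 test in the Parliament text.
# Whether a risk is "significant" or an output "purely accessory" is a legal
# assessment that code cannot make; the booleans below merely stand in for it.

def is_high_risk(in_annex_iii_area: bool,
                 significant_risk_to_persons: bool,
                 significant_risk_to_environment: bool = False,  # Annex III point 2 only
                 output_purely_accessory: bool = False) -> bool:
    """An Annex III system is presumed high-risk unless its output is
    purely accessory to the relevant action or decision being taken."""
    if not in_annex_iii_area:
        return False  # the Annex III limb of Article 6 does not apply
    if output_purely_accessory:
        return False  # presumption rebutted: not likely to lead to significant risk
    return significant_risk_to_persons or significant_risk_to_environment

# Example: an Annex III use case posing a significant risk to fundamental
# rights, whose output actually drives the decision, is high-risk.
print(is_high_risk(True, True))  # True
```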
High-risk exception
In short, Article 84 provides that, after consulting the AI Office, the Commission will assess once a year the need to amend the lists in Annex III (high-risk uses), Article 5 (prohibited practices) and Article 52 (transparency), and will report its findings to the Parliament and the Council. Every two years from the date the Act becomes applicable, the Commission and the AI Office will submit a report evaluating and reviewing the Act to the Parliament and the Council.
Based on the European Parliament text of Article 84:
- After consulting the AI Office, the Commission shall assess the need for amendment of the list in Annex III, including the extension of existing area headings or addition of new area headings in that Annex, the list of prohibited AI practices in Article 5, and the list of AI systems requiring additional transparency measures in Article 52 once a year following the entry into force of this Regulation and following a recommendation of the Office. The Commission shall submit the findings of that assessment to the European Parliament and the Council.
- By two years after the date of application of this Regulation referred to in Article 85(2) and every two years thereafter, the Commission, together with the AI office, shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.
- The reports referred to in paragraph 2 shall devote specific attention to the following:
- (a) the status of the financial, technical and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;
- (b) the state of penalties, and notably administrative fines as referred to in Article 71(1), applied by Member States to infringements of the provisions of this Regulation;
- (ba) the level of the development of harmonised standards and common specifications for Artificial Intelligence;
- (bb) the levels of investments in research, development and application of AI systems throughout the Union;
- (bc) the competitiveness of the aggregated European AI sector compared to AI sectors in third countries;
- (bd) the impact of the Regulation with regards to the resource and energy use, as well as waste production and other environmental impact;
- (be) the implementation of the coordinated plan on AI, taking into account the different level of progress among Member States and identifying existing barriers to innovation in AI;
- (bf) the update of the specific requirements regarding the sustainability of AI systems and foundation models, building on the reporting and documentation requirement in Annex IV and in Article 28b;
- (bg) the legal regime governing foundation models;
- (bh) the list of unfair contractual terms within Article 28a taking into account new business practices if necessary;
- By two years after the date of entry into application of this Regulation referred to in Article 85(2) the Commission shall evaluate the functioning of the AI office, whether the office has been given sufficient powers and competences to fulfil its tasks and whether it would be relevant and needed for the proper implementation and enforcement of this Regulation to upgrade the Office and its enforcement competences and to increase its resources. The Commission shall submit this evaluation report to the European Parliament and to the Council.
Transparency obligations
Based on the European Parliament text of Article 52:
- Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that the AI system, the provider itself or the user informs the natural person exposed to an AI system that they are interacting with an AI system in a timely, clear and intelligible manner, unless this is obvious from the circumstances and the context of use. Where appropriate and relevant, this information shall also include which functions are AI enabled, if there is human oversight, and who is responsible for the decision-making process, as well as the existing rights and processes that, according to Union and national law, allow natural persons or their representatives to object against the application of such systems to them and to seek judicial redress against decisions taken by or harm caused by AI systems, including their right to seek an explanation. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
- Users of an emotion recognition system or a biometric categorisation system which is not prohibited pursuant to Article 5 shall inform in a timely, clear and intelligible manner of the operation of the system the natural persons exposed thereto and obtain their consent prior to the processing of their biometric and other personal data in accordance with Regulation (EU) 2016/679, Regulation (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
- Users of an AI system that generates or manipulates text, audio or visual content that would falsely appear to be authentic or truthful and which features depictions of people appearing to say or do things they did not say or do, without their consent (‘deep fake’), shall disclose in an appropriate, timely, clear and visible manner that the content has been artificially generated or manipulated, as well as, whenever possible, the name of the natural or legal person that generated or manipulated it. Disclosure shall mean labelling the content in a way that informs that the content is inauthentic and that is clearly visible for the recipient of that content. To label the content, users shall take into account the generally acknowledged state of the art and relevant harmonised standards and specifications. Paragraph 3 shall not apply where the use of an AI system that generates or manipulates text, audio or visual content is authorised by law or if it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties. Where the content forms part of an evidently creative, satirical, artistic or fictional cinematographic, video games visuals and analogous work or programme, transparency obligations set out in paragraph 3 are limited to disclosing of the existence of such generated or manipulated content in an appropriate clear and visible manner that does not hamper the display of the work and disclosing the applicable copyrights, where relevant. It shall also not prevent law enforcement authorities from using AI systems intended to detect deep fakes and prevent, investigate and prosecute criminal offences linked with their use.
- 3b. The information referred to in paragraphs 1 to 3 shall be provided to the natural persons at the latest at the time of the first interaction or exposure. It shall be accessible to vulnerable persons, such as persons with disabilities or children, complete, where relevant and appropriate, with intervention or flagging procedures for the exposed natural person taking into account the generally acknowledged state of the art and relevant harmonised standards and common specifications.
- Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.