In this overview, we summarise the key elements of the AI Office relevant to those interested in AI governance. We highlight the responsibilities of the AI Office, its place within the European Commission, its relationship with the AI Board and national regulators, the scientific panel of independent experts, and its function as the EU's voice in global AI cooperation. For further details, please see the latest AI Act legal text, in which the AI Office was created, and the Commission Decision formally establishing it as a legal entity.
(Expired) The AI Office is hiring
The European Commission is recruiting contract agents who are AI technology specialists to govern the most cutting-edge AI models. See here for an overview of the roles and activities of the AI Office.
We are sharing this opportunity because we believe that the AI Office will be a key actor in governing AI technologies in the near future. Bringing high-quality talent to this role is essential for effectively regulating AI technology in the EU and for inspiring similar regulation around the world.
Deadline to apply is 12:00 CET on 27 March 2024
The European Commission has established a new EU-level regulator, the European AI Office, which will sit within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT).
The AI Office will monitor, supervise, and enforce the AI Act requirements on general-purpose AI (GPAI) models and systems across the 27 EU Member States. This includes analysing emerging, unforeseen systemic risks stemming from GPAI development and deployment, as well as developing capability evaluations, conducting model evaluations, and investigating incidents of potential infringement and non-compliance. To facilitate compliance by GPAI model providers and take their perspectives into account, the AI Office will produce voluntary codes of practice, adherence to which would create a presumption of conformity.
The AI Office will also lead the EU in international cooperation on AI and strengthen bonds between the European Commission and the scientific community, including the forthcoming scientific panel of independent experts. The Office will help the 27 Member States cooperate on enforcement, including on joint investigations, and act as the Secretariat of the AI Board, the intergovernmental forum for coordination between national regulators. It will support the creation of regulatory sandboxes where companies can test AI systems in a controlled environment. It will also provide information and resources to small and medium businesses (SMEs) to aid in their compliance with rules.
Central coordinating and monitoring mechanism
What will the AI Office do to coordinate and monitor the implementation of the EU AI Act?
- The AI Office will monitor the implementation of rules by GPAI model developers, requiring them to take corrective measures when non-compliant.
- Where the same provider develops the GPAI model and system (user-facing application), the AI Office will also supervise compliance of the GPAI system.
- When a GPAI system is used directly by deployers in a high-risk context (Annex II or III) and is suspected of being non-compliant, the AI Office will cooperate with market surveillance authorities to assess compliance and inform the AI Board.
- The AI Office will facilitate uniform application of the AI Act across Member States.
- The Office will support coordinated enforcement of banned and high-risk AI by streamlining communication between sectoral bodies and national authorities, and creating central databases, particularly when a GPAI model or system is integrated into a high-risk AI system.
- It will ensure supervisory coordination for AI systems that fall under the AI Act as well as the Digital Services Act (DSA) and Digital Markets Act (DMA).
- It will coordinate the creation of a governance system, including preparing the establishment of advisory bodies at the EU level and monitoring the establishment of relevant national authorities.
- Finally, the Office will act as the Secretariat for the AI Board and its subgroups, providing administrative support to the advisory forum and scientific panel of independent experts, including organising meetings and preparing relevant documents.
AI Board
What is the AI Board, and what is its role in relation to the AI Office?
- Composed of one representative per Member State, with the EDPS and AI Office participating in meetings as non-voting observers, the Board should ensure consistency and coordination of implementation between national competent authorities.
- The AI Office will provide the Secretariat for the Board, convene meetings upon request of the Chair, and prepare the agenda.
- The AI Board will assist the AI Office in supporting national competent authorities in the establishment and development of regulatory sandboxes and facilitate cooperation and information sharing among regulatory sandboxes.
Joint investigations
How will the AI Office play a role in ‘joint investigations’?
- The AI Office will provide coordination support for joint investigations conducted by one or more market surveillance authorities to identify non-compliance where high-risk AI systems are found to present a serious risk across several Member States.
Enforcement
The AI Office will support the Commission’s role as lead AI Act enforcer by preparing:
- Decisions (legal acts that apply only to specific recipients, either generally or individually);
- Implementing acts to provide detailed rules for the application of the AI Act;
- Delegated acts to supplement or amend non-essential elements of the AI Act;
- Guidance and guidelines to support practical AI Act implementation, such as standardised protocols and best practices, in consultation with relevant Commission services, and EU institutions, bodies and agencies;
- Standardisation requests: the evaluation of existing standards and the preparation of common specifications to support AI Act implementation.
Codes of practice
How will the AI Office implement codes of practice for AI in the EU?
- By Q2 2025, the AI Office and AI Board must facilitate the development of codes of practice to cover GPAI obligations and watermarking techniques for labelling content as artificially generated, while accounting for international approaches. Codes must have specific, measurable objectives, including key performance indicators (KPIs), accounting for the differences in size and capacity of various providers.
- The AI Office must monitor and evaluate the implementation and effectiveness of codes, drawing on reports from GPAI providers, while considering adaptations in light of emerging standards. This includes setting up forums for GPAI model and system providers to exchange best practices. It may also involve regular consultation with relevant national competent authorities, civil society, industry, academia, downstream providers, and independent experts.
- GPAI model providers may demonstrate compliance with their obligations by voluntarily adhering to the codes of practice until European harmonised standards are published; compliance with those standards will also create a presumption of conformity.
- The AI Office will create a forum for cooperation with the open-source community to identify and develop best practices for the safe development and use of open-source AI models and systems.
- The Commission may give general EU validity to a code of practice through implementing acts. If a code of practice cannot be finalised when rules are legally applicable, or the AI Office judges it as inadequate, the Commission may provide common rules for implementation through implementing acts.
Model evaluations: Capabilities and risks
To assess the capabilities and risks of AI models, the AI Office will:
- Develop tools, methodologies, and benchmarks for capabilities evaluations for GPAI models, particularly the highly capable ones with the greatest impact that present possible systemic risks.
- Monitor emerging unforeseen risks of GPAI models, including responding to the scientific panel of independent experts’ qualified alerts. Systemic GPAI model providers are also required to report serious incidents and corrective measures to the AI Office.
- Investigate possible infringements and systemic risks of upstream GPAI models and systems, including collecting complaints from downstream providers and deployers, as well as qualified alerts from the scientific panel of independent experts.
- Require GPAI model providers, where necessary, to provide technical documentation on training and testing processes and evaluation results, as well as any other information needed to assess compliance or to enable the scientific panel of independent experts to carry out its work.
- Initiate structured dialogues with the GPAI model provider, where helpful, to gather more information on the model’s internal testing, safeguards for preventing systemic risks, and other procedures and measures the provider has taken to mitigate such risks.
- Conduct model evaluations, after consulting the AI Board, through APIs or other means, such as the source code, where documentation and information are insufficient to assess compliance.
Where these model evaluations have identified a serious and substantiated concern of systemic risk, the AI Office can require the provider to implement mitigation measures, which may be made binding through a Commission Decision. If such mitigation measures are unable to address the risk, the AI Office can restrict, recall, or withdraw the GPAI model from the market.
Data governance
The AI Office will also develop a template for GPAI providers to use when publishing a sufficiently detailed summary of the content used to train the GPAI model. The summary should be broadly comprehensive in scope, rather than technically detailed, to help parties with legitimate interests exercise their rights. It may list the main data collections or sets used, such as large private or public databases or data archives, with a narrative explanation of other data sources used.
Scientific panel of independent experts
The Commission will select experts that demonstrate expertise, independence from AI developers, and the ability to execute their duties diligently, accurately, and objectively.
The panel will advise and support the AI Office on:
- Implementation and enforcement of GPAI models and systems.
- Development of tools, methodologies, and benchmarks for evaluating GPAI capabilities.
- Classification of different GPAI models and systems, including systemic GPAI models.
- Development of tools and templates.
The panel can provide a qualified alert to the AI Office where it suspects a GPAI model poses concrete identifiable risk at the EU level, or meets the threshold for classifying GPAI models as systemically risky. On that basis, the AI Office, having informed the AI Board, may designate a GPAI model as systemically risky and therefore subject to additional obligations.
Collaboration
With regards to collaboration, the AI Office will also:
- Cooperate with relevant EU institutions, bodies and agencies, including the European High Performance Computing Joint Undertaking (EuroHPC JU), encouraging investment in development of GPAI to be deployed in beneficial applications;
- Engage with Member State authorities on behalf of the Commission;
- Work with relevant DGs and Commission services, notably the European Centre for Algorithmic Transparency on GPAI model evaluation, and raise awareness of emerging risks within the Commission;
- Evaluate and promote the convergence of best practices in public procurement procedures for AI systems;
- Contribute to international cooperation on AI governance, advocate for responsible AI, and promote the EU approach.
Sandboxes
The Office will contribute technical support, advice, and tools for the establishment and operation of AI regulatory sandboxes, coordinating with national authorities as needed to encourage cooperation among Member States. Two years after entry into force, Member States must establish at least one AI regulatory sandbox, either independently or by joining other Member States' sandbox(es).
This should foster innovation, particularly for SMEs, by facilitating AI training, testing, and validation, before the system’s market placement or introduction into service under regulators’ supervision. Such supervision is intended to provide legal clarity, improve regulatory expertise and policy learning, and enable market access. The AI Office is notified by national authorities of any suspension in sandbox testing due to significant risks. National authorities submit publicly available annual reports to the AI Office and AI Board detailing sandbox progress, incidents, and recommendations.
Supporting SMEs
What will the AI Office do to support SMEs in the EU?
- The AI Office will develop and maintain a single information platform providing easy-to-use information on this Regulation for all EU operators.
- The AI Office will organise appropriate communication campaigns to raise awareness about the Regulation’s requirements.
Review
How will the AI Office’s roles and responsibilities be reviewed and updated?
- 2 years after entry into force (June 2026), the Commission will assess if the AI Office has been given sufficient powers and competences to fulfil its tasks, and whether those need to be upgraded with increased resources.