Why work at the EU AI Office? It’s probably not for everyone, but there are a lot of great reasons to consider it.
Summary
- Spearhead responsible AI governance globally by enforcing the world’s first comprehensive binding AI regulation. Your work will directly influence how AI governance and oversight evolve worldwide.
- Leverage the AI Office’s first-mover advantage, as the first regulator of its kind overseeing a large and affluent consumer market, to shape global AI standards on model evaluations.
- Promote AI safety across 27 EU nations and beyond by researching, analysing, and flagging systemic risks.
- Collaborate with international partners through AI safety institutes, and represent the EU’s AI position on the global stage.
- Unlike the AI safety institutes or other AI ethics boards, the AI Office has actual enforcement powers to compel model providers to take corrective actions or recall non-compliant general-purpose AI models.
- Work with specialists in a multidisciplinary environment, including tech, law, ethics and more, both within the Office and with the scientific and open source communities externally. This allows you to tap into the latest AI research, while pioneering frontier research on risk assessments and mitigations, evaluations, incident reporting and cybersecurity.
- Make a significant public service impact by contributing to policies that directly affect millions of lives.
- The AI Office is a new organisation with considerable growth plans over 2024-25, so now is a good time to get involved. There will be ample opportunities to develop your career and take on leadership roles in global AI governance.
The AI Office’s tasks, powers and competences in more detail
Overview
- The European Commission has established the EU AI Office within Directorate-General CONNECT Directorate A to monitor, supervise, and enforce AI Act requirements on general-purpose AI (GPAI) models across the EU, and on GPAI systems (the user-facing applications) when the system provider is the same as the model provider.
- The AI Office will have 140 employees, including 60 current Commission staff. The hope is to fill the remaining 80 positions by the end of 2025. Additional technology specialists, lawyers, economists, and administrators will be hired in the coming weeks and months.
- The AI Office will:
- Analyse and raise awareness of emerging risks from GPAI development and deployment.
- Conduct model evaluations.
- Investigate non-compliance.
- Produce voluntary codes of practice for model providers.
- Lead international AI governance cooperation.
- Strengthen networks between the Commission, AI safety institutes in other jurisdictions, and the global scientific community, including through the EU Scientific Panel of Independent Experts.
- Support Member State enforcement cooperation and joint investigations.
- Assist the Commission in preparing binding decisions and secondary legislation in relation to the AI Act.
Structure
The AI Act is the first horizontal, binding AI regulation of its kind anywhere in the world. Unlike the AI safety institutes in other jurisdictions, such as the US and UK, the AI Office has enforcement powers to compel non-compliant providers to take corrective measures. Its broader competences are reflected in its structure:
- Head of Office: Lucilla Sioli
- 5 units:
- Excellence in AI and Robotics:
- Headed by Cécile Huet.
- This team will focus on R&D and the intersection of software and hardware.
- Regulation and Compliance:
- Headed by Kilian Gross.
- This team will work closely with Member States to ensure a coherent application of the AI Act across the EU.
- AI Safety:
- A head of unit has not yet been appointed.
- This team will focus on model evaluations for GPAI models with systemic risk and will work with industry and other stakeholders to identify systemic risks and appropriate mitigation measures.
- AI Innovation and Policy Coordination:
- Headed by Małgorzata Nikowska.
- This team will monitor trends and investments to foster innovation.
- AI for Societal Good:
- Headed by Martin Bailey.
- This team will focus on beneficial applications, such as weather modelling, cancer diagnoses and digital twins for reconstruction.
- 2 advisors:
- The Lead Scientific Advisor has yet to be appointed, but will provide scientific expertise for GPAI model oversight.
- Juha Heikkilä, the Advisor for International Affairs, will represent the AI Office in global conversations on convergence toward common approaches.
Access to models for evaluations
- The AI Office can request documentation and evaluation results from GPAI model providers to assess compliance.
- If the documentation is inadequate, it can initiate a structured dialogue to gather more information on internal testing, safeguards against systemic risks, and mitigation measures.
- If that is still insufficient, it can conduct model evaluations to assess compliance or to investigate systemic risks, especially after a qualified alert from the scientific panel, but also possibly following a complaint from a downstream deployer.
- Providers can be fined for failing to provide access.
- The AI Office can also request providers to take corrective actions, implement mitigation measures for systemic risks, or restrict, recall, or withdraw models from the Single Market.
- The AI Office will develop tools, methodologies, and benchmarks for capabilities evaluations for GPAI models, particularly systemic models.
Leading on general-purpose AI standards globally through the codes of practice
- Between now and April 2025, the AI Office will develop codes of practice that will spell out how GPAI model developers can operationalise their requirements under the Regulation.
- Until European harmonised standards are published, all GPAI model providers may demonstrate compliance with their obligations by voluntarily adhering to the codes of practice; compliance with the harmonised standards will likewise carry a presumption of conformity.
- To develop the codes of practice, the AI Office may consult GPAI model providers, relevant national competent authorities, civil society, industry, academia, downstream providers, and independent experts.
The codes of practice will cover:
- Model evaluations, including conducting and documenting adversarial testing, to identify systemic risks.
- The identification of the type and nature of systemic risks at EU level, including their sources.
- The measures, procedures, and modalities for assessing, managing, and documenting systemic risks, proportionate to their severity and probability, and accounting for how they materialise throughout the value chain.
- Tracking, documenting, and reporting serious incidents and possible corrective measures.
- Ensuring the model has adequate cybersecurity and physical protections.
- The documentation developers should supply to the AI Office and national competent authorities to enable them to assess compliance, such as the tasks the model is intended to perform, key design choices, data and curation methodologies, acceptable use policies, etc. For systemic models (over 10^25 FLOPs of training compute; see the sketch after this list for a rough sense of this threshold), this information also includes evaluation results, a description of internal and/or external adversarial testing, and model adaptations, including alignment and fine-tuning.
- The documentation developers should supply to their customers or downstream deployers to ensure transparency about model capabilities and limitations, such as how the model interacts with other software, instructions for use, and the technical means by which it can be integrated into applications.
- The codes will also cover the means to ensure the above documentation is kept up to date in light of market and technological developments.
- Policies that developers can establish to respect the EU Copyright Directive, which allows rightsholders to opt out of text and data mining (TDM) of their works.
- The training data template developers should use to publish a sufficiently detailed summary of the content used for model training. The summary should be generally comprehensive in scope rather than technically detailed, to help parties with legitimate interests, particularly rightsholders, exercise their rights.
- Watermarking techniques for labelling AI-generated content.
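For a rough sense of the 10^25 FLOP threshold referenced above, here is a minimal sketch in Python that estimates a model’s cumulative training compute using the common 6 x parameters x training-tokens approximation and compares it against the presumption threshold. The 6ND heuristic and the example figures are illustrative assumptions, not part of the AI Act, which simply counts the cumulative compute used for training.

```python
# Rough estimate of cumulative training compute using the common
# approximation FLOPs ~ 6 * parameters * training_tokens (the "6ND" rule
# of thumb). Both the heuristic and the example figures below are
# illustrative assumptions; the AI Act simply presumes systemic risk for
# GPAI models trained with more than 10^25 floating-point operations.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate cumulative training compute via the 6ND heuristic."""
    return 6 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~6.3e+24
print("Presumed systemic risk:", presumed_systemic_risk(70e9, 15e12))  # False
```

Note that, as described under the scientific panel below, a model can also be designated as posing systemic risk without crossing this compute threshold.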
Working with EU and international partners
- The AI Office represents the European Commission on all AI matters, liaising with Member State authorities, other relevant DGs and Commission services, particularly the European Centre for Algorithmic Transparency for GPAI model evaluation.
- It will lead European efforts to contribute to international cooperation on AI governance and safety.
Working with the scientific and open source communities
- The Commission will select a scientific panel of independent experts that will advise and support the AI Office on:
- Implementation and enforcement of the rules on GPAI models and systems.
- Development of tools, methodologies, and benchmarks for evaluating GPAI capabilities.
- Classification of different GPAI models and systems, including systemic GPAI models.
- Development of tools and templates.
- The panel can alert the AI Office if a GPAI model meets the threshold for systemic risk, or poses a systemic risk even without reaching the threshold. The AI Office can then decide to designate the model as such, imposing additional obligations on the provider.
- The AI Office will also establish a forum to collaborate with the open-source community on developing best practices for the safe use and development of open-source AI models and systems.