Why Join the EU AI Scientific Panel?

16 Jun, 2025

The European Commission has published a call for applications for a scientific panel of independent experts. The panel focuses on general-purpose AI (GPAI) models and systems. Its tasks include advising the EU AI Office and national authorities on systemic risks, model classification, evaluation methodologies, and cross-border market surveillance. Further, the panel is empowered to alert the AI Office to emerging risks.

Applications are open until 14 September.

Summary

  • The scientific panel plays an important role in the enforcement of the EU AI Act provisions on general-purpose AI. It does so, among other things, by providing advice, offering up-to-date insight into technical developments, and adopting qualified alerts for emerging systemic risks.
  • The panel will consist of up to 60 independent experts who are geographically representative of the Member States and gender balanced. Up to 20% of the experts may be from third countries.
  • The selection criteria include multidisciplinary expertise, independence, impartiality and objectivity, and professional capability.
  • The required expertise areas comprise model evaluation, risk assessment, technical mitigations, misuse and systemic risks, cyber offence risks, cybersecurity, emergent risks and compute measurement.
  • There are three eligibility requirements: PhD in a relevant field OR equivalent experience; proven professional experience and scientific impact in AI/GPAI research or AI impact studies; and demonstrated independence from AI system or GPAI model providers.
  • Experts will be remunerated for carrying out certain tasks and travel expenses may be covered.

Why work on the scientific panel of independent experts?

There are many reasons to contribute as an independent expert on the scientific panel. This panel will directly contribute to the enforcement of the EU AI Act, the most comprehensive AI governance framework globally, with sanctioning powers to ensure the safety and security of powerful general-purpose AI (GPAI) technologies across the largest single market in the world.

  • Provide evidence-based advice to the AI Office and national authorities to encourage and enable concrete public AI safety interventions to prevent and mitigate real-world harms from GPAI models and systems.
  • Play a central role in identifying and assessing high-impact GPAI models that warrant heightened oversight due to their scale, technical capabilities, or far-reaching societal implications.
  • Steer the EU’s proactive AI safety approach, addressing emerging systemic threats posed by GPAI models even before formal regulatory thresholds are triggered.
  • Shape the EU’s governance and technical understanding of rapidly evolving GPAI capabilities and related systemic risks, informing policy learnings and future regulatory approaches in Europe and beyond. 
  • Contribute to the development of benchmarks, tools, and methodologies that set EU standards for AI safety, robustness, and regulatory compliance.
  • Gain professional recognition and visibility as a trusted advisor while influencing the development, deployment, and impact of advanced GPAI in Europe.
  • Collaborate with and work alongside top experts from across the EU and beyond in a high-level, multidisciplinary setting.
  • Engage in mission-driven, public-interest work that promotes the responsible and safe development and deployment of AI technologies throughout Europe.

Scientific panel tasks, powers and competences in more detail

Overview 

The scientific panel, established by the EU AI Act (Article 68(1)), provides technical advice to the AI Office and national authorities on enforcing AI Act requirements for general-purpose AI models and systems. Key tasks include:

  • Developing assessment tools and methodologies,
  • Advising on model classification including systemic risk determination,
  • Creating tools and templates, and
  • Supporting market surveillance activities.

The panel can request information from AI model providers through the Commission while protecting trade secrets, and Commission experts may conduct model evaluations for the AI Office.

Qualified Alerts 

The panel can issue qualified alerts (Article 90) when models create EU-level risks, affect multiple Member States, or meet the systemic risk criteria detailed in Article 51: more than 10^25 floating-point operations (FLOP) of cumulative training compute, demonstrated high-impact capabilities, or equivalent capabilities inferred from indicators such as parameter count, dataset size, or a market reach of 10,000+ EU business users. Such alerts may require providers to conduct safety assessments and implement protective measures.

Alerts require simple majority approval and must include provider contact information, justification, and relevant facts. The AI Office has two weeks to process alerts and may designate models as systemically risky, subjecting providers to additional requirements (Article 55), including risk assessments, incident reporting, and cybersecurity obligations.
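As a rough illustration of the compute-based presumption above, training compute is often approximated with the heuristic 6 × parameters × training tokens. The sketch below is hypothetical: the function names and the 6ND approximation are assumptions for illustration, not anything prescribed by the AI Act; only the 10^25 FLOP threshold comes from Article 51.

```python
# Hypothetical sketch of the Article 51 compute presumption.
# The 6 * N * D estimate (forward + backward pass) is a common community
# heuristic, not a methodology mandated by the EU AI Act.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, Article 51

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as 6 * N * D FLOP."""
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the 10^25 FLOP threshold."""
    flop = estimate_training_flop(parameters, training_tokens)
    return flop >= SYSTEMIC_RISK_FLOP_THRESHOLD

# Example: a 70B-parameter model trained on 15T tokens lands at ~6.3e24 FLOP,
# just below the presumption threshold.
print(estimate_training_flop(70e9, 15e12))   # 6.3e24
print(presumed_systemic_risk(70e9, 15e12))   # False
```

In practice, compute measurement is one of the panel's listed expertise areas precisely because such estimates depend on reporting frameworks and verification protocols, not a single formula.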

Structure and Composition 

The panel will comprise up to 60 independent experts serving two-year renewable terms. Gender balance and geographical representation are ensured by having at least one expert per EU Member State and EFTA/EEA country (with a maximum of three per country). At least 80% of the experts will be from EU/EFTA/EEA countries.

The structure includes a Secretariat for administrative support, a Chair and Vice-Chair for the full term, and compensated rapporteurs and contributors for specific tasks. The panel operates transparently by publicly sharing expert information, opinions, recommendations, and stakeholder hearing records while protecting confidential business information.

Selection Criteria

According to the AI Act, experts must demonstrate:

  • Multidisciplinary expertise – Up-to-date scientific, technical, or socio-technical knowledge related to AI systems, general-purpose AI models, or AI impacts relevant to enforcing the AI Act;
  • Independence – No conflicts of interest with AI systems or general-purpose AI model providers;
  • Impartiality and objectivity – Ability to work without bias as required by the AI Act;
  • Professional capability – Ability to carry out work diligently, accurately, and objectively.

Required Expertise Areas

According to the call for applications, candidates must have substantiated expertise in at least one of the following areas:

  • Model evaluation – Benchmarking, red teaming, human impact studies, deployment evaluations, fundamental rights assessments, economic impact analysis, AI capability forecasting;
  • Risk assessment – Risk identification, analysis, modelling, evaluation, safety cases, risk taxonomies for AI and GPAI models;
  • Technical mitigations – Safety fine-tuning, filters, guardrails, watermarking, data processing, risk management policies, scaling policies, incident response;
  • Misuse and systemic risks – CBRN risks, population manipulation, loss of control, discrimination, public health/safety risks, fundamental rights risks;
  • Cyber offence risks – Vulnerability exploitation, zero-day generation, social engineering, autonomous cyber operations, infrastructure targeting;
  • Cybersecurity – Preventing model weight theft, unauthorised releases, safety measure circumvention;
  • Emergent risks – Misalignment, deceptive behaviours, loss of control, emergent capabilities like self-replication;
  • Compute measurement – Methodologies for measuring training compute, reporting frameworks, verification protocols.

Eligibility Requirements

  • PhD in relevant field OR equivalent experience;
  • Proven professional experience and scientific impact in AI/GPAI research or AI impact studies;
  • Demonstrated independence from AI system or GPAI model providers.

Remuneration

Experts will be remunerated if they are appointed rapporteur or contributor for carrying out tasks that have been requested by the AI Office. The Commission may reimburse experts’ travel expenses and, if needed, subsistence costs related to panel duties, within available budget limits.

