AI regulatory sandboxes are an important part of the implementation of the EU AI Act. According to Article 57 of the AI Act, each Member State must establish at least one AI regulatory sandbox at the national level by 2 August 2026. This post provides an overview of how different EU Member States are approaching the design and implementation of these sandboxes, as well as the EU-wide initiatives that support them.
This resource is a work in progress and will be updated as new information becomes available. Please help us ensure the completeness and accuracy of this content by contributing any information you have about the authorities in your area: santeri@futureoflife.org.
AI regulatory sandboxes create controlled environments where AI systems can be developed and tested with regulatory guidance before market release. They improve legal certainty, support compliance, allow for the processing of personal data under certain conditions, and facilitate market access for SMEs and startups. Importantly, the documentation from participating in a sandbox can be used to demonstrate compliance with the AI Act. Further, providers will not face administrative fines for infringements of the Act, as long as they follow the guidance of the national competent authority in good faith. Note that providers remain liable for damages to third parties caused by experimentation with AI systems in a sandbox.
Lessons from other fields highlight some potential positive impacts of regulatory sandboxes. For instance, companies that completed successful testing within the UK FCA sandbox received 6.6 times more fintech investment than their peers. The same sandbox also reduced the average time required for market authorisation by 40% compared to the regulator’s standard approval process.
The implementation status of the sandboxes varies significantly across Member States. Some, such as Denmark, have operational sandboxes and concrete plans, while others remain in early planning stages. Institutional approaches also differ: in some Member States, data protection authorities are leading the effort; elsewhere, new centralised AI agencies are being established. Some Member States are opting for decentralised models that coordinate existing regulators.
Quick summary of AI regulatory sandboxes under Articles 57-59:
- AI regulatory sandboxes are frameworks for testing AI systems in controlled environments that foster innovation and facilitate development, training, testing, and validation before market entry.
- Sandboxes aim to improve legal certainty, support sharing of best practices, foster innovation, contribute to evidence-based regulatory learning, and facilitate market access, particularly for SMEs and startups. Providers may use documentation from participating in a sandbox to demonstrate their compliance with the EU AI Act.
- Each Member State must establish at least one AI regulatory sandbox by 2 August 2026. The sandbox may also be established jointly with other Member States.
- National competent authorities provide guidance, supervision, and support to identify risks and ensure compliance with the AI Act and other relevant legislation.
- Providers participating in sandboxes remain liable under applicable liability laws but are protected from administrative fines if they follow sandbox guidelines in good faith.
- Providers may process personal data in sandboxes for projects serving the public interest if the data is necessary, kept secure, not shared externally, and deleted after use. They must manage risks, document activities, and publish a summary unless sensitive law enforcement data is involved.
- National competent authorities must coordinate their activities through the AI Board and submit annual reports on sandbox implementation.
- SMEs and startups can access AI sandboxes free of charge, though national authorities may recover exceptional costs in a fair and proportionate manner.
EU-wide support initiatives
At the EU level, several initiatives are underway to support the implementation of AI regulatory sandboxes across Member States. Their role is specified in the EU AI Act: Article 58(3) states that prospective providers in the AI regulatory sandboxes, in particular SMEs and start-ups, shall be directed, where relevant, to value-adding services such as Testing and Experimentation Facilities and European Digital Innovation Hubs. This connection is important because regulatory sandboxes are not intended solely to support compliance with the AI Act, but also to foster the development and testing of AI systems. This includes providing innovators with access to training, technical expertise, and infrastructure. Since the EU has already committed substantial funding to these purposes, it is important to connect regulatory sandboxes with these existing instruments.
The EU Regulatory Sandboxes for AI (EUSAiR)
One of the key initiatives is the EU Regulatory Sandboxes for AI (EUSAiR), a two-year project funded by the European Union’s Digital Europe programme working in cooperation with the AI Office. EUSAiR aims to support the implementation of AI regulatory sandboxes by developing common frameworks, enhancing technical and legal capacities, and promoting collaboration among Member States. The project also seeks to provide broad access to sandboxes for AI innovators, especially SMEs and startups, by lowering compliance costs and easing barriers to market entry.
Testing and Experimentation Facilities (TEFs)
The EU has established specialised Testing and Experimentation Facilities (TEFs) that offer large-scale reference sites where technology providers across Europe can test state-of-the-art AI solutions in real-world environments. These projects will receive over €220 million in combined funding from the European Commission and Member States for a five-year period. These facilities support supervised testing and experimentation in cooperation with national authorities and can contribute to the implementation of regulatory sandboxes. Four sector-specific TEFs have been established:
- Agri-Food: project ‘agrifoodTEF’
- Healthcare: project ‘TEF-Health’
- Manufacturing: project ‘AI-MATTERS’
- Smart Cities & Communities: project ‘Citcom.AI’
European Digital Innovation Hubs (EDIHs)
European Digital Innovation Hubs (EDIHs) are regional one-stop shops that help companies and public sector organisations respond to digital challenges and improve their competitiveness. They offer access to technical expertise and testing (including the possibility to ‘test before invest’), innovation services (such as financial advice), and skills development. There are over 150 hubs operating across the EU.
The 2025 AI Continent Action Plan highlights the role of the EDIH network in facilitating companies’ access to regulatory sandboxes. It also mentions that EDIHs will expand their offering of practical AI training courses tailored to various technical and non-technical backgrounds.
National implementation approaches
The AI Act gives Member States considerable flexibility in designing their regulatory sandboxes. Some Member States are creating centralised approaches with dedicated AI agencies, while others are adopting decentralised models that leverage existing regulatory bodies. Even where a Member State has yet to announce sandbox plans, responsibility rests with its national competent authority under the AI Act. An overview of all national implementation plans, including competent authorities, can be found here.
This resource tracks each Member State’s approach to implementing AI regulatory sandboxes, examining key aspects including:
- Overview: The current status and general approach to AI regulatory sandboxes.
- Key actors: Organisations responsible for sandbox design, implementation, and oversight.
- Legal framework: Laws, regulations, and policies establishing and governing the sandbox.
- Operational structure: How the sandbox functions, including the admission process, testing process, and the assistance provided.
The resource builds upon Nathan Genicot’s report ‘From Blueprint to Reality: Implementing AI Regulatory Sandboxes under the AI Act’ (2024).