Every EU Member State must establish at least one AI regulatory sandbox by 2 August 2026. Sandboxes let you develop, train, test, and validate AI systems in a controlled environment backed by regulatory oversight — with no administrative fines for good-faith participation. SMEs get priority access for free. A successful sandbox exit can accelerate your conformity assessment.
An opportunity, not a penalty: why a sandbox is worth considering
Need to check if your AI system is high-risk before applying for a sandbox?
Regumatrix classifies your system against Annex III and Art 5, confirms whether sandbox participation is a viable path, and identifies all obligations you need to work through during testing.
Check your AI system — 3 free analyses
Art 57(5) — definition
An AI regulatory sandbox provides a controlled environment for the development, training, testing, and validation of innovative AI systems for a limited time before they are placed on the market or put into service. It is established and managed by one or more competent authorities and may include real-world testing components.
Who manages the sandbox
One or more national competent authorities (the market surveillance authority, the data protection authority, or a joint body). Each Member State must designate the responsible body.
Who can participate
Any provider or prospective provider of an AI system meeting the eligibility criteria. SMEs and start-ups receive priority access (Art 62). Natural persons wishing to use an AI system they developed can also participate.
What you can do
Develop, train, validate, and test your AI system; process personal data for AI development purposes (Art 59); conduct real-world testing (Art 60). The scope is set out in the sandbox plan agreed with the competent authority.
How long
Article 57 does not fix a maximum sandbox duration; the duration is agreed in the sandbox plan and the competent authority can extend it. Real-world testing outside a sandbox under Art 60 is capped at 6 months, extendable once by a further 6 months.
Legal certainty
Gain clarity on how the AI Act applies to your specific system before you invest in full compliance infrastructure.
Best practices
Work alongside the competent authority to define what good practice looks like for your use case and category.
Innovation support
Develop novel AI applications with regulatory engagement rather than compliance risk acting as a blocker.
Regulatory learning
Regulators learn from real cases, feeding into future guidelines and common specifications.
Market access for SMEs
The sandbox is specifically designed to reduce the time-to-market barrier for smaller innovators — particularly through the free and priority-access rules.
Article 58 sets out the detailed arrangements for sandbox operation. Member States are required to publish a dedicated application process.
Submit your sandbox application
Apply to the national competent authority responsible for your sector. Include a description of the AI system, its intended purpose, risk classification, the specific questions you want to resolve in the sandbox, and your proposed testing plan.
Wait for decision — 3-month window
Art 58(2)(a) requires the competent authority to assess your application and communicate a reasoned decision within 3 months. If you do not receive a decision within 3 months, you may request a review.
Agree the sandbox plan
A detailed plan is agreed between you and the competent authority: scope of testing, data to be used, safeguards, milestones, duration, and the specific obligations you are working to satisfy during the period.
Carry out development and testing
Operate within the sandbox under the plan. You have access to the competent authority's guidance. Art 57(12) protects you from administrative fines for conduct that follows the plan and competent authority guidance in good faith.
Receive exit report
On completion, the competent authority issues an exit report (Art 57(7)) documenting results and experience. This report is taken positively into account by market surveillance authorities and notified bodies — directly supporting your conformity assessment.
Article 59 creates a special processing basis: personal data lawfully collected for other purposes can be processed in the sandbox environment for AI development, training, and testing — provided the conditions below are met. This unlocks datasets that would otherwise be unavailable under the GDPR.
Key conditions (summary of the ten Art 59 conditions):
The processing serves a significant public interest in AI development
The AI system is developed to safeguard a substantial public interest in one of the areas listed in Art 59(1)(a) (public safety and public health, environmental protection, energy sustainability, transport and critical infrastructure, or efficiency of public administration and public services)
All standard GDPR principles apply (minimisation, purpose limitation, etc.)
Effective measures to pseudonymise personal data within the sandbox
Data subjects' rights are preserved
No personal data processed leaves the sandbox environment
Data is deleted on exit from the sandbox
Full audit trail of data processing maintained
No commercial use of the personal data beyond the scope of the plan
Competent DPA is consulted / involved in sandbox governance
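Teams preparing a sandbox application sometimes track these conditions as a machine-readable checklist so gaps are visible before submission. A minimal sketch, assuming a flat boolean status per condition (the keys below are paraphrased summaries for illustration, not legal text):

```python
# Illustrative checklist of the Art 59 personal-data conditions.
# Keys are paraphrased summaries, not the wording of the Regulation.
ART_59_CONDITIONS = [
    "significant_public_interest",
    "protected_interest_area",        # Art 59(1)(a) areas
    "gdpr_principles_respected",
    "pseudonymisation_in_place",
    "data_subject_rights_preserved",
    "data_isolated_in_sandbox",
    "deletion_on_exit",
    "audit_trail_maintained",
    "no_commercial_use_beyond_plan",
    "dpa_consulted",
]

def unmet_conditions(status: dict) -> list:
    """Return the Art 59 conditions not yet marked as satisfied."""
    return [c for c in ART_59_CONDITIONS if not status.get(c, False)]
```

A condition absent from the status dict is treated as unmet, which errs on the side of flagging work still to do.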
Article 60 provides a separate mechanism: testing in real-world conditions before market placement, outside a full sandbox. This can be combined with or follow sandbox participation.
Who can use Art 60
Providers of high-risk AI systems listed in Annex III. COM(2025) 836 would extend this to Annex I Section A systems (medical devices, machinery); see the COM(2025) 836 section below.
How to start: real-world testing plan
Submit a real-world testing plan to the market surveillance authority of the Member State where testing will take place. The authority must raise any objection within 30 days. Silence = tacit approval.
Duration
Maximum 6 months, extendable by a further 6 months on application to the competent authority. Total maximum: 12 months.
Informed consent
Article 61 requires that subjects (the natural persons taking part in real-world testing) give freely given, informed, and documented consent before participating. They can withdraw at any time without any negative consequences.
Art 62 — Priority access and reduced fees for SMEs
SMEs and start-ups established in the EU get priority access to sandboxes free of charge, and conformity assessment fees are reduced in proportion to their size (Art 62).
Art 63 — Simplified QMS for microenterprises
Microenterprises (fewer than 10 employees and an annual turnover or balance sheet total not exceeding €2M) may implement their Article 17 Quality Management System in a simplified manner that takes their size into account, provided all QMS objectives are still met. This reduces the documentation burden without compromising compliance outcomes.
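Whether the Art 63 simplification is available turns on the size thresholds above. A minimal eligibility sketch (illustrative only; the full test in Commission Recommendation 2003/361/EC also considers balance-sheet totals and partner/linked enterprises):

```python
def is_microenterprise(head_count: int, annual_turnover_eur: float) -> bool:
    """Rough microenterprise test using the thresholds cited above:
    fewer than 10 employees and annual turnover under EUR 2 million.
    Illustrative; not a substitute for the 2003/361/EC assessment."""
    return head_count < 10 and annual_turnover_eur < 2_000_000
```

Borderline cases (e.g. enterprises linked to a larger group) need the full Recommendation test, not this shortcut.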
The Digital Omnibus proposal (COM(2025) 836) proposes significant enhancements to the sandbox framework.
① New EU-level sandbox — Art 57 new §3a
The AI Office would be empowered to establish and operate an EU-level AI regulatory sandbox for AI systems subject to Art 75(1) supervision (primarily systems built on general-purpose AI models with systemic risk). This EU sandbox would give SMEs priority access alongside other eligible participants, offering innovators a European alternative to national sandboxes.
② Integrated real-world testing plan — Art 57(5) updated
The proposal updates Art 57(5) to clarify that the sandbox plan can be a single integrated document that includes real-world testing (Art 60). This reduces duplication if you are doing both controlled sandbox testing and real-world testing as part of the same programme.
③ Governance harmonisation — Art 58(1) replaced
Art 58(1) would be replaced with a new provision authorising the Commission to adopt implementing acts establishing governance rules, common formats, and harmonised procedures — reducing fragmentation between national sandbox regimes.
④ Real-world testing extended to Annex I Section A — Art 60 amended
836 extends the Art 60 real-world testing route to AI systems covered by Annex I Section A (medical devices, in vitro diagnostic devices, machinery, etc.) — not just Annex III systems. This expands sandbox-adjacent testing to product safety regulated AI.
⑤ SMCs added alongside SMEs — Art 57(9)(e) and Art 62
836 adds "small mid-cap companies" (SMCs — under 500 employees) alongside SMEs in several priority access and market access facilitation provisions, broadening the group that can benefit from sandbox measures.
No changes are proposed under COM(2025) 837 (Product Liability Omnibus) specifically for AI regulatory sandbox provisions.
Before applying to an AI regulatory sandbox, know exactly which Annex III category applies, all obligations you need to work through during testing, and whether your system qualifies for sandbox participation or the separate Art 60 real-world testing route.
Start free — no credit card
3 free analyses included