Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Arts 57–63 — Innovation Support · Operational by 2 August 2026 · Free for SMEs — priority access

EU AI Act Regulatory Sandboxes (Arts 57–63)

Every EU Member State must establish at least one AI regulatory sandbox by 2 August 2026. Sandboxes let you develop, train, test, and validate AI systems in a controlled environment backed by regulatory oversight — with no administrative fines for good-faith participation. SMEs get priority access for free. A successful sandbox exit can accelerate your conformity assessment.

An opportunity, not a penalty: why a sandbox is worth considering

  • No administrative fines if you act in good faith per your sandbox plan (Art 57(12))
  • Exit report taken positively into account by market surveillance authorities and notified bodies (Art 57(7))
  • Competent authority provides guidance and regulatory certainty during development (Art 57(9))
  • SME priority access, free of charge (Art 58(2)(d), Art 62(1))
  • Special basis to process personal data that would otherwise be unavailable (Art 59)


What is an AI regulatory sandbox? (Art 57)

Art 57(5) — definition

An AI regulatory sandbox provides a controlled environment for the development, training, testing, and validation of innovative AI systems for a limited time before placement on the market or putting into service. It is established and managed by one or more competent authorities and may include real-world testing components.

Who manages the sandbox

One or more national competent authorities (the market surveillance authority, the data protection authority, or a joint body). Each Member State must designate the responsible body.

Who can participate

Any provider or prospective provider of an AI system meeting the eligibility criteria. SMEs and start-ups receive priority access (Art 62). Natural persons wishing to use an AI system they developed can also participate.

What you can do

Develop, train, validate, and test your AI system; process personal data for AI development purposes (Art 59); conduct real-world testing (Art 60). The scope is set out in the sandbox plan agreed with the competent authority.

How long

Article 57 does not fix a maximum sandbox duration; the duration is agreed in the sandbox plan, and the competent authority can extend it. Real-world testing outside a sandbox under Art 60 is separately capped at 6 months, extendable once by a further 6 months.

Five objectives of AI regulatory sandboxes — Art 57(9)

Legal certainty

Gain clarity on how the AI Act applies to your specific system before you invest in full compliance infrastructure.

Best practices

Work alongside the competent authority to define what good practice looks like for your use case and category.

Innovation support

Develop novel AI applications with regulatory engagement rather than compliance risk acting as a blocker.

Regulatory learning

Regulators learn from real cases, feeding into future guidelines and common specifications.

Market access for SMEs

The sandbox is specifically designed to reduce the time-to-market barrier for smaller innovators — particularly through the free and priority-access rules.

How to apply and what to expect — Art 58

Article 58 sets out the detailed arrangements for sandbox operation. Member States are required to publish a dedicated application process.

01

Submit your sandbox application

Apply to the national competent authority responsible for your sector. Include a description of the AI system, its intended purpose, risk classification, the specific questions you want to resolve in the sandbox, and your proposed testing plan.

02

Wait for decision — 3-month window

Art 58(2)(a) requires the competent authority to assess your application and communicate a reasoned decision within 3 months. If you do not receive a decision within 3 months, you may request a review.

03

Agree the sandbox plan

A detailed plan is agreed between you and the competent authority: scope of testing, data to be used, safeguards, milestones, duration, and the specific obligations you are working to satisfy during the period.

04

Carry out development and testing

Operate within the sandbox under the plan. You have access to the competent authority's guidance. Art 57(12) protects you from administrative fines for conduct that follows the plan and competent authority guidance in good faith.

05

Receive exit report

On completion, the competent authority issues an exit report (Art 57(7)) documenting results and experience. This report is taken positively into account by market surveillance authorities and notified bodies — directly supporting your conformity assessment.

Processing personal data in sandboxes — Art 59

Article 59 creates a special processing basis: personal data lawfully collected for other purposes can be processed in the sandbox environment for AI development, training, and testing — provided the conditions below are met. This unlocks datasets that would otherwise be unavailable under the GDPR.

Key conditions (Art 59 — 10 conditions summary):

  • The processing serves a significant public interest in AI development
  • The AI system is developed to protect one of the interests in Art 8(1) of the GDPR (public interest, etc.) or another specific significant interest
  • All standard GDPR principles apply (minimisation, purpose limitation, etc.)
  • Effective measures to pseudonymise personal data within the sandbox
  • Data subjects' rights are preserved
  • No personal data processed leaves the sandbox environment
  • Data is deleted on exit from the sandbox
  • A full audit trail of data processing is maintained
  • No commercial use of the personal data beyond the scope of the plan
  • The competent DPA is consulted and involved in sandbox governance

Real-world testing outside the sandbox — Art 60

Article 60 provides a separate mechanism: testing in real-world conditions before market placement, outside a full sandbox. This can be combined with or follow sandbox participation.

Who can use Art 60

Providers of Annex III high-risk AI systems. COM(2025) 836 proposes extending this to Annex I Section A systems (medical devices, machinery); see the COM(2025) 836 section below.

How to start: real-world testing plan

Submit a real-world testing plan to the market surveillance authority of the Member State where testing will take place. The authority must raise any objection within 30 days. Silence = tacit approval.

Duration

Maximum 6 months, extendable by a further 6 months on application to the competent authority. Total maximum: 12 months.

Informed consent

Article 61 requires that participants (the users subject to real-world testing) give free, informed, and documented consent. They can withdraw at any time without any negative consequences.

SME and microenterprise measures — Arts 62 & 63

Art 62 — Priority access and reduced fees for SMEs

  • Art 62(1)(a): SMEs and start-ups must be given priority access to national regulatory sandboxes
  • Art 58(2)(d): Participation is free for SMEs (including start-ups) except in justified exceptional circumstances
  • Art 62(2): Conformity assessment fees for notified bodies must be reduced for SMEs proportionate to their size
  • Member States must establish dedicated information channels and helpdesks for SMEs

Art 63 — Simplified QMS for microenterprises

Microenterprises (under 10 employees, turnover under €2M) may implement their Article 17 Quality Management System in a simplified manner taking their size into account, provided all QMS objectives are still met. This reduces the documentation burden without compromising compliance outcomes.

Proposal: COM(2025) 836 — not yet enacted law

Proposed changes to regulatory sandboxes

The Digital Omnibus proposal (COM(2025) 836) proposes significant enhancements to the sandbox framework.

① New EU-level sandbox — Art 57 new §3a

The AI Office would be empowered to establish and operate an EU-level AI regulatory sandbox for AI systems subject to Art 75(1) supervision (primarily General Purpose AI models of systemic risk). This EU sandbox would have priority access for SMEs alongside other eligible participants — offering innovators a European alternative to national sandboxes.

② Integrated real-world testing plan — Art 57(5) updated

The proposal updates Art 57(5) to clarify that the sandbox plan can be a single integrated document that includes real-world testing (Art 60). This reduces duplication if you are doing both controlled sandbox testing and real-world testing as part of the same programme.

③ Governance harmonisation — Art 58(1) replaced

Art 58(1) would be replaced with a new provision authorising the Commission to adopt implementing acts establishing governance rules, common formats, and harmonised procedures — reducing fragmentation between national sandbox regimes.

④ Real-world testing extended to Annex I Section A — Art 60 amended

COM(2025) 836 would extend the Art 60 real-world testing route to AI systems covered by Annex I Section A (medical devices, in vitro diagnostic devices, machinery, etc.), not just Annex III systems. This expands sandbox-adjacent testing to product-safety-regulated AI.

⑤ SMCs added alongside SMEs — Art 57(9)(e) and Art 62

COM(2025) 836 adds "small mid-cap companies" (SMCs — under 500 employees) alongside SMEs in several priority-access and market-access facilitation provisions, broadening the group that can benefit from sandbox measures.

No changes are proposed under COM(2025) 837 (Product Liability Omnibus) specifically for AI regulatory sandbox provisions.

Frequently asked questions

What is an EU AI Act regulatory sandbox?
Article 57(5) of the EU AI Act defines an AI regulatory sandbox as a controlled environment established and managed by a competent authority. It allows AI providers and prospective providers to develop, train, test, and validate AI systems before placing them on the market or putting them into service. The sandbox operates under a plan agreed with the competent authority and may include real-world testing. The key benefit: Article 57(12) states that no administrative fines apply to participants if they follow the sandbox plan and guidance in good faith.
When will EU AI Act sandboxes be available?
Member States must establish at least one AI regulatory sandbox and have it operational by 2 August 2026 under Article 57(1). Some Member States (Germany via the 'DAKI sandbox', Finland, the Netherlands) already operate precursor sandboxes. COM(2025) 836 proposes that the AI Office can also establish an EU-level sandbox for AI systems subject to Article 75(1) supervision — this would give SMEs access to a European-level facility.
Is the AI regulatory sandbox free for SMEs?
Yes. Article 58(2)(d) states that participation in the regulatory sandbox shall be free of charge for SMEs, including start-ups. Competent authorities may charge fees in exceptional and duly justified circumstances, provided those fees are fair and proportionate. Article 62(1)(a) further requires that SMEs be given priority access to national AI regulatory sandboxes, without excluding other participants. COM(2025) 836 adds 'small mid-cap companies' (SMCs) alongside SMEs in several provisions.
Can I process personal data in a regulatory sandbox?
Yes, under strict conditions. Article 59 allows personal data lawfully collected for other purposes to be processed in the sandbox for the purpose of developing, training, or testing high-risk AI systems — but only where this serves a significant public interest, and subject to 10 specific safeguards including pseudonymisation, data minimisation, prohibition on onward transfer outside the sandbox, deletion of personal data after the sandbox period, and approval by data protection authorities. This creates a special processing basis for sandbox participants that does not require a separate legitimate interest assessment under the GDPR.
What is the sandbox exit report and why does it matter?
Article 57(7) requires the competent authority to issue an exit report when a participant leaves the sandbox. This report documenting the results and experience of the sandbox period is taken positively into account by both market surveillance authorities and notified bodies when assessing the AI system's conformity. Practically, a successful sandbox exit report can streamline and accelerate conformity assessment — especially valuable for novel AI systems where harmonised standards don't yet exist.
What is real-world testing under Article 60?
Article 60 allows providers of Annex III high-risk AI systems to test in real-world conditions outside a laboratory or sandbox setting — in other words, with actual end-users. The provider submits a real-world testing plan to the market surveillance authority. If no objection is raised within 30 days (tacit approval), testing can proceed for up to 6 months, extendable by another 6 months. COM(2025) 836 extends this route to systems covered by Annex I Section A (such as medical devices and machinery) — not just Annex III systems.

Related compliance guides

  • Conformity Assessment (Art 43)
  • Is My AI High-Risk? (Full Checklist)
  • COM(2025) 836 — Digital Omnibus Summary
  • Market Surveillance & Enforcement
  • EU AI Act Key Dates & Deadlines
  • Risk Management System (Art 9)
