
This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Article 2 — Scope: broad extraterritorial reach with specific carve-outs

EU AI Act Scope & Exclusions: Who Does It Apply To?

Article 2 of the EU AI Act establishes broad extraterritorial reach: any provider placing an AI system on the EU market is in scope, regardless of where it is established. But Article 2 also contains eight specific exclusions — military use, scientific R&D, open-source licences, and personal non-professional activity among them. Before applying the obligation chain, determine precisely whether your organisation and system are within scope.

Why scope analysis matters before any other compliance step

  • Many organisations in third countries incorrectly assume they are out of scope — Article 2(1)(c) reaches any provider whose system's output is used in the EU
  • The open-source exclusion in Article 2(12) has three carve-outs that catch most commercial open-source deployments
  • Incorrectly applying the R&D exclusion to in-market testing activity removes the exclusion entirely (Art 2(8) expressly carves real-world testing out of the exclusion)
  • Transitional provisions in Article 111 give existing high-risk systems a grace period — but only if the system has not undergone significant design changes

Determine whether your AI system is in scope — 30 seconds

Regumatrix walks through Article 2 scope assessment, checks all exclusion conditions, verifies transitional provision applicability, and then classifies the risk tier and obligation chain.

Check my AI system — 3 free analyses

Who is subject to the AI Act — Article 2(1)

Article 2(1) defines seven categories of actors who are subject to the Regulation. All seven are in scope; the question is which specific obligations apply to each role.

Art 2(1)(a) — Providers

Providers placing AI systems on the EU market or putting them into service, OR providers of general-purpose AI models in the EU — regardless of whether they are established in the EU or in a third country.

Art 2(1)(b) — Deployers

Deployers with their place of establishment or location in the EU. Unlike the provider limb, this one is establishment-based — a US-based company deploying an AI system from US servers is not caught by limb (b), though it may still fall within limb (c) if the system's output is used in the EU.

Art 2(1)(c) — Third-country providers and deployers

Providers AND deployers established in third countries where the AI system's output is used in the EU. This is the extraterritoriality limb: if an AI system processes data and produces outputs consumed in the EU, the provider and deployer are in scope.

Art 2(1)(d) — Importers and distributors

Importers and distributors of AI systems — these are supply chain entities that bring non-EU AI systems into the EU market or make non-EU systems available in the EU.

Art 2(1)(e) — Product manufacturers

Product manufacturers who place an AI system on the market or put it into service under their own name or trademark as part of their product.

Art 2(1)(f) — Authorised representatives

Authorised representatives of non-EU providers — legal entities established in the EU appointed to act on behalf of non-EU providers, giving EU regulators a point of contact.

Art 2(1)(g) — Affected persons in EU

Affected persons located in the EU. This limb establishes that persons in the EU who are subject to the output or decision of an AI system are within the protective scope of the Regulation.

Special case: Annex I product safety AI (Article 2(2))

AI systems that are safety components of products regulated under the Union harmonisation legislation listed in Section B of Annex I (civil aviation, motor vehicles, agricultural vehicles, marine equipment, rail systems, etc.) face a reduced set of AI Act obligations. Only Article 6(1), Arts 102–109 (the amendments to those sectoral regulations), and Article 112 apply. The other AI Act obligations — including the broader Chapter III risk management requirements — do not apply directly; the Act's requirements are instead to be taken into account under those sectoral product safety regimes.

The eight exclusions — Article 2(3)–(12)

Each exclusion has specific conditions. None are blanket exemptions — misapplying them when the conditions are not met means operating outside the law.

Art 2(3) — Military / defence / national security

Absolute

AI systems placed on the market, put into service, or used exclusively for military, defence or national security purposes are entirely excluded — regardless of whether this is carried out by a public or private entity. There is no condition of proportionality or necessity here; the exclusion is unconditional.

Art 2(4) — Third-country public authorities and international organisations

With conditions

Third-country public authorities or international organisations acting in the framework of international agreements with the EU on law enforcement and judicial cooperation are excluded — provided adequate safeguards for the protection of fundamental rights and freedoms of individuals are provided for.

Art 2(5) — DSA intermediary services

Clarification

The AI Act does not affect the liability rules for providers of intermediary services set out in Chapter II of the Digital Services Act — it neither adds to nor derogates from them. This is a clarification of the interface with the DSA, not an exclusion from the AI Act entirely.

Art 2(6) — Scientific R&D

Purpose-limited

AI systems specifically developed and put into service solely for scientific research and development are excluded. If the same system is later placed on the market or used for non-R&D purposes, the exclusion no longer applies.

Art 2(7) — GDPR and confidentiality law unaffected

Clarification

The AI Act does not override GDPR, the e-Privacy Directive, or other Union data protection rules, nor does it affect obligations of professional secrecy (e.g., legal privilege). Both the AI Act and data protection law apply concurrently.

Art 2(8) — Pre-market R&D testing

Pre-market only

Research, testing or development of AI systems or models prior to being placed on the market or put into service is excluded. CRITICAL: testing in real-world conditions is expressly NOT excluded — it is a separate activity governed by Arts 57–63 (regulatory sandboxes and real-world testing rules).

Art 2(9) — Consumer protection and product safety law

Clarification

The AI Act does not affect consumer protection and product safety legislation. Both apply concurrently — an AI system that constitutes a defective product for product liability purposes remains so regardless of AI Act compliance.

Art 2(10) — Personal non-professional activity

Deployers only; personal only

Natural persons deploying AI systems in the course of a purely personal, non-professional activity are excluded as deployers. This is a deployer-only exclusion: providers placing those systems on the market are still fully in scope.

Art 2(11) — Worker protection

More favourable rules allowed

Member States may maintain or introduce more favourable worker protection rules than those in this Regulation. This is a floor, not a ceiling — national employment law can be stricter.

Art 2(12) — Open-source AI

Three carve-outs apply

AI systems released under free and open-source licences are excluded — UNLESS: (a) placed on the market or put into service as high-risk (Art 6); (b) placed on the market or put into service as falling under Art 5 (prohibited practices); or (c) placed on the market or put into service as falling under Art 50 (transparency obligations, including chatbot disclosure and deep fake marking). Open-source models used in a chatbot or high-risk product lose this exclusion.
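The exclusion conditions above can be sketched as a checklist. This is an illustrative sketch only — the flag names below are invented for the example, and a real scope assessment turns on legal interpretation of each condition, not booleans:

```python
from dataclasses import dataclass

@dataclass
class SystemContext:
    """Hypothetical flags describing an AI system (names are illustrative)."""
    exclusively_military: bool = False            # Art 2(3)
    sole_purpose_scientific_rd: bool = False      # Art 2(6)
    pre_market_only: bool = False                 # Art 2(8)
    real_world_testing: bool = False              # Art 2(8) carve-back
    personal_non_professional_deployer: bool = False  # Art 2(10)
    open_source_licence: bool = False             # Art 2(12)
    high_risk_art6: bool = False
    prohibited_art5: bool = False
    transparency_art50: bool = False

def excluded(ctx: SystemContext) -> bool:
    # Art 2(3): exclusive military/defence/national security use is unconditional.
    if ctx.exclusively_military:
        return True
    # Art 2(6): developed and put into service solely for scientific R&D.
    if ctx.sole_purpose_scientific_rd:
        return True
    # Art 2(8): pre-market R&D/testing — but real-world testing is carved back in.
    if ctx.pre_market_only and not ctx.real_world_testing:
        return True
    # Art 2(10): deployer-side only — purely personal, non-professional activity.
    if ctx.personal_non_professional_deployer:
        return True
    # Art 2(12): free and open-source, unless high-risk or Art 5 / Art 50 applies.
    if ctx.open_source_licence and not (
        ctx.high_risk_art6 or ctx.prohibited_art5 or ctx.transparency_art50
    ):
        return True
    return False
```

Note how the open-source branch encodes the three carve-outs: an open-source chatbot (`transparency_art50=True`) drops straight out of the exclusion.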

What counts as an "AI system" — Article 3(1)

Before applying scope or exclusion analysis, confirm that the technology in question actually constitutes an "AI system" within the meaning of Article 3(1).

Official Article 3(1) definition:

"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Key elements of the definition

  • Machine-based: implemented in software or hardware
  • Varying autonomy: need not be fully autonomous — even supervised AI qualifies
  • May exhibit adaptiveness: not required to actually adapt — the design capacity is sufficient
  • Infers outputs from inputs: the key distinguishing element — not explicit rule-following but inference from data
  • Outputs that can influence environments: predictions, content, recommendations, or decisions affecting the real or virtual world

What is NOT an AI system under Article 3(1)

Traditional rules-based software, lookup tables, pure automation workflows, simple calculators, and any system that outputs a pre-defined result via explicit logic (rather than inference) do not qualify. The definition targets systems that learn, reason about, or predict — not systems that execute deterministic code.
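Read cumulatively, the definitional elements amount to a conjunctive test. A minimal sketch — the parameter names are mine, not the Act's, and adaptiveness is deliberately omitted because the Act treats it as an optional capacity:

```python
def is_ai_system(machine_based: bool,
                 operates_with_autonomy: bool,
                 infers_outputs_from_inputs: bool,
                 outputs_influence_environment: bool) -> bool:
    """Rough conjunctive reading of the Article 3(1) elements.

    "May exhibit adaptiveness" is a capacity, not a requirement,
    so it is not tested here.
    """
    return (machine_based
            and operates_with_autonomy
            and infers_outputs_from_inputs
            and outputs_influence_environment)
```

On this reading, a deterministic lookup table fails the inference element and falls outside the definition, however consequential its outputs.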

Transitional provisions for existing systems — Article 111

Article 111 provides grace periods for AI systems that were already on the market before key dates. These provisions only apply if the system has not undergone significant design changes.

Art 111(1) — Large-scale IT systems (Annex X)

Deadline: 31 December 2030

AI systems that are components of large-scale IT systems listed in Annex X (including SIS II, Eurodac, EES, ETIAS, VIS, ECRIS-TCN) that were placed on the market or put into service before 2 August 2027 must comply by 31 December 2030.

Art 111(2) — High-risk AI (existing deployments)

Deadline: 2 August 2030

High-risk AI systems placed on the market or put into service before 2 August 2026 are not subject to the Act — unless they undergo a significant change in design after that date. Public authority deployers of such systems must comply by 2 August 2030, regardless of whether the design changed.

Art 111(3) — GPAI models (pre-market)

Deadline: 2 August 2027

General-purpose AI models placed on the market before 2 August 2025 must comply with the GPAI obligations in Chapter V by 2 August 2027.

No changes are proposed under COM(2025) 836 or COM(2025) 837 for this topic. Note: COM(2025) 836 Art 1(30) does add a transitional provision specifically for Article 50(2) watermarking — see the Deepfakes & Synthetic Content guide for that detail.
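The Article 111 grace periods reduce to a date lookup on the deadlines stated above. The helper below is a sketch only — the category names and return strings are invented for illustration, and whether a design change is "significant" is itself a legal judgement the code cannot make:

```python
from datetime import date

def transitional_status(category: str, placed: date,
                        design_changed: bool = False,
                        public_authority_deployer: bool = False) -> str:
    """Illustrative mapping of the Article 111 grace periods (not legal advice)."""
    # Art 111(1): Annex X large-scale IT systems placed before 2 Aug 2027.
    if category == "annex_x_it_system" and placed < date(2027, 8, 2):
        return "comply by 2030-12-31"
    # Art 111(2): high-risk systems placed before 2 Aug 2026.
    if category == "high_risk" and placed < date(2026, 8, 2):
        if design_changed:
            return "full compliance applies"   # grace period lost
        if public_authority_deployer:
            return "comply by 2030-08-02"
        return "outside the Act until a significant design change"
    # Art 111(3): GPAI models placed before 2 Aug 2025.
    if category == "gpai_model" and placed < date(2025, 8, 2):
        return "comply by 2027-08-02"
    return "no transitional relief"
```

Anything placed on the market after the relevant cut-off date gets no relief and must comply on the standard timeline.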

Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?

Yes. Article 2(1)(a) explicitly covers providers placing AI systems or general-purpose AI models on the EU market regardless of where they are established. Article 2(1)(c) covers providers and deployers established in third countries where the AI system's output is used in the EU. The Act has strong extraterritorial reach — if your system's output is used by EU-based persons, you are likely in scope as a provider or deployer.

What are the main exclusions from the EU AI Act?

Article 2(3)-(12) sets out eight exclusions: (1) military, defence and national security — completely excluded even if carried out by private entities; (2) scientific R&D purpose systems; (3) pre-market R&D and testing — but real-world testing is NOT excluded; (4) third-country public authorities with adequate fundamental rights safeguards; (5) purely personal non-professional activity deployers; (6) the Regulation does not override GDPR or confidentiality law; (7) consumer protection and product safety law is not affected; (8) free and open-source AI — excluded UNLESS the system is placed on the market as high-risk or falls under Article 5 or Article 50. Note in particular that the open-source exemption never removes Article 50 transparency obligations: an open-source system placed on the market or put into service in a covered context must still comply with them.

What is the legal definition of an 'AI system' under the EU AI Act?

Article 3(1) defines an AI system as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' This definition is deliberately broad. Traditional rules-based software, lookup tables, and pure automation without inference do not constitute AI systems. The key test is whether the system infers outputs from inputs rather than following explicit, pre-defined rules.

Do transitional provisions mean I don't need to comply yet if my AI system is already on the market?

It depends on the category of your system. High-risk AI systems placed on the market before 2 August 2026 benefit from a grace period — they only need to comply if the system undergoes a significant change in design after that date. However, public authority deployers of such systems must comply by 2 August 2030 regardless. GPAI models placed on the market before 2 August 2025 must comply by 2 August 2027. Large-scale IT systems listed in Annex X that were deployed before 2 August 2027 must comply by 31 December 2030. Article 50 watermarking obligations under COM(2025) 836 include a specific transitional: systems placed before 2 August 2026 must comply by 2 February 2027.

Does the open-source exemption in Article 2(12) cover my AI model?

Only partially. Article 2(12) exempts AI systems released under free and open-source licences, but with three critical exceptions: (1) if placed on the market or put into service as a high-risk AI system within Article 6; (2) if placed on the market or put into service as a system falling under Article 5 (prohibited practices); (3) if placed on the market or put into service as a system falling under Article 50 (transparency obligations). Practically: an open-source chatbot deployed publicly must still provide AI disclosure under Article 50(1). An open-source model integrated into a high-risk system must comply with all Chapter III obligations.

Related Compliance Guides

Minimal-Risk AI Systems

No mandatory Chapter III obligations — but Article 50 and Article 4 still apply.

High-Risk AI Classification Checklist

All 8 Annex III domains and the Article 6 tests to determine high-risk status.

Article 5 Prohibited AI Practices

The absolute bans — no grace period, no transitional provision, no exemption.

General-Purpose AI Models (Chapter V)

GPAI classification, systemic risk designation, and the August 2027 transitional.

Deepfakes & Synthetic Content (Article 50)

Article 50(2) machine-marking and Article 50(4) deep fake disclosure — open-source is not exempt.

AI Transparency Obligations (Article 50)

Full Article 50 guide covering chatbot disclosure, emotion recognition, and deep fake marking.

Confirm whether your AI system is in scope

Regumatrix performs Article 2 scope analysis, checks each of the eight exclusions against your system's context, and then determines risk tier and the complete obligation chain. Free for the first three analyses.

Start free analysis