Article 2 of the EU AI Act establishes broad extraterritorial reach: any provider placing an AI system on the EU market is in scope, regardless of where it is established. But Article 2 also contains eight specific exclusions — military use, scientific R&D, open-source licences, and personal non-professional activity among them. Before applying the obligation chain, determine precisely whether your organisation and system are within scope.
Determine whether your AI system is in scope — 30 seconds
Regumatrix walks through Article 2 scope assessment, checks all exclusion conditions, verifies transitional provision applicability, and then classifies the risk tier and obligation chain.
Check my AI system — 3 free analyses
Article 2(1) defines seven categories of actors who are subject to the Regulation. All seven are in scope; the question is which specific obligations apply to each role.
Art 2(1)(a) — Providers
Providers placing AI systems on the EU market or putting them into service, OR placing general-purpose AI models on the EU market — regardless of whether they are established in the EU or in a third country.
Art 2(1)(b) — Deployers
Deployers with their place of establishment or location in the EU. Unlike the provider limb, this one is establishment-based — a US-based company deploying an AI system for EU customers from US servers is not a deployer under this limb, though it may still be caught by Art 2(1)(c) below if the system's output is used in the EU.
Art 2(1)(c) — Third-country providers and deployers
Providers AND deployers established in third countries where the AI system's output is used in the EU. This is the extraterritoriality limb: if an AI system processes data and produces outputs consumed in the EU, the provider and deployer are in scope.
Art 2(1)(d) — Importers and distributors
Importers and distributors of AI systems — these are supply chain entities that bring non-EU AI systems into the EU market or make non-EU systems available in the EU.
Art 2(1)(e) — Product manufacturers
Product manufacturers who place an AI system on the market or put it into service under their own name or trademark as part of their product.
Art 2(1)(f) — Authorised representatives
Authorised representatives of non-EU providers — legal entities established in the EU appointed to act on behalf of non-EU providers, giving EU regulators a point of contact.
Art 2(1)(g) — Affected persons in EU
Affected persons located in the EU. This limb establishes that persons in the EU who are subject to the output or decisions of an AI system are within the protective scope of the Regulation.
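Because the limbs operate independently, one organisation can satisfy several at once. As a rough illustration only, the following Python sketch encodes four of the limbs as boolean checks over simplified facts; the ActorFacts structure and every field name are hypothetical, invented for this example, and nothing here substitutes for legal analysis.

```python
from dataclasses import dataclass

@dataclass
class ActorFacts:
    """Hypothetical, simplified facts about one actor and one AI system."""
    role: str                  # "provider", "deployer", "importer", "distributor"
    established_in_eu: bool    # place of establishment or location in the EU
    places_on_eu_market: bool  # places the system or GPAI model on the EU market
    output_used_in_eu: bool    # the system's output is consumed in the EU

def applicable_limbs(facts: ActorFacts) -> list[str]:
    """Return the Art 2(1) limbs that plausibly capture this actor."""
    limbs = []
    # (a) providers placing on the EU market, wherever established
    if facts.role == "provider" and facts.places_on_eu_market:
        limbs.append("2(1)(a)")
    # (b) deployers established or located in the EU
    if facts.role == "deployer" and facts.established_in_eu:
        limbs.append("2(1)(b)")
    # (c) third-country providers and deployers whose output is used in the EU
    if (facts.role in ("provider", "deployer")
            and not facts.established_in_eu and facts.output_used_in_eu):
        limbs.append("2(1)(c)")
    # (d) importers and distributors of AI systems
    if facts.role in ("importer", "distributor"):
        limbs.append("2(1)(d)")
    return limbs

# A US deployer serving EU users from US servers: limb (b) fails, limb (c) bites.
print(applicable_limbs(ActorFacts("deployer", False, False, True)))  # ['2(1)(c)']
```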
AI systems that are safety components of products regulated under the Union harmonisation legislation listed in Section B of Annex I (civil aviation, motor vehicles, agricultural vehicles, marine equipment, rail systems, etc.) are subject to only a reduced set of AI Act obligations. Only Article 6(1), Arts 102–109 (the articles amending that sectoral legislation) and Article 112 apply. All other AI Act obligations — including the broader Chapter III risk management requirements — do not apply independently of those Annex I product safety regulations.
Each exclusion has specific conditions. None are blanket exemptions — misapplying them when the conditions are not met means operating outside the law.
Art 2(3) — Military / defence / national security
Absolute: AI systems placed on the market, put into service, or used exclusively for military, defence or national security purposes are entirely excluded — regardless of whether this is carried out by a public or private entity. There is no condition of proportionality or necessity here; the exclusion is unconditional.
Art 2(4) — Third-country public authorities and international organisations
With conditions: Third-country public authorities or international organisations acting in the framework of international agreements with the EU on law enforcement and judicial cooperation are excluded — provided adequate safeguards for the protection of the fundamental rights and freedoms of individuals are in place.
Art 2(5) — DSA intermediary services
Clarification: Providers of intermediary services under Chapter II of the Digital Services Act are not affected — the AI Act does not add to or derogate from their DSA obligations. This is a clarification, not an exclusion from the AI Act entirely.
Art 2(6) — Scientific R&D
Purpose-limited: AI systems specifically developed and put into service solely for scientific research and development are excluded. If the same system is later placed on the market or used for non-R&D purposes, the exclusion no longer applies.
Art 2(7) — GDPR and confidentiality law unaffected
Clarification: The AI Act does not override GDPR, the e-Privacy Directive, or other Union data protection rules, nor does it affect obligations of professional secrecy (e.g., legal privilege). Both the AI Act and data protection law apply concurrently.
Art 2(8) — Pre-market R&D testing
Pre-market only: Research, testing or development of AI systems or models prior to being placed on the market or put into service is excluded. CRITICAL: testing in real-world conditions is expressly NOT excluded — it is a separate activity governed by Arts 57–63 (regulatory sandboxes and real-world testing rules).
Art 2(9) — Consumer protection and product safety law
Clarification: The AI Act does not affect consumer protection and product safety legislation. Both apply concurrently — an AI system that constitutes a defective product for product liability purposes remains so regardless of AI Act compliance.
Art 2(10) — Personal non-professional activity
Deployers only; personal only: Natural persons deploying AI systems in the course of a purely personal, non-professional activity are excluded as deployers. This is a deployer-only exclusion: providers placing those systems on the market are still fully in scope.
Art 2(11) — Worker protection
More favourable rules allowed: Member States may maintain or introduce more favourable worker protection rules than those in this Regulation. This is a floor, not a ceiling — national employment law can be stricter.
Art 2(12) — Open-source AI
Three carve-outs apply: AI systems released under free and open-source licences are excluded — UNLESS: (a) placed on the market or put into service as high-risk (Art 6); (b) placed on the market or put into service as falling under Art 5 (prohibited practices); or (c) placed on the market or put into service as falling under Art 50 (transparency obligations, including chatbot disclosure and deep fake marking). Open-source models used in a chatbot or high-risk product lose this exclusion.
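The Art 2(12) carve-outs reduce to a simple decision rule: the open-source exclusion survives only if none of the three conditions bites. A minimal sketch, assuming hypothetical boolean flags for the three carve-outs:

```python
def open_source_excluded(high_risk: bool,
                         art5_prohibited: bool,
                         art50_transparency: bool) -> bool:
    """Art 2(12) sketch with hypothetical flags: the free and open-source
    exclusion holds only if none of the three carve-outs applies."""
    carve_outs = (
        high_risk,           # (a) placed on the market/into service as high-risk (Art 6)
        art5_prohibited,     # (b) falls under Art 5 prohibited practices
        art50_transparency,  # (c) falls under Art 50 transparency obligations
    )
    return not any(carve_outs)

# A publicly deployed open-source chatbot triggers Art 50, so the exclusion is lost:
assert open_source_excluded(False, False, True) is False
```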
Before applying scope or exclusion analysis, confirm that the technology in question actually constitutes an "AI system" within the meaning of Article 3(1).
Official Article 3(1) definition:
"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
Key elements of the definition
The definition turns on a handful of cumulative elements: the system is machine-based; it operates with varying levels of autonomy; it may (but need not) exhibit adaptiveness after deployment; it pursues explicit or implicit objectives; it infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions; and those outputs can influence physical or virtual environments.
What is NOT an AI system under Article 3(1)
Traditional rules-based software, lookup tables, pure automation workflows, simple calculators, and any system that outputs a pre-defined result via explicit logic (rather than inference) do not qualify. The definition targets systems that learn, reason, or predict — not systems that execute deterministic code.
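One way to internalise the Article 3(1) test is as a conjunction of required elements, with adaptiveness left optional because the definition says "may exhibit". The sketch below is an informal screen under that assumption; the parameter names are paraphrases of the definitional elements, not official criteria.

```python
def is_ai_system(machine_based: bool,
                 some_autonomy: bool,
                 infers_outputs: bool,
                 influences_environment: bool) -> bool:
    """Informal Art 3(1) screen. Adaptiveness after deployment is left out
    on purpose: the definition says 'may exhibit', so it is optional."""
    return all([machine_based, some_autonomy,
                infers_outputs, influences_environment])

# A deterministic lookup table infers nothing from its inputs:
assert is_ai_system(True, False, False, True) is False
```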
Article 111 provides grace periods for AI systems that were already on the market before key dates. For existing high-risk systems, the grace period holds only as long as the system does not undergo a significant change in design after the relevant date.
Art 111(1) — Large-scale IT systems (Annex X)
31 December 2030 (~5 years away): AI systems that are components of large-scale IT systems listed in Annex X (including SIS II, Eurodac, EES, ETIAS, VIS, ECRIS-TCN) that were placed on the market or put into service before 2 August 2027 must comply by 31 December 2030.
Art 111(2) — High-risk AI (existing deployments)
2 August 2030 (~4 years away): High-risk AI systems placed on the market or put into service before 2 August 2026 are not subject to the Act — unless they undergo a significant change in design after that date. Providers and deployers of such systems intended for use by public authorities must comply by 2 August 2030, regardless of whether the design changed.
Art 111(3) — GPAI models (pre-market)
2 August 2027 (17 months away): General-purpose AI models placed on the market before 2 August 2025 must comply with the GPAI obligations in Chapter V by 2 August 2027.
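Read together, the three limbs of Article 111 amount to a date lookup keyed on system category and the date of placing on the market. The following sketch encodes the deadlines, using hypothetical category labels and deliberately ignoring the significant-design-change test, which needs case-by-case assessment:

```python
from datetime import date

def transitional_deadline(category: str, placed: date) -> date | None:
    """Art 111 sketch (hypothetical category labels). Returns a fixed
    compliance deadline, or None where no dated transitional applies.
    Legacy high-risk systems outside public-authority use have no fixed
    date: they come into scope only on a significant design change."""
    if category == "annex_x_component" and placed < date(2027, 8, 2):
        return date(2030, 12, 31)   # Art 111(1): Annex X large-scale IT systems
    if category == "high_risk_public_authority" and placed < date(2026, 8, 2):
        return date(2030, 8, 2)     # Art 111(2): intended for public-authority use
    if category == "gpai_model" and placed < date(2025, 8, 2):
        return date(2027, 8, 2)     # Art 111(3): pre-existing GPAI models
    return None

print(transitional_deadline("gpai_model", date(2025, 1, 1)))  # 2027-08-02
```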
No changes are proposed under COM(2025) 836 or COM(2025) 837 for this topic. Note: COM(2025) 836 Art 1(30) does add a transitional provision specifically for Article 50(2) watermarking — see the Deepfakes & Synthetic Content guide for that detail.
Does the AI Act apply to companies established outside the EU?
Yes. Article 2(1)(a) explicitly covers providers placing AI systems or general-purpose AI models on the EU market regardless of where they are established. Article 2(1)(c) covers providers and deployers established in third countries where the AI system's output is used in the EU. The Act has strong extraterritorial reach — if your system's output is used by EU-based persons, you are likely in scope as a provider or deployer.
What exclusions does Article 2 set out?
Article 2(3)-(12) sets out eight exclusions and clarifications: (1) military, defence and national security — completely excluded even if carried out by private entities; (2) scientific R&D purpose systems; (3) pre-market R&D and testing — but real-world testing is NOT excluded; (4) third-country public authorities with adequate fundamental rights safeguards; (5) purely personal non-professional activity deployers; (6) the Regulation does not override GDPR or confidentiality law; (7) consumer protection and product safety law is not affected; (8) free and open-source AI — excluded UNLESS the system is placed on the market as high-risk or falls under Article 5 or Article 50. In particular, the open-source exemption does not remove Article 50 transparency obligations if the system is placed on the market or put into service in a covered context.
What counts as an AI system under the Act?
Article 3(1) defines an AI system as 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' This definition is deliberately broad. Traditional rules-based software, lookup tables, and pure automation without inference do not constitute AI systems. The key test is whether the system infers outputs from inputs rather than following explicit, pre-defined rules.
Do AI systems already on the market benefit from a grace period?
It depends on the category of your system. High-risk AI systems placed on the market before 2 August 2026 benefit from a grace period — they only need to comply if the system undergoes a significant change in design after that date. However, providers and deployers of such systems intended for use by public authorities must comply by 2 August 2030 regardless. GPAI models placed on the market before 2 August 2025 must comply by 2 August 2027. Large-scale IT systems listed in Annex X that were deployed before 2 August 2027 must comply by 31 December 2030. Article 50 watermarking obligations under COM(2025) 836 include a specific transitional: systems placed before 2 August 2026 must comply by 2 February 2027.
Is open-source AI exempt from the AI Act?
Only partially. Article 2(12) exempts AI systems released under free and open-source licences, but with three critical exceptions: (1) if placed on the market or put into service as a high-risk AI system within Article 6; (2) if placed on the market or put into service as a system falling under Article 5 (prohibited practices); (3) if placed on the market or put into service as a system falling under Article 50 (transparency obligations). Practically: an open-source chatbot deployed publicly must still provide AI disclosure under Article 50(1). An open-source model integrated into a high-risk system must comply with all Chapter III obligations.
Minimal-Risk AI Systems
No mandatory Chapter III obligations — but Article 50 and Article 4 still apply.
High-Risk AI Classification Checklist
All 8 Annex III domains and the Article 6 tests to determine high-risk status.
Article 5 Prohibited AI Practices
The absolute bans — no grace period, no transitional provision, no exemption.
General-Purpose AI Models (Chapter V)
GPAI classification, systemic risk designation, and the August 2027 transitional.
Deepfakes & Synthetic Content (Article 50)
Article 50(2) machine-marking and Article 50(4) deep fake disclosure — open-source is not exempt.
AI Transparency Obligations (Article 50)
Full Article 50 guide covering chatbot disclosure, emotion recognition, and deep fake marking.
Regumatrix performs Article 2 scope analysis, checks each of the eight exclusions against your system's context, and then determines risk tier and the complete obligation chain. Free for the first three analyses.
Start free analysis