Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
No explicit definition yet · COM(2025) 836 proposal

Agentic AI Systems Under the EU AI Act: Classification and Obligations

The EU AI Act contains no category called “agentic AI” — but that does not mean agentic systems are unregulated. Obligations flow from what your agent does, not from what you call it.

“Agentic AI” is not defined in the EU AI Act

Regulation (EU) 2024/1689 has no definition of “agentic AI”, “AI agent”, or “autonomous AI system” as a separate category. However, Art 3(1) defines an AI system as operating with “varying levels of autonomy” and able to “exhibit adaptiveness after deployment.” This language directly captures agentic behaviour — autonomous goal pursuit, self-directed tool use, and adaptive multi-step action sequences. Your system is classified and regulated based on what it outputs and what functions it performs, not on whether it is labelled as agentic.

Not sure which obligation track applies to your agentic system? Regumatrix classifies your AI across all four tracks automatically →

Four obligation tracks — which apply to your agent?

Multiple tracks can apply simultaneously. Assess your agentic system against all four before concluding which obligations are in scope.

Prohibited · Art 5

Track 1 — Prohibited outputs

If your agentic AI can produce outputs that fall under Article 5 prohibited practices — for example, generating social scoring decisions, performing emotion recognition in prohibited contexts, or producing real-time biometric identification in public spaces — the prohibition applies to the system as a whole. It does not matter that the agent is “following instructions”; the provider and deployer are responsible for outputs.

When it applies: When the agent can produce or trigger outputs matching any of the 8 Article 5 prohibited practices.
Consequence: Absolute prohibition — no exceptions, no derogation. Penalty: Art 99(3) €35M/7%.
High-risk · Arts 6 + Annex III

Track 2 — High-risk functions

If your agentic AI performs any function listed in Annex III — making or contributing to decisions about employment, credit, education, essential services, law enforcement, migration, or critical infrastructure — it is a high-risk AI system regardless of its agentic architecture. The full Chapter III obligation stack applies: risk management (Art 9), data governance (Art 10), technical documentation (Art 11), human oversight (Art 14), conformity assessment (Art 43), and post-market monitoring (Art 72).

When it applies: When the agent's overall function maps to one of Annex III's 8 high-risk domains.
Consequence: Full Chapter III obligations. Conformity assessment before market entry. CE marking. EU database registration.
Transparency · Art 50

Track 3 — Transparency obligations

Article 50 imposes transparency obligations on AI systems that interact directly with natural persons (Art 50(1)), generate synthetic content (Art 50(2), with deep-fake disclosure by deployers under Art 50(4)), or operate as emotion recognition or biometric categorisation systems (Art 50(3)). For agentic systems that interact directly with users, providers must ensure the system discloses it is an AI at the first interaction. Deployers who use agentic AI in customer-facing roles are responsible for ensuring disclosure occurs.

When it applies: When the agent interacts with natural persons conversationally, generates synthetic media, or involves emotion/biometric processing.
Consequence: Disclosure requirement at first interaction. Penalty for violation: Art 99(4) €15M/3%.
GPAI · Arts 53–55

Track 4 — GPAI foundation model obligations

If your agentic system is built on a general-purpose AI model (Art 3(63)) — such as a large language model — the model provider separately carries GPAI obligations under Arts 53–55. As the agentic application developer, you are a downstream provider (Art 3(68)) and must comply with applicable AI system rules. Under Article 25, if you put your own name or trademark on a high-risk system built on the model, substantially modify such a system, or change its intended purpose so that it becomes high-risk, you assume full provider status for the integrated system.

When it applies: When the agentic system is built on or integrates a GPAI foundation model.
Consequence: Model provider obligations under Arts 53–55 apply to upstream model. Downstream application provisions and Art 25 triggers apply to the agentic system provider.
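Because the four tracks are cumulative rather than exclusive, the assessment can be sketched as a rule check over the facts of the system. This is a minimal illustration, not the Regumatrix classifier; the field names and track labels are assumptions chosen for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Illustrative facts about an agentic system (hypothetical field names)."""
    produces_art5_output: bool = False          # e.g. social scoring, banned biometric ID
    annex_iii_functions: list = field(default_factory=list)  # e.g. ["employment"]
    interacts_with_persons: bool = False        # conversational interaction (Art 50(1))
    generates_synthetic_content: bool = False   # synthetic media marking (Art 50(2))
    built_on_gpai_model: bool = False           # integrates a GPAI model (Art 3(63)/(68))

def applicable_tracks(p: AgentProfile) -> list:
    """Return every obligation track in scope; several can apply at once."""
    tracks = []
    if p.produces_art5_output:
        tracks.append("Track 1: prohibited (Art 5)")
    if p.annex_iii_functions:
        tracks.append("Track 2: high-risk (Art 6 + Annex III)")
    if p.interacts_with_persons or p.generates_synthetic_content:
        tracks.append("Track 3: transparency (Art 50)")
    if p.built_on_gpai_model:
        tracks.append("Track 4: GPAI (Arts 53-55, Art 25)")
    return tracks
```

For instance, a conversational hiring agent built on an LLM lands in tracks 2, 3, and 4 simultaneously, which is exactly why the guidance above says to assess all four before concluding which obligations are in scope.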

GPAI models in agentic pipelines — the Art 25 boundary

When you are a downstream provider — Art 3(68)

If your agentic system integrates a GPAI model provided by a third party, you are a downstream provider under Art 3(68). You must comply with all EU AI Act obligations applicable to your agentic application — but you benefit from the upstream GPAI provider's model card, capability documentation, and usage policy, which inform your own conformity assessment.

When you become the full provider — Art 25

Art 25(1) triggers full provider status if you: (a) put your own trademark on a high-risk AI system built on a GPAI model, (b) make a substantial modification to a high-risk system that keeps it high-risk, or (c) change the intended purpose of an AI system so that it becomes high-risk. At that point, the upstream model provider's conformity marking no longer protects you — you must conduct your own conformity assessment.
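The three Art 25(1) triggers form a disjunction: any single one is enough to shift full provider status onto you. A sketch of that logic, with parameter names that are illustrative rather than statutory terms:

```python
def becomes_full_provider(puts_trademark_on_high_risk_system: bool,
                          substantially_modifies_high_risk_system: bool,
                          repurposes_system_into_high_risk: bool) -> bool:
    """Art 25(1): meeting any one trigger means assuming full provider
    status, including your own Art 43 conformity assessment."""
    return (puts_trademark_on_high_risk_system
            or substantially_modifies_high_risk_system
            or repurposes_system_into_high_risk)
```

The practical consequence is asymmetric: staying below all three thresholds keeps you a downstream provider relying on the upstream documentation, while crossing any one of them puts the entire Chapter III stack on you.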

PROPOSAL — COM(2025) 836 · Not yet enacted law

Agentic AI gets its own notified body code under 836

COM(2025) 836 Art 1 point 33 proposes adding Annex XIV to the EU AI Act, establishing codes for notified body areas of designation. The proposed Annex XIV AIH codes include:

AIH 0401 — Agentic AI systems

This code designates agentic AI as a distinct area for notified body accreditation under the EU AI Act. If enacted, notified bodies would need the AIH 0401 designation to conduct third-party conformity assessments involving agentic AI systems.

This is a proposal only. The inclusion of AIH 0401 in Annex XIV would signal EU policy intent to treat agentic AI as requiring specialist notified body expertise — but current obligations are unchanged until 836 is enacted.

COM(2025) 837 (the “Digital Omnibus Directive”) does not contain provisions specifically changing obligations for agentic AI systems.

Frequently asked questions

Is 'agentic AI' defined in the EU AI Act?

Not in the current regulation. Regulation (EU) 2024/1689 contains no definition of 'agentic AI', 'AI agent', or 'autonomous AI system' as a distinct category. However, Article 3(1) defines an 'AI system' as one that operates 'with varying levels of autonomy' and 'may exhibit adaptiveness after deployment' — language that directly captures agentic behaviour: autonomous goal pursuit, self-directed tool use, and adaptive action sequences. COM(2025) 836 proposes Annex XIV code AIH 0401 for agentic AI in the notified body designation framework, but this proposal has not yet been enacted.

If my agentic AI uses a GPAI model, who is responsible — me or the model provider?

Both parties carry separate obligations. The GPAI model provider is subject to Articles 53–55 obligations (transparency, copyright policy, model evaluation, incident reporting). As the developer of the agentic application built on top of the model, you are the downstream provider (Article 3(68)) and carry all applicable AI system obligations — including prohibited practice checks (Art 5), high-risk classification (Art 6 + Annex III), and transparency (Art 50). Under Article 25, if you put your own trademark on the resulting high-risk system, substantially modify it, or change its intended purpose so that it becomes high-risk, you become the full provider of that system, with all provider obligations.

Does my AI agent need to pass a conformity assessment?

If your agentic system performs a function listed in Annex III — for example, making decisions about employment, credit, education, or law enforcement — or is high-risk under Article 6(1) (a product or safety component covered by Annex I legislation), then yes: conformity assessment under Article 43 is required before placing the system on the market. Most Annex III systems can use internal control (Annex VI). Certain categories — biometric identification systems and high-risk AI in regulated product sectors — require a notified body under Annex VII. If your agentic system does not perform an Annex III function and does not embed in a regulated product, no conformity assessment is required — but Article 50 transparency obligations may still apply.

An AI agent takes autonomous actions across many steps — is each step evaluated separately?

No. The EU AI Act assesses the AI system as a whole — not individual inference steps or tool calls. What matters for classification is the overall function the system performs and the outputs it produces. A multi-step agentic pipeline that results in employment decisions is a high-risk AI system regardless of whether the individual steps could be described as lower-risk in isolation. Where an agent can produce multiple types of outputs (some innocuous, some high-risk), the classification follows the highest-risk function performed.
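The highest-risk-function rule can be expressed as taking a maximum over an ordinal ranking of risk tiers. A minimal sketch; the tier names and their encoding as an ordered Python list are assumptions for illustration:

```python
# Risk tiers in ascending order of severity (illustrative encoding).
RISK_ORDER = ["minimal", "transparency", "high-risk", "prohibited"]

def system_classification(function_risks: list) -> str:
    """The system as a whole takes the highest risk tier
    of any function it performs."""
    return max(function_risks, key=RISK_ORDER.index)

# A pipeline with an innocuous summariser step and an
# employment-decision step is classified by the latter:
assert system_classification(["minimal", "high-risk"]) == "high-risk"
```

This mirrors the point above: decomposing an agent into low-risk-looking steps does not lower the classification of the pipeline as a whole.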

My agentic AI interacts with users conversationally — does Art 50 apply?

Yes, unless it is obvious from the context that users are interacting with an AI. Article 50(1) requires providers of AI systems intended to interact directly with natural persons to ensure those systems are designed and developed to inform the persons concerned that they are interacting with an AI system — at the time of first interaction, in a clear and distinguishable manner. This applies to agentic chatbots, voice agents, and AI-powered customer service personas regardless of whether the underlying model is GPAI or custom-trained. The obligation cannot be waived: disclosure must occur regardless of user preference.

Watch for these red flags in your agentic system

  • The agent makes or substantially contributes to decisions about employment, credit, or essential services — likely Annex III high-risk
  • The agent can access and act on data about individuals — may trigger fundamental rights assessment (Art 27)
  • The agent interacts conversationally with end users — Art 50(1) transparency disclosure required
  • The agent is built on a GPAI model you substantially modify or rebrand — Art 25 full provider status applies
  • The agent can trigger actions that could constitute a prohibited practice under Art 5 (social scoring, emotion recognition in banned contexts, etc.)

Related guides

General-Purpose AI Models (GPAI)

Arts 53–55 — GPAI provider obligations, transparency, copyright policy

GPAI Systemic Risk (Art 55)

Additional obligations for GPAI models with systemic risk (10^25 FLOPs threshold)

AI Provider Obligations

Full Chapter III stack — Arts 9–21, conformity assessment, CE marking

AI Transparency Obligations (Art 50)

Chatbot disclosure, synthetic content, Art 50(2) conversational AI requirements

Conformity Assessment

Internal control (Annex VI) vs notified body (Annex VII) — which track applies

High-Risk AI Checklist

Classify your system function against all 8 Annex III domains

Know exactly which tracks apply to your agentic AI

Regumatrix evaluates your system across all four obligation tracks — prohibited, high-risk, transparency, and GPAI — and returns a clear compliance action plan with the exact requirements that apply.

Get started free