The EU AI Act contains no category called “agentic AI” — but that does not mean agentic systems are unregulated. Obligations flow from what your agent does, not from what you call it.
Regulation (EU) 2024/1689 contains no definition of “agentic AI”, “AI agent”, or “autonomous AI system” as a separate category. However, Art 3(1) defines an AI system as one that operates with “varying levels of autonomy” and “may exhibit adaptiveness after deployment”. This language directly captures agentic behaviour: autonomous goal pursuit, self-directed tool use, and adaptive multi-step action sequences. Your system is classified and regulated according to what it outputs and what functions it performs, not whether it is labelled as agentic.
Not sure which obligation track applies to your agentic system? Regumatrix classifies your AI across all four tracks automatically →
Multiple tracks can apply simultaneously. Assess your agentic system against all four before concluding which obligations are in scope.
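As a rough engineering aid, the four-track assessment can be sketched as a checklist function. Everything below is our own simplification: the profile fields and track labels are illustrative shorthand, not terms defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class AgenticSystemProfile:
    produces_prohibited_outputs: bool     # any Article 5 practice
    performs_annex_iii_function: bool     # e.g. employment or credit decisions
    interacts_with_natural_persons: bool  # Article 50 transparency scope
    built_on_gpai_model: bool             # GPAI value chain (Arts 53-55, Art 25)

def applicable_tracks(p: AgenticSystemProfile) -> list[str]:
    """Return every obligation track that applies; tracks are cumulative."""
    tracks = []
    if p.produces_prohibited_outputs:
        tracks.append("prohibited (Art 5)")
    if p.performs_annex_iii_function:
        tracks.append("high-risk (Art 6 + Annex III)")
    if p.interacts_with_natural_persons:
        tracks.append("transparency (Art 50)")
    if p.built_on_gpai_model:
        tracks.append("GPAI value chain (Arts 53-55, Art 25)")
    return tracks
```

For example, a customer-facing agent built on an LLM that also screens job applicants would land in three tracks at once: high-risk, transparency, and GPAI value chain.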
If your agentic AI can produce outputs that fall under Article 5 prohibited practices — for example, generating social scoring decisions, performing emotion recognition in prohibited contexts, or producing real-time biometric identification in public spaces — the prohibition applies to the system as a whole. It does not matter that the agent is “following instructions”: the provider and the deployer remain responsible for its outputs.
If your agentic AI performs any function listed in Annex III — making or contributing to decisions about employment, credit, education, essential services, law enforcement, migration, or critical infrastructure — it is a high-risk AI system regardless of its agentic architecture. The full Chapter III obligation stack applies: risk management (Art 9), data governance (Art 10), technical documentation (Art 11), human oversight (Art 14), conformity assessment (Art 43), and post-market monitoring (Art 72).
Article 50 imposes transparency obligations on AI systems that interact directly with natural persons (Art 50(1)), generate synthetic content (Art 50(2), with deployer disclosure duties for deepfakes under Art 50(4)), or operate as emotion recognition or biometric categorisation systems (Art 50(3)). For agentic systems that interact directly with users, providers must ensure the system discloses that it is an AI at the first interaction. Deployers who put agentic AI in customer-facing roles are responsible for ensuring that disclosure actually occurs.
If your agentic system is built on a general-purpose AI model (Art 3(63)) — such as a large language model — the model provider separately carries GPAI obligations under Arts 53–55. As the agentic application developer, you are a downstream provider (Art 3(68)) and must comply with applicable AI system rules. Under Article 25, if you put your name or trademark on a high-risk system built on the model, substantially modify such a system, or change its intended purpose so that it becomes high-risk, you assume full provider status for the integrated system.
If your agentic system integrates a GPAI model provided by a third party, you are a downstream provider under Art 3(68). You must comply with all EU AI Act obligations applicable to your agentic application — but you benefit from the upstream GPAI provider's model card, capability documentation, and usage policy, which inform your own conformity assessment.
Art 25(1) triggers full provider status if you: (a) put your name or trademark on a high-risk AI system already placed on the market or put into service, (b) make a substantial modification to a high-risk AI system in such a way that it remains high-risk, or (c) modify the intended purpose of an AI system so that it becomes high-risk. At that point, the upstream model provider's conformity assessment no longer covers you: you must conduct your own conformity assessment for the resulting system.
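The three Art 25(1) triggers are disjunctive: any one of them alone is sufficient. A minimal sketch, where the parameter names are our own paraphrase of the statutory conditions:

```python
def assumes_full_provider_status(
    trademark_on_high_risk_system: bool,             # Art 25(1)(a)
    substantial_modification_stays_high_risk: bool,  # Art 25(1)(b)
    purpose_change_makes_high_risk: bool,            # Art 25(1)(c)
) -> bool:
    # The conditions are alternatives, not cumulative requirements:
    # meeting any single one reassigns provider status.
    return (
        trademark_on_high_risk_system
        or substantial_modification_stays_high_risk
        or purpose_change_makes_high_risk
    )
```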
COM(2025) 836 Art 1 point 33 proposes adding Annex XIV to the EU AI Act, establishing codes for notified body areas of designation. The proposed Annex XIV AIH codes include:
AIH 0401 — Agentic AI systems
This code designates agentic AI as a distinct area for notified body accreditation under the EU AI Act. If enacted, notified bodies would need the AIH 0401 designation to conduct third-party conformity assessments involving agentic AI systems.
This is a proposal only. Including AIH 0401 in Annex XIV would signal EU policy intent to treat agentic AI as requiring specialist notified-body expertise, but current obligations remain unchanged until COM(2025) 836 is enacted.
COM(2025) 837 (the “Digital Omnibus Directive”) does not contain provisions specifically changing obligations for agentic AI systems.
Not in the current regulation. Regulation (EU) 2024/1689 contains no definition of “agentic AI”, “AI agent”, or “autonomous AI system” as a distinct category. However, Article 3(1) defines an “AI system” as one that operates “with varying levels of autonomy” and “may exhibit adaptiveness after deployment” — language that directly captures agentic behaviour: autonomous goal pursuit, self-directed tool use, and adaptive action sequences. COM(2025) 836 proposes Annex XIV code AIH 0401 for agentic AI in the notified body designation framework, but this proposal has not yet been enacted.
Both parties carry separate obligations. The GPAI model provider is subject to Articles 53–55 obligations (transparency, copyright policy, model evaluation, incident reporting). As the developer of the agentic application built on top of the model, you are the downstream provider (Article 3(68)) and carry all applicable AI system obligations — including prohibited practice checks (Art 5), high-risk classification (Art 6 + Annex III), and transparency (Art 50). Under Article 25, if you put your trademark on the resulting high-risk system, substantially modify it, or change its intended purpose so that it becomes high-risk, you become the provider of that system and carry all provider obligations for it.
If your agentic system performs a function listed in Annex III or Article 6(1) — for example, making decisions about employment, credit, education, or law enforcement — then yes: conformity assessment under Article 43 is required before the system is placed on the market. Most Annex III systems can use the internal-control procedure (Annex VI). Certain categories, notably biometric identification systems and high-risk AI in regulated product sectors, generally require a notified body under Annex VII. If your agentic system performs no Annex III function and is not embedded in a regulated product, no conformity assessment is required, but Article 50 transparency obligations may still apply.
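The routing logic in that answer can be condensed into a short sketch. This is a simplification of Article 43, which also conditions the choice of procedure on factors (such as whether harmonised standards were applied) that the sketch does not model; all names are ours.

```python
def conformity_route(
    performs_annex_iii_function: bool,
    biometric_or_regulated_product: bool,
) -> str:
    """Pick the assessment track under a simplified Art 43 reading."""
    if not performs_annex_iii_function:
        # Outside high-risk scope; Art 50 transparency may still apply.
        return "none required (check Art 50 transparency)"
    if biometric_or_regulated_product:
        # Biometric systems and regulated-product sectors generally
        # need third-party assessment.
        return "notified body (Annex VII)"
    return "internal control (Annex VI)"
```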
No. The EU AI Act assesses the AI system as a whole — not individual inference steps or tool calls. What matters for classification is the overall function the system performs and the outputs it produces. A multi-step agentic pipeline that results in employment decisions is a high-risk AI system regardless of whether the individual steps could be described as lower-risk in isolation. Where an agent can produce multiple types of outputs (some innocuous, some high-risk), the classification follows the highest-risk function performed.
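The “highest-risk function wins” rule amounts to taking a maximum over an ordered risk scale. A minimal sketch, using a hypothetical four-level ordering of our own:

```python
# Hypothetical ordering, lowest to highest risk; illustrative only.
RISK_ORDER = ["minimal", "transparency", "high-risk", "prohibited"]

def classify_system(function_risks: list[str]) -> str:
    """Classify the system as a whole by its highest-risk function,
    mirroring the rule that the Act assesses the overall system,
    not individual inference steps."""
    return max(function_risks, key=RISK_ORDER.index)
```

An agent that mostly drafts routine emails (minimal) but also ranks job applicants (high-risk) is classified high-risk overall.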
Yes, unless it is obvious from the context that users are interacting with an AI. Article 50(1) requires providers of AI systems intended to interact directly with natural persons to design and develop those systems so that the persons concerned are informed they are communicating with an AI — at the time of first interaction, in a clear and distinguishable manner. This applies to agentic chatbots, voice agents, and AI-powered customer-service personas regardless of whether the underlying model is a GPAI model or custom-trained. The disclosure obligation is not waivable: it applies regardless of user preference.
General-Purpose AI Models (GPAI)
Arts 53–55 — GPAI provider obligations, transparency, copyright policy
GPAI Systemic Risk (Art 55)
Additional obligations for GPAI models with systemic risk (10^25 FLOPs threshold)
AI Provider Obligations
Full Chapter III stack — Arts 9–21, conformity assessment, CE marking
AI Transparency Obligations (Art 50)
Chatbot disclosure, synthetic content, Art 50(2) conversational AI requirements
Conformity Assessment
Internal control (Annex VI) vs notified body (Annex VII) — which track applies
High-Risk AI Checklist
Classify your system function against all 8 Annex III domains
Regumatrix evaluates your system across all four obligation tracks — prohibited, high-risk, transparency, and GPAI — and returns a clear compliance action plan with the exact requirements that apply.
Get started free