The EU AI Act's definitions determine which obligations apply to you, when they apply, and in what capacity. This glossary covers the 19 most important terms from Art 3 with plain-English notes on how each definition works in practice.
Start with "Your role." Once you know whether you are a provider, deployer, or both, you can identify which obligation tracks apply. Then check the market access terms to understand when those obligations are triggered. The obligation trigger concepts and system type definitions flesh out the scope of specific rules.
Not sure which role applies to your company? Regumatrix maps your role and full obligation set in under a minute →
Who you are in the EU AI Act supply chain determines which obligations you carry. Most companies occupy more than one role.
"A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
The key markers are: machine-based, operates with autonomy, generates outputs that influence real-world or digital environments. Rule-based automation that follows only fixed if/then logic without inference is not an AI system.
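To make the inference marker concrete, here is a toy contrast in Python. This is illustrative only — the credit-check scenario, the feature names, and the scikit-learn model are our own example, not a legal test:

```python
from sklearn.linear_model import LogisticRegression

# Fixed if/then logic: every decision path is hard-coded by a human.
# No inference from input data, so on this marker it falls outside
# the Art 3(1) definition.
def rule_based_credit_check(income_k: float, debt_k: float) -> bool:
    return income_k > 30 and debt_k < 10

# Inference: the decision boundary is learned from data rather than
# hand-written -- the hallmark the Art 3(1) definition points at.
X_train = [[45, 5], [20, 15], [60, 2], [25, 12]]  # [income EUR k, debt EUR k]
y_train = [1, 0, 1, 0]                            # 1 = approve, 0 = decline

model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[40, 8]]))  # output inferred, not hard-coded
```

Real classification questions are more nuanced than this sketch, but the dividing line it shows — learned behaviour versus fully specified behaviour — is the one the definition turns on.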
A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model — or has one developed — and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
If you build the AI and put your brand on it — you are the provider and carry the heaviest obligation set under Chapter III.
A natural or legal person, public authority, agency or other body using an AI system under its authority, except where that AI system is used in the course of a personal non-professional activity.
If you use someone else's AI system commercially or professionally — you are the deployer. Deployers face a distinct but lighter obligation set under Arts 26–29.
A provider, product manufacturer, deployer, authorised representative, importer or distributor — an umbrella term covering the roles across the AI supply chain.
When the EU AI Act refers to 'operator' without qualification, it applies to anyone in the AI supply chain — not just providers and deployers, but also product manufacturers, authorised representatives, importers and distributors. Several provisions address 'operators' collectively.
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to carry out on its behalf the obligations and procedures established by this Regulation.
Required for non-EU providers of high-risk AI systems (Art 22). Acts as the EU regulatory contact point — does not absorb the provider's liability.
A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established outside the Union.
The importer becomes the point of contact for EU market surveillance when the original provider is outside the EU. Specific obligations apply under Art 23.
A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
Distributors face lighter obligations under Art 24 — mainly due diligence and documentation checks before making a system available. They do not assume provider obligations unless they rebrand or substantially modify the system.
A provider of an AI system, including a general-purpose AI system, that integrates an AI model, regardless of whether the AI model is provided by themselves or by another entity based upon contractual relations.
If you build an application on top of a GPAI model, you are a downstream provider. Under Art 25, you may inherit full provider obligations if you substantially modify the upstream model or change its intended purpose.
These three terms define when EU AI Act obligations are triggered. The distinction between them matters for timing compliance requirements.
The first making available of an AI system or a general-purpose AI model on the Union market.
This is the moment that triggers pre-market provider obligations — conformity assessment must be complete before this point. What counts is 'making available' to third parties for distribution or use, not internal use of your own AI.
Any supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
Free-of-charge AI systems that are supplied commercially — as part of a product, service bundle, or freemium model — are covered. Personal non-commercial use is excluded.
The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
This covers own-use AI — where you build AI and use it yourself without distribution to third parties. A company that builds an internal HR screening tool and uses it on its own employees is 'putting it into service' rather than 'placing on the market'.
These four definitions determine the scope of provider and deployer obligations — and where those obligations begin and end.
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the instructions for use, promotional or sales materials and statements and in the technical documentation.
What the provider writes in their documentation governs — not how users happen to use the system. Providers should draft intended purpose descriptions carefully: they set the scope of conformity assessment and risk management obligations.
The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.
Art 9 requires the risk management system to cover reasonably foreseeable misuse — a broader scope than just intended purpose. Providers cannot disclaim liability for predictable uses outside specification.
An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.
Foundation models such as large language models, vision models, and multimodal models meet this definition. Chapter V (Arts 51–56) applies to GPAI providers — separate from and in addition to the general AI system rules.
A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, fundamental rights or society as a whole, that can be propagated at scale across the value chain.
GPAI models trained with more than 10^25 cumulative FLOPs are presumed to have high-impact capabilities, and therefore systemic risk, under Art 51(2) — in practice, today's frontier models. The Commission can also designate models below that threshold. Art 55 places additional obligations on providers of GPAI models with systemic risk.
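As a rough sanity check against the threshold, training compute is often approximated as 6 × parameters × training tokens. This is a common community rule of thumb, not the Act's measurement methodology, and the model size below is hypothetical:

```python
# Rule-of-thumb training-compute estimate: FLOPs ~ 6 * parameters * tokens.
# A common approximation, not the Act's official measurement methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art 51(2) presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e} FLOPs")                   # 6.3e+24
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False: below the presumption line
```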
These definitions determine whether specific prohibitions, high-risk classifications, or transparency obligations apply to your system.
An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data.
Triggers the Art 5(1)(f) prohibition in workplace/school contexts and the Art 50(3) transparency obligation everywhere. See the Emotion Recognition AI guide for the full breakdown.
An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, such as sex, age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national minority, genetic or health status or sexual orientation.
Biometric categorisation that infers race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation is prohibited outright under Art 5(1)(g). Other categorisation according to sensitive or protected attributes is high-risk under Annex III point 1(b). Systems used purely for biometric verification — confirming a person is who they claim to be — fall outside the high-risk classification.
An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, through the comparison of a person's biometric data with the biometric data contained in a reference database.
Real-time remote biometric identification (RBI) in publicly accessible spaces by law enforcement is prohibited under Art 5(1)(h), subject to narrow exceptions. Post-remote RBI by law enforcement is high-risk under Annex III. Commercial real-time facial recognition 'at a distance' also falls within this definition.
An incident or malfunctioning of an AI system that directly or indirectly leads to the death of a person or serious harm to a person's health, a serious and irreversible disruption of the management or operation of critical infrastructure, the infringement of obligations under Union law intended to protect fundamental rights, or serious harm to property or the environment.
High-risk AI system providers must report serious incidents to national market surveillance authorities no later than 15 days after becoming aware of them under Art 73 — with shorter deadlines for deaths (10 days) and for widespread infringements or critical infrastructure disruptions (2 days). Post-market monitoring systems (Art 72) are designed to detect serious incidents before they escalate.
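The headline Art 73 deadlines, expressed as a simple lookup. This is a sketch of the main cases only — the Article's full triggers and qualifications are more detailed:

```python
from datetime import date, timedelta

# Headline Art 73 reporting deadlines, counted from the day the provider
# becomes aware of the incident. Main cases only; see the Article for
# the full triggers and definitions.
REPORTING_DEADLINES = {
    "serious_incident": timedelta(days=15),                   # Art 73(2)
    "widespread_infringement": timedelta(days=2),             # Art 73(3)
    "critical_infrastructure_disruption": timedelta(days=2),  # Art 73(3)
    "death_of_a_person": timedelta(days=10),                  # Art 73(4)
}

def report_due_by(incident_type: str, aware_on: date) -> date:
    """Latest date the report must reach the market surveillance authority."""
    return aware_on + REPORTING_DEADLINES[incident_type]

print(report_due_by("serious_incident", date(2026, 3, 2)))  # 2026-03-17
```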
COM(2025) 836 (the “Digital Omnibus” Regulation) proposes adding two new definitions to Art 3 to align penalty treatment with actual company size:
Art 3(14a) — SME (small and medium-sized enterprise)
References Commission Recommendation 2003/361/EC: fewer than 250 employees and annual turnover ≤ €50M or balance sheet total ≤ €43M. Penalty cap under Art 99(6): the lower of the fixed amount and the turnover percentage applies (for larger companies, the higher applies).
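The lower-of rule is easiest to see with numbers. A minimal sketch, assuming the Art 99(3) tier (€35M or 7% of total worldwide annual turnover; the other tiers follow the same pattern with smaller figures):

```python
# Art 99(3) tier figures: EUR 35M or 7% of total worldwide annual turnover.
FIXED_CAP_EUR = 35_000_000
TURNOVER_PCT = 0.07

def penalty_cap_eur(annual_turnover_eur: float, is_sme: bool) -> float:
    fixed = FIXED_CAP_EUR
    pct_based = TURNOVER_PCT * annual_turnover_eur
    # Art 99(6): SMEs are capped at whichever figure is LOWER;
    # for everyone else the HIGHER figure is the cap.
    return min(fixed, pct_based) if is_sme else max(fixed, pct_based)

turnover = 40_000_000  # EUR 40M annual turnover
print(penalty_cap_eur(turnover, is_sme=True))   # 2,800,000.0 -- 7% of 40M
print(penalty_cap_eur(turnover, is_sme=False))  # 35,000,000 -- the fixed cap
```

The SMC proposal would extend this same lower-of treatment to the new sub-500-employee category described below.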
Art 3(14b) — SMC (small mid-cap enterprise)
New category not yet in EU law: fewer than 500 employees but not qualifying as an SME. References proposed Commission Recommendation (EU) 2025/1099. If enacted, SMCs would benefit from lower-of penalty treatment similar to SMEs.
These definitions are not yet in force. Monitor COM(2025) 836 legislative progress for enactment updates.
Once you know your role — provider, deployer, or both — Regumatrix maps the full EU AI Act obligation set against your specific AI system and returns a compliance action plan.
Get your obligation map

What is the difference between a provider and a deployer?

A provider (Article 3(3)) is the entity that develops an AI system and places it on the market or puts it into service under its own name or trademark — broadly, whoever builds and markets the AI. A deployer (Article 3(4)) is any entity that uses an AI system under its authority in a professional or commercial context. Providers carry the heaviest obligation set: risk management, conformity assessment, technical documentation, CE marking, post-market monitoring. Deployers have a lighter set: fundamental rights impact assessments, transparency to users, human oversight, and incident reporting. The two sets are distinct and independent — deployers cannot rely on provider compliance to discharge their own obligations.
Can a company be both a provider and a deployer?

Yes — and it commonly is. If a company develops an AI system and also uses it commercially — for example, a bank that builds its own credit scoring model and runs it on its own customers — it is both the provider and the deployer, and must comply with both sets of obligations simultaneously. The EU AI Act permits this and does not reduce obligations where the two roles overlap in a single entity.
Who decides what an AI system's intended purpose is?

Intended purpose (Article 3(12)) is what the provider specifies in their instructions for use, technical documentation, and marketing materials — not what users happen to do with the system in practice. It is not the full boundary of provider obligations, however: Article 9 requires the risk management system to also cover 'reasonably foreseeable misuse' — predictable uses outside the specified purpose. Providers who write very narrow intended purpose statements to avoid high-risk classification run the risk that regulators apply reasonably foreseeable misuse analysis to expand the effective scope.
Is 'operator' an umbrella term in the EU AI Act?

Yes — 'operator' (Article 3(8)) covers providers, product manufacturers, deployers, authorised representatives, importers and distributors. When provisions refer to 'operators' without specifying a role, they apply to anyone in the AI supply chain, and several provisions address operators collectively in this way. Where obligations differ between roles, the Act uses the specific term.
EU AI Act Overview
Full regulatory structure — risk tiers, obligation tracks, chapter-by-chapter summary
High-Risk AI Checklist
Classify your system against all 8 Annex III domains with plain examples
Prohibited AI Practices — Article 5
All 8 banned uses — who they apply to and what the penalties are
AI Provider Obligations
Full Arts 9–21 provider stack — risk management, conformity, CE marking
AI Deployer Obligations
Arts 26–29 deployer requirements — oversight, transparency, incident reporting
EU AI Act Timeline
All compliance deadlines from February 2025 to August 2027
Understanding the terms is the first step. Regumatrix applies them to your specific AI system and returns the obligations that actually apply — not a generic checklist.
Get started free