The vast majority of commercial AI systems are minimal-risk under the EU AI Act. Spam filters, recommender engines, customer service chatbots, inventory optimisation tools: none of these trigger the Chapter III obligation chain. But "minimal risk" is not the same as "no obligations." Article 50 transparency rules apply regardless of tier, and Article 95 opens voluntary codes of conduct to every provider and deployer. This guide explains exactly what you must do, and what you do not.
Not sure which tier your AI system falls into?
Regumatrix classifies your AI system against all Annex III domains, checks for Article 5 prohibited practices, and confirms whether Article 50 applies — returning your exact risk tier and obligation list in about 30 seconds.
Check my AI system — 3 free analyses

Every AI system placed on the market or put into service in the EU falls into exactly one of four categories. Classification determines the complete set of legal obligations that apply.
Prohibited (Article 5)
Absolutely banned. Social scoring, real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions), cognitive behavioural manipulation, and untargeted scraping of facial images to build recognition databases. Penalty: €35M/7%.
High-risk (Articles 6–49)
Annex I product safety AI and Annex III domain AI (HR, credit, education, critical infrastructure, law enforcement, biometrics, migration and border control, administration of justice). Full Chapter III obligation chain: risk management, data governance, technical documentation, human oversight, conformity assessment, CE marking, registration.
Limited-risk (Article 50)
Specific transparency duties regardless of any other classification: chatbot disclosure, deep fake marking, emotion recognition disclosure, synthetic content machine-marking. These apply to ANY AI system that falls into these use cases.
Minimal-risk (Article 95)
Everything else. Spam filters, recommenders, inventory tools, content optimisation, image enhancement. No mandatory obligations under Chapter III. Voluntary codes of conduct available under Article 95.
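The tiers above are checked in a fixed order of severity, with Article 50 duties stacking on top of whichever tier applies. Here is a minimal sketch of that decision order in Python; the boolean fields are hypothetical stand-ins for a real legal assessment, not terms defined in the Act.

```python
# Illustrative sketch of the EU AI Act tiering order. The fields
# below are hypothetical stand-ins for a real legal assessment.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    prohibited_practice: bool = False       # any Article 5 practice
    annex_iii_domain: bool = False          # Annex III high-risk domain
    annex_i_safety_component: bool = False  # Annex I product safety AI
    interacts_with_persons: bool = False    # Article 50(1) chatbot trigger
    generates_deep_fakes: bool = False      # Article 50(4) trigger

def classify(s: AISystem) -> tuple[str, list[str]]:
    """Return (primary tier, cross-cutting Article 50 duties).

    Article 50 duties apply to ANY system in those use cases, so they
    are collected separately rather than treated as a fourth branch.
    """
    art_50 = []
    if s.interacts_with_persons:
        art_50.append("Article 50(1): disclose AI interaction")
    if s.generates_deep_fakes:
        art_50.append("Article 50(4): disclose deep fakes")

    if s.prohibited_practice:
        return "prohibited (Article 5)", art_50
    if s.annex_iii_domain or s.annex_i_safety_component:
        return "high-risk (Articles 6-49)", art_50
    if art_50:
        return "limited-risk (Article 50)", art_50
    return "minimal-risk (Article 95 voluntary codes only)", art_50

print(classify(AISystem("spam filter")))
print(classify(AISystem("support chatbot", interacts_with_persons=True)))
```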
Being classified as minimal-risk removes the Chapter III obligation chain. It does not create a complete exemption from the AI Act. Three sets of obligations remain relevant regardless of risk tier.
Any AI system designed to interact directly with natural persons must disclose that it is an AI — unless this is obvious from context. This applies to every chatbot, virtual assistant, and conversational AI regardless of whether it is also high-risk. The disclosure must be provided at the latest at the time of the first interaction. Exception: systems authorised by law to detect or prevent criminal offences.
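In practice, meeting the "at the latest at the time of the first interaction" requirement can be as simple as prefixing the first response in each session. A minimal sketch follows; the session handling and disclosure wording are illustrative assumptions, not prescribed by the Act.

```python
# A minimal sketch of surfacing the Article 50(1) disclosure at the
# first interaction. Session tracking and wording are assumptions.

DISCLOSURE = "You are chatting with an AI assistant, not a human."

_seen_sessions: set[str] = set()

def reply(session_id: str, answer: str) -> str:
    """Prefix the AI disclosure the first time a session receives output."""
    if session_id not in _seen_sessions:
        _seen_sessions.add(session_id)
        return f"{DISCLOSURE}\n\n{answer}"
    return answer

print(reply("s1", "Your order ships tomorrow."))  # includes disclosure
print(reply("s1", "Anything else?"))              # no repeat needed
```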
Deployers of AI systems that generate deep fakes (AI-generated or manipulated image, audio or video content that resembles real persons, places or events) must disclose the artificial origin. This applies to all deployers regardless of risk tier. There is an exception for law enforcement, and a limited exception for evidently artistic or satirical works: the existence of synthetic content must still be disclosed, but in a way that does not hamper the work's display. AI-generated text published to inform the public on matters of public interest must also be disclosed, unless the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
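Alongside this deployer-facing disclosure, providers must mark synthetic output in a machine-readable format (the "synthetic content machine-marking" duty noted above). Here is a toy illustration using PNG text chunks via Pillow; the chunk names are arbitrary choices for this sketch, and production systems would more likely adopt an industry standard such as C2PA content credentials.

```python
# Toy illustration of a machine-readable synthetic-content marker
# embedded as PNG metadata with Pillow. The chunk name "ai-generated"
# is an arbitrary choice for this sketch, not a field defined by the Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_synthetic(src: str, dst: str) -> None:
    """Re-save an image with a text chunk flagging artificial origin."""
    info = PngInfo()
    info.add_text("ai-generated", "true")
    with Image.open(src) as img:
        img.save(dst, pnginfo=info)

def is_marked(path: str) -> bool:
    """Check for the marker chunk when re-ingesting content."""
    with Image.open(path) as img:
        return getattr(img, "text", {}).get("ai-generated") == "true"
```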
All providers and deployers of AI systems, including minimal-risk systems, must under Article 4 take measures to ensure a sufficient level of AI literacy among staff and others who operate or use the systems on their behalf. The required level scales with technical knowledge, experience, education, training, and the deployment context.
Some AI systems are not minimal-risk; they are outside the scope of the AI Act entirely, under Article 2. These systems face no AI Act obligations at all, including Article 50 transparency requirements, unless they are also placed on the market or put into service in a context the exclusion does not cover.
Military, defence, national security (Art 2(3))
AI systems placed on the market, put into service, or used exclusively for military, defence or national security purposes — regardless of whether carried out by public or private entities. This exclusion is absolute and unconditional.
Pure scientific research and development (Art 2(6))
AI systems specifically developed and put into service solely for scientific research and development. Note: testing in real-world conditions is NOT excluded; that is a distinct activity governed by Articles 57 and 60.
Pre-market R&D and testing (Art 2(8))
Research, testing or development activities regarding AI systems or models before they are placed on the market or put into service. Testing in real-world conditions is explicitly carved out of this exclusion (i.e., real-world testing IS covered by the Act).
Purely personal non-professional use (Art 2(10))
Deployers who are natural persons using AI systems in the course of a purely personal non-professional activity. This is a deployer-only exclusion — providers placing those systems on the market are still subject to the Act.
Open-source systems (Art 2(12)) — with conditions
AI systems released under free and open-source licences are excluded — UNLESS they are placed on the market or put into service as high-risk AI systems or as systems falling under Article 5 (prohibited) or Article 50 (transparency). An open-source chatbot deployed publicly must still comply with Article 50(1).
Having no mandatory obligations does not mean having no pathway to signal responsible AI development. Article 95 establishes a framework for voluntary codes of conduct that can benefit minimal-risk AI providers and deployers.
What codes can cover (Art 95(2))
Voluntary application of Chapter III, Section 2 requirements (risk management, data governance, technical documentation, etc.), EU ethical guidelines for trustworthy AI, environmental sustainability including energy-efficient AI development, AI literacy among staff, inclusive and diverse AI design, and prevention of negative impact on vulnerable persons.
Who can draw up codes
Individual providers or deployers, organisations representing them, or combinations — including civil society and academia. Codes may cover one or more AI systems of similar intended purpose (Art 95(3)).
SME consideration (Art 95(4))
The AI Office and Member States must take SME and startup interests into account when facilitating code development. This means codes should not impose disproportionate burdens on smaller organisations.
Strategic value of participation
Participation in approved codes of conduct signals responsible AI practices to regulators, customers and partners. Article 57(1)(e) specifically cites voluntary application of Art 95 codes as a learning outcome facilitated by AI regulatory sandboxes — making code participation a natural complement to sandbox access.
No changes are proposed under COM(2025) 836 or COM(2025) 837 for this topic.
What counts as a minimal-risk AI system?
Minimal-risk AI systems are AI systems that do not fall into any of the three regulated categories: prohibited (Article 5), high-risk (Article 6 + Annex III), or limited-risk (Article 50). The vast majority of AI systems in commercial use — spam filters, customer service chatbots (with disclosure), recommendation engines, inventory optimisation tools, image editing software — are minimal-risk. They face no mandatory compliance obligations under Chapter III of the AI Act, and no registration requirements.
Do transparency obligations still apply to minimal-risk AI?
Yes. Article 50 transparency obligations apply regardless of whether a system is high-risk or minimal-risk. Article 50(1) requires chatbot disclosure (informing users they are speaking with an AI) for all AI systems intended to interact directly with natural persons. Article 50(4) requires deep fake disclosure for all deployers of AI systems that generate deep fakes — whether or not those systems would otherwise be high-risk. These obligations are not part of Chapter III and therefore apply to minimal-risk AI.
Are open-source AI systems exempt from the AI Act?
Partially. Article 2(12) exempts AI systems released under free and open-source licences from most of the AI Act — but with critical exceptions. The exemption does NOT apply if the system is placed on the market or put into service as a high-risk AI system (as defined in Article 6), or as a system that falls under Article 5 (prohibited practices) or Article 50 (transparency obligations). If you deploy an open-source AI model as a chatbot interacting with the public, Article 50(1) still applies. If the open-source model forms part of a high-risk AI system you place on the market, Chapter III applies in full.
What are voluntary codes of conduct under Article 95?
Article 95 empowers the AI Office and Member States to encourage providers and deployers of any AI system (not just high-risk) to voluntarily adopt codes of conduct covering some or all of the Chapter III, Section 2 requirements. Codes can also cover environmental sustainability, AI literacy, inclusive design, and impact on vulnerable persons. Participation is purely voluntary — but codes may cover topics such as risk management practices, technical robustness, bias mitigation, and transparency. The AI Office has specific requirements to consider SME and startup needs when facilitating code development.
Can a minimal-risk classification change later?
Yes. Classification is based on the system's actual intended purpose and deployment context — not just how you originally categorised it. If you initially deploy a system as minimal-risk but then substantially modify it, deploy it in a high-risk context listed in Annex III, or integrate it into a product where it becomes a safety component, you must re-assess the classification. Article 6(3) allows a narrow self-declaration that an Annex III system is not high-risk — but that declaration has strict conditions and documentation requirements. Misclassification carries the same penalty as non-compliance: up to €15M / 3% under Article 99(4).
Is My AI High-Risk? Full Checklist
All 8 Annex III domains — check whether your system triggers the Chapter III obligation chain.
AI Transparency Obligations (Article 50)
Full Article 50 guide: chatbot disclosure, deep fake marking, synthetic content obligations.
Article 5 Prohibited AI Practices
The 8 banned AI uses under Article 5 — absolute prohibitions with €35M/7% penalty.
What's Out of Scope (Article 2)
Military, R&D, personal use, open-source — the full exclusion list from Article 2.
Article 6(3) Derogation — Can I Opt Out of High-Risk?
The narrow self-declaration route for Annex III systems — strict conditions and documentation.
EU AI Act Regulatory Sandboxes (Arts 57–63)
How sandboxes work, SME priority access, voluntary code participation as a learning outcome.
Regumatrix analyses your AI system against every Annex III domain, all Article 5 prohibited practices, and Article 50 transparency triggers — returning your exact risk tier, which Article 50 obligations apply, and your fine exposure under Article 99. No credit card required.
Start free analysis