If your AI system appears in Annex III, it is presumed high-risk. But Article 6(3) provides one narrow exit: four conditions, any one of which is enough — unless your system profiles people. Get it right and you avoid the full high-risk compliance regime. Get it wrong and regulators treat every missed obligation as a separate infringement.
Providers of high-risk AI systems must comply with eight categories of obligations under Art 16: risk management, technical documentation, record-keeping, logging, conformity assessment, EU declaration of conformity, CE marking, and EU database registration.
If you claim the Art 6(3) derogation and regulators overrule it, every one of those obligations you failed to meet is a separate infringement. Under Art 99(4), non-compliance with provider obligations carries fines up to €15,000,000 or 3% of total worldwide annual turnover, whichever is higher; for SMEs, the lower of the two figures applies.
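To make that exposure concrete, here is a minimal sketch of the ceiling calculation, assuming the "whichever is higher" rule in Art 99(4) and the "whichever is lower" rule for SMEs in Art 99(6). The function name and inputs are illustrative, not anything prescribed by the Act:

```python
def art_99_4_ceiling(global_turnover_eur: float, is_sme: bool) -> float:
    """Illustrative maximum fine for non-compliance with Art 16 provider obligations."""
    fixed_cap = 15_000_000                       # EUR 15,000,000 (Art 99(4))
    turnover_cap = 0.03 * global_turnover_eur    # 3% of total worldwide annual turnover
    # Art 99(4): whichever is higher; Art 99(6): for SMEs, whichever is lower.
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(art_99_4_ceiling(600_000_000, is_sme=False))  # 18,000,000: 3% exceeds the fixed cap
print(art_99_4_ceiling(100_000_000, is_sme=True))   # 3,000,000: lower figure applies to SMEs
```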
Not sure whether your system qualifies for the derogation?
Regumatrix checks your system description against the full Article 6 test — including the four conditions, the profiling override, and your Annex III category — and returns your classification, every applicable obligation, and your fine exposure under Article 99.
Check my system — 3 free analyses

Article 6(3) is a derogation from Art 6(2) only. It applies to AI systems that appear in Annex III — the list of high-risk use-case categories such as credit scoring, CV screening, biometric identification, exam proctoring, and recidivism assessment.
CAN use derogation
Annex III AI systems — Art 6(2)
Systems in domains like employment, education, essential services, law enforcement, and administration of justice — if one of the four conditions below is met and the system does not profile people.
CANNOT use derogation
Annex I product safety AI — Art 6(1)
AI safety components in medical devices, vehicles, aviation, and other regulated products are always high-risk under Art 6(1). No derogation exists for these systems.
Under Art 6(3), an Annex III AI system is not high-risk where it does not pose a significant risk of harm to health, safety, or fundamental rights — including by not materially influencing the outcome of decision-making. The first subparagraph applies where any one of the following conditions is fulfilled:
Narrow procedural task
The system performs a narrow procedural task only. A tool that automatically formats job applications into a standard layout — with no judgment about the applicant's suitability — could qualify. The word "narrow" is load-bearing: a system with broad operational discretion will not meet this condition.
Improves a previously completed human activity
The system improves the result of a task a human has already completed. A grammar and clarity checker applied to a doctor's completed clinical notes can qualify — the physician made the substantive judgment; the AI only polishes the output. The human decision must genuinely precede the AI's involvement.
Detects decision-making patterns — does not replace the human
The system detects patterns or deviations from prior decision-making and is not meant to replace or influence a previously completed human assessment without proper human review. An internal audit tool that flags unusual loan decisions for human examination can qualify, provided it does not override or re-open the original decision. Proper human review is a prerequisite, not an afterthought.
Preparatory task to an Annex III assessment
The system performs only a preparatory task to an assessment relevant to an Annex III use case. A data aggregation tool that gathers financial information for a credit officer to review before making a loan decision can qualify. The key test: the AI prepares, a human decides.
The final sentence of Art 6(3) reads: "an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."
Profiling means automated processing of personal data used to evaluate aspects of a person: performance at work, economic situation, health, reliability, behaviour, location, or movements. If your system does this — even as a secondary function — the four conditions above cannot override the classification. The system is high-risk.
A sentiment analysis tool that scores individual candidates based on their word choices in a recruitment video is profiling. A CV screening tool that ranks applicants by likelihood of success is profiling. A financial tool that builds individual risk scores from transaction histories is profiling.
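Read together, Article 6 reduces to a short decision procedure: Annex III membership creates the presumption, any one of the four conditions rebuts it, and profiling overrides everything. A minimal sketch of that logic in Python (all field and function names are illustrative, not drawn from the Act):

```python
from dataclasses import dataclass

@dataclass
class SystemAssessment:
    # Illustrative fields; each answer would be justified in your Art 6(4) documentation.
    in_annex_iii: bool
    narrow_procedural_task: bool         # condition (a)
    improves_completed_human_work: bool  # condition (b)
    detects_patterns_with_review: bool   # condition (c)
    preparatory_task_only: bool          # condition (d)
    performs_profiling: bool             # final subparagraph of Art 6(3)

def is_high_risk_under_art_6_3(s: SystemAssessment) -> bool:
    if not s.in_annex_iii:
        return False  # Art 6(3) only concerns Annex III systems; Art 6(1) is separate
    if s.performs_profiling:
        return True   # profiling overrides all four conditions
    any_condition_met = (
        s.narrow_procedural_task
        or s.improves_completed_human_work
        or s.detects_patterns_with_review
        or s.preparatory_task_only
    )
    return not any_condition_met  # presumed high-risk unless one condition holds
```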
Under Art 6(4), a provider who concludes their system is not high-risk must document that assessment before placing the system on the market or putting it into service. National competent authorities can request this documentation at any time, and you must provide it.
Your documentation should include: the Annex III category the system falls under; the specific Art 6(3) condition you rely on and the evidence supporting it; an analysis showing the system does not perform profiling of natural persons; and the date of the assessment, which must precede market placement.
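A minimal record of that assessment might look like the sketch below. The Act prescribes no format, only that the assessment is documented before market placement; every field name and value here is illustrative, using the job-application formatter example from condition (a):

```python
# Illustrative Art 6(4) assessment record; structure and names are hypothetical.
assessment_record = {
    "system_name": "cv-intake-formatter",  # hypothetical example system
    "annex_iii_category": "employment and worker management",
    "condition_relied_on": "narrow procedural task, Art 6(3)(a)",
    "condition_rationale": (
        "Reformats job applications into a standard layout; "
        "makes no judgment about applicant suitability."
    ),
    "profiling_analysis": "No personal aspects are evaluated, predicted, or scored.",
    "assessment_date": "2025-01-15",  # must precede market placement
}
```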
Under current law, Art 49(2) also requires you to register yourself and the system in the EU AI database before market placement. The 836 proposal (below) would delete this requirement entirely.
Article 6(5) required the Commission — after consulting the AI Board — to publish practical guidelines on how to apply Article 6, including a comprehensive list of examples of high-risk and not-high-risk use cases, by 2 February 2026. These guidelines are now available and provide concrete comparisons you can use when reasoning through your own system's classification.
If COM(2025) 836 is enacted, the derogation process changes directly: most notably, Article 49(2) would be deleted, removing the requirement to register derogation claims in the EU database before market placement.
If any part of your assessment is shaky, whether the condition you rely on, the profiling analysis, or your Annex III category, your Art 6(3) argument is fragile. Regumatrix runs your system through the full Article 6 classification logic and tells you where you land — including every obligation that applies if you are high-risk.
Check my classification — 3 free analyses

Can Annex I product safety AI use the derogation?
No. Article 6(1) systems — AI used as safety components in medical devices, vehicles, aviation — are always high-risk. The derogation in Article 6(3) is only for Article 6(2) systems that appear in Annex III.
What counts as profiling under Article 6(3)?
Profiling means automated processing of personal data used to evaluate aspects of a natural person — performance at work, economic situation, health, reliability, behaviour, location, or movements. If your system builds a picture of an individual from data, it is likely profiling. Annex III systems that do this are always high-risk, regardless of the four conditions.
What happens if regulators reject my assessment?
If national competent authorities conclude your Article 6(3) assessment is wrong, your system is treated as high-risk from the start. Every high-risk obligation you failed to fulfil — risk management system, technical documentation, conformity assessment, human oversight controls, registration — becomes a separate infringement. Non-compliance with provider obligations under Article 16 carries fines up to €15 million or 3% of global turnover under Article 99(4), whichever is higher; for SMEs, the lower figure applies.
Do I still need to register in the EU database if I claim the derogation?
Under current law, yes. Article 49(2) requires providers who conclude their system is not high-risk under Article 6(3) to register themselves and the system in the EU AI database before market placement. If COM(2025) 836 is enacted, Article 49(2) is deleted entirely and this registration requirement disappears.
Can the list of derogation conditions change?
Yes. Article 6(6) empowers the Commission to add new derogation conditions through delegated acts if evidence shows that certain Annex III AI systems do not pose significant risks. Article 6(7) allows the Commission to delete conditions where evidence shows the derogation undermines health, safety, or fundamental rights protection.
Regumatrix checks your system against all 113 articles of the EU AI Act — including the Article 6(3) four-condition test and profiling override — and returns your risk tier, Annex classification, every obligation that applies, and fine exposure under Article 99. 8-section cited report, ~30 seconds, no credit card required.
Start free — 3 analyses included