
Can I Claim My AI Is Not High-Risk? (Article 6(3))

If your AI system appears in Annex III, it is presumed high-risk. But Article 6(3) provides one narrow exit: four conditions, any one of which is enough — unless your system profiles people. Get it right and you avoid the full high-risk compliance regime. Get it wrong and regulators treat every missed obligation as a separate infringement.

Why the stakes are high

Providers of high-risk AI systems must comply with eight categories of obligations under Art 16: risk management, technical documentation, record-keeping, logging, conformity assessment, EU declaration of conformity, CE marking, and EU database registration.

If you claim the Art 6(3) derogation and regulators overrule it, every one of those obligations you failed to meet is an infringement. Under Art 99(4), non-compliance with provider obligations carries fines of up to €15,000,000 or 3% of worldwide annual turnover, whichever is higher; for SMEs, the lower of the two figures applies.
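To see what that ceiling means in numbers, here is a minimal Python sketch of the Art 99(4) cap as this guide describes it; the function name and the SME flag are our own illustration, not anything defined in the Regulation.

```python
def max_fine_eur(global_turnover_eur: float, is_sme: bool) -> float:
    """Illustrative Art 99(4) ceiling: EUR 15,000,000 or 3% of worldwide
    annual turnover. We assume the higher figure caps the fine for most
    undertakings and the lower figure caps it for SMEs."""
    fixed_cap = 15_000_000
    turnover_cap = 0.03 * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

print(max_fine_eur(2_000_000_000, is_sme=False))  # 60000000.0 (3% exceeds the fixed cap)
print(max_fine_eur(10_000_000, is_sme=True))      # 300000.0 (lower figure for SMEs)
```

So a large provider with €2 billion turnover faces a ceiling of €60 million, while an SME with €10 million turnover is capped at €300,000 rather than €15 million.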

Not sure whether your system qualifies for the derogation?

Regumatrix checks your system description against the full Article 6 test — including the four conditions, the profiling override, and your Annex III category — and returns your classification, every applicable obligation, and your fine exposure under Article 99.

Check my system — 3 free analyses

Which systems can use this derogation?

Article 6(3) is a derogation from Art 6(2) only. It applies to AI systems that appear in Annex III — the list of high-risk use-case categories such as credit scoring, CV screening, biometric identification, exam proctoring, and recidivism assessment.

CAN use the derogation: Annex III AI systems (Art 6(2))

Systems in domains like employment, education, essential services, law enforcement, and administration of justice — if one of the four conditions below is met and the system does not profile people.

CANNOT use the derogation: Annex I product safety AI (Art 6(1))

AI safety components in medical devices, vehicles, aviation, and other regulated products are always high-risk under Art 6(1). No derogation exists for these systems.

The four derogation conditions

Under Art 6(3), an Annex III AI system is not high-risk where it does not pose a significant risk of harm to health, safety, or fundamental rights — including by not materially influencing the outcome of decision-making. The first subparagraph applies where any one of the following conditions is fulfilled:

(a) Narrow procedural task

The system performs a narrow procedural task only. A tool that automatically formats job applications into a standard layout — with no judgment about the applicant's suitability — could qualify. The word "narrow" is load-bearing: a system with broad operational discretion will not meet this condition.

(b) Improves a previously completed human activity

The system improves the result of a task a human has already completed. A grammar and clarity checker applied to a doctor's completed clinical notes can qualify — the physician made the substantive judgment; the AI only polishes the output. The human decision must genuinely precede the AI's involvement.

(c) Detects decision-making patterns without replacing the human

The system detects patterns or deviations from prior decision-making and is not meant to replace or influence a previously completed human assessment without proper human review. An internal audit tool that flags unusual loan decisions for human examination can qualify, provided it does not override or re-open the original decision. Proper human review is a prerequisite, not an afterthought.

(d) Preparatory task to an Annex III assessment

The system performs only a preparatory task to an assessment relevant to an Annex III use case. A data aggregation tool that gathers financial information for a credit officer to review before making a loan decision can qualify. The key test: the AI prepares, a human decides.

Hard override: profiling always means high-risk

The final sentence of Art 6(3) reads: "an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."

Profiling means automated processing of personal data used to evaluate aspects of a person: performance at work, economic situation, health, reliability, behaviour, location, or movements. If your system does this — even as a secondary function — the four conditions above cannot override the classification. The system is high-risk.

A sentiment analysis tool that scores individual candidates based on their word choices in a recruitment video is profiling. A CV screening tool that ranks applicants by likelihood of success is profiling. A financial tool that builds individual risk scores from transaction histories is profiling.
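To pull the whole test together, here is a minimal Python sketch of the classification flow described above; the class and flag names are our own shorthand, and reducing each condition to a boolean is a deliberate simplification of what is ultimately a legal judgment.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    annex_i_safety_component: bool        # Art 6(1): regulated product safety AI
    in_annex_iii: bool                    # Art 6(2): listed high-risk use case
    performs_profiling: bool              # profiling of natural persons
    narrow_procedural_task: bool          # condition (a)
    improves_completed_human_work: bool   # condition (b)
    detects_patterns_with_review: bool    # condition (c)
    preparatory_task_only: bool           # condition (d)

def is_high_risk(s: SystemProfile) -> bool:
    """Sketch of the Article 6 classification flow as this guide describes it."""
    if s.annex_i_safety_component:
        return True    # always high-risk; no derogation exists (Art 6(1))
    if not s.in_annex_iii:
        return False   # outside Annex III; Art 6(2) and 6(3) do not apply
    if s.performs_profiling:
        return True    # hard override: profiling always means high-risk
    meets_a_condition = any([
        s.narrow_procedural_task,
        s.improves_completed_human_work,
        s.detects_patterns_with_review,
        s.preparatory_task_only,
    ])
    return not meets_a_condition  # any one condition defeats the presumption
```

Note the order: the profiling check runs before the four conditions are even consulted, which is exactly why a CV-ranking tool cannot escape via condition (d).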

What you must do when you claim the derogation

Under Art 6(4), a provider who concludes their system is not high-risk must document that assessment before placing the system on the market or putting it into service. National competent authorities can request this documentation at any time, and you must provide it.

Your documentation should include (a minimal record sketch follows this list):

  • Which Annex III category the system falls into and why it was initially considered potentially high-risk
  • Which condition(s) under Art 6(3)(a)–(d) you are relying on, with a concrete description of how your system meets the condition
  • Why the system does not materially influence the outcome of decision-making affecting natural persons
  • A specific confirmation that the system does not perform profiling of natural persons
  • The date of assessment and the identity of the person or team responsible for the determination
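As a concrete starting point, here is a minimal Python sketch of what such an internal record might hold; every field name is our own invention for illustration, since Art 6(4) prescribes the substance of the assessment, not its format.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Art6_4Assessment:
    """Hypothetical internal record of a non-high-risk assessment under Art 6(4)."""
    annex_iii_category: str             # e.g. "Annex III pt 4 (employment)"
    why_initially_in_scope: str         # why the system looked high-risk at first
    conditions_relied_on: list[str]     # subset of ["a", "b", "c", "d"]
    how_conditions_are_met: str         # concrete description, not a bare assertion
    no_material_influence_rationale: str
    confirms_no_profiling: bool         # must be True to rely on the derogation
    assessment_date: date
    assessor: str                       # person or team responsible

record = Art6_4Assessment(
    annex_iii_category="Annex III pt 4 (employment)",
    why_initially_in_scope="Operates inside a recruitment pipeline",
    conditions_relied_on=["a"],
    how_conditions_are_met="Reformats applications into a standard layout; "
                           "no judgment about applicant suitability",
    no_material_influence_rationale="No scoring, ranking, or filtering of applicants",
    confirms_no_profiling=True,
    assessment_date=date(2026, 4, 4),
    assessor="Product compliance team",
)
```

Keep the record versioned and retrievable: national competent authorities can ask for it at any time.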

Under current law, Art 49(2) also requires you to register yourself and the system in the EU AI database before market placement. The 836 proposal (below) would delete this requirement entirely.

Commission guidelines on Article 6: published February 2026

Article 6(5) required the Commission — after consulting the AI Board — to publish practical guidelines on how to apply Article 6, including a comprehensive list of examples of high-risk and not-high-risk use cases, by 2 February 2026. These guidelines are now available and provide concrete comparisons you can use when reasoning through your own system's classification.

PROPOSAL (not yet enacted law): COM(2025) 836

836 removes the EU database registration obligation for Art 6(3) systems

If COM(2025) 836 is enacted, two changes directly affect the derogation process:

  • 836 Art 1 pt 6: Article 6(4) is amended to remove the sentence that required registration under Article 49(2). Under the new text, you document the assessment internally, keep it available, and produce it to national competent authorities on request. No proactive submission anywhere.
  • 836 Art 1 pt 14: Article 49(2), the registration obligation itself, is deleted entirely. There will be no EU database entry required for systems relying on the Art 6(3) derogation. The documentation-keeping requirement under Art 6(4) remains.

Warning signals your system is probably still high-risk

  • Your system scores, ranks, or sorts individuals rather than just formatting or displaying data
  • Your system's output is the primary input to a consequential decision (loan, job offer, school place, benefit eligibility)
  • A human reviewing the output would find it difficult to disagree without a specific reason
  • The system draws on data about an individual from multiple sources to produce its output
  • Your terms of service describe the system as a "decision-support" or "recommendation" tool in Annex III territory
  • The system learns from its own outputs over time, creating feedback effects on future results

If any of these apply, your Art 6(3) argument is fragile. Regumatrix runs your system through the full Article 6 classification logic and tells you where you land — including every obligation that applies if you are high-risk.
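For a quick structured self-check, here is a minimal Python sketch that turns the signals above into a red-flag report; the signal keys and wording are our own condensation of the list, not terms from the Regulation.

```python
# Hypothetical self-check: each key condenses one warning signal above.
WARNING_SIGNALS = {
    "scores_or_ranks_individuals": "scores, ranks, or sorts individuals",
    "primary_input_to_decision": "output drives a consequential decision",
    "hard_to_disagree": "reviewers rarely depart from the output",
    "multi_source_personal_data": "combines personal data from multiple sources",
    "marketed_as_decision_support": "sold as decision-support in Annex III territory",
    "feedback_loop": "learns from its own outputs over time",
}

def red_flags(answers: dict[str, bool]) -> list[str]:
    """Return the warning signals that apply to the system."""
    return [WARNING_SIGNALS[key] for key, hit in answers.items() if hit]

flags = red_flags({
    "scores_or_ranks_individuals": True,
    "primary_input_to_decision": True,
    "hard_to_disagree": False,
    "multi_source_personal_data": False,
    "marketed_as_decision_support": False,
    "feedback_loop": False,
})
print(flags)  # two hits: revisit the assessment before relying on Art 6(3)
```

Any hit is a reason to revisit the assessment before relying on the derogation.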

Check my classification — 3 free analyses

Frequently asked questions

Does the Article 6(3) derogation apply to Annex I product safety AI?

No. Article 6(1) systems — AI used as safety components in medical devices, vehicles, aviation — are always high-risk. The derogation in Article 6(3) is only for Article 6(2) systems that appear in Annex III.

What counts as 'profiling' for the Article 6(3) override?

Profiling means automated processing of personal data used to evaluate aspects of a natural person — performance at work, economic situation, health, reliability, behaviour, location, or movements. If your system builds a picture of an individual from data, it is likely profiling. Annex III systems that do this are always high-risk, regardless of the other four conditions.

What happens if a regulator disagrees with my non-high-risk assessment?

If national competent authorities conclude your Article 6(3) assessment is wrong, your system is treated as high-risk from the start. Every high-risk obligation you failed to fulfil (risk management system, technical documentation, conformity assessment, human oversight controls, registration) becomes a separate infringement. Non-compliance with provider obligations under Article 16 carries fines of up to €15 million or 3% of global turnover under Article 99(4), whichever is higher; for SMEs, the lower figure applies.

Do I still need to register in the EU AI database under Article 6(3)?

Under current law, yes. Article 49(2) requires providers who conclude their system is not high-risk under Article 6(3) to register themselves and the system in the EU AI database before market placement. If COM(2025) 836 is enacted, Article 49(2) is deleted entirely and this registration requirement disappears.

Can the four derogation conditions change after the AI Act is in force?

Yes. Article 6(6) empowers the Commission to add new derogation conditions through delegated acts if evidence shows that certain Annex III AI systems do not pose significant risks. Article 6(7) allows the Commission to delete conditions where evidence shows the derogation undermines health, safety, or fundamental rights protection.

Related guides

  • Prohibited AI Practices (Art 5) →
  • Is My AI High-Risk? Full Checklist →
  • Conformity Assessment Guide →
  • Technical Documentation (Art 11) →
  • Risk Management System (Art 9) →
  • EU AI Act Guide for SMEs & Startups →

Get your Article 6 classification confirmed

Regumatrix checks your system against all 113 articles of the EU AI Act — including the Article 6(3) four-condition test and profiling override — and returns your risk tier, Annex classification, every obligation that applies, and fine exposure under Article 99. 8-section cited report, ~30 seconds, no credit card required.

Start free — 3 analyses included