
Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Risk Reduction Pathfinder

Your AI got classified high-risk.
Now what?

The EU AI Act assigns obligations based on risk level, but it also defines the conditions under which that classification can change. Before committing to months of conformity assessment, it's worth knowing which options genuinely apply to your system.

Run the free preflight check · How it works ↓

What high-risk actually means

The classification has real consequences

When an AI system gets flagged as high-risk under the EU AI Act, the instinct is usually to either panic or defer. Both are expensive in different ways. In practice, high-risk status means a documented compliance program: technical documentation, conformity assessment processes, registration in the EU's public AI database, human oversight mechanisms, incident logging requirements, and recertification every time the system changes substantially.

For a startup, that's a real and recurring cost. For a larger organisation, it compounds across every AI system in the portfolio. The regulation was designed to be proportionate: the compliance burden is meant to match the actual level of risk a given system poses in its specific context. That same principle of proportionality is also why the regulation contains structured mechanisms for questioning the classification you were assigned.

"The same regulation that creates the classification also provides the framework for challenging it."

The mechanisms aren't loopholes. They're deliberate design choices, built in because the regulation's authors understood that the same technology can pose very different levels of actual risk depending on context, deployment environment, intended purpose, and who bears responsibility for it. Whether any of them apply to your system is a specific question. This tool answers it.

What gets evaluated

Six statutory paths. All of them checked.

Each path represents a mechanism written into the EU AI Act itself. The analysis evaluates all six against your specific system, grounded in the description you provide rather than a generic checklist.

Intended purpose

Reframe what the system is for

High-risk classification depends on documented intent, not just technical capability. A carefully specified intended purpose that reflects actual deployment — rather than the broadest conceivable interpretation — can shift the classification without changing a line of code.

Proportionate risk

Make the case for your specific context

Even within a recognised high-risk category, the regulation gives providers a structured way to demonstrate their specific implementation doesn't pose the level of risk the category assumes. This requires evidence and documentation — not a simple declaration.

Scope exclusions

You might not be in scope at all

The regulation was written with explicit carve-outs. Military and national security uses, purely personal activities, and certain research contexts fall outside its reach entirely. If any of these apply, the classification question may be secondary to the coverage question.

Model type

General-purpose models play by different rules

General-purpose AI carries a different — and often lighter — compliance regime than purpose-specific high-risk systems. If your system is genuinely general in nature, or if risk stems from how others deploy it rather than from the model itself, the regulatory picture changes materially.

Supply chain role

Provider and deployer are not the same thing

The regulation places heavier obligations on providers — those who develop and place AI on the market — than on deployers who integrate pre-built systems into their own context. If your actual relationship to the system is that of an integrator, the compliance burden may be substantially different.

Transitional timelines

Legacy systems may have more runway

Systems already in use before certain regulatory milestones may be eligible for extended compliance windows — particularly where they haven't been substantially modified. Not a permanent exemption, but it can change the timeline pressure significantly.
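The six mechanisms form a small, closed set, which is what makes an exhaustive check feasible rather than open-ended. As a rough sketch, here is how that set might be modelled in TypeScript; the identifiers are ours, not the product's actual schema:

```typescript
// Hypothetical model of the six statutory paths described above.
// These names are illustrative assumptions, not Regumatrix's schema.
type StatutoryPath =
  | "intended-purpose"       // reframe the documented intended purpose
  | "proportionate-risk"     // argue the specific context poses lower risk
  | "scope-exclusion"        // system may fall outside the Act entirely
  | "model-type"             // general-purpose AI follows a different regime
  | "supply-chain-role"      // deployer/integrator vs. provider obligations
  | "transitional-timeline"; // legacy systems may have extended windows

// An exhaustive check evaluates every path, with no early exit.
const ALL_PATHS: readonly StatutoryPath[] = [
  "intended-purpose",
  "proportionate-risk",
  "scope-exclusion",
  "model-type",
  "supply-chain-role",
  "transitional-timeline",
] as const;
```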

The process

How it works

1. Describe your system

What does it do, who uses it, in what deployment context, and what decisions or recommendations does it influence? The more specific you are, the more precise the analysis.

2. Get the free preflight grade

The tool assesses whether viable paths are likely to exist for your system. You get a green, amber, or red grade with the reasoning shown upfront so you can decide whether to proceed.

3. Run the full analysis

Each of the six paths evaluated in detail: feasibility reasoning, barriers identified, surviving obligations listed, and a per-path implementation roadmap. Exportable as PDF.

The output

What the analysis produces

Not a score. Not a summary slide. A structured analysis with five concrete sections you can bring to your legal team or use to direct compliance work.

Preflight grade

A free preflight assessment tells you whether viable paths are likely to exist for your system — green, amber, or red, with the reasoning surfaced upfront before you commit to a full analysis.

Barrier identification

Some paths are categorically unavailable. Systems touching prohibited practices have no reclassification argument — the prohibition is absolute. The analysis flags these first so you're not exploring closed doors.

Path-by-path evaluation

Each of the six statutory paths is assessed against your system with a feasibility signal — distinguishing options that need only documentation changes from those requiring structural changes to the system itself.

What survives every path

Some obligations apply regardless of classification outcome. AI literacy requirements, certain transparency rules, and data governance obligations survive most pathways. These are listed so nothing falls through the cracks.

Implementation roadmap

For each viable path, the analysis produces a concrete next-step plan: what needs to change, in what order, and what the compliance picture looks like after. Exportable as PDF.
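For readers who think in schemas, here is one way the five sections could fit together as a single structure. This is a sketch under our own naming assumptions, not the tool's real export format:

```typescript
// Hypothetical shape of the full analysis output described above.
// Field names are our own assumptions, not the actual export schema.
type PreflightGrade = "green" | "amber" | "red";

type Feasibility =
  | "documentation-only"  // path needs only documentation changes
  | "structural-change"   // path requires changing the system itself
  | "unavailable";        // path is categorically closed

interface PathEvaluation {
  path: string;           // one of the six statutory paths
  feasibility: Feasibility;
  reasoning: string;      // why this feasibility signal was assigned
  barriers: string[];     // e.g. prohibited-practice involvement
  roadmap: string[];      // ordered next steps, if the path is viable
}

interface PathfinderResult {
  grade: PreflightGrade;          // section 1: preflight grade
  hardBarriers: string[];         // section 2: categorically closed doors
  evaluations: PathEvaluation[];  // section 3: path-by-path evaluation
  survivingObligations: string[]; // section 4: duties that apply regardless
  // section 5: the roadmap lives on each viable PathEvaluation above
}
```

One design point worth noting: in this sketch the roadmap sits on each path rather than on the result as a whole, because the analysis described above produces a separate implementation plan per viable path.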

Honest description

A few things worth saying clearly

The law is the law. This tool doesn't change it, bend it, or find ways around it. It maps what the law itself already says about when a different classification might apply. That's a narrower claim than most tools make, and we think it's the right one.

What it does

  • An analysis grounded in the actual text of the EU AI Act
  • A systematic way to know which questions are worth asking
  • A free preflight check before you commit to a full analysis
  • A documented starting point you can bring to legal counsel
  • An honest answer — even when that answer is 'no viable paths exist'

What it doesn't do

  • A legal opinion or compliance certificate
  • A guarantee any path will hold under regulatory scrutiny
  • A substitute for qualified legal counsel
  • A way to make a genuinely risky system look benign

The regulation was written by humans, will be interpreted by human regulators, and is already being amended by human legislators. It will change. What won't change is that accountability for genuinely risky systems exists for a reason. A good analysis tool doesn't pretend otherwise.

Questions

Things people ask

Isn't this just a way to avoid EU AI Act compliance?

No. Every path the tool evaluates is written into the regulation itself. If your system qualifies for lower classification, it's because the law determined your specific context poses proportionately lower risk. If it doesn't qualify, the tool will tell you that too. The analysis doesn't manufacture compliance — it finds out whether it already exists.

Do I still need a lawyer after using this?

Almost certainly, especially if a viable path surfaces. The tool identifies which options are worth pursuing. A lawyer can validate whether the reasoning holds under scrutiny, help you document the derogation correctly, and advise on how regulators in your jurisdiction are likely to interpret it. Think of this as the research that makes your legal consultation significantly more focused.

What if no paths apply to my system?

The preflight check will surface that before you run the full analysis. Some AI systems are genuinely high-risk with no viable reclassification argument. Knowing that clearly — with documented reasoning behind it — is itself useful. It tells you where to direct your compliance investment instead of pursuing options that were never available.

How accurate is the analysis?

The analysis reflects the regulation as currently in force and tracks pending amendments, including the Digital Omnibus proposals. It's as accurate as the description you provide: vague inputs produce speculative conclusions, and the tool flags that distinction explicitly. It separates well-supported conclusions from those that depend on assumptions.

Start with the free preflight check

No setup. No forms. Describe your AI system and find out whether paths are worth exploring, before you spend anything.

Try the Pathfinder free

Free preflight check · No card required · Results in under a minute