Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Annex I §§11–12 (MDR / IVDR) · €15M / 3% high-risk penalty · Article 6(1) pathway · 2 Aug 2027 deadline (16 months away)

EU AI Act for Medical Device Software (SaMD)

AI used as a safety component in a medical device regulated under the Medical Devices Regulation (MDR, Reg 2017/745) or the In Vitro Diagnostics Regulation (IVDR, Reg 2017/746) is high-risk under Article 6(1) of the EU AI Act. This is a different — and more demanding — pathway than Annex III. Conformity assessment follows the MDR/IVDR notified body route, with AI Act Chapter III Section 2 requirements added on top. COM(2025) 836 proposes extending the compliance deadline to 2 August 2028.

Art 6(1) classification: full Chapter III obligations from 2 August 2027

Violations of high-risk AI obligations under Chapter III carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. Non-compliance with the MDR/IVDR conformity assessment procedure — including its AI Act dimensions — also constitutes a breach of those sectoral regulations with separate enforcement consequences.
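The "whichever is higher" mechanism can be sketched as a one-line calculation. This is an illustrative reading of the Art 99(4) ceiling described above, not legal advice; the function name and integer-euro convention are our own:

```python
def art_99_4_ceiling(global_annual_turnover_eur: int) -> int:
    """Maximum administrative fine under Art 99(4) EU AI Act:
    up to EUR 15 million or 3% of global annual turnover,
    whichever is HIGHER (illustrative calculation only)."""
    return max(15_000_000, global_annual_turnover_eur * 3 // 100)

# A provider with EUR 2bn turnover: 3% = EUR 60m, which exceeds the EUR 15m floor.
print(art_99_4_ceiling(2_000_000_000))  # 60000000
# A provider with EUR 100m turnover: 3% = EUR 3m, so the EUR 15m figure applies.
print(art_99_4_ceiling(100_000_000))    # 15000000
```

Note that for smaller providers the fixed €15M figure, not the turnover percentage, sets the ceiling.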

Is your medical AI high-risk under Art 6(1) or Annex III?

Regumatrix maps your system description against all classification rules — including the Art 6(1) Annex I pathway — and identifies your exact conformity route, obligation chain, and fine exposure in a cited 8-section report in about 30 seconds.

Check in 30 seconds — 3 free analyses

The Article 6(1) pathway — how it differs from Annex III

Most EU AI Act high-risk content focuses on Art 6(2) and Annex III — the eight sector-specific domains. But Article 6(1) is a separate, parallel pathway for AI embedded in products that already undergo mandatory third-party conformity assessment under EU product safety law.

Article 6(1) — HIGH-RISK (two conditions must both be met)

(a) The AI system is intended to be used as a safety component of a product — or is itself a product — covered by Union harmonisation legislation listed in Annex I of the EU AI Act; AND

(b) that product is required to undergo a third-party conformity assessment under that Annex I legislation.

Relevant Annex I entries for medical AI

  • Item 11 — MDR: Regulation (EU) 2017/745 on medical devices. Covers general medical devices across classes I, IIa, IIb, and III. Class IIa and above require notified body involvement, as do class I devices that are sterile or have a measuring function.
  • Item 12 — IVDR: Regulation (EU) 2017/746 on in vitro diagnostic medical devices. Covers class A (self-cert), B, C, D. Class B and above require notified body involvement for AI components.

In scope — typical SaMD examples

  • AI diagnostic imaging software (radiology, pathology, ophthalmology)
  • AI that generates clinical recommendations from patient data
  • AI-powered IVD software for detecting cancer biomarkers
  • AI monitoring ICU patient vitals and triggering clinical alerts
  • AI for surgical planning or robotic guidance within an MDR device
  • AI drug dosing calculators classified as a medical device

Outside Art 6(1) scope — may still trigger Annex III

  • Clinical decision support tools that are NOT classified as medical devices under MDR
  • Wellness or lifestyle AI apps (not medical devices)
  • Hospital administration AI (scheduling, bed management)
  • AI in class I MDR devices requiring no notified body assessment (self-cert)
  • Research/investigational AI not yet placed on the market

Note: AI systems that fail the Art 6(1) test — because they are not safety components, or because the device requires no third-party assessment — may still be high-risk under Annex III §5(a) (healthcare AI — AI used to make, assist, or influence clinical decisions, triage, or similar). Always check both pathways.
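The two-pathway check described above can be expressed as a short decision procedure. This is a simplified sketch of the classification logic, not a statutory test; all parameter names are illustrative, and a real assessment turns on the full legal definitions:

```python
def classify_medical_ai(is_safety_component: bool,
                        covered_by_annex_i: bool,
                        needs_third_party_assessment: bool,
                        annex_iii_5a_healthcare_use: bool) -> str:
    """Sketch of the Art 6(1) / Annex III two-pathway high-risk test.
    Parameter names are illustrative, not statutory terms."""
    # Art 6(1): conditions (a) and (b) must BOTH be met.
    if (is_safety_component and covered_by_annex_i
            and needs_third_party_assessment):
        return "high-risk (Art 6(1) / Annex I pathway)"
    # A system that fails Art 6(1) may still be caught by Annex III para 5(a).
    if annex_iii_5a_healthcare_use:
        return "high-risk (Art 6(2) / Annex III pathway)"
    return "not high-risk via these pathways"

# Class I self-certified device: Art 6(1)(b) fails, but a clinical
# decision-support use can still trigger Annex III.
print(classify_medical_ai(True, True, False, True))
# high-risk (Art 6(2) / Annex III pathway)
```

The key point the sketch encodes: the Annex III fallback must be checked even when the Art 6(1) test fails.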

Conformity assessment: MDR/IVDR procedure applies

Art 43(3) establishes the key rule: for AI systems covered by Annex I Section A legislation (MDR, IVDR, and others), the provider must follow the conformity assessment procedure required under that sectoral legislation — not the AI Act’s own internal control or Annex VII procedure. The MDR/IVDR notified body assessment therefore incorporates the AI Act requirements.

What Art 43(3) requires

  1. Follow the MDR/IVDR conformity assessment route (quality management system audit + technical documentation review) for your device class.
  2. The requirements of Chapter III Section 2 (Arts 9–15: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy) apply to the AI system and must form part of that MDR/IVDR assessment.
  3. The MDR/IVDR notified body is authorised to assess AI Act compliance, provided it meets the AI Act Article 31 requirements for notified body competence.
  4. If the sectoral legal act allows the manufacturer to opt out of third-party assessment (because all harmonised standards have been applied), the manufacturer may use that opt-out only if it has also applied harmonised standards covering all AI Act Section 2 requirements.

Art 8(2) — documentation integration (reduces duplication)

Where a product contains an AI system subject to both Annex I harmonisation legislation (MDR/IVDR) and AI Act requirements, providers may integrate AI Act testing, reporting, and documentation into their existing MDR/IVDR technical file and procedures. You do not need a parallel, separate AI Act documentation package — incorporate it into the technical file you already maintain.

Omnibus clarification (COM(2025) 836): the proposal revises Art 43(3) to explicitly allow MDR/IVDR notified bodies to assess quality management systems under the AI Act as part of the MDR/IVDR QMS audit. Where a system is covered by both Annex I and Annex III, the Annex I (MDR/IVDR) procedure takes precedence.

Full high-risk obligation chain (provider perspective)

Whether assessed via MDR, IVDR, or both, the provider of a high-risk medical AI must satisfy all Chapter III Section 2 obligations before market placement.

Art 9

Risk Management System

Establish and maintain a risk management system throughout the AI system's lifecycle. For medical device AI, this integrates with the MDR/IVDR risk management requirements under ISO 14971. Document all foreseeable risks to health and safety, including failure modes specific to AI models (dataset shift, distributional mismatch, adversarial inputs).

Art 10

Data Governance

Training, validation, and test data must be relevant, representative, and free from harmful bias. For medical AI, this means demonstrating performance across patient demographics (age, sex, ethnicity, comorbidities). Medical imaging AI must show performance is not degraded by differences in scanner models or acquisition protocols.

Art 11

Technical Documentation (Annex IV)

Comprehensive AI-specific documentation including system design, architecture, intended purpose, known limitations, and validation performance. This can be integrated into the MDR/IVDR technical file under Art 8(2). Must document the AI model, training methodology, and post-market monitoring plan.

Art 12

Record-Keeping & Logging

Automatic event logging must operate throughout the AI system's lifetime on the market, or at minimum for the log retention period defined by applicable MDR/IVDR rules. Logs must allow reconstruction of clinical recommendations and identify the input data used.
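What "allow reconstruction of clinical recommendations" could look like in practice: a structured, timestamped event record that pins down the model version, references the input data, and captures the output. The schema below is our own assumption for illustration; Art 12 prescribes no specific format:

```python
import datetime
import hashlib
import json

def log_inference_event(model_version: str, input_ref: str,
                        output: dict) -> str:
    """Illustrative Art 12-style event record: timestamped, identifies
    the input data used, and captures the recommendation so the clinical
    output can later be reconstructed. Schema is an assumption, not a
    statutory format."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        # Store a reference/hash of the input data, not the data itself,
        # to keep patient data out of the log stream.
        "input_sha256": hashlib.sha256(input_ref.encode()).hexdigest(),
        "recommendation": output,
    }
    return json.dumps(record)

entry = log_inference_event("seg-model-2.4.1", "study-001/series-3",
                            {"finding": "nodule", "confidence": 0.91})
```

Hashing the input reference rather than embedding raw data is one common way to reconcile traceability with data-minimisation obligations.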

Art 13

Transparency & Instructions for Use

The IFU must clearly describe AI capabilities, known performance limitations (especially across population subgroups), the nature and format of input data, and how outputs should be interpreted. Radiologists or clinicians using the AI must understand its intended operating conditions.

Art 14

Human Oversight

The system must allow appropriately qualified clinical professionals to understand, oversee, and override AI outputs. For diagnostic AI, this means the output must function as a decision-support tool: the clinician retains final responsibility. Automated systems that bypass human review are a red flag under both the AI Act and the MDR.

Art 15

Accuracy, Robustness & Cybersecurity

Must maintain performance across the input domain including edge cases. For medical AI, performance against the intended patient population must be validated on a sufficiently diverse test set. Cybersecurity is particularly critical — manipulation of AI inputs in medical settings could cause patient harm.

Compliance deadline: 2 August 2027 (current law)

Article 113(c) — deferred deadline for Art 6(1) systems

Article 113 of the EU AI Act grants a deferred deadline specifically for Art 6(1) systems: "Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027" — one year after the general August 2026 application date. This deferred deadline acknowledges the additional complexity of the MDR/IVDR conformity route.

What changes in the period August 2025 – August 2027

Chapter III Section 4, Chapter V (GPAI), Chapter VII, and Chapter XII apply from 2 August 2025, meaning notified body accreditation, market surveillance notification frameworks, governance structures, and GPAI obligations are already in place. The substantive high-risk requirements of Articles 9–15, by contrast, apply from 2 August 2027 for Art 6(1) systems.

PROPOSAL — not yet enacted law

COM(2025) 836: Deadline extended to 2 August 2028 (Annex I systems)

The Omnibus Simplification proposal COM(2025) 836 adds a new point (d) to Article 113 that fundamentally changes the compliance timeline for all high-risk AI — including medical device AI.

New Art 113 point (d) — proposed mechanism

Chapter III Sections 1, 2, and 3 (the high-risk AI classification and obligation rules) shall apply 12 months after a Commission decision confirming that adequate compliance support measures (harmonised standards, common specifications, guidelines) are available for Annex I systems. In the absence of that decision within the required timeframe — or where the resulting date is later — the obligations apply from 2 August 2028 for systems classified as high-risk under Art 6(1) and Annex I.

Current law deadline

2 August 2027 (16 months away)

Art 113(c) — already enacted

Proposed fallback deadline

2 August 2028 (28 months away)

COM(2025) 836 — pending agreement

Practical implication

If COM(2025) 836 is enacted, medical device AI providers would gain an additional year of preparation before substantive AI Act obligations apply. However, MDR/IVDR obligations continue unaffected: only the AI Act deadline shifts. Providers should not pause MDR/IVDR quality system work in anticipation of the extension. The proposal also clarifies the Art 43(3) conformity route for Annex I Section A systems and enables QMS integration between MDR/IVDR and AI Act requirements.

Dual compliance complexity: MDR/IVDR + AI Act

Common grey areas in medical AI classification:

  • AI that aids clinician decisions without qualifying as a medical device under MDR — could still be Annex III §5(a) healthcare AI
  • Class I MDR devices with no notified body requirement — Art 6(1)(b) not met, so not high-risk via that pathway (check Annex III)
  • AI as a standalone product (e.g., diagnostic app) vs. AI as a component within a device — both can be Art 6(1) if safety component condition is met
  • IVDR Class A devices with self-certification — same analysis: no notified body → Art 6(1)(b) not met
Get your classification in 30 seconds

Frequently asked questions

Which medical AI systems are high-risk under Article 6(1) of the EU AI Act?
An AI system is high-risk under Article 6(1) where two conditions are both met: (a) the AI is a safety component of a product — or is itself a product — covered by Union harmonisation legislation listed in Annex I; and (b) that product is required to undergo a third-party conformity assessment under that Annex I legislation. For medical AI, the relevant Annex I legislation is Regulation (EU) 2017/745 (MDR) at item 11 and Regulation (EU) 2017/746 (IVDR) at item 12. Most class IIa, IIb, and III medical devices — and corresponding IVDR devices — require notified body involvement, so their AI components will be high-risk under Art 6(1).
Does the EU AI Act add obligations on top of MDR/IVDR, or does it replace them?
The EU AI Act is additive — it does not replace MDR or IVDR requirements. Article 8(2) explicitly requires providers to comply with all applicable Union harmonisation legislation. However, Art 8(2) also allows integration: providers may incorporate AI Act testing, reporting, and documentation into their existing MDR/IVDR technical file, reducing duplication. A notified body designated under MDR/IVDR also has the power to assess AI Act compliance, so you do not need two separate notified body processes.
What is a 'safety component' of a medical device?
A safety component is an AI system whose failure or malfunctioning could endanger the health or safety of persons, or whose correct functioning is essential to maintaining safety. In the medical device context, this typically includes: AI that interprets medical images to produce diagnoses or clinical recommendations, AI that calculates drug dosing recommendations, AI that monitors patient vital signs and triggers clinical alerts, and AI used in surgical planning or robotic guidance. An AI used purely for administrative functions (e.g., appointment scheduling) within a device ecosystem is unlikely to be a safety component.
Which conformity assessment route applies to medical device AI?
Article 43(3) of the EU AI Act requires providers of AI systems covered by Annex I Section A legislation (which includes MDR and IVDR) to follow the conformity assessment procedure required under that sectoral legislation — not the AI Act's own self-assessment or Annex VII procedure. The MDR/IVDR notified body assessment therefore covers AI Act compliance as well. The requirements of Chapter III Section 2 (Arts 9–15) apply to those systems and must be part of the MDR/IVDR assessment. Article 8(2) also allows you to integrate AI Act documentation into your existing MDR/IVDR technical file.
When does the Article 6(1) compliance deadline apply, and what does COM(2025) 836 propose?
Article 113(c) of the current EU AI Act sets the compliance deadline for Article 6(1) systems — including medical device AI — at 2 August 2027. COM(2025) 836 (the Omnibus proposal) proposes a new point (d) to Article 113 that delays this further: for Annex I systems, the obligations in Chapter III Sections 1–3 would apply 12 months after a Commission decision confirming that adequate compliance support is available. In the absence of that decision in time, the fallback date is 2 August 2028. This is a proposal pending Council and Parliament agreement and is not yet law.

Related compliance guides

  • Healthcare AI — Annex III §5(a) Clinical Decision Support
  • Conformity Assessment (Art 43 — All Routes Explained)
  • Is My AI High-Risk? (Art 6(1) + Annex III Checklist)
  • COM(2025) 836 Omnibus — Full Overview
  • Technical Documentation Requirements (Annex IV)
  • Risk Management System — Step-by-Step Guide (Art 9)

Check your medical AI compliance in 30 seconds

Regumatrix analyses your system, identifies whether you face the Art 6(1) pathway or Annex III, maps your MDR/IVDR conformity route, and lists every Chapter III Section 2 obligation — cited to the exact article.

Start free — 3 analyses included