Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Annex III §5(c) — Life & Health Insurance · €15M / 3% high-risk penalty · FRIA mandatory for deployers

EU AI Act for Insurance AI (Annex III §5(c))

AI systems used for risk assessment and pricing of natural persons in life insurance and health insurance are high-risk under Annex III §5(c) of the EU AI Act. Full Chapter III obligations apply to providers. Insurance companies deploying this AI must conduct a Fundamental Rights Impact Assessment — even as private entities. Health data handling creates simultaneous GDPR obligations.

High-risk classification: full Chapter III obligations from 2 August 2026

Violations of high-risk AI system obligations under Chapter III carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. Providers must complete conformity assessment before market placement. Deploying insurance companies also have direct obligations and face the same fine exposure for Art 26 violations.

Does your insurance AI fall under Annex III §5(c)?

Regumatrix maps your system description against all Annex III categories and Art 5 prohibitions. You get your classification, exact obligations, FRIA requirement analysis, conformity assessment route, and fine exposure in a cited 8-section report — in about 30 seconds.

Check in 30 seconds — 3 free analyses

What is in scope: Annex III §5(c) — Life & Health Insurance

Annex III §5(c) — HIGH-RISK

AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life insurance and health insurance.

In scope — typical examples

  • AI calculating life insurance premiums based on individual health data
  • AI assessing health insurance eligibility or risk profiles
  • AI tools for underwriting decisions in critical illness cover
  • AI that scores an individual's health risk to determine policy cost
  • AI predicting life expectancy to set annuity prices

Outside §5(c) scope

  • AI for property insurance or motor insurance pricing
  • AI for business/commercial insurance risk assessment
  • Fraud detection AI (unless it also prices individual risk)
  • Claims processing AI (post-contract, no pricing function)
  • AI used only for group/collective risk pricing (not natural persons individually)
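The scope boundary above can be sketched as a simple internal triage helper. This is an illustrative sketch only, not legal advice; the class and field names are hypothetical, and a real scoping decision needs legal review of the actual system description.

```python
# Minimal triage sketch for Annex III §5(c) scoping — illustrative only.
# Field names are our own assumption, not AI Act terminology.

from dataclasses import dataclass

@dataclass
class AISystem:
    insurance_line: str             # e.g. "life", "health", "property", "motor"
    prices_or_assesses_risk: bool   # does it perform risk assessment or pricing?
    targets_natural_persons: bool   # individuals, not only groups or businesses

def in_annex_iii_5c_scope(system: AISystem) -> bool:
    """True if the system matches the §5(c) wording: risk assessment and
    pricing of natural persons for life or health insurance."""
    return (
        system.insurance_line in {"life", "health"}
        and system.prices_or_assesses_risk
        and system.targets_natural_persons
    )

# A health-insurance pricer scoring individuals is in scope;
# a motor-insurance pricer is not, even though it prices individuals.
print(in_annex_iii_5c_scope(AISystem("health", True, True)))  # True
print(in_annex_iii_5c_scope(AISystem("motor", True, True)))   # False
```

Note how all three conditions must hold: group-only pricing (no individual scores) and non-life/health lines both fall outside §5(c).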

Distinction from Annex III §5(b) — creditworthiness

Annex III §5(b) separately covers AI for evaluating creditworthiness or establishing credit scores — that covers banking and lending AI. Annex III §5(c) is specifically insurance pricing and risk assessment for life and health products. They are distinct categories with the same obligation chain but different contexts. See the financial services AI guide for §5(b).

Full high-risk obligation chain (provider perspective)

An InsurTech or analytics provider must satisfy all of the following before placing an Annex III §5(c) system on the EU market.

Art 9

Risk Management System

Document all foreseeable risks including actuarial bias from training data, discriminatory outcomes affecting protected characteristics, and misuse in pricing against individuals with pre-existing conditions.

Art 10

Data Governance

Training datasets must be relevant, representative, and free of harmful bias. Particular scrutiny applies to historical insurance data that may reflect past discriminatory underwriting practices — and to health data that could encode socioeconomic proxies.

Art 11

Technical Documentation (Annex IV)

Detailed documentation of system design, data sources, model architecture, known limitations, and performance on specific population subgroups. Must be kept current and available to market surveillance authorities.

Art 12

Record-Keeping & Logging

Automatic event logging to enable audit trails. Insurers subject to inspection by national insurance supervisors and AI market surveillance authorities will need these logs to reconstruct individual pricing decisions.

Art 13

Transparency & Instructions for Use

Instructions must clearly describe the system's capabilities, limitations, the categories of input data the model relies upon, and any known performance gaps across demographic subgroups. Insurers need this to operate the system lawfully.

Art 14

Human Oversight

The system must allow trained human reviewers to understand and verify pricing outputs. For life and health insurance decisions affecting individuals, human review before final pricing decisions is strongly advisable.

Art 15

Accuracy, Robustness & Cybersecurity

Must perform accurately for its stated purpose. Health insurance pricing AI is particularly sensitive to adversarial manipulation — where input data could be gamed to obtain lower premiums. Cybersecurity protections are mandatory.

Conformity assessment: self-assessment is available

Like most Annex III categories, §5(c) insurance AI allows a self-assessment under Art 43(2) and the Annex VI procedure. No mandatory notified body is required (unlike biometric identification or law enforcement AI). However, if the AI system is also embedded in a medical device or medical software product subject to EU harmonisation legislation, a notified body assessment under that sectoral legislation may also be required.

Self-assessment route (Annex VI):

  1. Apply harmonised standards where available (check CENELEC/CEN AI standards)
  2. Prepare technical documentation (Annex IV)
  3. Complete internal control procedures
  4. Draw up EU Declaration of Conformity (Art 47)
  5. Affix CE marking (Art 48)
  6. Register the system in the EU AI database (Art 49)

Read the full conformity assessment guide →
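The six Annex VI steps above can be tracked as a simple ordered checklist. A sketch under the assumption that the step descriptions are our own paraphrases, not official wording:

```python
# The self-assessment route as a conformity checklist tracker — a sketch;
# step names paraphrase the guide's list, not the Regulation's text.

ANNEX_VI_STEPS = [
    "Apply harmonised standards where available",
    "Prepare Annex IV technical documentation",
    "Complete internal control procedures",
    "Draw up EU Declaration of Conformity (Art 47)",
    "Affix CE marking (Art 48)",
    "Register in the EU AI database (Art 49)",
]

def remaining_steps(completed: set[str]) -> list[str]:
    """Steps still open, preserving the prescribed order."""
    return [step for step in ANNEX_VI_STEPS if step not in completed]

done = {"Prepare Annex IV technical documentation"}
print(remaining_steps(done))  # five steps left, in order
```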

Fundamental Rights Impact Assessment — mandatory for insurance companies

Art 27(1) explicitly names §5(c) deployers

Article 27(1) imposes the FRIA on deployers that are bodies governed by public law or private entities providing public services, and on deployers of the high-risk AI systems referred to in points 5(b) and (c) of Annex III. An insurance company deploying a §5(c) life or health insurance pricing AI falls in the second group — regardless of whether it is a public or private entity. FRIA is mandatory.

What the FRIA must cover

  • Description of the deployer's processes using the AI
  • Assessment of the impact on fundamental rights
  • Populations and groups potentially affected
  • Expected benefits and risks
  • Risk mitigation measures planned
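The elements above can be drafted against a minimal document template with a completeness check. This is a sketch of our own design, not an official Art 27 form (the Commission template, once published, should be used instead).

```python
# The FRIA elements as a minimal draft template with a completeness check.
# Section names mirror the list above; the structure itself is hypothetical.

from dataclasses import dataclass, fields

@dataclass
class FRIADraft:
    deployer_processes: str = ""
    fundamental_rights_impact: str = ""
    affected_groups: str = ""
    benefits_and_risks: str = ""
    mitigation_measures: str = ""

def missing_sections(draft: FRIADraft) -> list[str]:
    """Names of sections still left empty in the draft."""
    return [f.name for f in fields(draft) if not getattr(draft, f.name).strip()]

draft = FRIADraft(deployer_processes="Underwriting triage for term-life quotes")
print(missing_sections(draft))  # four sections still empty
```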

Where to send the FRIA

Once the FRIA has been performed, its results must be notified to the national market surveillance authority (Art 27(3)). Where the underlying processing of special category data triggers GDPR obligations — which health insurance AI almost always does — the national data protection authority may also need to be engaged, for example via prior consultation under GDPR Art 36.

FRIA vs. GDPR DPIA

The FRIA is separate from — and additive to — any GDPR Article 35 Data Protection Impact Assessment. Under Art 27(4), where any FRIA obligation is already met through an existing DPIA, the FRIA complements that DPIA rather than repeating it. But the two documents address different legal obligations and must each be completed.

When to complete the FRIA

Before deploying the AI system. The FRIA is not a post-deployment retrospective — it must be done as part of procurement and onboarding of the AI tool. Some national authorities may require it as part of a conformity documentation package.

GDPR overlap: health data is special category data

Why health insurance AI triggers GDPR Art 9

Any AI that prices or underwrites health insurance based on individual health characteristics will process health data — which is special category personal data under GDPR Article 9. Processing health data requires either explicit consent or one of the Art 9(2) derogations (e.g. for substantial public interest under Art 9(2)(g)).

Key dual-compliance checklist:

  • ① Legal basis (GDPR Art 6 + Art 9(2)): Identify the Art 9(2) basis for health data processing. Explicit consent or substantial public interest is typically required.
  • ② DPIA (GDPR Art 35): Processing health data combined with profiling or automated decision-making almost always requires a DPIA. This is separate from the AI Act FRIA.
  • ③ Data minimisation: The AI should use only the health data actually necessary for the pricing function. Art 10 of the AI Act also requires dataset governance.
  • ④ Data subject rights: Individuals have GDPR rights to access, rectification, and objection. If the pricing model uses automated profiling, Art 22 GDPR applies — individuals have the right not to be subject to solely automated decisions with significant effects and to request human review.

Life insurance pricing typically also involves predictive profiling of longevity — which may process genetic data or family medical history. Genetic data is also special category under GDPR Art 9(1). Extra caution is required.
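The dual-compliance checklist above can be enforced as a pre-deployment gate that blocks go-live until every item is confirmed. A sketch, assuming hypothetical internal flag names of our own invention:

```python
# Pre-deployment gate combining the AI Act and GDPR checklist items above.
# Flag names are hypothetical internal fields, not regulatory terms.

REQUIRED_CHECKS = [
    "gdpr_art9_legal_basis",    # explicit consent or an Art 9(2) derogation
    "gdpr_dpia_completed",      # GDPR Art 35
    "ai_act_fria_completed",    # AI Act Art 27
    "data_minimisation_review", # only health data necessary for pricing
    "art22_human_review_path",  # route to contest solely automated decisions
]

def deployment_blockers(checks: dict[str, bool]) -> list[str]:
    """Return the unmet items that should block go-live."""
    return [item for item in REQUIRED_CHECKS if not checks.get(item, False)]

status = {"gdpr_dpia_completed": True, "ai_act_fria_completed": True}
print(deployment_blockers(status))  # three blockers remain
```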

Deployer obligations (insurance companies using the AI)

Use in accordance with instructions for use

Deploy the AI strictly within its intended purpose as specified by the provider — for the risk/pricing functions it was designed and assessed for.

Art 26(1)

Assign human oversight personnel

Trained human reviewers must monitor AI-driven pricing outputs. They must be able to understand the AI's recommendations, verify them, and override where necessary.

Art 26(2)

Monitor for drift and unexpected outputs

Continuously monitor the AI system's operation. If the system produces systematically unusual pricing outputs — particularly for protected groups — suspend use and notify the provider and the market surveillance authority.

Art 26(5)
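One simple way to make this monitoring concrete is to compare mean quoted premiums per subgroup against a reference window. The 10% tolerance and group labels below are our own illustrative assumptions, not regulatory thresholds.

```python
# Illustrative drift check for Art 26(5)-style monitoring: flag subgroups
# whose mean quoted premium moved beyond a tolerance vs. a reference window.

from statistics import mean

def flag_premium_drift(reference: dict[str, list[float]],
                       current: dict[str, list[float]],
                       tolerance: float = 0.10) -> list[str]:
    """Groups whose mean premium shifted more than `tolerance` (default 10%)."""
    flagged = []
    for group, ref_prices in reference.items():
        cur_prices = current.get(group, [])
        if not cur_prices:
            continue  # no current quotes for this group this window
        ref_mean, cur_mean = mean(ref_prices), mean(cur_prices)
        if abs(cur_mean - ref_mean) / ref_mean > tolerance:
            flagged.append(group)
    return flagged

ref = {"age_30_40": [50.0, 55.0], "age_60_70": [120.0, 130.0]}
cur = {"age_30_40": [52.0, 54.0], "age_60_70": [150.0, 160.0]}
print(flag_premium_drift(ref, cur))  # ['age_60_70']
```

A real deployment would use distribution-level tests and protected-characteristic proxies rather than group means, but the escalation logic — flag, suspend, notify provider and authority — sits on top of a check like this.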

Retain logs for at least 6 months

Automatically generated logs from the AI system must be kept for at least 6 months. These are the audit trail for regulatory inspections and individual complaints.

Art 26(6)
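The six-month floor translates directly into a retention rule: a log may be pruned only once it is older than the minimum period. A sketch using the stdlib; in practice many insurers keep logs far longer for supervisory audits.

```python
# Retention rule sketch implementing the 6-month floor from Art 26(6).
# 183 days is our approximation of "six months"; pick per legal advice.

import datetime

RETENTION = datetime.timedelta(days=183)

def eligible_for_deletion(log_timestamp: datetime.datetime,
                          now: datetime.datetime) -> bool:
    """Only logs older than the retention floor may be pruned."""
    return now - log_timestamp > RETENTION

now = datetime.datetime(2027, 1, 1, tzinfo=datetime.timezone.utc)
old = datetime.datetime(2026, 5, 1, tzinfo=datetime.timezone.utc)
new = datetime.datetime(2026, 12, 1, tzinfo=datetime.timezone.utc)
print(eligible_for_deletion(old, now))  # True — past the 6-month floor
print(eligible_for_deletion(new, now))  # False — must still be retained
```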

Notify individuals that a high-risk AI system is being used

Where not already instructed by the provider via the instructions for use, inform individuals when a high-risk AI system is being used in relation to them — for example during the insurance application process.

Art 26(11)

Conduct FRIA before deployment

Art 27(1): Deployers of §5(c) AI must complete the Fundamental Rights Impact Assessment before deploying, and notify its results to the market surveillance authority (Art 27(3)).

Art 27

No changes are proposed under COM(2025) 836 or COM(2025) 837 specifically for Annex III §5(c) insurance AI obligations. The high-risk classification, FRIA requirement, and full Chapter III obligation chain for life and health insurance AI remain unchanged in both proposals.

Grey areas in insurance AI classification

Watch out — these scenarios commonly cause misclassification:

  • AI that aggregates group-level statistics but also computes individual scores at runtime
  • Telematics and wearable-data health monitoring AI feeding insurance pricing models
  • AI that makes underwriting recommendations (not final decisions) — still high-risk if output influences the pricing
  • AI used in bancassurance products that combines §5(b) creditworthiness and §5(c) health insurance functions

Verify your classification — free

Frequently asked questions

Which insurance AI systems are high-risk under the EU AI Act?
Annex III §5(c) covers AI systems used for risk assessment and pricing of natural persons in the case of life and health insurance. This includes AI tools used by insurers to assess individual health risk profiles, calculate premiums based on individual characteristics, determine life insurance eligibility or pricing, and similar underwriting functions. Note: Annex III §5(b) covers creditworthiness and credit scoring AI — a separate high-risk category covered by the financial services guide.
Is a Fundamental Rights Impact Assessment (FRIA) mandatory for insurance companies using this AI?
Yes, explicitly. Article 27(1) lists deployers of high-risk AI systems 'referred to in points 5(b) and (c) of Annex III' as subject to the mandatory FRIA obligation — even though they are private entities, not public bodies. Insurance companies using Annex III §5(c) AI for life or health insurance pricing must complete a FRIA before deploying the system and notify its results to the market surveillance authority (Art 27(3)).
How does GDPR interact with insurance AI under the EU AI Act?
Health insurance pricing AI is almost certain to process special category personal data (health data) under Article 9 GDPR. This requires either explicit consent or one of the limited Article 9(2) derogations. A GDPR Data Protection Impact Assessment (DPIA) may also be required under Article 35 GDPR. The EU AI Act FRIA does not replace the GDPR DPIA — they run in parallel. Article 27(4) of the EU AI Act does provide that where an existing DPIA already covers some FRIA obligations, the FRIA complements it, reducing duplication.
What conformity assessment route is available for insurance AI?
Self-assessment under Article 43(2) and the Annex VI procedure is available. Insurance AI under §5(c) does not require a mandatory notified body assessment (unlike biometric or law enforcement AI). The provider must verify compliance with harmonised standards where available, prepare Annex IV technical documentation, complete internal controls, draw up the EU Declaration of Conformity (Art 47), affix the CE marking (Art 48), and register in the EU AI database (Art 49).
What is the relationship between insurance AI and the financial services AI category?
Annex III separates two financial/insurance categories at §5: §5(b) covers creditworthiness assessment and credit scoring of natural persons, which captures banking and lending AI. §5(c) separately covers risk assessment and pricing for life and health insurance specifically. A single AI system used for both credit scoring and health insurance pricing would be captured by both. Systems purely for property and casualty insurance pricing (home, car, business) are not in §5(c) scope — that sub-category is limited to life and health insurance. Note: §5(d) separately captures AI for emergency dispatch prioritisation.

Related compliance guides

  • Financial Services AI — Annex III §5(b) Creditworthiness
  • FRIA Guide (Art 27 — Art 6(2) Deployers)
  • Conformity Assessment (Art 43 Self-Assessment)
  • AI Deployer Obligations (Art 26)
  • Is My AI High-Risk? (Full Checklist)
  • EU AI Act vs. GDPR — How They Interact

Check your insurance AI in 30 seconds

Regumatrix analyses your AI system description against all Annex III domains including §5(c) life and health insurance. You get your risk tier, classification, every obligation under Arts 9–15, FRIA requirement, GDPR overlap flags, conformity assessment route, and fine exposure — in a cited 8-section report.

Start free — no credit card

3 free analyses included