AI systems used for risk assessment and pricing of natural persons in life insurance and health insurance are high-risk under Annex III §5(c) of the EU AI Act. Full Chapter III obligations apply to providers. Insurance companies deploying this AI must conduct a Fundamental Rights Impact Assessment — even as private entities. Health data handling creates simultaneous GDPR obligations.
High-risk classification: full Chapter III obligations from 2 August 2026
Violations of high-risk AI system obligations under Chapter III carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. Providers must complete conformity assessment before placing the system on the market. Insurance companies deploying these systems also carry direct obligations under Art 26 and face the same fine exposure for violating them.
Does your insurance AI fall under Annex III §5(c)?
Regumatrix maps your system description against all Annex III categories and Art 5 prohibitions. You get your classification, exact obligations, FRIA requirement analysis, conformity assessment route, and fine exposure in a cited 8-section report — in about 30 seconds.
Check in 30 seconds — 3 free analyses

Annex III §5(c) — HIGH-RISK
AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life insurance and health insurance.
In scope — typical examples
Outside §5(c) scope
Distinction from Annex III §5(b) — creditworthiness
Annex III §5(b) separately covers AI for evaluating creditworthiness or establishing credit scores — i.e. banking and lending AI. Annex III §5(c) is specifically insurance pricing and risk assessment for life and health products. The two are distinct categories with the same obligation chain but different contexts. See the financial services AI guide for §5(b).
An InsurTech or analytics provider placing an Annex III §5(c) system on the market must satisfy all of the following before market placement.
Risk Management System
Document all foreseeable risks including actuarial bias from training data, discriminatory outcomes affecting protected characteristics, and misuse in pricing against individuals with pre-existing conditions.
Data Governance
Training datasets must be relevant, representative, and free of harmful bias. Particular scrutiny applies to historical insurance data that may reflect past discriminatory underwriting practices — and to health data that could encode socioeconomic proxies.
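As a concrete illustration of what such scrutiny can look like in practice, a deployer might run a simple disparate-impact screen over pricing outputs per subgroup. The sketch below is a minimal, assumed example — the subgroup labels, premiums, and the 1.1 disparity threshold are hypothetical, and the Act prescribes no particular metric:

```python
from collections import defaultdict

# Hypothetical pricing records as (subgroup, quoted premium) pairs.
RECORDS = [
    ("group_a", 100.0), ("group_a", 110.0), ("group_a", 105.0),
    ("group_b", 140.0), ("group_b", 150.0), ("group_b", 145.0),
]

def disparity_flags(records, threshold=1.1):
    """Flag subgroups whose mean premium exceeds the overall mean
    premium by more than `threshold` (a crude disparate-impact screen)."""
    by_group = defaultdict(list)
    for group, premium in records:
        by_group[group].append(premium)
    overall = sum(p for _, p in records) / len(records)
    return {
        group: round((sum(ps) / len(ps)) / overall, 3)
        for group, ps in by_group.items()
        if (sum(ps) / len(ps)) / overall > threshold
    }

print(disparity_flags(RECORDS))  # → {'group_b': 1.16}
```

A screen like this only surfaces candidates for investigation — a flagged ratio does not itself establish unlawful bias, and an unflagged dataset does not establish its absence.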
Technical Documentation (Annex IV)
Detailed documentation of system design, data sources, model architecture, known limitations, and performance on specific population subgroups. Must be kept current and available to market surveillance authorities.
Record-Keeping & Logging
Automatic event logging to enable audit trails. Insurers subject to inspection by national insurance supervisors and AI market surveillance authorities will need these logs to reconstruct individual pricing decisions.
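To make the logging duty concrete, here is a minimal sketch of one append-only audit record per pricing decision. The schema (field names, SHA-256 input hash, JSON-lines output) is an illustrative assumption — the Act mandates logging capability, not any particular format:

```python
import hashlib
import json
from datetime import datetime, timezone

def log_pricing_decision(applicant_id, inputs, premium, model_version, stream):
    """Append one auditable record per pricing decision so the decision
    can later be reconstructed for supervisors or complainants."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        # Hash the inputs so the exact input state is verifiable later
        # without copying special-category health data into the log.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "model_version": model_version,
        "premium": premium,
    }
    stream.write(json.dumps(entry) + "\n")
    return entry
```

One JSON object per line keeps the trail easy to search and hand over during an inspection, and hashing rather than storing raw inputs also supports GDPR data minimisation.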
Transparency & Instructions for Use
Instructions must clearly describe the system's capabilities, limitations, the categories of input data the model relies upon, and any known performance gaps across demographic subgroups. Insurers need this to operate the system lawfully.
Human Oversight
The system must allow trained human reviewers to understand and verify pricing outputs. For life and health insurance decisions affecting individuals, human review before final pricing decisions is strongly advisable.
Accuracy, Robustness & Cybersecurity
Must perform accurately for its stated purpose. Health insurance pricing AI is particularly sensitive to adversarial manipulation — where input data could be gamed to obtain lower premiums. Cybersecurity protections are mandatory.
Like most Annex III categories, §5(c) insurance AI allows a self-assessment under Art 43(2) and the Annex VI procedure. No mandatory notified body is required (unlike biometric identification or law enforcement AI). However, if the AI system is also embedded in a medical device or medical software product subject to EU harmonisation legislation, a notified body assessment under that sectoral legislation may also be required.
Self-assessment route (Annex VI):
Art 27(1) explicitly names §5(c) deployers
Article 27(1) imposes FRIA on two groups: (i) deployers that are bodies governed by public law, and (ii) deployers of high-risk AI systems referred to in points 5(b) and (c) of Annex III. An insurance company deploying a §5(c) life or health insurance pricing AI is in category (ii) — regardless of whether it is a public or private entity. FRIA is mandatory.
What the FRIA must cover
Where to send the FRIA
The FRIA must be provided to the national market surveillance authority on request (Art 27(4)). Where the processing involves special category data — as health insurance AI almost always does — the insurer must also notify the national data protection authority.
FRIA vs. GDPR DPIA
The FRIA is separate from — and additive to — any GDPR Article 35 Data Protection Impact Assessment. Art 27(2) allows the insurer to use information from an existing DPIA as input to the FRIA. But the two documents address different legal obligations and must each be completed.
When to complete the FRIA
Before deploying the AI system. The FRIA is not a post-deployment retrospective — it must be done as part of procurement and onboarding of the AI tool. Some national authorities may require it as part of a conformity documentation package.
Why health insurance AI triggers GDPR Art 9
Any AI that prices or underwrites health insurance based on individual health characteristics will process health data — which is special category personal data under GDPR Article 9. Processing health data requires either explicit consent or one of the Art 9(2) derogations (e.g. for substantial public interest under Art 9(2)(g)).
Key dual-compliance checklist:
Life insurance pricing typically also involves predictive profiling of longevity — which may process genetic data or family medical history. Genetic data is also special category under GDPR Art 9(1). Extra caution is required.
Use in accordance with instructions for use
Deploy the AI strictly within its intended purpose as specified by the provider — for the risk/pricing functions it was designed and assessed for.
Assign human oversight personnel
Trained human reviewers must monitor AI-driven pricing outputs. They must be able to understand the AI's recommendations, verify them, and override where necessary.
Monitor for drift and unexpected outputs
Continuously monitor the AI system's operation. If the system produces systematically unusual pricing outputs — particularly for protected groups — suspend use and notify the provider and the market surveillance authority.
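A minimal drift check might compare the recent mean output against a validated baseline, run per subgroup since drift confined to a protected group is exactly the case the monitoring duty targets. The 15% relative tolerance below is an arbitrary illustrative value, not a regulatory threshold:

```python
def drift_alert(baseline_mean, recent_premiums, tolerance=0.15):
    """Return True when recent mean output deviates from the validated
    baseline by more than `tolerance` (relative), signalling that use
    should be suspended pending review."""
    recent_mean = sum(recent_premiums) / len(recent_premiums)
    return abs(recent_mean - baseline_mean) / baseline_mean > tolerance

def groups_in_drift(baseline_by_group, recent_by_group, tolerance=0.15):
    """Apply the check per subgroup and list the groups that trip it."""
    return [
        group for group, baseline in baseline_by_group.items()
        if group in recent_by_group
        and drift_alert(baseline, recent_by_group[group], tolerance)
    ]
```

A tripped alert is the point at which the suspend-and-notify steps above come into play; the statistical check itself is only the trigger.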
Retain logs for at least 6 months
Automatically generated logs from the AI system must be kept for at least 6 months. These are the audit trail for regulatory inspections and individual complaints.
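Operationally, a retention policy can encode the six-month floor so that no purge job can undercut it. The sketch below is an assumed implementation; the entry shape (a `ts` datetime field) and the one-year default policy are illustrative choices, not requirements:

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION_DAYS = 183  # roughly the six-month statutory floor

def purgeable_entries(entries, now, retention_days=365):
    """Return log entries old enough to delete under the deployer's
    policy. Refuses any policy shorter than the six-month minimum."""
    if retention_days < MIN_RETENTION_DAYS:
        raise ValueError("retention policy undercuts the 6-month minimum")
    cutoff = now - timedelta(days=retention_days)
    return [e for e in entries if e["ts"] < cutoff]
```

Keeping logs longer than the floor is a policy choice that has to be balanced against GDPR storage-limitation duties.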
Notify individuals that a high-risk AI system is being used
Unless the provider's instructions for use already cover this, inform individuals when a high-risk AI system is being used in relation to them — for example during the insurance application process.
Conduct FRIA before deployment
Art 27(1): Deployers of §5(c) AI must complete the Fundamental Rights Impact Assessment before deployment. It must be available on request to the competent authority.
No changes are proposed under COM(2025) 836 or COM(2025) 837 specifically for Annex III §5(c) insurance AI obligations. The high-risk classification, FRIA requirement, and full Chapter III obligation chain for life and health insurance AI remain unchanged in both proposals.
Grey areas in insurance AI classification
Watch out — these scenarios commonly cause misclassification:
Regumatrix analyses your AI system description against all Annex III domains including §5(c) life and health insurance. You get your risk tier, classification, every obligation under Arts 9–15, FRIA requirement, GDPR overlap flags, conformity assessment route, and fine exposure — in a cited 8-section report.
Start free — no credit card3 free analyses included