Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Annex III §5(a) — Public Services · €15M / 3% high-risk penalty · 2030 deadline for existing systems

EU AI Act for Public Sector & Government AI

Public authorities using AI to evaluate benefit eligibility, allocate social services, or assess healthcare entitlements are operating high-risk systems under Annex III §5(a). They face a mandatory fundamental rights impact assessment (FRIA), EU database registration, and, uniquely, an extended compliance window to 2 August 2030 for systems already in service. This guide covers what is in scope, what you must do, and when.

High-risk AI obligations apply to public authorities as deployers

The Art 99(4) penalty is up to €15 million or 3% of global annual turnover — whichever is higher — for high-risk obligation violations. Public authorities are not exempt. New systems deployed from 2 August 2026 must comply immediately; existing deployed systems must comply by 2 August 2030 under Art 111(2).
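The Art 99(4) exposure is simply the greater of the fixed amount and the turnover-based figure. A minimal sketch, purely illustrative and not legal advice (the function name is our own):

```python
def art_99_4_exposure(global_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on the Art 99(4) fine for high-risk
    obligation violations: the higher of EUR 15 million or 3% of
    worldwide annual turnover for the preceding financial year."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# At EUR 200M turnover, 3% is EUR 6M, so the EUR 15M floor applies.
print(art_99_4_exposure(200_000_000))    # 15000000
# At EUR 1B turnover, the 3% figure (EUR 30M) exceeds the floor.
print(art_99_4_exposure(1_000_000_000))  # 30000000.0
```

Note that for SMEs the lower of the two figures applies instead (Art 99(6)), so this sketch models only the default case.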

Not sure if your public sector AI system falls under Annex III §5(a)?

Regumatrix checks your system description against every Annex III domain and returns your risk tier, the exact obligations that apply, and your fine exposure under Article 99 — in about 30 seconds.

Check in 30 seconds — 3 free analyses

What counts as high-risk public sector AI (Annex III §5(a))

Annex III §5(a) covers: AI systems used by public authorities, or on behalf of public authorities, to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, and to grant, reduce, revoke, or reclaim those benefits and services.

In scope — typical examples

  • Benefits eligibility assessment (housing, social welfare, unemployment)
  • Healthcare service entitlement evaluation
  • Social care needs assessment tools
  • Automatic decisions to reduce or reclaim public benefit payments
  • Emergency first response dispatch and priority AI (§5(d))

Not in scope — common misconceptions

  • Back-office administrative automation with no decision effect on individuals
  • Statistical analysis tools that don't affect individual entitlements
  • AI that only prepares data for a human who makes the actual decision
  • Tax fraud detection AI (this falls under different provisions)
  • Procurement AI and contract analysis tools

The narrow exception in Art 6(3) allows a provider to argue a system is not high-risk even if it appears on Annex III — but only if it does not materially influence the outcome of individual decisions. Any system that performs profiling of natural persons is always considered high-risk, with no exception. Read the Art 6(3) derogation guide.

The 2030 deadline: what it means for existing public sector systems

Article 111(2) — Public authority extended deadline

High-risk AI systems already placed on the market or put into service before 2 August 2026, and intended for use by public authorities, must comply with the full EU AI Act by 2 August 2030 (52 months away). This window applies to both providers and deployers of those systems.

2 February 2025 — now in effect

Chapters I and II (Art 5 prohibited practices). No public sector AI can perform social scoring, mass biometric surveillance, or predictive policing — regardless of when it was deployed.

2 August 2026 — 4 months away

All high-risk AI Act obligations apply to NEW public sector AI systems placed on market or put into service from this date. No grace period for new procurement.

2 August 2030 — extended deadline for EXISTING systems

Existing public sector high-risk AI systems already deployed before 2 August 2026 must be fully compliant by this date. This covers the full obligation chain: risk management (Art 9), data governance (Art 10), technical documentation (Art 11), logging (Art 12), transparency (Art 13), human oversight (Art 14), accuracy/robustness (Art 15), FRIA (Art 27), and EU database registration (Art 49).
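The timeline rule above reduces to a single question: was the system placed on the market or put into service before 2 August 2026? A rough sketch, illustrative only and not legal advice (the function and constant names are our own):

```python
from datetime import date

# Art 113: high-risk obligations apply to new systems from this date.
NEW_SYSTEMS_DATE = date(2026, 8, 2)
# Art 111(2): extended deadline for existing public authority systems.
EXISTING_SYSTEMS_DEADLINE = date(2030, 8, 2)

def compliance_deadline(placed_on_market: date) -> date:
    """Rule of thumb for a public-authority high-risk AI system:
    systems already on the market or in service before 2 Aug 2026 get
    the extended Art 111(2) window; anything newer must comply from
    the day it is placed on the market."""
    if placed_on_market < NEW_SYSTEMS_DATE:
        return EXISTING_SYSTEMS_DEADLINE
    return placed_on_market  # no grace period for new systems

print(compliance_deadline(date(2024, 5, 1)))  # 2030-08-02
print(compliance_deadline(date(2027, 1, 1)))  # 2027-01-01
```

Remember that the Art 5 prohibitions are not covered by this window: they apply to every system from 2 February 2025 regardless of deployment date.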

Full high-risk obligation chain for public sector AI

Being high-risk under Annex III §5(a) triggers the full Chapter III Section 2 obligation chain. These apply to the provider of the system and, separately, to the public authority deploying it.

Art 9 · Risk Management System

An iterative process to identify, estimate, evaluate, and mitigate foreseeable risks throughout the system's lifecycle. It must be kept up to date.

Art 10 · Data Governance

Training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete. Sensitive data may be used only under strict conditions.

Art 11 · Technical Documentation (Annex IV)

17+ elements must be documented before market placement. SMEs may use simplified forms. Public procurement should require this documentation from vendors.

Art 12 · Record-Keeping & Logging

Automatic logs must be kept over the system's lifecycle; deployers retain logs for at least six months under Art 26(6).

Art 13 · Transparency & Instructions for Use

The provider must supply clear instructions for use, and deployers must follow them. A public authority procuring AI should contractually require this documentation.

Art 14 · Human Oversight

Competent, trained persons must be assigned to oversight. They must be able to understand, monitor, and where necessary override or halt the system.

Art 15 · Accuracy, Robustness & Cybersecurity

The system must meet its declared accuracy levels and be resilient against attempts to manipulate its outputs. This is particularly important for public service AI affecting vulnerable populations.

Obligations specific to public authority deployers

Beyond the standard deployer obligations in Art 26, public authorities face three additional requirements that private sector deployers do not face (or face to a lesser degree).

Art 27 · Fundamental Rights Impact Assessment (FRIA): mandatory

Art 27(1) requires FRIA from deployers that are bodies governed by public law — this covers virtually all government authorities. You must complete the FRIA before deploying the high-risk AI system.

The FRIA must cover: (a) a description of the deployer's processes where the system will be used; (b) the frequency and duration of use; (c) the categories of persons likely to be affected; (d) the specific risks of harm to those persons; (e) human oversight measures; and (f) measures to address those risks.

FRIA vs DPIA: If you have already conducted a GDPR Article 35 Data Protection Impact Assessment, the FRIA complements it — Art 27(4) says the FRIA shall complement the DPIA, not replace it. You need both.

Art 26(8) · EU AI database registration check: mandatory

Public authority deployers must comply with the registration obligations in Art 49. Before using any high-risk AI system, you must check that the provider has registered it in the EU database managed under Art 71. If the system is not registered, you must not use it and must inform the provider or distributor. This creates a procurement verification step: always confirm EU database registration before go-live.

Art 26(7) · Worker notification when AI is used at the workplace

Before putting any high-risk AI system into service at the workplace, public sector employers (as deployers) must inform workers' representatives and the affected workers. This follows applicable Union and national law on worker information and consultation. For public authorities running high-risk AI that affects their own staff (e.g., performance monitoring) this is a legal obligation, not optional.

Art 26(11) · Notify individuals subject to AI decisions

Deployers of Annex III high-risk AI that make or assist in decisions about natural persons must inform those persons that they are subject to the use of a high-risk AI system. For public benefit eligibility AI, this means the citizen being assessed must be told AI is being used.

PROPOSAL — not yet enacted law

COM(2025) 836 — What changes for public sector AI

Confirms the 2030 deadline (Art 111 amendment)

836 explicitly amends Art 111(2) to re-confirm the 2 August 2030 deadline for public authority systems placed on market before August 2026. This removes any ambiguity that existed in the original text — if 836 is enacted, the 2030 date is cemented in revised statutory language.

Local public authorities explicitly included in SME guidance (Art 96 amendment)

836 amends Art 96 to specifically require Commission guidelines to pay particular attention to the needs of local public authorities — not just SMEs and start-ups. This signals that guidance materials will be adapted for public sector contexts.

Fundamental rights authority access strengthened (Art 77 amendment)

836 amends Art 77 to require market surveillance authorities to grant data protection authorities and other fundamental rights bodies access to AI system documentation. This strengthens oversight of public sector AI in practice.

Is your public sector AI system in the scope of Annex III §5(a)?

Watch out for these grey areas — they trigger high-risk obligations:

  • AI that "recommends" a benefit decision but the caseworker always agrees
  • AI that filters which cases get human review vs. which are auto-processed
  • AI calculating "risk scores" for individuals in social care contexts
  • AI supplied by a GovTech vendor running "on behalf of" a public authority
Verify your risk classification — free

Note for GovTech vendors supplying AI to public authorities

If you supply an AI system that a public authority uses to evaluate benefit eligibility, you are the provider of a high-risk AI system under Art 6(2) and bear the full Chapter III Section 2 obligations. The public authority is the deployer.

Provider responsibility summary:

  • Build and maintain a risk management system (Art 9)
  • Prepare Annex IV technical documentation (Art 11)
  • Conduct conformity assessment — self-assessment (Annex VI) is permitted unless the system also falls under biometric/law-enforcement categories (Art 43)
  • Complete the EU Declaration of Conformity and affix CE marking (Art 47, Art 48)
  • Register the system in the EU AI database (Art 49)
  • Maintain post-market monitoring (Art 72)

Full provider obligations guide →

Public authority as deployer: the Art 26 obligation checklist

When a public authority procures and uses a high-risk AI system (rather than building it), it is a deployer under Art 26. Key obligations:

  • Use the system in accordance with the provider's instructions for use (Art 26(1))
  • Assign competent, trained persons for human oversight (Art 26(2))
  • Ensure input data is relevant and representative (Art 26(4))
  • Monitor operation and suspend use if a risk is identified (Art 26(5))
  • Retain automatically generated logs for at least six months (Art 26(6))
  • Inform workers before deploying AI at the workplace (Art 26(7))
  • Check EU database registration before use (Art 26(8))
  • Use Art 13 information to support the GDPR Art 35 DPIA (Art 26(9))
  • Notify individuals that they are subject to AI decision-making (Art 26(11))
  • Cooperate with competent authorities in investigations (Art 26(12))

No significant changes are proposed under COM(2025) 837 specifically for public sector AI obligations. COM(2025) 836 does include the deadline confirmation and Art 77/96 changes noted above.

Frequently asked questions

Which public sector AI systems are high-risk under the EU AI Act?
Annex III §5(a) lists one specific public authority use case as high-risk: AI systems used by public authorities or on their behalf to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, and to grant, reduce, revoke, or reclaim such benefits. This covers benefits eligibility AI, social services assessment tools, housing benefit calculators, and healthcare service entitlement systems. AI for creditworthiness assessment (§5(b)) and insurance pricing (§5(c)) are separate high-risk categories covered by other guides.
When do public authorities need to comply with the EU AI Act?
The main EU AI Act high-risk obligations apply from 2 August 2026. However, Article 111(2) gives public authorities a special extended deadline: providers and deployers of high-risk AI systems intended to be used by public authorities that were placed on the market or put into service before 2 August 2026 must comply by 2 August 2030. This 2030 deadline is for existing deployed systems only — new systems procured or deployed from August 2026 onwards must comply immediately.
Is a Fundamental Rights Impact Assessment (FRIA) always mandatory for public sector AI?
Yes, for two overlapping reasons. First, Article 27(1) requires FRIA from deployers that are bodies governed by public law — which covers all government and public authority deployers of any high-risk AI system under Article 6(2). Second, deployers of Annex III §5(b) and §5(c) AI systems must also conduct FRIA even if they are private entities providing public services. The FRIA must be completed before deployment and notified to the market surveillance authority. If you have already done a GDPR Article 35 DPIA, the FRIA complements it — it does not replace it.
Do public authority deployers need to register in the EU AI database?
Yes. Article 26(8) explicitly requires deployers that are public authorities or Union institutions to comply with the registration obligations in Article 49. They must check whether the high-risk AI system they plan to use is registered in the EU database before using it. If the system is not registered, they must not use it and must inform the provider or distributor. Providers register the system; public authority deployers register their use of it.
What happens if a public authority uses a non-compliant AI system?
Public authorities that deploy a non-compliant high-risk AI system face enforcement action from the national market surveillance authority, not from the AI provider. They are also subject to the Article 99(4) penalty structure: up to €15 million or 3% of global annual turnover, whichever is higher. For SMEs, including start-ups, the lower of the two figures applies (Art 99(6)), and each Member State determines to what extent administrative fines may be imposed on its own public authorities and bodies (Art 99(8)). Article 26(5) also requires deployers to suspend use and report to the market surveillance authority if they reasonably consider the system presents a risk.

Related compliance guides

Fundamental Rights Impact Assessment (Art 27) · AI Deployer Obligations (Art 26) · EU Database Registration (Arts 49 & 71) · Compliance Timeline 2025–2030 · Human Oversight (Art 14) · Is My AI High-Risk? Checklist

Check your public sector AI in 30 seconds

Regumatrix analyses your AI system against every Annex III domain and Art 5 prohibition. You get: your risk tier, exact Annex III classification, every obligation that applies, your fine exposure under Art 99, and whether the 2030 grace period applies to your system — in a cited report.

Start free — no credit card

3 free analyses included