Public authorities using AI to evaluate benefit eligibility, allocate social services, or assess healthcare entitlements are operating high-risk systems under Annex III §5(a). They face mandatory FRIA, EU database registration, and — uniquely — a 2 August 2030 compliance window for systems already in service. This guide covers what is in scope, what you must do, and when.
High-risk AI obligations apply to public authorities as deployers
The Art 99(4) penalty is up to €15 million or 3% of global annual turnover — whichever is higher — for high-risk obligation violations. Public authorities are not exempt. New systems deployed from 2 August 2026 must comply immediately; existing deployed systems must comply by 2 August 2030 under Art 111(2).
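The Art 99(4) exposure is simply the greater of the two figures. A minimal sketch of that arithmetic (the turnover figure in the example is hypothetical):

```python
def art_99_4_exposure(global_annual_turnover_eur: float) -> float:
    """Maximum fine under Art 99(4) EU AI Act: the higher of
    EUR 15 million or 3% of total worldwide annual turnover."""
    FIXED_CAP_EUR = 15_000_000
    return max(FIXED_CAP_EUR, 0.03 * global_annual_turnover_eur)

# Example: a body with EUR 2bn worldwide annual turnover
print(art_99_4_exposure(2_000_000_000))  # 60000000.0
```

For smaller turnovers the €15 million floor dominates: at €100 million turnover, 3% is only €3 million, so the exposure stays at €15 million.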
Not sure if your public sector AI system falls under Annex III §5(a)?
Regumatrix checks your system description against every Annex III domain and returns your risk tier, the exact obligations that apply, and your fine exposure under Article 99 — in about 30 seconds.
Check in 30 seconds — 3 free analyses

Annex III §5(a) covers: AI systems used by public authorities, or on behalf of public authorities, to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, and to grant, reduce, revoke, or reclaim those benefits and services.
In scope — typical examples
Not in scope — common misconceptions
The narrow exception in Art 6(3) allows a provider to argue a system is not high-risk even if it appears on Annex III — but only if it does not materially influence the outcome of individual decisions. Any system that performs profiling of natural persons is always considered high-risk, with no exception. Read the Art 6(3) derogation guide.
Article 111(2) — Public authority extended deadline
High-risk AI systems already placed on the market or put into service before 2 August 2026, and intended for use by public authorities, must comply with the full EU AI Act by 2 August 2030 (52 months away). This window applies to both providers and deployers of those systems.
2 February 2025 — now in effect
Chapters I and II (Art 5 prohibited practices). No public sector AI can perform social scoring, mass biometric surveillance, or predictive policing — regardless of when it was deployed.
2 August 2026 — 4 months away
All high-risk AI Act obligations apply to NEW public sector AI systems placed on market or put into service from this date. No grace period for new procurement.
2 August 2030 — extended deadline for EXISTING systems
Existing public sector high-risk AI systems already deployed before 2 August 2026 must be fully compliant by this date. This covers the full obligation chain: risk management (Art 9), data governance (Art 10), technical documentation (Art 11), logging (Art 12), transparency (Art 13), human oversight (Art 14), accuracy/robustness (Art 15), FRIA (Art 27), and EU database registration (Art 49).
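The timeline above reduces to a simple decision: the applicable deadline depends on when the system was placed on the market or put into service, and whether it is intended for public authority use. A hedged sketch of that logic (function and constant names are our own, not from the Act):

```python
from datetime import date

NEW_SYSTEMS_APPLY_FROM = date(2026, 8, 2)      # new systems: comply from day one
EXISTING_PUBLIC_DEADLINE = date(2030, 8, 2)    # Art 111(2) extended window

def compliance_deadline(placed_on_market: date, public_authority_use: bool) -> date:
    """Illustrative reading of the transitional rules discussed above:
    systems placed on market or put into service from 2 Aug 2026 must
    comply immediately; systems already in service before that date and
    intended for use by public authorities have until 2 Aug 2030."""
    if placed_on_market >= NEW_SYSTEMS_APPLY_FROM:
        return placed_on_market  # no grace period for new systems
    if public_authority_use:
        return EXISTING_PUBLIC_DEADLINE
    # Legacy systems outside public authority use follow other
    # transitional rules not covered by this guide.
    raise ValueError("outside the scope of this sketch")
```

So a system put into service in 2024 for a benefits agency has until 2 August 2030, while one procured in 2027 must be compliant at go-live.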
Being high-risk under Annex III §5(a) triggers the full Chapter III Section 2 obligation chain. These apply to the provider of the system and, separately, to the public authority deploying it.
Risk Management System
Iterative process: identify, estimate, evaluate, and mitigate foreseeable risks throughout the system's lifecycle. Must be kept up to date.
Data Governance
Training, validation, and testing data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete, and examined for possible biases. Special categories of personal data may only be used under strict conditions.
Technical Documentation (Annex IV)
17+ elements must be documented before market placement. SMEs get simplified forms. Public procurement should require this documentation from vendors.
Record-Keeping & Logging
Automatic logs must be kept for the system's lifecycle. Deployers retain logs for at least six months under Art 26(6).
Transparency & Instructions for Use
Provider must supply clear instructions for use. Deployers must use them. Public authority procuring AI must contractually require this documentation.
Human Oversight
Oversight must be assigned to competent, trained persons. They must be able to understand, monitor, and where necessary override or halt the system.
Accuracy, Robustness & Cybersecurity
System must reach declared accuracy levels. Must be resilient against attempts to manipulate outputs. Particularly important for public service AI affecting vulnerable populations.
Beyond the standard deployer obligations in Art 26, public authorities face three additional requirements that private sector deployers do not face (or face to a lesser degree).
Fundamental Rights Impact Assessment (FRIA) — mandatory
Art 27(1) requires FRIA from deployers that are bodies governed by public law — this covers virtually all government authorities. You must complete the FRIA before deploying the high-risk AI system.
The FRIA must cover: (a) a description of the deployer's processes where the system will be used; (b) the frequency and duration of use; (c) the categories of persons likely to be affected; (d) the specific risks of harm to those persons; (e) human oversight measures; and (f) measures to address those risks.
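Elements (a) through (f) can be tracked as a pre-deployment checklist; a minimal sketch (field names are our own shorthand, not statutory terms):

```python
from dataclasses import dataclass, fields

@dataclass
class FriaRecord:
    """One field per Art 27(1) element, (a) through (f)."""
    process_description: str       # (a) deployer processes where the system is used
    frequency_and_duration: str    # (b) frequency and duration of use
    affected_categories: str       # (c) categories of persons likely affected
    specific_risks_of_harm: str    # (d) specific risks of harm to those persons
    human_oversight_measures: str  # (e) human oversight measures
    risk_mitigation_measures: str  # (f) measures to address those risks

def fria_complete(record: FriaRecord) -> bool:
    """True only if every Art 27(1) element has been documented."""
    return all(getattr(record, f.name).strip() for f in fields(record))
```

A record with any blank element fails the check, mirroring the rule that the FRIA must be complete before the system is deployed.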
FRIA vs DPIA: If you have already conducted a GDPR Article 35 Data Protection Impact Assessment, the FRIA complements it — Art 27(4) says the FRIA shall complement the DPIA, not replace it. You need both.
EU AI database registration check — mandatory
Public authority deployers must comply with the registration obligations in Art 49. Before using any high-risk AI system, you must check that the provider has registered it in the EU database managed under Art 71. If the system is not registered, you must not use it and must inform the provider or distributor. This creates a procurement verification step: always confirm EU database registration before go-live.
Worker notification — if AI is used at the workplace
Before putting any high-risk AI system into service at the workplace, public sector employers (as deployers) must inform workers' representatives and the affected workers. This follows applicable Union and national law on worker information and consultation. For public authorities running high-risk AI that affects their own staff (e.g., performance monitoring) this is a legal obligation, not optional.
Notify individuals subject to AI decisions
Deployers of Annex III high-risk AI that make or assist in decisions about natural persons must inform those persons that they are subject to the use of a high-risk AI system. For public benefit eligibility AI, this means the citizen being assessed must be told AI is being used.
Confirms the 2030 deadline (Art 111 amendment)
COM(2025) 836 explicitly amends Art 111(2) to re-confirm the 2 August 2030 deadline for public authority systems placed on the market before August 2026. This removes any ambiguity that existed in the original text — if 836 is enacted, the 2030 date is cemented in revised statutory language.
Local public authorities explicitly included in SME guidance (Art 96 amendment)
836 amends Art 96 to specifically require Commission guidelines to pay particular attention to the needs of local public authorities — not just SMEs and start-ups. This signals that guidance materials will be adapted for public sector contexts.
Fundamental rights authority access strengthened (Art 77 amendment)
836 amends Art 77 to require market surveillance authorities to grant data protection authorities and other fundamental rights bodies access to AI system documentation. This strengthens oversight of public sector AI in practice.
Is your public sector AI system in the scope of Annex III §5(a)?
Watch out for these grey areas — they trigger high-risk obligations:
If you supply an AI system that a public authority uses to evaluate benefit eligibility, you are the provider of a high-risk AI system under Art 6(2) and bear the full Chapter III Section 2 obligations. The public authority is the deployer.
Provider responsibility summary:
When a public authority procures and uses a high-risk AI system (rather than building it), it is a deployer under Art 26. Key obligations:
Use the system in accordance with the provider's instructions for use — Art 26(1)
Assign competent, trained persons for human oversight — Art 26(2)
Ensure input data is relevant and representative — Art 26(4)
Monitor operation and suspend use if a risk is identified — Art 26(5)
Retain automatically generated logs for at least 6 months — Art 26(6)
Inform workers before deploying AI at the workplace — Art 26(7)
Check EU database registration before use — Art 26(8)
Use Art 13 information to assist with the GDPR Art 35 DPIA — Art 26(9)
Notify individuals that they are subject to AI decision-making — Art 26(11)
Cooperate with competent authorities on investigation — Art 26(12)
No significant changes are proposed under COM(2025) 837 specifically for public sector AI obligations. COM(2025) 836 does include the deadline confirmation and Art 77/96 changes noted above.
Regumatrix analyses your AI system against every Annex III domain and Art 5 prohibition. You get: your risk tier, exact Annex III classification, every obligation that applies, your fine exposure under Art 99, and whether the 2030 grace period applies to your system — in a cited report.
Start free — no credit card. 3 free analyses included.