Regumatrix — AI compliance powered by Regulation (EU) 2024/1689

This tool is informational only and does not constitute legal advice.

Grounded in Regulation (EU) 2024/1689 · verified 4 Apr 2026
Annex III §7 — Migration & Border Control · €15M / 3% high-risk penalty · Fundamental rights sensitive · 2 Aug 2026 — 4 months away

EU AI Act for Immigration & Border Control AI (Annex III §7)

AI used in migration risk assessment, asylum and visa application processing, and biometric identification at borders is high-risk under Annex III §7 of the EU AI Act — one of the most fundamental-rights-sensitive categories. Full Chapter III obligations apply to providers and deployers. Self-assessment is available for most §7 systems. COM(2025) 836 proposes extending the compliance deadline to 2 December 2027.

High-risk classification across all four §7 categories — from 2 August 2026

Violations carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. AI used in asylum decisions, visa assessments, or border crossing risk scoring affects fundamental rights — including the right to asylum (Art 18 Charter), non-refoulement, and the right to effective judicial protection (Art 47 Charter). The obligation chain is non-negotiable.
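The Art 99(4) cap — the higher of the fixed amount or the turnover percentage — can be sketched as a one-line calculation (the function name and parameter are illustrative, not AI Act terminology):

```python
def max_administrative_fine(global_annual_turnover_eur: float) -> float:
    """Art 99(4) cap: the higher of EUR 15 million or 3% of total
    worldwide annual turnover for the preceding financial year."""
    FIXED_CAP_EUR = 15_000_000
    return max(FIXED_CAP_EUR, 0.03 * global_annual_turnover_eur)

# A provider with EUR 2 billion turnover: 3% = EUR 60 million, which
# exceeds the EUR 15 million floor, so the turnover figure governs.
print(max_administrative_fine(2_000_000_000))
```

For smaller providers the fixed €15 million floor dominates: at €100 million turnover, 3% is only €3 million, so the cap stays at €15 million.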

Does your immigration or border AI fall under Annex III §7?

Regumatrix identifies whether your system falls under §7(a)–(d), checks for Art 5 biometric overlap, maps your conformity route, and lists every Chapter III obligation — cited to the exact article — in about 30 seconds.

Check in 30 seconds — 3 free analyses

What is in scope: Annex III §7 — four categories

Annex III §7 — HIGH-RISK (in so far as use is permitted under relevant Union or national law)

§7(a)

AI used by or on behalf of competent public authorities or by Union institutions as polygraphs or similar tools.

§7(b)

AI to assess a risk — including a security risk, a risk of irregular migration, or a health risk — posed by a natural person who intends to enter or who has entered into the territory of a Member State.

§7(c)

AI to assist competent public authorities in examining applications for asylum, visa, or residence permits and for associated complaints — including assessing the reliability of evidence.

§7(d)

AI for detecting, recognising, or identifying natural persons in the context of migration, asylum, or border control management — with the exception of the verification of travel documents.

Key qualifier: used by or on behalf of competent public authorities

All four §7 categories require that the AI is used by or on behalf of competent public authorities or by Union institutions, bodies, offices, or agencies. Private companies providing immigration advisory services that are not acting on behalf of a competent authority are not directly captured. Technology vendors supplying the AI to a public authority are the provider under the AI Act; the authority is the deployer.

In scope — examples

  • Automated passport control e-gate facial recognition systems (§7(d))
  • AI scoring the risk of irregular border crossing for each traveller (§7(b))
  • AI that analyses asylum claim documents for consistency and credibility (§7(c))
  • AI lie-detection tools used in border interviews (§7(a))
  • AI used by Frontex or national border agencies for surveillance and alerting (§7(b)/(d))

Outside §7 scope

  • AI for verifying travel documents (explicitly excluded from §7(d))
  • AI used by private immigration law firms not acting as agent of a competent authority
  • General analytics dashboards used by immigration ministries for policy planning (no individual-level assessment)
  • Translation AI used in asylum interviews (no risk assessment or decision function)
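The triage logic above — the public-authority qualifier, the four §7 categories, and the travel-document carve-out — can be expressed as a small decision function. This is an illustrative sketch only; the parameter names are assumptions for this example, not AI Act terminology, and real classification requires legal analysis:

```python
def annex_iii_7_categories(
    public_authority_use: bool,            # used by/on behalf of a competent authority
    polygraph_or_similar: bool,            # §7(a) polygraph-type tools
    individual_entry_risk: bool,           # §7(b) security/irregular-migration/health risk
    application_examination: bool,         # §7(c) asylum/visa/residence applications
    person_identification: bool,           # §7(d) detect/recognise/identify persons
    travel_document_verification_only: bool,
) -> list[str]:
    """Return the Annex III §7 categories a system plausibly falls under."""
    if not public_authority_use:
        return []  # key qualifier: all four categories require authority use
    hits = []
    if polygraph_or_similar:
        hits.append("7(a)")
    if individual_entry_risk:
        hits.append("7(b)")
    if application_examination:
        hits.append("7(c)")
    if person_identification and not travel_document_verification_only:
        hits.append("7(d)")  # document verification is expressly excluded
    return hits

# An e-gate facial recognition system run by a border agency:
print(annex_iii_7_categories(True, False, False, False, True, False))
```

Note how the function returns nothing for a private vendor not acting on behalf of an authority, and nothing for pure travel-document verification — mirroring the two exclusions described above.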

Why §7 is the most fundamental-rights-sensitive category

Charter rights directly at stake

Art 18 EU Charter — Right to asylum

AI that assists or influences asylum decisions directly engages the fundamental right to asylum. Errors — including systematic algorithmic errors — can lead to refoulement.

Art 47 EU Charter — Right to effective remedy

The Art 14 human oversight requirement has heightened significance here: individuals must be able to challenge decisions, and human review must be genuinely meaningful — not a rubber-stamp of AI outputs.

Art 21 EU Charter — Non-discrimination

Risk-scoring AI must be rigorously validated for discriminatory outcomes across nationality, ethnicity, and religion. Proxy discrimination through national or regional origin is a key risk.

Non-refoulement (Art 19 Charter, Art 3 ECHR)

AI error in asylum credibility assessment that leads to removal to a country where the person faces serious harm constitutes a breach of the absolute prohibition on refoulement.

Full high-risk obligation chain (provider perspective)

The obligations below apply to AI providers under the EU AI Act. Every requirement has heightened practical importance in the immigration and asylum context given the potential irreversible impact of errors on the persons affected.

Art 9

Risk Management System

Document all foreseeable risks — including the risk of systematic discriminatory outcomes, misclassification of protected characteristics as risk indicators, and adversarial manipulation of risk scores. For asylum AI, the risk of false negatives (failing to identify genuine asylum seekers) must be explicitly assessed as a fundamental rights risk, not just an accuracy metric.

Art 10

Data Governance

Training data must be carefully examined for historical bias. Immigration and border AI often trains on historical decision data that may reflect past discriminatory enforcement patterns. Data governance must ensure protected characteristics (nationality, ethnicity, religion) do not operate as de facto risk factors through proxy variables.

Art 11

Technical Documentation (Annex IV)

Must document performance metrics across nationality, gender, and age sub-groups. For asylum interview analysis AI, document the languages and dialects supported and known performance degradation for under-represented languages.

Art 12

Record-Keeping & Logging

Comprehensive automatic logging is essential for this category. Logs must be retained to enable individuals to understand and challenge decisions made with AI assistance. Note: confidential operational data linked to law enforcement and border functions has special handling rules under Art 78 of the AI Act.

Art 13

Transparency & Instructions for Use

Deploying authorities must be clearly instructed on the system's limitations, the conditions under which outputs may be unreliable, and the human review procedures required. The instructions must make clear that AI outputs are advisory — not determinative — for individual cases.

Art 14

Human Oversight

This is the most critical requirement in the §7 context. Human oversight must be substantive — not a rubber-stamp. Qualified case officers must be able to override AI recommendations and must receive meaningful information about the basis of AI outputs. Automated refusal of asylum or visa applications based solely on AI output without genuine human assessment is inconsistent with Art 14.

Art 15

Accuracy, Robustness & Cybersecurity

Performance must be validated on populations representative of those who will be assessed — diverse nationalities, languages, and socioeconomic backgrounds. Systems assessing re-entry risk or document consistency must be robust against adversarial inputs. Cybersecurity is critical: manipulation of a border risk-scoring system could systematically distort decisions at scale.

Conformity assessment and biometric AI crossover

Standard route: Art 43(2) — self-assessment for §7 systems

Under Art 43(2), AI systems in Annex III points 2 to 8 — including §7 immigration and border AI — must follow the internal control procedure (Annex VI). No notified body is required for the standard §7 conformity assessment. The provider conducts the assessment, prepares the technical documentation, and draws up the EU Declaration of Conformity before market placement.

Important exception: §7(d) biometric overlap with Annex III §1

Where a §7 system also constitutes a biometric AI system under Annex III §1 (e.g., a facial recognition system used at borders), Art 43(1) applies instead of Art 43(2). Under Art 43(1), where the system is intended for use by immigration or asylum authorities, the market surveillance authority acts as the notified body — not a commercial conformity assessment body. This ensures independent public oversight for the most rights-sensitive biometric identification uses.

Self-assessment route (Annex VI — for non-biometric §7 systems):

  1. Apply harmonised standards where available
  2. Prepare Annex IV technical documentation
  3. Complete internal control assessment against all Arts 9–15
  4. Draw up EU Declaration of Conformity (Art 47)
  5. Affix CE marking (Art 48)
  6. Register in the EU AI database (Art 49)
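The six-step sequence above is essentially a completeness check before market placement. A minimal sketch of tracking it programmatically — the step labels and the `ConformityFile` class are illustrative constructs for this example, not anything defined by the AI Act:

```python
from dataclasses import dataclass, field

# The six Annex VI internal-control steps, as summarised in this guide.
ANNEX_VI_STEPS = (
    "apply harmonised standards where available",
    "prepare Annex IV technical documentation",
    "complete internal control assessment (Arts 9-15)",
    "draw up EU Declaration of Conformity (Art 47)",
    "affix CE marking (Art 48)",
    "register in the EU AI database (Art 49)",
)

@dataclass
class ConformityFile:
    """Tracks which internal-control steps a provider has completed."""
    completed: set = field(default_factory=set)

    def complete(self, step: str) -> None:
        assert step in ANNEX_VI_STEPS, f"unknown step: {step}"
        self.completed.add(step)

    def ready_for_market(self) -> bool:
        # Placement on the market requires every step, not a subset.
        return self.completed == set(ANNEX_VI_STEPS)
```

The point of the sketch: self-assessment under Art 43(2) is conjunctive — skipping any one step (for example, database registration under Art 49) leaves the system non-compliant even if the rest of the file is in order.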

Article 5(1)(h) — real-time biometric identification at borders

Prohibited by default — with narrow exceptions

Art 5(1)(h) prohibits the use of real-time remote biometric identification (RTBI) systems in publicly accessible spaces by law enforcement. This prohibition applies at borders where RTBI is used in real time on persons in publicly accessible border areas.

The narrow exceptions in Art 5(1)(h)(i)–(iii) include:

  • (i) Targeted searches for missing persons — including victims of trafficking or kidnapping.
  • (ii) Prevention of specific, substantial, and imminent threats such as terrorist attacks.
  • (iii) Localisation or identification of a person suspected or convicted of one of the serious criminal offences listed in Annex II to the AI Act.

If border RTBI is used outside these exceptions — for example, systematic RTBI screening of all travellers entering a public border zone — it is prohibited under Art 5, irrespective of the §7 high-risk classification. The prohibition is a ceiling; high-risk classification assumes the use is lawful under the prohibition.

Deployer obligations (competent authorities)

Art 27

Conduct FRIA before deployment

Public bodies deploying high-risk AI must complete a Fundamental Rights Impact Assessment under Art 27(1). All competent public authorities operating immigration and border AI are in scope. The FRIA must document the impact on Charter rights — including asylum rights, non-discrimination, and the right to judicial protection.

Art 26(2)

Assign genuinely accountable human oversight

Trained case officers must have a meaningful ability to override AI outputs. Training must include instruction on the AI's known biases, performance limitations, and the correct procedures for challenging AI-driven recommendations. Nominal sign-off on AI outputs without genuine review is not compliant with Art 14 or Art 26(2).

Art 26(11)

Ensure individuals are informed that AI is used in their case

Where a high-risk AI system assists in processing an individual's application or assessing their risk, the individual must be notified that AI is being used — unless this would jeopardise law enforcement objectives. Art 26(11) places this duty on deployers where the provider has not already discharged it.

Art 26(6)

Retain logs and comply with access rights

Logs must be retained for at least six months. For §7 systems involving law enforcement or border control, Art 78 provides that technical documentation may remain within the premises of those authorities — but market surveillance authorities with appropriate clearance must be able to access it on request.

Art 73

Report serious incidents

Serious incidents involving §7 AI — including systematic processing errors, discriminatory outcomes, or security compromises — must be reported to the provider and the relevant market surveillance authority under Art 73. For AI affecting fundamental rights, incident reports trigger regulatory scrutiny.

PROPOSAL — not yet enacted law

COM(2025) 836: Deadline extended to 2 December 2027 (Annex III systems)

COM(2025) 836 proposes a new Art 113 point (d) that delays substantive high-risk AI obligations (Chapter III Sections 1–3) for all Annex III systems — including §7 immigration and border AI.

Current law deadline

2 August 2026 (4 months away)

General AI Act application — already enacted

Proposed fallback deadline

2 December 2027 (20 months away)

COM(2025) 836 — pending agreement

Common grey areas in immigration AI classification

  • AI for travel document verification (not §7 — explicitly excluded): checking the authenticity of a passport chip vs. identifying the person
  • AI used in airport security queues by private security contractors: depends on whether they act on behalf of a competent public authority
  • Predictive analytics for workforce planning in border agencies (not §7 — no individual risk assessment)
  • AI that screens language in asylum claims for internal consistency — likely §7(c) if it influences credibility assessment

Verify your classification — free

Frequently asked questions

Which immigration and border AI systems are high-risk under Annex III §7?
Annex III §7 covers four categories: (a) AI used by or on behalf of competent public authorities as polygraphs or similar tools; (b) AI to assess a risk — including security risk, risk of irregular migration, or health risk — posed by a natural person who intends to enter or has entered a Member State; (c) AI to assist in examining applications for asylum, visa, or residence permits and associated complaints, including assessing the reliability of evidence; and (d) AI for detecting, recognising, or identifying natural persons in the context of migration, asylum, or border control — except verification of travel documents. All four categories are high-risk when used by or on behalf of competent public authorities or by Union institutions, bodies, offices, or agencies.
Is a notified body required for immigration and border AI?
For most §7 systems, Article 43(2) applies: providers must follow the self-assessment procedure based on internal control as referred to in Annex VI. No notified body is required. However, an important crossover exists: if a §7 system also constitutes a biometric AI system under Annex III §1, Article 43(1) applies instead, requiring either notified body involvement or — specifically for systems intended for use by immigration or asylum authorities — the market surveillance authority acts as the notified body.
Who is the 'provider' for immigration AI — the government or the software vendor?
The EU AI Act provider is the entity that develops and places the AI system on the market or puts it into service. In immigration and border contexts, this is typically a government authority or agency that develops the AI in-house, or — more commonly — a technology company that builds and supplies an AI system to immigration or border agencies. Where a competent public authority deploys but does not develop the AI, it is a deployer under Art 26. Where the public authority develops AI for its own use only, it may be both provider and deployer.
How does Article 5(1)(h) — real-time biometric identification — overlap with §7 border AI?
Article 5(1)(h) prohibits the use of real-time remote biometric identification (RTBI) systems in publicly accessible spaces by law enforcement — subject to narrow exceptions. One of the exceptions, in Art 5(1)(h)(iii), covers RTBI for targeted searches in connection with serious crimes. At borders, competent authorities may also need to identify suspects, so the §7(d) category (detecting, recognising, or identifying persons in the migration, asylum, or border context) can overlap with RTBI use. However, §7(d) explicitly excludes the verification of travel documents — that use falls outside the high-risk classification. Where RTBI is used at a border in a way that falls within Art 5(1)(h), the strict prohibition and its narrow exceptions apply concurrently with the §7 high-risk regime.
When must immigration AI comply with the EU AI Act, and what does COM(2025) 836 propose?
Under current law, the general AI Act application date is 2 August 2026, so Annex III §7 systems must comply from that date. COM(2025) 836 proposes a new Art 113 point (d) that delays the substantive Chapter III Sections 1–3 obligations for Annex III systems: 6 months after a Commission decision confirming adequate compliance support, or — as a fallback — 2 December 2027. This proposal has not yet been enacted and is subject to Council and Parliament agreement.

Related compliance guides

  • Law Enforcement AI — Art 5 Prohibitions & Annex III §6
  • Biometric AI — Annex III §1 & RTBI Prohibition
  • Fundamental Rights Impact Assessment (Art 27)
  • Human Oversight Requirements (Art 14)
  • Conformity Assessment (Art 43 Self-Assessment Route)
  • COM(2025) 836 — Deadline Changes Overview

Check your immigration AI in 30 seconds

Regumatrix maps your system against §7(a)–(d), checks for Art 5 prohibitions and §1 biometric crossover, identifies your conformity route, and produces a cited compliance report — free to start.

Start free — 3 analyses included