AI used in migration risk assessment, asylum and visa application processing, and biometric identification at borders is high-risk under Annex III §7 of the EU AI Act — one of the most fundamental-rights-sensitive categories. Full Chapter III obligations apply to providers and deployers. Self-assessment is available for most §7 systems. COM(2025) 836 proposes extending the compliance deadline to 2 December 2027.
High-risk classification across all four §7 categories — from 2 August 2026
Violations carry the Art 99(4) penalty: up to €15 million or 3% of global annual turnover, whichever is higher. AI used in asylum decisions, visa assessments, or border crossing risk scoring affects fundamental rights — including the right to asylum (Art 18 Charter), non-refoulement, and the right to effective judicial protection (Art 47 Charter). The obligation chain is non-negotiable.
Does your immigration or border AI fall under Annex III §7?
Regumatrix identifies whether your system falls under §7(a)–(d), checks for Art 5 biometric overlap, maps your conformity route, and lists every Chapter III obligation — cited to the exact article — in about 30 seconds.
Check in 30 seconds — 3 free analyses
Annex III §7 — HIGH-RISK (in so far as use is permitted under relevant Union or national law)
(a) AI used by or on behalf of competent public authorities or by Union institutions as polygraphs or similar tools.
(b) AI to assess a risk — including a security risk, a risk of irregular migration, or a health risk — posed by a natural person who intends to enter or who has entered into the territory of a Member State.
(c) AI to assist competent public authorities in examining applications for asylum, visa, or residence permits and for associated complaints — including assessing the reliability of evidence.
(d) AI for detecting, recognising, or identifying natural persons in the context of migration, asylum, or border control management — with the exception of the verification of travel documents.
Key qualifier: used by or on behalf of competent public authorities
All four §7 categories require that the AI is used by or on behalf of competent public authorities or by Union institutions, bodies, offices, or agencies. Private companies providing immigration advisory services that are not acting on behalf of a competent authority are not directly captured. Technology vendors supplying the AI to a public authority are the provider under the AI Act; the authority is the deployer.
In scope — examples
Polygraph-style tools operated by border authorities; risk scoring of persons intending to enter a Member State (security, irregular migration, or health risk); AI-assisted examination of asylum, visa, or residence permit applications, including evidence reliability assessment; facial recognition used to detect or identify persons in border control management.
Outside §7 scope
Verification of travel documents (the express §7(d) carve-out); AI used by private immigration consultancies acting for their own clients rather than on behalf of a competent authority.
Charter rights directly at stake
Art 18 EU Charter — Right to asylum
AI that assists or influences asylum decisions directly engages the fundamental right to asylum. Errors — including systematic algorithmic errors — can lead to refoulement.
Art 47 EU Charter — Right to effective remedy
The Art 14 human oversight requirement has heightened significance here: individuals must be able to challenge decisions, and human review must be genuinely meaningful — not a rubber-stamp of AI outputs.
Art 21 EU Charter — Non-discrimination
Risk scoring AI must be rigorously validated for discriminatory outcomes across protected characteristics such as nationality, ethnicity, and religion. Proxy discrimination through national or regional origin is a key risk.
Non-refoulement (Art 19 Charter, Art 3 ECHR)
An AI error in asylum credibility assessment that leads to removal to a country where the person faces serious harm breaches the absolute prohibition on refoulement.
The obligations below apply to AI providers under the EU AI Act. Every requirement has heightened practical importance in the immigration and asylum context given the potential irreversible impact of errors on the persons affected.
Risk Management System
Document all foreseeable risks — including the risk of systematic discriminatory outcomes, misclassification of protected characteristics as risk indicators, and adversarial manipulation of risk scores. For asylum AI, the risk of false negatives (failing to identify genuine asylum seekers) must be explicitly assessed as a fundamental rights risk, not just an accuracy metric.
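To make that concrete, a minimal sketch of a machine-readable risk-register entry follows; the RiskEntry schema and its field names are illustrative assumptions, not a structure prescribed by Art 9.

```python
# Minimal sketch of a risk-register entry that records fundamental-rights
# impact alongside ordinary accuracy concerns. Schema is illustrative.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    affected_rights: list[str]          # Charter articles engaged
    severity: str                       # e.g. "critical"
    likelihood: str                     # e.g. "plausible"
    mitigations: list[str] = field(default_factory=list)

# False negatives recorded as a fundamental-rights risk, not a mere metric.
FALSE_NEGATIVE_RISK = RiskEntry(
    risk_id="RM-001",
    description="Credibility model under-scores genuine asylum claims "
                "(false negatives), especially for under-represented languages",
    affected_rights=["Art 18 Charter (asylum)", "Art 19 Charter (non-refoulement)"],
    severity="critical",
    likelihood="plausible",
    mitigations=["per-language FNR monitoring", "mandatory substantive human review"],
)
```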
Data Governance
Training data must be carefully examined for historical bias. Immigration and border AI often trains on historical decision data that may reflect past discriminatory enforcement patterns. Data governance must ensure protected characteristics (nationality, ethnicity, religion) do not operate as de facto risk factors through proxy variables.
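One practical check is to measure the statistical association between each candidate feature and a protected attribute before training. A minimal sketch, assuming a tabular dataset in pandas; the column names and the 0.3 flag threshold are hypothetical:

```python
# Minimal sketch: flag candidate proxy variables for a protected attribute
# using Cramér's V (0 = no association, 1 = perfect association).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Association strength between two categorical variables."""
    table = pd.crosstab(x, y)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim))) if min_dim > 0 else 0.0

def flag_proxies(df: pd.DataFrame, protected: str, candidates: list[str],
                 threshold: float = 0.3) -> dict[str, float]:
    """Return features whose association with the protected attribute
    exceeds the (illustrative) threshold and so warrant review."""
    scores = {c: cramers_v(df[c], df[protected]) for c in candidates}
    return {c: v for c, v in scores.items() if v >= threshold}

# e.g. flag_proxies(train_df, "nationality", ["postcode_region", "visa_route"])
```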
Technical Documentation (Annex IV)
Must document performance metrics across nationality, gender, and age sub-groups. For asylum interview analysis AI, document the languages and dialects supported and known performance degradation for under-represented languages.
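A minimal sketch of the kind of per-subgroup breakdown the technical documentation could include, assuming a pandas evaluation table with boolean ground-truth and prediction columns (the names y_true, y_pred, and the group columns are illustrative):

```python
# Minimal sketch: per-subgroup false-negative/false-positive rates with
# group sizes, suitable for inclusion in Annex IV documentation.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, g in df.groupby(group_col):
        positives, negatives = g[g["y_true"]], g[~g["y_true"]]
        rows.append({
            group_col: group,
            "n": len(g),  # support per subgroup
            "fnr": (~positives["y_pred"]).mean() if len(positives) else float("nan"),
            "fpr": negatives["y_pred"].mean() if len(negatives) else float("nan"),
        })
    # Surface the worst-served subgroups first.
    return pd.DataFrame(rows).sort_values("fnr", ascending=False)

# Repeat per documented dimension, e.g.:
# subgroup_error_rates(eval_df, "nationality")
# subgroup_error_rates(eval_df, "gender")
```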
Record-Keeping & Logging
Comprehensive automatic logging is essential for this category. Logs must be retained to enable individuals to understand and challenge decisions made with AI assistance. Note: confidential operational data linked to law enforcement and border functions has special handling rules under Art 78 of the AI Act.
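As one way to structure such logs, a minimal sketch of an append-only JSON decision record; the field names are illustrative assumptions, not a format mandated by Art 12:

```python
# Minimal sketch of a structured decision-log record. One line per
# AI-assisted decision lets the case later be reconstructed and challenged.
import json
import uuid
from datetime import datetime, timezone

def decision_record(case_ref: str, model_version: str, inputs_hash: str,
                    risk_score: float, recommendation: str,
                    reviewer_id: str, reviewer_action: str) -> str:
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_ref": case_ref,                # pseudonymised case reference
        "model_version": model_version,      # exact model version that ran
        "inputs_hash": inputs_hash,          # hash of the input snapshot
        "risk_score": risk_score,
        "ai_recommendation": recommendation,
        "reviewer_id": reviewer_id,          # accountable human reviewer
        "reviewer_action": reviewer_action,  # "accepted" | "overridden"
    }
    return json.dumps(record)  # append to write-once storage; retain >= 6 months
```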
Transparency & Instructions for Use
Deploying authorities must be clearly instructed on the system's limitations, the conditions under which outputs may be unreliable, and the human review procedures required. The instructions must make clear that AI outputs are advisory — not determinative — for individual cases.
Human Oversight
This is the most critical requirement in the §7 context. Human oversight must be substantive — not a rubber-stamp. Qualified case officers must be able to override AI recommendations and must receive meaningful information about the basis of AI outputs. Automated refusal of asylum or visa applications based solely on AI output without genuine human assessment is inconsistent with Art 14.
Accuracy, Robustness & Cybersecurity
Performance must be validated on populations representative of those who will be assessed — diverse nationalities, languages, and socioeconomic backgrounds. Systems assessing re-entry risk or document consistency must be robust against adversarial inputs. Cybersecurity is critical: manipulation of a border risk-scoring system could systematically distort decisions at scale.
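One simple robustness probe is to measure how far risk scores move under small random perturbations of numeric inputs. A minimal sketch; the scikit-learn-style predict_proba interface and the noise scale are assumptions:

```python
# Minimal sketch: worst-case score shift under small input perturbations.
# Large shifts suggest scores can be moved with minor, gameable input changes.
import numpy as np

def score_stability(model, X: np.ndarray, noise_scale: float = 0.05,
                    trials: int = 20, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    base = model.predict_proba(X)[:, 1]  # baseline risk scores
    worst = 0.0
    for _ in range(trials):
        noisy = X + rng.normal(0.0, noise_scale, size=X.shape)
        shift = np.abs(model.predict_proba(noisy)[:, 1] - base).max()
        worst = max(worst, shift)
    return float(worst)  # compare against a documented stability budget
```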
Standard route: Art 43(2) — self-assessment for §7 systems
Under Art 43(2), AI systems in Annex III points 2 to 8 — including §7 immigration and border AI — must follow the internal control procedure (Annex VI). No notified body is required for the standard §7 conformity assessment. The provider conducts the assessment, prepares the technical documentation, and draws up the EU Declaration of Conformity before market placement.
Important exception: §7(d) biometric overlap with Annex III §1
Where a §7 system also constitutes a biometric AI system under Annex III §1 (e.g., a facial recognition system used at borders), Art 43(1) applies instead of Art 43(2). Under Art 43(1), internal control (Annex VI) remains available where harmonised standards or common specifications have been applied in full; otherwise the notified-body procedure in Annex VII applies. Where immigration or asylum authorities are the provider or deployer of such a system, the market surveillance authority designated under Art 74(8) or (9) acts as the notified body rather than a commercial conformity assessment body, ensuring independent public oversight for the most rights-sensitive biometric identification uses.
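The route logic in this section can be summarised as a small decision function. A simplified sketch for illustration, not legal advice; the function name and flags are hypothetical:

```python
# Minimal sketch of the Art 43 conformity-route selection for §7 systems.
def conformity_route(is_annex_iii_7: bool,
                     also_annex_iii_1_biometric: bool,
                     harmonised_standards_fully_applied: bool) -> str:
    if not is_annex_iii_7:
        return "outside the scope of this guide"
    if also_annex_iii_1_biometric:
        # Art 43(1) governs systems that also fall under Annex III §1.
        if harmonised_standards_fully_applied:
            return "Art 43(1): internal control (Annex VI) available"
        return ("Art 43(1) + Annex VII: notified-body procedure; for "
                "immigration/asylum authorities the market surveillance "
                "authority acts as the notified body")
    return "Art 43(2): internal control (Annex VI); no notified body"
```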
Self-assessment route (Annex VI — for non-biometric §7 systems):
1. Verify that the quality management system complies with Art 17.
2. Examine the technical documentation (Annex IV) for completeness and accuracy.
3. Verify that the system's design, development, and post-market monitoring are consistent with that documentation.
4. Draw up the EU Declaration of Conformity (Art 47) and affix the CE marking (Art 48).
5. Register the system in the EU database (Art 49) before placing it on the market or putting it into service.
Prohibited by default — with narrow exceptions
Art 5(1)(h) prohibits the use of real-time remote biometric identification (RTBI) systems in publicly accessible spaces by law enforcement. This prohibition applies at borders where RTBI is used in real time on persons in publicly accessible border areas.
The narrow exceptions in Art 5(1)(h)(i)–(iii) are:
(i) targeted searches for specific victims of abduction, trafficking in human beings, or sexual exploitation, and searches for missing persons;
(ii) prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuinely foreseeable threat of a terrorist attack;
(iii) localisation or identification of a person suspected of one of the criminal offences listed in Annex II, punishable by a custodial sentence with a maximum of at least four years.
Each use additionally requires prior authorisation by a judicial authority or an independent administrative authority under Art 5(3).
If border RTBI is used outside these exceptions (for example, systematic real-time screening of all travellers entering a public border zone), it is prohibited under Art 5, irrespective of the §7 high-risk classification. The prohibition takes precedence: Annex III §7 applies only "in so far as use is permitted under relevant Union or national law", so a use that Art 5 prohibits is never merely high-risk.
Conduct FRIA before deployment
Public bodies deploying high-risk AI must complete a Fundamental Rights Impact Assessment under Art 27(1). All competent public authorities operating immigration and border AI are in scope. The FRIA must document the impact on Charter rights — including asylum rights, non-discrimination, and right to judicial protection.
Assign genuinely accountable human oversight
Trained case officers must have meaningful ability to override AI outputs. Training must include instruction on the AI's known biases, performance limitations, and the correct procedures for challenging AI-driven recommendations. Nominal sign-off on AI outputs without genuine review is not compliant with Art 14 or Art 26(2).
Ensure individuals are informed that AI is used in their case
Where a high-risk AI system makes or assists decisions about an individual's application or risk assessment, the deployer must inform that person that they are subject to the use of the system under Art 26(11). For law-enforcement uses, the information regime of Art 13 of Directive (EU) 2016/680 applies, which permits restrictions where disclosure would jeopardise law enforcement objectives.
Retain logs and comply with access rights
Logs must be retained for at least 6 months. For §7 systems involving law enforcement or border control, Art 78 provides that technical documentation may remain within the premises of those authorities — but market surveillance authorities with appropriate clearance must be able to access it on request.
Report serious incidents
Serious incidents involving §7 AI must be reported under Art 73: deployers inform first the provider and then the relevant market surveillance authority (Art 26(5)), and providers report to the market surveillance authority. The definition of a serious incident in Art 3(49) expressly includes an infringement of obligations under Union law intended to protect fundamental rights, so systematic processing errors, discriminatory outcomes, and security compromises can themselves be reportable incidents.
COM(2025) 836 proposes a new Art 113 point (d) that delays substantive high-risk AI obligations (Chapter III Sections 1–3) for all Annex III systems — including §7 immigration and border AI.
Current law deadline: 2 August 2026 (4 months away). General AI Act application date, already enacted.
Proposed fallback deadline: 2 December 2027 (20 months away). COM(2025) 836, pending agreement.
Common grey areas in immigration AI classification
Travel document checks: verification of travel documents is carved out of §7(d), but the same system crosses into scope once it detects or identifies persons beyond verifying a document.
Vendor vs. authority: the technology vendor supplying the AI is the provider; a private consultancy using AI only for its own clients, not on behalf of a competent authority, falls outside §7.
Biometric crossover: a §7(d) system that also falls under Annex III §1 switches from Art 43(2) self-assessment to Art 43(1).
Art 5 overlap: real-time remote biometric identification in publicly accessible border areas may be prohibited outright, regardless of high-risk classification.
Regumatrix maps your system against §7(a)–(d), checks for Art 5 prohibitions and §1 biometric crossover, identifies your conformity route, and produces a cited compliance report — free to start.
Start free — 3 analyses included