
High-Risk AI Systems

What makes an AI system high-risk under the EU AI Act, and what requirements apply.


What is high-risk AI?

Under the EU AI Act, high-risk AI systems are those that pose a significant risk of harm to health, safety, or fundamental rights.

High-risk AI faces the strictest requirements in the regulation (short of outright prohibition).

The key question: Does the AI system’s purpose or application area fall into one of the high-risk categories defined in the regulation?

Categories

The EU AI Act defines high-risk AI in two ways:

1. Product safety legislation (Annex I)

AI systems that are products, or safety components of products, covered by EU harmonization legislation and subject to third-party conformity assessment:

  • Machinery
  • Toys
  • Lifts
  • Medical devices
  • In vitro diagnostic medical devices
  • Civil aviation
  • Motor vehicles
  • Railway systems
  • Marine equipment
  • Radio equipment

2. Specific use cases (Annex III)

AI systems used in these areas:

Biometrics

  • Remote biometric identification
  • Biometric categorization
  • Emotion recognition

Critical infrastructure

  • AI managing essential services (water, gas, electricity, traffic)

Education and vocational training

  • Determining access to education
  • Evaluating learning outcomes
  • Assessing appropriate education levels

Employment and worker management

  • Recruitment and selection
  • Performance evaluation
  • Promotion decisions
  • Contract termination
  • Task allocation based on behavior

Access to essential services

  • Credit scoring
  • Life and health insurance assessment
  • Emergency services dispatching

Law enforcement

  • Individual risk assessment
  • Polygraph and emotion detection
  • Evidence analysis
  • Crime prediction
  • Profiling

Migration, asylum, border control

  • Risk assessment
  • Document verification
  • Application examination

Administration of justice and democratic processes

  • Assisting judicial authorities in researching and interpreting facts and applying the law
  • Alternative dispute resolution, where outcomes produce legal effects
  • Influencing the outcome of an election or referendum

Requirements

High-risk AI systems must meet comprehensive requirements:

Risk management system

  • Identify and analyze known and foreseeable risks
  • Estimate and evaluate risks
  • Implement risk mitigation measures
  • Test and validate measures
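
The Act does not prescribe a format for this, but many teams keep a living risk register per system. A minimal sketch in Python, assuming a simple severity × likelihood scoring scheme (all names here are illustrative, not from the regulation):

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    """One entry in a risk register for an AI system."""
    description: str
    severity: Level
    likelihood: Level
    mitigation: str
    validated: bool = False  # set True once the mitigation has been tested

    @property
    def score(self) -> int:
        # Severity x likelihood; substitute whatever scheme your process uses.
        return self.severity.value * self.likelihood.value

register = [
    Risk("Model under-predicts risk for under-represented groups",
         Level.HIGH, Level.MEDIUM,
         mitigation="Rebalance training data; add per-group evaluation"),
]
open_items = [r for r in register if not r.validated]  # risks awaiting validation
```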

Data governance

  • Ensure training data relevance
  • Verify data is representative
  • Address bias in data
  • Document data choices
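
One concrete way to verify representativeness is to compare the distribution of a protected attribute in the training data against a reference population. A sketch, assuming a pandas DataFrame; the column name, reference shares, and tolerance are illustrative choices, not values from the Act:

```python
import pandas as pd

def representativeness_gaps(train: pd.DataFrame, column: str,
                            reference: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the training data deviates from the
    reference population by more than `tolerance` (absolute difference)."""
    observed = train[column].value_counts(normalize=True)
    return {
        group: observed.get(group, 0.0) - share
        for group, share in reference.items()
        if abs(observed.get(group, 0.0) - share) > tolerance
    }

# Example (illustrative reference shares, e.g. from census data):
# gaps = representativeness_gaps(df, "age_band",
#                                {"18-34": 0.30, "35-54": 0.35, "55+": 0.35})
```

Documenting the reference source and tolerance alongside the result covers the "document data choices" point at the same time.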

Technical documentation

  • General description
  • Design specifications
  • Monitoring, functioning, control
  • Risk management details
  • Changes made

Record keeping

  • Automatic logging of events
  • Traceability of decisions
  • Retention for appropriate periods
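
In practice, automatic logging means every automated decision leaves a structured, timestamped record that a human can trace later. A minimal sketch using Python's standard logging module; the field names are illustrative:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")
audit.addHandler(logging.FileHandler("decisions.log"))  # append-only audit file
audit.setLevel(logging.INFO)

def log_decision(model_version: str, inputs: dict, output, operator: str) -> str:
    """Write one traceable record per automated decision and return its ID."""
    record_id = str(uuid.uuid4())
    audit.info(json.dumps({
        "id": record_id,                        # lets a human trace one decision
        "ts": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,         # which model produced the output
        "inputs": inputs,                       # must be JSON-serializable
        "output": output,
        "operator": operator,                   # who or what invoked the system
    }))
    return record_id
```

Retention periods are not fixed by a single number in the Act; set them per system and record the rationale.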

Transparency

  • Clear instructions for use
  • Contact information
  • AI system capabilities and limitations
  • Human oversight requirements

Human oversight

  • Enable human understanding
  • Allow human intervention
  • Support human decision-making
  • Permit stopping the system
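
A common pattern for intervention and stopping is a confidence gate plus a kill switch: low-confidence outputs go to a human, and operators can halt automation entirely. A sketch; `ask_reviewer` is a hypothetical hook into your own review workflow, not a real API:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def ask_reviewer(d: Decision) -> Decision:
    """Hypothetical hook: route the output to a human review queue."""
    raise NotImplementedError("wire this to your review workflow")

KILL_SWITCH = False  # operators flip this to halt all automated decisions

def decide(model_output: Decision, confidence_floor: float = 0.9) -> Decision:
    """Act automatically only on confident outputs; otherwise defer to a human."""
    if KILL_SWITCH:
        raise RuntimeError("Automated decisions halted by operator")
    if model_output.confidence < confidence_floor:
        return ask_reviewer(model_output)  # human confirms, edits, or rejects
    return model_output
```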

Accuracy, robustness, cybersecurity

  • Appropriate levels for intended purpose
  • Resilience to errors and faults
  • Protection against attacks
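
Robustness can be exercised with simple perturbation tests: feed the model slightly corrupted inputs and check that accuracy does not collapse. A sketch, assuming a NumPy-compatible `predict` function and held-out test data:

```python
import numpy as np

def perturbation_accuracy(predict, X: np.ndarray, y: np.ndarray,
                          noise_scale: float = 0.01, trials: int = 5) -> float:
    """Average accuracy of `predict` on Gaussian-perturbed copies of X."""
    scores = []
    for _ in range(trials):
        X_noisy = X + np.random.normal(0.0, noise_scale, size=X.shape)
        scores.append(float(np.mean(predict(X_noisy) == y)))
    return float(np.mean(scores))

# A large gap between clean and perturbed accuracy is a robustness red flag:
# assert perturbation_accuracy(model.predict, X_test, y_test) > 0.9
```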

Common examples

Likely high-risk:

  • AI screening job applicants
  • AI assessing creditworthiness
  • AI grading student exams
  • AI routing emergency calls
  • AI managing power grid load

Likely NOT high-risk:

  • AI chatbots for customer service (unless making decisions about essential services)
  • AI writing assistance (like GitHub Copilot)
  • AI image generation
  • AI translation services
  • AI scheduling meetings

It depends:

  • AI analyzing customer data (depends on decisions it influences)
  • AI in HR systems (depends on what it’s used for)
  • AI in healthcare (depends on clinical role)

How to comply

Step 1: Identify high-risk AI

Review your AI inventory against Annex I and Annex III categories. For each AI system, ask:

  • Is it a safety component in regulated products?
  • Does it fall into one of the specific high-risk use cases?
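
For a large inventory, a keyword-based first pass can surface candidates for legal review. A sketch; the tags and keywords below are a simplified illustration, not the Annex III legal text, and every hit still needs human judgment:

```python
# Simplified Annex III hints for triage; not a substitute for legal review.
ANNEX_III_HINTS = {
    "employment": ["recruiting", "cv screening", "promotion", "termination"],
    "essential services": ["credit scoring", "insurance pricing", "emergency dispatch"],
    "education": ["exam grading", "admission", "proctoring"],
    "biometrics": ["face recognition", "emotion recognition"],
}

def screen(system_description: str) -> list[str]:
    """Return the Annex III areas a system's description appears to touch."""
    text = system_description.lower()
    return [area for area, hints in ANNEX_III_HINTS.items()
            if any(h in text for h in hints)]

print(screen("Ranks CVs for recruiting and flags candidates for promotion"))
# ['employment']
```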

Step 2: Assess current state

For each high-risk AI system, evaluate:

  • Do you have risk management processes?
  • Is training data documented?
  • Do technical records exist?
  • Is human oversight implemented?

Step 3: Gap analysis

Compare requirements to current state:

  • What documentation is missing?
  • What processes need to be established?
  • What technical changes are required?
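
Gap analysis reduces to set difference: the artifacts the Act requires minus the artifacts each system already has. A sketch, using illustrative shorthand names for the requirement areas above:

```python
# Shorthand artifact names for the requirement areas; adapt to your taxonomy.
REQUIRED = {"risk_management", "data_governance", "technical_docs",
            "event_logging", "instructions_for_use", "human_oversight",
            "robustness_testing"}

inventory = {
    "cv-screener": {"technical_docs", "event_logging"},
    "credit-model": {"risk_management", "technical_docs", "human_oversight"},
}

for system, done in inventory.items():
    gaps = sorted(REQUIRED - done)
    print(f"{system}: missing {', '.join(gaps) or 'nothing'}")
```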

Step 4: Implement requirements

Create and execute a plan to:

  • Establish missing processes
  • Create required documentation
  • Implement technical requirements
  • Train responsible staff

Step 5: Prepare for conformity assessment

Depending on the AI system, you may need:

  • Internal control (self-assessment, the default for most Annex III use cases)
  • Notified body assessment (certain biometric systems, and Annex I products under their sectoral rules)

Most high-risk obligations apply from August 2026, with Annex I product-embedded systems following in August 2027. Starting now gives you time to implement proper governance without rushing.