What is the EU AI Act?

The world's first comprehensive AI regulation. Here's what it means for your organization.

What is it?

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. Adopted in 2024, it establishes rules for AI systems based on their potential risks to health, safety, and fundamental rights.

Think of it like GDPR for AI. Just as GDPR created rules for how organizations handle personal data, the EU AI Act creates rules for how organizations use AI.

Key points:

  • It’s a regulation, not a directive. This means it applies directly in all EU member states.
  • It uses a risk-based approach. Higher-risk AI faces stricter requirements.
  • Significant penalties apply. Up to €35 million or 7% of global annual turnover, whichever is higher.
  • It affects organizations worldwide. If you serve EU customers, you’re likely covered.

Who does it apply to?

The EU AI Act has broad reach:

Providers (developers who create AI systems)

  • Must ensure systems meet requirements before market placement
  • Must implement quality management systems
  • Must conduct conformity assessments for high-risk AI

Deployers (organizations that use AI systems)

  • Must use AI systems in accordance with the provider’s instructions
  • Must implement human oversight
  • Must monitor AI systems and report issues

Geography doesn’t protect you:

  • If your AI system is used in the EU, you’re covered
  • If you’re an EU organization, you’re covered
  • If the output your AI produces is used in the EU, you’re likely covered, even if the system itself runs elsewhere

Most organizations are deployers. If you use ChatGPT, Copilot, Claude, or other AI tools, you’re a deployer under the EU AI Act.

Risk categories

The EU AI Act categorizes AI by risk level:

Unacceptable Risk (Banned)

These AI systems are prohibited entirely:

  • Social scoring (by public or private actors)
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
  • Subliminal or manipulative techniques that distort behavior and cause significant harm
  • Exploitation of vulnerabilities such as age, disability, or social and economic situation

High Risk

Requires strict compliance:

  • AI in critical infrastructure
  • AI for educational or vocational training
  • AI for employment and worker management
  • AI for access to essential services
  • AI for law enforcement
  • AI for migration, asylum, border control
  • AI for justice and democratic processes

Limited Risk

Transparency obligations:

  • Chatbots (must disclose AI nature)
  • Emotion recognition systems
  • Biometric categorization
  • AI-generated content (deepfakes)

Minimal Risk

No specific requirements:

  • AI-enabled video games
  • Spam filters
  • Most general-purpose AI tools in everyday use (though general-purpose AI models themselves carry separate obligations for their providers)

Key requirements

The specific requirements depend on your role and the AI’s risk category, but common requirements include:

For all organizations

AI literacy (Article 4): You must ensure staff who work with AI have sufficient competence to understand the technology, its capabilities, and its risks.

Transparency: Users must know when they’re interacting with AI in certain contexts.

For deployers of high-risk AI

Human oversight: Humans must be able to understand, monitor, and override AI decisions.

Record keeping: You must maintain logs of AI system operation.

Incident reporting: Serious incidents must be reported to authorities.
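
To make record keeping and incident reporting concrete, here is a minimal sketch of a structured decision log. It is an illustration under our own assumptions, not a format the Act prescribes: the field names and the escalation step are placeholders, and what counts as a “serious incident” is defined by the Act, not by this code.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an audit log for a high-risk AI deployment.
# Field names are illustrative assumptions, not a mandated format.
def log_ai_decision(log_file, system_id, input_summary, output_summary,
                    human_reviewer=None, serious_incident=False):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system produced the output
        "input_summary": input_summary,    # what the system was asked to do
        "output_summary": output_summary,  # what it decided or recommended
        "human_reviewer": human_reviewer,  # who exercised oversight, if anyone
        "serious_incident": serious_incident,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    if serious_incident:
        # Placeholder: a real deployment would trigger your internal
        # escalation process and, where required, a report to the
        # competent authority.
        print(f"ESCALATE: serious incident logged for {system_id}")

log_ai_decision("ai_audit.log", "cv-screening-tool",
                "ranked 40 applicants for a sales role",
                "shortlisted 5 candidates",
                human_reviewer="hr.manager@example.com")
```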

For providers of high-risk AI

Risk management: Implement a documented risk management system.

Data governance: Ensure training data is relevant, representative, and, to the extent possible, free of errors.

Documentation: Maintain technical documentation of the AI system.

Conformity assessment: Demonstrate compliance before market placement.

What should you do?

Step 1: Inventory your AI

You cannot comply with a regulation until you know which of your systems it covers. Start by documenting:

  • What AI systems you use
  • Who uses them and for what
  • What data they access
  • What decisions they influence

This is where Armadillo helps. Our free audit discovers AI tools across your organization and generates your initial inventory.
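
Whether you use a tool or a spreadsheet, each inventory entry should answer the four questions above. Here is a minimal sketch of one possible record structure, using Python dataclasses; the field names are our own, not an official schema:

```python
from dataclasses import dataclass, field

# Illustrative inventory record mirroring the four questions above.
# Field names are assumptions for this sketch, not an official schema.
@dataclass
class AISystemRecord:
    name: str                                          # what AI system you use
    vendor: str
    users: list = field(default_factory=list)          # who uses it
    purpose: str = ""                                  # what it is used for
    data_accessed: list = field(default_factory=list)  # what data it accesses
    decisions_influenced: str = ""                     # what decisions it affects

inventory = [
    AISystemRecord(
        name="ChatGPT",
        vendor="OpenAI",
        users=["marketing", "support"],
        purpose="drafting copy and customer replies",
        data_accessed=["customer emails"],
        decisions_influenced="none directly; a human edits all output",
    ),
]
```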

Step 2: Classify by risk

For each AI system, determine:

  • Is it prohibited? (Stop using it)
  • Is it high-risk? (Full compliance required)
  • Is it limited risk? (Transparency obligations)
  • Is it minimal risk? (No specific requirements)
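
One way to keep these calls consistent across a large inventory is to encode the triage as code. The sketch below is a simplified illustration, not legal advice: the keyword list is our own assumption, and a real classification must follow the Act’s prohibited-practices list and Annex III rather than string matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Keyword triage for illustration only; real classification must follow
# the Act's prohibited-practices list and Annex III.
HIGH_RISK_DOMAINS = {"employment", "education", "credit", "law enforcement",
                     "migration", "critical infrastructure", "justice"}

def triage(purpose: str, is_social_scoring: bool = False,
           interacts_with_users: bool = False) -> RiskTier:
    if is_social_scoring:
        return RiskTier.PROHIBITED
    if any(domain in purpose.lower() for domain in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if interacts_with_users:
        return RiskTier.LIMITED  # transparency obligations, e.g. chatbots
    return RiskTier.MINIMAL

print(triage("CV screening for employment decisions"))  # RiskTier.HIGH
print(triage("internal spam filter"))                   # RiskTier.MINIMAL
```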

Step 3: Implement requirements

Based on classification:

  • Ensure staff competence (all AI)
  • Implement transparency measures (limited risk)
  • Establish human oversight and documentation (high risk)
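
To tie classification to action, the tier-to-controls mapping can live next to your inventory so every system generates its own checklist. The sketch below summarizes the controls described in this article; the wording of each control is our own, and it is not the full legal text.

```python
# Maps each risk tier to the controls described in this article.
# A summary for illustration, not the full legal text.
OBLIGATIONS = {
    "prohibited": ["stop using the system"],
    "high": ["staff competence", "human oversight and documentation",
             "record keeping", "incident reporting"],
    "limited": ["staff competence", "disclose AI use to users"],
    "minimal": ["staff competence"],
}

for tier, controls in OBLIGATIONS.items():
    print(f"{tier}: {', '.join(controls)}")
```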

Step 4: Monitor and maintain

AI governance is ongoing:

  • Monitor for new AI adoption
  • Update documentation as systems change
  • Report incidents as required
  • Prepare for audits
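
For the first point, monitoring can start as simply as periodically diffing the tools you observe in use against your documented inventory. The tool names below are placeholders, and how you observe usage (SSO logs, expense reports, browser extensions) will vary by organization:

```python
# Compare tools observed in use against the documented inventory.
# Both sets are placeholders; discovery sources vary by organization.
documented = {"ChatGPT", "GitHub Copilot", "Claude"}
observed = {"ChatGPT", "Claude", "Midjourney", "Otter.ai"}

undocumented = observed - documented  # new, unreviewed AI adoption
retired = documented - observed       # possibly stale inventory entries

for tool in sorted(undocumented):
    print(f"Review and classify newly adopted AI tool: {tool}")
for tool in sorted(retired):
    print(f"Confirm whether {tool} is still in use, or retire its record")
```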

The sooner you start, the easier compliance will be when deadlines arrive.