EU AI Act

Last reviewed: 2026-05-11

The EU AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. It is the world’s first horizontal AI statute, taking a risk-based approach that classifies AI systems into four tiers with corresponding obligations.[1]

The Act has applied in phases since February 2025. On 7 May 2026, the Council and Parliament reached a provisional agreement on a “Digital Omnibus on AI” that rebases several deadlines and adds new prohibitions. The Omnibus changes are described in detail below; readers should note that the Omnibus is PENDING formal adoption at the time of this writing.[2]

Risk classification

| Tier | Description | Examples |
| --- | --- | --- |
| Unacceptable | Banned AI practices under Article 5 | Government social scoring; manipulative AI exploiting vulnerable groups; untargeted scraping for facial-recognition databases; emotion recognition in workplaces and schools; real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions); CSAM/NCII generation (added by the Omnibus) |
| High | Subject to conformity assessment, technical documentation, risk management, transparency, and human oversight | AI in education, employment, essential services, law enforcement, migration, justice, biometric categorisation, critical infrastructure, medical devices |
| Limited | Transparency obligations only | Chatbots, deepfakes, AI-generated content |
| Minimal | No new obligations beyond existing law | Spam filters, AI in video games, most enterprise tooling |

Figure: The EU AI Act risk pyramid — Unacceptable (banned, with CSAM/NCII added in the May 2026 Omnibus), High Risk (heavily regulated, deadline rebased to 2 December 2027), Limited Risk (transparency obligations from 2 August 2026), Minimal Risk (largely unregulated).
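The four-tier structure can be sketched as a simple data structure. The tier names and the example mappings below are drawn from the classification table above; the identifiers are illustrative, and a real classification requires legal analysis against Annex I/III rather than a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned under Article 5"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations only"
    MINIMAL = "no new obligations beyond existing law"

# Hypothetical triage map based on the examples in the table above.
TRIAGE = {
    "government_social_scoring": RiskTier.UNACCEPTABLE,
    "workplace_emotion_recognition": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

print(TRIAGE["hiring_screening"].name)  # HIGH
```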


Figure: EU AI Act enforcement timeline reflecting the 7 May 2026 Digital Omnibus political agreement — high-risk Annex III obligations move from 2 December 2026 to 2 December 2027; Annex I obligations move from 2 August 2027 to 2 August 2028.

Phased timeline (current, including Omnibus)

| Date | Event | Status |
| --- | --- | --- |
| 1 August 2024 | Regulation enters into force | In force |
| 2 February 2025 | Article 5 prohibitions and AI-literacy obligations (Art. 4) apply | In force |
| 4 February 2025 | Commission Guidelines on Prohibited AI Practices | Published[3] |
| 10 July 2025 | GPAI Code of Practice published (Transparency, Copyright, Safety & Security) | Published[4] |
| 2 August 2025 | GPAI obligations (Arts. 53, 55), AI Office governance, penalties | In force |
| 19 November 2025 | Commission proposes the Digital Omnibus on AI | Published |
| 7 May 2026 | Council/Parliament provisional agreement on Omnibus | Pending formal adoption |
| 2 August 2026 | Article 50(2) synthetic-content marking obligations | Future (transition period until 2 December 2026) |
| 2 December 2026 | Original high-risk Annex III deadline | Rebased to 2 December 2027 by Omnibus |
| 2 August 2027 | Article 6(1) Annex I obligations (original) | Rebased to 2 August 2028 by Omnibus |
| 2 December 2027 | High-risk Annex III obligations (new Omnibus date) | Future |
| 2 August 2028 | Annex I obligations (new Omnibus date) | Future |
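The application dates in the timeline above lend themselves to a simple lookup of which obligations apply on a given date. This is an illustrative sketch using the post-Omnibus dates (the Omnibus is pending formal adoption); the milestone labels are abbreviated from the table.

```python
from datetime import date

# Milestone application dates from the timeline table above.
MILESTONES = [
    (date(2025, 2, 2), "Article 5 prohibitions; Article 4 AI literacy"),
    (date(2025, 8, 2), "GPAI obligations (Articles 53, 55)"),
    (date(2026, 8, 2), "Article 50(2) synthetic-content marking"),
    (date(2027, 12, 2), "High-risk Annex III obligations"),
    (date(2028, 8, 2), "Article 6(1) Annex I obligations"),
]

def live_obligations(today: date) -> list[str]:
    """Return the milestones whose application date has passed."""
    return [label for d, label in MILESTONES if d <= today]

# On the review date (11 May 2026), only the 2025 milestones apply:
print(live_obligations(date(2026, 5, 11)))
```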

What’s actually live today (May 2026)

Three categories of obligation are enforceable now:

  1. Article 5 prohibitions (since February 2025). Untargeted scraping for facial-recognition databases, social scoring by public authorities, workplace and educational emotion recognition, and the other banned practices in Article 5 must have ceased. Penalties reach EUR 35 million or 7% of global annual turnover, whichever is higher.[5]
  2. AI literacy under Article 4 (since February 2025). Providers and deployers must take measures to ensure sufficient AI literacy among staff dealing with AI systems.
  3. General-purpose AI (GPAI) obligations under Articles 53 and 55 (since August 2025). Providers of GPAI models must maintain technical documentation, publish a sufficiently detailed summary of training content, comply with Union copyright law (including Article 4(3) of the CDSM Directive on text and data mining), and — for models posing systemic risk — conduct adversarial testing, track and report serious incidents, and ensure cybersecurity protection.

The GPAI Code of Practice

Published on 10 July 2025, the EU GPAI Code of Practice is a voluntary instrument intended to demonstrate adherence to the GPAI obligations.[4:1] It has three chapters — Transparency, Copyright, and Safety & Security — and was endorsed on 1 August 2025.

Signatories include Google, Microsoft, OpenAI, and Anthropic. Meta declined to sign, citing concerns about scope and legal uncertainty.

For non-signatories, compliance with the underlying obligations is assessed directly under the Act. The Code is therefore a safe-harbour-like mechanism: signing it does not exempt a provider from the Act, but it provides a structured way to demonstrate compliance and reduces enforcement risk.

See Frontier Models for the substantive obligations the Code operationalises and the parallel CAISI testing agreements in the United States.

The May 2026 Digital Omnibus (PENDING)

On 7 May 2026, the Council and Parliament reached a provisional political agreement on the Digital Omnibus on AI, a Commission proposal published 19 November 2025 to “simplify and streamline” the Act.[2:1] The agreement is pending formal adoption by both institutions before becoming law; the most important changes are:

  1. The high-risk Annex III compliance deadline moves from 2 December 2026 to 2 December 2027.
  2. The Article 6(1) Annex I deadline moves from 2 August 2027 to 2 August 2028.
  3. CSAM/NCII generation is added to the Article 5 prohibited practices.
  4. Article 50(2) synthetic-content marking still applies from 2 August 2026, with a transition period until 2 December 2026.

Practical implication. Organisations that built compliance roadmaps around the December 2026 high-risk deadline now have an additional 12 months. However, the prohibitions, AI-literacy obligations, GPAI duties, and Article 50 synthetic-content marking obligations remain on their existing schedule. Do not assume the Omnibus means “the Act is delayed”; it means certain high-risk and Annex I duties are delayed while the rest of the Act continues to apply on its original timetable.

High-risk obligations (Article 6 et seq.)

For systems classified high-risk under Article 6 (whether by inclusion in Annex III or by Article 6(1) Annex I), providers must:

  1. Establish and maintain a risk-management system across the lifecycle (Article 9).
  2. Apply data-governance measures to training, validation, and testing data (Article 10).
  3. Draw up technical documentation and keep automatically generated logs.
  4. Provide transparency information and instructions for use to deployers.
  5. Design the system for effective human oversight.
  6. Ensure appropriate accuracy, robustness, and cybersecurity.
  7. Operate a quality management system (Article 17) and pass the applicable conformity assessment before placing the system on the market.

ISO/IEC 42001 alignment substantially supports compliance with Articles 9, 10, and 17 (risk management, data governance, and quality management system requirements); see ISO standards for the standards stack.

Penalties

Three tiers of fines apply under Article 99:[5:1]

Violation Maximum
Prohibited practices (Article 5) EUR 35 million or 7% global turnover
High-risk and most other obligations EUR 15 million or 3% global turnover
Supplying incorrect, incomplete, or misleading information to authorities EUR 7.5 million or 1% global turnover

For SMEs and start-ups, fines are proportionate: the cap is whichever of the percentage or absolute amount is lower, rather than higher.
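The higher-of/lower-of arithmetic can be sketched as follows. The function name and interface are illustrative, not part of the Act; the tier figures are those from the Article 99 table above.

```python
def max_fine_eur(abs_cap_eur: int, pct_cap: float,
                 global_turnover_eur: int, is_sme: bool = False) -> float:
    """Illustrative ceiling arithmetic for the Article 99 fine tiers.

    abs_cap_eur / pct_cap are a tier's absolute cap and turnover
    percentage (e.g. 35_000_000 and 7 for Article 5 violations).
    SMEs and start-ups get whichever amount is LOWER; other
    operators whichever is HIGHER.
    """
    pct_amount = pct_cap * global_turnover_eur / 100
    if is_sme:
        return min(abs_cap_eur, pct_amount)
    return max(abs_cap_eur, pct_amount)

# Large provider, EUR 1 bn turnover, Article 5 tier:
# higher of EUR 35 m and 7% of 1 bn (EUR 70 m).
print(max_fine_eur(35_000_000, 7, 1_000_000_000))            # 70000000.0
# SME, EUR 20 m turnover, same tier: the lower amount applies.
print(max_fine_eur(35_000_000, 7, 20_000_000, is_sme=True))  # 1400000.0
```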

Enforcement architecture

Three bodies share enforcement:

  1. National competent authorities in each Member State for high-risk and most other obligations.
  2. The AI Office within DG CONNECT for GPAI, cross-border cases, and Code of Practice oversight (in force since August 2025).
  3. The European AI Board for harmonisation and coordination.

Notified bodies, designated under the Act, conduct conformity assessments for high-risk systems requiring third-party evaluation.

Practical compliance checklist (May 2026)

  1. Confirm no Article 5 prohibited practices remain in use (enforceable since February 2025; CSAM/NCII generation to be added by the pending Omnibus).
  2. Document AI-literacy measures for staff under Article 4.
  3. For GPAI providers: maintain technical documentation, publish the training-content summary, and maintain a copyright policy under Articles 53 and 55; consider signing the GPAI Code of Practice.
  4. Prepare Article 50(2) synthetic-content marking ahead of 2 August 2026 (transition period until 2 December 2026).
  5. Re-baseline high-risk roadmaps against 2 December 2027 (Annex III) and 2 August 2028 (Annex I), noting that the Omnibus is pending formal adoption.


  1. European Commission. Regulatory framework on artificial intelligence. ↩︎

  2. Council of the EU. (2026, May 7). Artificial intelligence: Council and Parliament agree to simplify and streamline rules. ↩︎ ↩︎

  3. European Commission. (2025, February 4). Guidelines on prohibited artificial intelligence practices. ↩︎

  4. EU GPAI Code of Practice. code-of-practice.ai. ↩︎ ↩︎

  5. Artificial Intelligence Act EU. Article 99 — Penalties. ↩︎ ↩︎