EU AI Act Enforcement Tracker 2026

Live tracker of EU AI Act enforcement in 2026: prohibitions in force, GPAI obligations, AI Office actions, fines and decisions, codes of practice adoption, and emerging enforcement priorities.

By Ramanath, CTO & Co-Founder at Presenc AI · Last updated: May 2026

The EU AI Act in Force

The EU AI Act entered into force August 1, 2024. Its provisions activate in stages: prohibitions on unacceptable-risk practices from February 2, 2025; general-purpose AI (GPAI) obligations from August 2, 2025; high-risk AI obligations from August 2, 2026 (most provisions), with full applicability from August 2, 2027. This page tracks enforcement actions, fines, codes of practice adoption, and emerging priorities through May 2026.

Key Findings

  1. The AI Office, established within the European Commission's DG CNECT, is the central enforcement body for GPAI; member-state market surveillance authorities handle high-risk AI.
  2. The General-Purpose AI Code of Practice was finalised in mid-2025 and signed by major providers, including OpenAI, Google, Anthropic, Microsoft, Mistral, and Meta (with reservations).
  3. No major fines had been issued under the AI Act through Q1 2026; enforcement to date has focused on compliance guidance, reviews of transparency obligations, and investigations of suspected prohibition violations.
  4. Maximum fines under the Act reach the higher of €35 million or 7 percent of global turnover for prohibition violations, and €15 million or 3 percent for other violations.
  5. Roughly 65-75 percent of GPAI providers in scope have published the required summary of training data; the depth of disclosure varies materially.

Implementation Timeline

  • August 1, 2024: AI Act enters into force
  • February 2, 2025: Prohibitions (Article 5) plus AI literacy obligations
  • August 2, 2025: GPAI obligations; AI Office and governance bodies operational; member-state notifying authorities designated
  • August 2, 2026: High-risk AI obligations (most), penalty regime fully applicable
  • August 2, 2027: Full applicability, including legacy high-risk AI in regulated products
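The staged timeline above lends itself to a simple lookup: given a date, which tranches of obligations have activated? A minimal sketch (the `MILESTONES` table and `provisions_in_force` helper are illustrative names, not anything from the Act itself):

```python
import datetime as dt

# Milestones from the implementation timeline above (labels abbreviated).
MILESTONES = [
    (dt.date(2024, 8, 1), "Act in force"),
    (dt.date(2025, 2, 2), "Prohibitions (Art. 5) + AI literacy"),
    (dt.date(2025, 8, 2), "GPAI obligations; AI Office operational"),
    (dt.date(2026, 8, 2), "High-risk obligations (most); penalties fully applicable"),
    (dt.date(2027, 8, 2), "Full applicability incl. legacy high-risk systems"),
]

def provisions_in_force(on: dt.date) -> list[str]:
    """Return every milestone whose obligations have activated by `on`."""
    return [label for start, label in MILESTONES if on >= start]

# As of this page's May 2026 snapshot, the first three tranches apply:
print(provisions_in_force(dt.date(2026, 5, 1)))
```

The same lookup shows why the August 2, 2026 date matters: it is the point at which the high-risk regime and the full penalty apparatus both switch on.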

GPAI Code of Practice Signatories (as of Q1 2026)

  • OpenAI, Anthropic, Google DeepMind, Microsoft, Meta (with reservations on copyright and code-of-conduct provisions), Mistral AI, Aleph Alpha, Cohere, Stability AI, Black Forest Labs, AI21, others
  • Alibaba and ByteDance are signatories; DeepSeek was not a signatory through Q1 2026

The GPAI Code of Practice covers transparency, copyright, safety and security commitments; signing the Code creates a presumption of conformity with corresponding Act obligations.

Enforcement Priorities (Observed)

  • Prohibited practices (social scoring, untargeted scraping for facial recognition): investigations opened against two EU-based providers; no public fines yet
  • GPAI training-data summary disclosures: AI Office reviewing the depth and completeness of published summaries
  • GPAI systemic-risk model designations: models trained with more than 10^25 FLOPs of compute presumed to pose systemic risk; specific designations pending
  • High-risk AI conformity assessments (ahead of August 2026): notified bodies being designated; pilot conformity assessments under way
  • Cross-border enforcement coordination: monthly member-state coordination meetings scheduled through 2026
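The 10^25 FLOPs presumption threshold can be sanity-checked with the common rule-of-thumb estimate that training compute scales as roughly 6 × parameters × tokens. This is an approximation widely used in scaling-law work, not the Act's own accounting method, and the model sizes below are hypothetical:

```python
# Rule of thumb from scaling-law literature: C ≈ 6 * N * D FLOPs,
# where N = parameter count and D = training tokens. An estimate only;
# the AI Act's 10^25 figure is a presumption threshold, and actual
# compute accounting may differ.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the Act's presumption

def training_flops_estimate(params: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * params * tokens

# A hypothetical 400B-parameter model trained on 15T tokens
# lands well above the threshold; a 1B/1T model lands well below it.
big = training_flops_estimate(400e9, 15e12)
small = training_flops_estimate(1e9, 1e12)
print(f"{big:.1e}", big > SYSTEMIC_RISK_THRESHOLD)
print(f"{small:.1e}", small > SYSTEMIC_RISK_THRESHOLD)
```

The gap between those two estimates illustrates why the presumption mostly captures frontier-scale models while leaving smaller GPAI models outside the systemic-risk tier.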

Fine Caps and Penalty Structure

  • Prohibited AI practices (Article 5): €35 million or 7% of global turnover, whichever is higher
  • Other violations of operator obligations: €15 million or 3% of global turnover, whichever is higher
  • Supply of incorrect, incomplete, or misleading information: €7.5 million or 1% of global turnover, whichever is higher
  • SMEs and startups: the lower of the two amounts applies
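The "higher of" structure (flipped to "lower of" for SMEs) can be made concrete in a few lines. A minimal sketch using the figures above; `max_fine_eur` is an illustrative helper, not an official calculator:

```python
def max_fine_eur(category: str, global_turnover_eur: float, sme: bool = False) -> float:
    """Illustrative AI Act fine ceiling: the higher of a fixed amount or a
    percentage of worldwide annual turnover; for SMEs/startups, the lower."""
    caps = {
        "prohibited": (35_000_000, 0.07),       # Article 5 violations
        "other": (15_000_000, 0.03),            # other operator obligations
        "misleading_info": (7_500_000, 0.01),   # incorrect/incomplete info
    }
    fixed, pct = caps[category]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)

# A provider with €2bn turnover: 7% exceeds the €35M floor, so the
# turnover-based figure governs.
cap = max_fine_eur("prohibited", 2_000_000_000)
print(f"€{cap:,.0f}")

# An SME with €50M turnover gets the lower of €35M and 7% (€3.5M).
sme_cap = max_fine_eur("prohibited", 50_000_000, sme=True)
print(f"€{sme_cap:,.0f}")
```

The asymmetry is the point: for large providers the percentage cap dominates and scales with turnover, while the SME rule keeps ceilings proportionate for small firms.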

Emerging Compliance Patterns

Across enterprise AI deployments observed in Presenc AI's instrumentation:

  • EU-based enterprises increasingly require AI vendors to demonstrate Code of Practice signatory status as a procurement gate
  • Shadow AI use (employees using non-approved AI tools) is the largest practical compliance gap
  • High-risk classification disputes are emerging in HR, financial-services, and recruiting AI applications
  • Cross-border data flows for AI inference remain a separate GDPR issue layered on AI Act compliance

Brand Visibility Implications

EU AI Act compliance is rapidly entering enterprise procurement criteria for AI services. Vendors that can demonstrate Code of Practice alignment, conformity-assessment readiness, or high-risk-AI compliance gain an advantage with European buyers. Brands selling AI compliance tooling, conformity-assessment services, AI governance platforms, or related advisory services face a large AI-mediated discovery surface, as European buyers increasingly query AI assistants for compliance-vendor recommendations.

Methodology

Tracking based on European Commission AI Act information page, EU AI Act Hub, and AI Office publications. Code of Practice signatory list from European Commission disclosures; enforcement-priority observations from press reporting and policy analysis. Updated quarterly.

How Presenc AI Helps

Presenc AI tracks brand-mention rates inside AI queries about EU AI Act compliance, AI governance tooling, and conformity-assessment services. For vendors selling into European AI procurement, this is the operational visibility into a high-stakes discovery surface where compliance decisions translate into multi-million-dollar contracts.

Frequently Asked Questions

When did the EU AI Act take effect, and when do its provisions apply?
The Act entered into force August 1, 2024. Provisions activate in stages: prohibitions February 2, 2025; GPAI obligations August 2, 2025; high-risk AI obligations August 2, 2026; full applicability August 2, 2027.

What are the maximum fines under the EU AI Act?
Up to €35 million or 7 percent of global turnover (whichever is higher) for prohibition violations; €15 million or 3 percent for other violations; €7.5 million or 1 percent for supplying incorrect, incomplete, or misleading information. SMEs receive the lower of the two amounts. The Act's top cap of 7 percent is 75 percent higher than GDPR's 4 percent cap.

Do GPAI providers have to sign the Code of Practice?
Signing is voluntary, but the Act imposes the same obligations whether or not a provider signs. Signing creates a presumption of conformity, which materially reduces compliance risk and enforcement burden. All major Western frontier labs have signed; some Chinese labs have not.

Which models are treated as posing systemic risk?
GPAI models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk and face additional obligations: model evaluations, adversarial testing, incident reporting, and cybersecurity protections. Specific systemic-risk designations are issued by the AI Office. Most frontier models from major labs are above or near the threshold.

Does the AI Act apply outside the EU?
Yes, extraterritorially, similar to GDPR. Providers placing AI systems on the EU market, or whose AI outputs are used in the EU, are in scope regardless of where the provider is based. US-based AI providers serving European users are subject to AI Act obligations.

Track Your AI Visibility

See how your brand appears across ChatGPT, Claude, Perplexity, and other AI platforms. Start monitoring today.