The EU AI Act in Force
The EU AI Act entered into force August 1, 2024. Its provisions activate in stages: prohibitions on unacceptable-risk practices from February 2, 2025; general-purpose AI (GPAI) obligations from August 2, 2025; high-risk AI obligations from August 2, 2026 (most provisions), with full applicability from August 2, 2027. This page tracks enforcement actions, fines, codes of practice adoption, and emerging priorities through May 2026.
Key Findings
- The AI Office, established within the European Commission's DG CNECT, is the central enforcement body for GPAI; member-state market surveillance authorities handle high-risk AI.
- The General-Purpose AI Code of Practice was finalised in mid-2025 and signed by major providers, including OpenAI, Google, Anthropic, Meta (with reservations), Microsoft, Mistral, and others.
- No major fines had been issued under the AI Act through Q1 2026; enforcement to date has focused on compliance guidance, reviews of transparency obligations, and investigations into prohibited practices.
- Maximum fines under the Act reach the higher of €35 million or 7 percent of global turnover for prohibition violations, and €15 million or 3 percent for other violations.
- Roughly 65-75 percent of GPAI providers in scope have published the required summary of training data; the depth of disclosure varies materially.
Implementation Timeline
| Date | Provisions in force |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibitions (Article 5) plus AI literacy obligations |
| August 2, 2025 | GPAI obligations; AI Office and governance bodies operational; member-state notifying authorities |
| August 2, 2026 | High-risk AI obligations (most), penalty regime fully applicable |
| August 2, 2027 | Full applicability including legacy high-risk AI in regulated products |
GPAI Code of Practice Signatories (as of Q1 2026)
- OpenAI, Anthropic, Google DeepMind, Microsoft, Meta (with reservations on copyright and code-of-conduct provisions), Mistral AI, Aleph Alpha, Cohere, Stability AI, Black Forest Labs, AI21, others
- Alibaba, ByteDance: signatories; DeepSeek: not a signatory through Q1 2026
The GPAI Code of Practice covers transparency, copyright, and safety-and-security commitments; adherence to the Code creates a presumption of conformity with the corresponding obligations under the Act.
Enforcement Priorities (Observed)
| Priority area | Observed activity |
|---|---|
| Prohibited practices (social scoring, untargeted scraping for facial recognition) | Investigations opened against two EU-based providers; no public fines yet |
| GPAI training-data summary disclosures | AI Office reviewing depth and completeness of published summaries |
| GPAI systemic-risk model designations | Models trained with more than 10^25 FLOPs of cumulative compute presumed systemic; specific designations pending |
| High-risk AI conformity assessments (ahead of August 2026) | Notified bodies being designated; pilot conformity assessments under way |
| Cross-border enforcement coordination | Member states holding monthly coordination meetings through 2026 |
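The systemic-risk presumption in the table above turns on cumulative training compute. A minimal sketch of the threshold check follows; the 6 × parameters × tokens estimate is a common rule of thumb for dense transformers, not part of the Act, and the function names are assumptions for illustration.

```python
# Sketch of the GPAI systemic-risk presumption: a model trained with more
# than 10^25 cumulative floating-point operations is presumed systemic.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rule-of-thumb compute estimate for a dense transformer:
    roughly 6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if estimated training compute exceeds the Act's threshold."""
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_FLOPS

# e.g. a hypothetical 70B-parameter model trained on 15T tokens:
print(presumed_systemic_risk(70e9, 15e12))  # 6.3e24 FLOPs → False
```

The presumption is rebuttable and the Commission can also designate models on other criteria, so a compute estimate alone is only a first-pass screen.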
Fine Caps and Penalty Structure
| Violation category | Maximum fine |
|---|---|
| Prohibited AI practices (Article 5) | €35M or 7% global turnover (higher) |
| Other violations of operator obligations | €15M or 3% global turnover (higher) |
| Supply of incorrect, incomplete, misleading information | €7.5M or 1% global turnover (higher) |
| SMEs and startups | For each category, the lower of the fixed amount and the turnover percentage applies |
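The cap structure above reduces to a simple rule: take the higher of the fixed amount and the turnover percentage, except for SMEs and startups, where the lower applies. A hedged sketch (the function name and inputs are assumptions for illustration, not an official calculator):

```python
# Illustration of the AI Act's fine-cap structure. The fixed amounts and
# percentages are the published caps; everything else is illustrative.
def fine_cap(category: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the maximum fine for a violation category.

    Most operators face the HIGHER of the fixed amount and the turnover
    percentage; SMEs/startups face the LOWER of the two.
    """
    caps = {
        "prohibited_practice": (35_000_000, 0.07),   # Article 5 violations
        "operator_obligation": (15_000_000, 0.03),   # other operator obligations
        "incorrect_information": (7_500_000, 0.01),  # misleading info to authorities
    }
    fixed, pct = caps[category]
    turnover_based = pct * global_turnover_eur
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A provider with €2bn global turnover committing a prohibited practice:
print(fine_cap("prohibited_practice", 2_000_000_000))  # 140000000.0 (7% > €35M)

# A hypothetical SME with €100M turnover supplying incorrect information:
print(fine_cap("incorrect_information", 100_000_000, is_sme=True))  # 1000000.0
```

Note how the turnover percentage dominates only for large providers; below roughly €500M turnover, the fixed amounts set the ceiling for a prohibited-practice violation.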
Emerging Compliance Patterns
Across enterprise AI deployments observed in Presenc AI's instrumentation:
- EU-based enterprises increasingly require AI vendors to demonstrate Code of Practice signatory status as a procurement gate
- Shadow AI use (employees using non-approved AI tools) is the largest practical compliance gap
- High-risk classification disputes are emerging in HR, financial-services, and recruiting AI applications
- Cross-border data flows for AI inference remain a separate GDPR issue layered on AI Act compliance
Brand Visibility Implications
EU AI Act compliance is rapidly entering enterprise procurement criteria for AI services. Vendors that can demonstrate Code of Practice alignment, conformity-assessment readiness, or high-risk-AI compliance gain an advantage with European buyers. Brands selling AI compliance tooling, conformity-assessment services, AI governance platforms, or related advisory services therefore have a large AI-mediated discovery surface, as European buyers increasingly ask AI assistants for compliance-vendor recommendations.
Methodology
Tracking based on European Commission AI Act information page, EU AI Act Hub, and AI Office publications. Code of Practice signatory list from European Commission disclosures; enforcement-priority observations from press reporting and policy analysis. Updated quarterly.
How Presenc AI Helps
Presenc AI tracks brand-mention rates inside AI queries about EU AI Act compliance, AI governance tooling, and conformity-assessment services. For vendors selling into European AI procurement, this is the operational visibility into a high-stakes discovery surface where compliance decisions translate into multi-million-dollar contracts.