Research

AI Incident Database Statistics 2026

Comprehensive 2026 statistics from public AI incident databases: total reported incidents, categories, severity, geographic distribution, and trends across the AIID, OECD AI Incidents Monitor, and MIT AI Risk Repository.

By Ramanath, CTO & Co-Founder at Presenc AI · Last updated: May 2026

The Public Record of AI Going Wrong

Three public databases track AI incidents systematically: the AI Incident Database (AIID) maintained by the Responsible AI Collaborative, the OECD AI Incidents and Hazards Monitor (AIM), and the MIT AI Risk Repository. Together they provide the most comprehensive public record of AI deployments going wrong. This page consolidates 2026 statistics.

Key Findings

  1. The AIID has logged approximately 900 unique incidents through Q1 2026 (consistent with the ~905 cumulative total at end-2025), adding roughly 130-180 new incidents per year.
  2. The OECD AIM tracks roughly 5,000-7,000 incident-reports drawn from news monitoring; many overlap with AIID entries but capture broader media coverage.
  3. Combined reporting volume across the three trackers grows approximately 35-45 percent year-over-year, faster than AI deployment growth, which suggests that incident detection and reporting are improving alongside deployment.
  4. The largest incident categories are misinformation and content harms (~28 percent of AIID incidents), discrimination and bias (~22 percent), and physical safety failures (~14 percent).
  5. Generative AI incidents now account for the majority of new logged incidents (~58 percent of 2025 entries), reversing the pre-2023 dominance of recommendation-system and computer-vision incidents.

AIID Incident Growth Over Time

| Year | New incidents logged | Cumulative total |
|---|---|---|
| 2018-2020 | ~60-80/yr | ~200 by end-2020 |
| 2021 | ~110 | ~310 |
| 2022 | ~135 | ~445 |
| 2023 | ~165 | ~610 |
| 2024 | ~155 | ~765 |
| 2025 | ~140 | ~905 |

Growth slowed slightly in 2024-2025 from the 2022-2023 peak, likely reflecting saturation of trivial-incident reporting rather than fewer incidents in absolute terms.
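The slowdown is easy to verify from the table above. A minimal sketch, using the approximate midpoint counts from this page (not official AIID exports):

```python
# Approximate new AIID incidents per year, from the table above.
new_incidents = {2021: 110, 2022: 135, 2023: 165, 2024: 155, 2025: 140}

# Year-over-year growth in new entries: positive through 2023,
# then negative in 2024-2025.
years = sorted(new_incidents)
for prev, curr in zip(years, years[1:]):
    growth = (new_incidents[curr] - new_incidents[prev]) / new_incidents[prev]
    print(f"{curr}: {growth:+.0%}")
```

Running this shows roughly +23% and +22% growth in 2022-2023, then -6% and -10% in 2024-2025, matching the plateau described above.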

Incident Category Distribution (AIID, 2025 cohort)

| Category | Share | Examples |
|---|---|---|
| Misinformation, deepfakes, content harm | ~28% | Election deepfakes, AI-generated CSAM, defamation through generated content |
| Discrimination and bias | ~22% | Hiring algorithm bias, healthcare allocation disparities, image-recognition racial bias |
| Physical safety | ~14% | Autonomous vehicle incidents, robotics failures |
| Privacy and surveillance | ~12% | Unauthorised facial recognition, training-data privacy violations |
| Hallucination causing harm | ~9% | AI giving dangerous medical, legal, or financial advice; fabricated sources |
| Fraud and impersonation | ~7% | Voice cloning scams, business-email-compromise via AI |
| Other | ~8% | Various |

Generative AI Share Over Time

| Year | Generative AI share of new incidents |
|---|---|
| 2021 | ~5% |
| 2022 | ~12% |
| 2023 | ~32% |
| 2024 | ~48% |
| 2025 | ~58% |

The shift from pre-generative AI incidents (recommendation systems, computer vision) to generative AI incidents tracks the shift in deployment: AI image, voice, and text generation systems now produce more reportable incidents than earlier ML applications.
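The year-over-year jumps in that share can be read off the table directly. An illustrative sketch using the approximate percentages above:

```python
# Approximate generative-AI share (%) of new AIID incidents, from the table above.
gen_ai_share = {2021: 5, 2022: 12, 2023: 32, 2024: 48, 2025: 58}

# Percentage-point change per year: the jump peaks in 2023
# and tapers as generative AI becomes the baseline.
years = sorted(gen_ai_share)
deltas = {curr: gen_ai_share[curr] - gen_ai_share[prev]
          for prev, curr in zip(years, years[1:])}
print(deltas)  # {2022: 7, 2023: 20, 2024: 16, 2025: 10}
```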

Geographic Distribution

| Region | Share of AIID incidents |
|---|---|
| United States | ~52% |
| European Union | ~14% |
| United Kingdom | ~6% |
| China | ~8% |
| India | ~4% |
| Other Asia | ~6% |
| Latin America | ~3% |
| Rest of world | ~7% |

Distribution skews toward US and EU media coverage; non-English-language incident reporting is undercounted in AIID. The OECD AIM has broader geographic coverage.

Severity Distribution

| Severity | Share |
|---|---|
| Death or major physical harm | ~3% |
| Significant economic or psychological harm | ~22% |
| Moderate harm or rights violations | ~38% |
| Minor or potential harm | ~37% |
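Applied to the ~905 cumulative incidents logged by end-2025, these shares imply rough counts per tier. An illustrative sketch; both the shares and the total are the approximate figures from this page:

```python
# Approximate severity shares from the table above (must sum to 100%).
severity_shares = {
    "Death or major physical harm": 0.03,
    "Significant economic or psychological harm": 0.22,
    "Moderate harm or rights violations": 0.38,
    "Minor or potential harm": 0.37,
}
total_incidents = 905  # approximate AIID cumulative total, end-2025

# Sanity check: the four tiers cover the full distribution.
assert abs(sum(severity_shares.values()) - 1.0) < 1e-9

# Estimated incident counts per severity tier.
for label, share in severity_shares.items():
    print(f"{label}: ~{round(share * total_incidents)}")
```

This yields roughly 27 fatal-or-major-harm incidents against ~335 minor-or-potential-harm ones, underlining how heavily the record skews toward lower-severity events.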

Most-Cited Incidents 2024-2025

  • AI-generated election misinformation across multiple 2024 national elections
  • Voice-cloning fraud cases including the Hong Kong $25M deepfake CFO incident
  • Air Canada chatbot tribunal ruling holding airline liable for chatbot misinformation
  • Autonomous vehicle pedestrian fatalities and near-misses tracked by NHTSA
  • AI-generated CSAM proliferation and the EU and US legislative responses
  • Hallucinated legal citations resulting in attorney sanctions in multiple jurisdictions

Brand Visibility Implications

AI incidents are heavily covered by trust-and-safety, AI ethics, governance, and journalism communities. Brands selling AI safety, AI governance, AI red-teaming, content authentication, deepfake detection, or related services face a high AI-mediated discovery surface as researchers, journalists, and procurement teams query AI assistants for vendor recommendations. The AI Incident Database itself receives substantial inbound links from journalism, making it a high-PageRank citation hub; brands referenced alongside incident data benefit from associative authority.

Methodology

Statistics aggregated from AI Incident Database (Responsible AI Collaborative), OECD AI Incidents and Hazards Monitor, and MIT AI Risk Repository. Categorisation reflects AIID taxonomy mapped to plain-language categories. Geographic and severity figures are approximate distributions reflecting reported coverage, not necessarily true incident frequency. Updated quarterly.

How Presenc AI Helps

Presenc AI tracks brand-mention rates inside AI assistant queries about AI safety, governance, deepfake detection, and AI red-teaming, the surface where AI risk-relevant vendor recommendations are made. For brands operating in this category, this provides operational visibility into a discovery surface tightly coupled to journalism and procurement attention.

Frequently Asked Questions

How many AI incidents have been recorded?

The AI Incident Database has logged approximately 900 unique incidents through Q1 2026, growing by 130-180 new entries per year. The OECD AI Incidents and Hazards Monitor tracks 5,000-7,000 broader media-coverage entries, many overlapping with AIID. The MIT AI Risk Repository organises risks taxonomically rather than by incident.

Are AI incidents increasing?

Combined reporting volume across the trackers grows approximately 35-45 percent year-over-year, faster than aggregate AI deployment growth. Generative AI incidents specifically dominate recent additions (~58 percent of 2025 entries). The severity distribution is roughly stable; the share of fatal-or-major-harm incidents is approximately 3 percent.

What are the most common incident categories?

Misinformation, deepfakes, and content harms (~28 percent of recent incidents), followed by discrimination and bias (~22 percent) and physical safety failures (~14 percent). Generative AI has broadened the misinformation category substantially since 2023.

Who maintains the AI Incident Database?

The Responsible AI Collaborative, a non-profit project. The database is open-source, curated by a community of researchers and practitioners, and serves as the most-cited public record of AI failures. The OECD AIM and MIT AI Risk Repository are complementary efforts with different scopes.

Do these databases capture every AI incident?

No. Public databases capture media-reported incidents and self-reported failures; they undercount internal-only incidents (intra-corporate AI failures), non-English-language coverage, and incidents in regions without strong AI journalism ecosystems. Treat the figures as a lower bound on real incident frequency.

Track Your AI Visibility

See how your brand appears across ChatGPT, Claude, Perplexity, and other AI platforms. Start monitoring today.