The Public Record of AI Going Wrong
Three public databases track AI incidents systematically: the AI Incident Database (AIID), maintained by the Responsible AI Collaborative; the OECD AI Incidents and Hazards Monitor (AIM); and the MIT AI Risk Repository. Together they provide the most comprehensive public record of AI deployments going wrong. This page consolidates statistics as of early 2026.
Key Findings
- The AIID had logged approximately 800-900 unique incidents through Q1 2026, growing by approximately 130-180 new incidents per year.
- The OECD AIM tracks roughly 5,000-7,000 incident reports drawn from news monitoring; many overlap with AIID entries, but the AIM captures broader media coverage.
- Cumulative incident growth peaked at approximately 35-45 percent year-over-year in 2022-2023, outpacing AI deployment growth, and has since slowed to roughly 18-25 percent, suggesting that incident detection and reporting matured alongside deployment.
- The largest incident categories are misinformation and content harms (~28 percent of AIID incidents), discrimination and bias (~22 percent), and physical safety failures (~14 percent).
- Generative AI incidents now account for the majority of new logged incidents (~58 percent of 2025 entries), reversing the pre-2023 dominance of recommendation-system and computer-vision incidents.
AIID Incident Growth Over Time
| Year | New incidents logged | Cumulative total |
|---|---|---|
| 2018-2020 | ~60-80/yr | ~200 by end-2020 |
| 2021 | ~110 | ~310 |
| 2022 | ~135 | ~445 |
| 2023 | ~165 | ~610 |
| 2024 | ~155 | ~765 |
| 2025 | ~140 | ~905 |
Logging slowed slightly in 2024-2025 from the 2022-2023 peak, likely reflecting saturation in the reporting of minor incidents rather than a genuine decline in the number of incidents occurring.
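The slowdown can be checked directly from the table's figures. A minimal Python sketch, using the approximate cumulative totals above (all numbers are estimates, not exact database counts):

```python
# Approximate AIID cumulative incident totals by year, from the table above.
cumulative = {2021: 310, 2022: 445, 2023: 610, 2024: 765, 2025: 905}

years = sorted(cumulative)
for prev, curr in zip(years, years[1:]):
    new = cumulative[curr] - cumulative[prev]   # new incidents logged that year
    growth = 100 * new / cumulative[prev]       # year-over-year cumulative growth
    print(f"{curr}: ~{new} new incidents, ~{growth:.0f}% cumulative growth")
```

Year-over-year cumulative growth falls from roughly 44 percent in 2022 to roughly 18 percent in 2025, consistent with the slowdown described above.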
Incident Category Distribution (AIID, 2025 cohort)
| Category | Share | Examples |
|---|---|---|
| Misinformation, deepfakes, content harm | ~28% | Election deepfakes, AI-generated CSAM, defamation through generated content |
| Discrimination and bias | ~22% | Hiring algorithm bias, healthcare allocation disparities, image-recognition racial bias |
| Physical safety | ~14% | Autonomous vehicle incidents, robotics failures |
| Privacy and surveillance | ~12% | Unauthorised facial recognition, training-data privacy violations |
| Hallucination causing harm | ~9% | AI giving dangerous medical, legal, or financial advice; fabricated sources |
| Fraud and impersonation | ~7% | Voice cloning scams, business-email-compromise via AI |
| Other | ~8% | Various |
Generative AI Share Over Time
| Year | Generative AI share of new incidents |
|---|---|
| 2021 | ~5% |
| 2022 | ~12% |
| 2023 | ~32% |
| 2024 | ~48% |
| 2025 | ~58% |
The shift from pre-generative incidents (recommendation systems, computer vision) to generative AI incidents tracks the shift in what is deployed: AI image, voice, and text generation systems now produce more reportable incidents than earlier machine-learning applications.
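Combining these share figures with the annual new-incident counts from the growth table gives rough absolute counts, as a back-of-envelope sketch (both inputs are approximate):

```python
# Approximate new AIID incidents per year and generative AI share, from the tables above.
new_incidents = {2021: 110, 2022: 135, 2023: 165, 2024: 155, 2025: 140}
genai_share = {2021: 0.05, 2022: 0.12, 2023: 0.32, 2024: 0.48, 2025: 0.58}

for year in sorted(new_incidents):
    est = round(new_incidents[year] * genai_share[year])
    print(f"{year}: ~{est} generative AI incidents out of ~{new_incidents[year]} total")
```

On this estimate, generative AI incidents rose from a handful per year in 2021 to roughly 80 in 2025.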
Geographic Distribution
| Region | Share of AIID incidents |
|---|---|
| United States | ~52% |
| European Union | ~14% |
| United Kingdom | ~6% |
| China | ~8% |
| India | ~4% |
| Other Asia | ~6% |
| Latin America | ~3% |
| Rest of world | ~7% |
The distribution skews toward US and EU media coverage; incidents reported in languages other than English are undercounted in the AIID. The OECD AIM has broader geographic coverage.
Severity Distribution
| Severity | Share |
|---|---|
| Death or major physical harm | ~3% |
| Significant economic or psychological harm | ~22% |
| Moderate harm or rights violations | ~38% |
| Minor or potential harm | ~37% |
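Applied to the roughly 140-incident 2025 cohort, purely as an illustration (the severity table describes the database overall, not a single year's cohort), the shares translate into rough counts:

```python
# Severity shares from the table above; the cohort size is an assumed illustration.
severity_share = {
    "Death or major physical harm": 0.03,
    "Significant economic or psychological harm": 0.22,
    "Moderate harm or rights violations": 0.38,
    "Minor or potential harm": 0.37,
}
cohort = 140  # approximate new AIID incidents logged in 2025 (from the growth table)

for tier, share in severity_share.items():
    print(f"{tier}: ~{round(cohort * share)} incidents")
```

On these shares, only a few incidents per year involve death or major physical harm, while roughly three-quarters fall into the two lower-severity tiers.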
Most-Cited Incidents 2024-2025
- AI-generated election misinformation across multiple 2024 national elections
- Voice-cloning fraud cases including the Hong Kong $25M deepfake CFO incident
- Air Canada chatbot tribunal ruling holding airline liable for chatbot misinformation
- Autonomous vehicle pedestrian fatalities and near-misses tracked by NHTSA
- AI-generated CSAM proliferation and EU and US legislative responses
- Hallucinated legal citations resulting in attorney sanctions in multiple jurisdictions
Brand Visibility Implications
AI incidents are heavily covered by trust-and-safety, AI ethics, governance, and journalism communities. Brands selling AI safety, AI governance, AI red-teaming, content authentication, deepfake detection, or related services face a high AI-mediated discovery surface, as researchers, journalists, and procurement teams query AI assistants for vendor recommendations. The AI Incident Database itself receives substantial inbound links from journalism, making it a high-PageRank citation hub; brands referenced alongside incident data benefit from associative authority.
Methodology
Statistics aggregated from AI Incident Database (Responsible AI Collaborative), OECD AI Incidents and Hazards Monitor, and MIT AI Risk Repository. Categorisation reflects AIID taxonomy mapped to plain-language categories. Geographic and severity figures are approximate distributions reflecting reported coverage, not necessarily true incident frequency. Updated quarterly.
How Presenc AI Helps
Presenc AI tracks brand-mention rates inside AI assistant queries about AI safety, governance, deepfake detection, and AI red-teaming: the surface where AI risk-relevant vendor recommendations are made. For brands operating in this category, this provides operational visibility into a discovery surface tightly coupled to journalism and procurement attention.