AI Incident Response Plan Template

Free incident response plan template for AI-specific failures: hallucination harm, bias incidents, data leaks, prompt injection, model degradation. Roles, timelines, and templates.

By Ramanath, CTO & Co-Founder at Presenc AI · Last updated: May 15, 2026

Why an AI-Specific Incident Response Plan

AI-specific incidents (hallucinations causing harm, bias surfacing in production, prompt injection, model degradation, training-data leaks) don't fit neatly into a traditional security incident response plan. They have different blast radius, different remediation paths, and different communication requirements. This template gives you a stand-alone playbook.

Section 1: Roles and Responsibilities

| Role | Owner | Responsibilities |
| --- | --- | --- |
| Incident Commander | On-call engineer | Triages, declares severity, coordinates response |
| AI Lead | ML / Applied AI lead | Diagnoses model / prompt / data layer; proposes fixes |
| Comms Lead | PR / Communications | External communications, customer messaging |
| Legal Lead | In-house counsel | Regulatory disclosure, customer notification, liability |
| Customer Trust | CS leadership | Direct customer outreach, escalation handling |
| Executive Sponsor | VP-level or above | Approves customer comms, business decisions |

Section 2: Incident Categories

  1. Hallucination Harm: AI output causes a customer or third party material harm (financial, medical, reputational).
  2. Bias Incident: Disparate impact surfaces in production AI decisions or outputs.
  3. Data Leak: Training data, internal prompts, or customer data exposed via AI output or model.
  4. Prompt Injection: External input manipulates the AI system into harmful behaviour.
  5. Model Degradation: Upstream model version change causes silent quality drop.
  6. Vendor Outage: AI vendor outage cascades into our production systems.

Section 3: Severity Levels

| Severity | Definition | Response time | Notify |
| --- | --- | --- | --- |
| SEV-1 | Active harm to customers or material legal exposure | Immediate | Full team, exec, legal, board if material |
| SEV-2 | Significant degradation, no active harm yet | Within 1 hour | Full team, exec sponsor |
| SEV-3 | Limited impact, contained | Within 4 hours | Owning team, exec sponsor |
| SEV-4 | Internal-only, no customer impact | Next business day | Owning team |
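If your paging or alerting tooling keys off severity, the table above can be encoded as data so routing stays in sync with the plan. A minimal sketch; the class and variable names, and the exact minute values, are illustrative assumptions, not part of the template:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SeverityPolicy:
    """One row of the Section 3 severity table (illustrative encoding)."""
    definition: str
    response_minutes: int      # maximum time to first response (0 = immediate)
    notify: tuple[str, ...]    # groups to page on declaration


# Hypothetical policy table mirroring Section 3; tune to your org.
# (Board notification for SEV-1 is situational, so it is left to the
# Incident Commander rather than encoded here.)
SEVERITY_POLICIES = {
    "SEV-1": SeverityPolicy("Active harm or material legal exposure", 0,
                            ("full team", "exec", "legal")),
    "SEV-2": SeverityPolicy("Significant degradation, no active harm yet", 60,
                            ("full team", "exec sponsor")),
    "SEV-3": SeverityPolicy("Limited impact, contained", 240,
                            ("owning team", "exec sponsor")),
    "SEV-4": SeverityPolicy("Internal-only, no customer impact", 24 * 60,
                            ("owning team",)),
}


def notify_targets(severity: str) -> tuple[str, ...]:
    """Who to page for a declared severity; unknown severities fail loudly."""
    return SEVERITY_POLICIES[severity].notify
```

Keeping the policy in one structure means the on-call bot, the paging rules, and the runbook all read from the same source of truth.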

Section 4: Response Workflow

  1. Detect. Customer report, monitoring alert, or internal observation triggers an incident channel.
  2. Triage. Incident Commander assigns severity and starts the incident log.
  3. Contain. Stop the bleeding: roll back, throttle, or disable the affected AI feature.
  4. Diagnose. AI Lead identifies root cause (model, prompt, data, integration).
  5. Remediate. Apply fix, re-test, restore service.
  6. Communicate. Internal and external comms per the matrix below.
  7. Document. Full incident report within 5 business days.
  8. Learn. Post-incident review within 10 business days. Findings drive policy and monitoring updates.
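The eight steps above are sequential, and a common failure mode is skipping containment or documentation under pressure. A small incident-log sketch can enforce the order; the class and stage names are assumptions for illustration, not a prescribed tool:

```python
from datetime import datetime, timezone

# The eight workflow stages from Section 4, in required order.
STAGES = ["detect", "triage", "contain", "diagnose",
          "remediate", "communicate", "document", "learn"]


class IncidentLog:
    """Minimal incident log that enforces the Section 4 stage order."""

    def __init__(self, title: str):
        self.title = title
        self.entries: list[tuple[str, datetime, str]] = []

    def record(self, stage: str, note: str = "") -> None:
        """Record the next stage; reject out-of-order or repeated stages."""
        if len(self.entries) == len(STAGES):
            raise ValueError("workflow already complete")
        expected = STAGES[len(self.entries)]
        if stage != expected:
            raise ValueError(f"expected stage '{expected}', got '{stage}'")
        self.entries.append((stage, datetime.now(timezone.utc), note))

    @property
    def current_stage(self) -> str:
        return self.entries[-1][0] if self.entries else "not started"
```

Usage: `log = IncidentLog("Upstream model regression"); log.record("detect", "monitoring alert")` — attempting to jump straight to `remediate` raises, which nudges the team back through containment and diagnosis.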

Section 5: Communications Matrix

| Severity | Customers | Regulators | Public | Internal |
| --- | --- | --- | --- | --- |
| SEV-1 | Within 24h or per law | Per applicable timeline | If material; coordinate with legal | Real-time updates |
| SEV-2 | Within 72h if affected | If applicable | Optional | Daily updates |
| SEV-3 | Optional, if affected | Usually no | No | End-of-incident summary |
| SEV-4 | No | No | No | Owning team only |
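Customer-notification deadlines are the part of this matrix most worth automating, since missing one can itself be a compliance event. A sketch of the matrix as data, assuming the hour values in the table above; the dictionary and function names are illustrative:

```python
from datetime import datetime, timedelta

# Section 5 communications matrix encoded as data (illustrative sketch).
# "customers_hours" is the deadline in hours from declaration;
# None means customer notification is not required at that severity.
COMMS_MATRIX = {
    "SEV-1": {"customers_hours": 24, "regulators": "per applicable timeline",
              "public": "if material", "internal": "real-time updates"},
    "SEV-2": {"customers_hours": 72, "regulators": "if applicable",
              "public": "optional", "internal": "daily updates"},
    "SEV-3": {"customers_hours": None, "regulators": "usually no",
              "public": "no", "internal": "end-of-incident summary"},
    "SEV-4": {"customers_hours": None, "regulators": "no",
              "public": "no", "internal": "owning team only"},
}


def customer_notice_deadline(severity: str, declared_at: datetime):
    """Deadline for customer notification, or None if not required.

    Applicable law can impose a shorter deadline; legal review wins.
    """
    hours = COMMS_MATRIX[severity]["customers_hours"]
    return declared_at + timedelta(hours=hours) if hours is not None else None
```

Wiring this into the incident bot lets it post the deadline into the incident channel at declaration time, so Comms and Legal see the clock start.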

Section 6: Customer Notification Template

Subject: [Incident notification] [Brief description] — [Date]

We are writing to inform you of an incident that may have affected your account.

What happened:
- [Plain-language description in 2-3 sentences]

When:
- [Start time]
- [End time / "ongoing"]

What we have done:
- [Containment steps]
- [Remediation steps]
- [What is fixed; what is in progress]

What you should do:
- [Specific actions if any]
- [Where to escalate or seek support]

We are continuing to investigate and will share updates at [URL or schedule]. If you have questions, contact [support contact].

Thank you for your patience.

[Signed by accountable executive]

Section 7: Post-Incident Report Template

# Incident Report — [Title]

## Summary
[2-3 sentences]

## Timeline
- [Detect time]: [Event]
- [Containment time]: [Event]
- [Resolution time]: [Event]

## Root Cause
[Technical detail]

## Impact
[Customer / regulatory / brand impact]

## Response
[Actions taken and by whom]

## What Went Well
[Positive findings]

## What Went Wrong
[Negative findings, blameless]

## Action Items
[Owner-tagged remediation list with due dates]

## Lessons Learned
[Policy or monitoring updates triggered]

Frequently Asked Questions

How is AI incident response different from security incident response?

Same response shape; different diagnostic path. Security incidents focus on access, exfiltration, and remediation; AI incidents add the model layer (prompt, training data, output), bias dimensions, and explainability. Often the same on-call rotation handles both with augmented playbooks.

What are the most common AI incident types?

Hallucination harm in customer-facing AI (chat assistants, summarisation tools), followed by prompt injection in agentic systems, and bias surfacing in lending / hiring / pricing decisions. Model degradation from upstream version changes is rising as labs deprecate older models faster.

Do we have to notify regulators about AI incidents?

No — only when applicable law requires. The EU AI Act sets thresholds for high-risk systems; sector regulators (FFIEC for banks, FDA for medical devices, FTC for consumer claims) set their own. Have legal review the disclosure decision early; over-disclosing has costs too.

How do we prevent AI incidents in the first place?

Eval gates pre-launch, monitoring in production (hallucination rate, refusal rate, bias metrics), red-teaming for prompt injection, alerts on model version changes, and a clear incident reporting path so issues surface fast. Most AI incidents are preventable with these basics.
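Two of the preventive controls named above (a hallucination-rate threshold and model-version-change alerts) are cheap to sketch. All function names and threshold values below are illustrative assumptions, not recommended production values:

```python
def check_hallucination_rate(flagged: int, total: int,
                             threshold: float = 0.02) -> bool:
    """True if the sampled hallucination rate breaches the alert threshold.

    `flagged` = outputs marked as hallucinations by evals or human review;
    `total` = outputs sampled. The 2% default is an arbitrary placeholder.
    """
    return total > 0 and flagged / total > threshold


def check_model_version(expected: str, observed: str) -> bool:
    """True if the upstream model version silently changed
    (Section 2, category 5: model degradation)."""
    return expected != observed


# Example sweep: collect alerts that would open an incident channel.
alerts = []
if check_hallucination_rate(flagged=31, total=1000):
    alerts.append("hallucination rate above threshold")
if check_model_version("model-v2.1", "model-v2.2"):  # hypothetical IDs
    alerts.append("upstream model version changed")
```

In practice these checks run on a schedule against production samples, and a triggered alert feeds straight into the Section 4 "Detect" step.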
