Why You Need an Internal AI Policy
By mid-2026, most companies have employees using AI tools daily, whether sanctioned or not. A clear internal AI policy protects sensitive data, sets boundaries around approved tools, and reduces the chance of an embarrassing AI-related incident landing in the news. This template is written for a typical company of 50 to 5,000 employees and can be cut down or expanded as needed.
Section 1: Approved Tools
List the specific AI tools employees are permitted to use, and the data class each can be used on:
Tier 1 (Public data only, e.g., marketing copy, public research):
- ChatGPT (Free / Plus)
- Claude (Free / Pro)
- Gemini (Free / Advanced)
- Perplexity (Free / Pro)
Tier 2 (Internal data, no PII or confidential customer info):
- ChatGPT Team (with data retention disabled)
- Claude for Work
- Gemini for Workspace
- Microsoft Copilot (E3/E5 licences)
Tier 3 (Sensitive internal data, customer PII allowed with role-based access):
- Approved enterprise instance: [specify]
- Azure OpenAI deployment: [specify]
- On-prem LLM: [specify]
Prohibited:
- DeepSeek, Qwen, and other models hosted outside approved jurisdictions
- Any AI tool not on this list, unless the CTO or CISO has given written approval
Section 2: Data Handling Rules
- No customer PII in Tier 1 or Tier 2 tools. Names, emails, account numbers, contracts, and similar data must go only to Tier 3 tools.
- No source code with secrets. Strip API keys, credentials, and proprietary algorithms before prompting any external AI.
- No HR or legal matters outside Tier 3 tools; even within Tier 3, these require explicit approval from HR or Legal counsel.
- Output review. Treat AI output as a first draft. Verify facts, quotes, citations, and code before using it in customer-facing or production work.
- Logging. All Tier 3 AI prompts and responses are logged for audit purposes. Employees consent to this by using Tier 3 tools.
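The "strip secrets before prompting" rule above can be backed by a lightweight automated pre-flight check. A minimal sketch, assuming a few common secret formats; the patterns, function name, and `[REDACTED]` placeholder are illustrative, not part of the policy:

```python
import re

# Illustrative patterns for common secret formats; extend for your own stack.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r'(?i)(api[_-]?key|token|secret|password)["\']?\s*[:=]\s*\S+'),
]

def redact_secrets(text: str) -> tuple[str, int]:
    """Replace likely secrets with a placeholder; return text and hit count."""
    hits = 0
    for pattern in SECRET_PATTERNS:
        text, n = pattern.subn("[REDACTED]", text)
        hits += n
    return text, hits

clean, hits = redact_secrets('config = {"api_key": "sk-abc123"}')
```

A check like this catches obvious leaks, but it is a safety net, not a substitute for the rule itself: employees should still review prompts before sending internal material to any external tool.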
Section 3: Prohibited Use
- Generating content for malicious purposes (phishing, fraud, deepfakes of real people without consent).
- Using AI to circumvent company policies (HR processes, code review, security review).
- Using AI to write performance reviews of subordinates or peers.
- Using AI in any way that violates customer contracts (data residency, no-training clauses, etc.).
- Using AI to make hiring, firing, or compensation decisions without human review.
Section 4: Customer-Facing AI
- Any customer-facing AI feature must be reviewed by Product, Legal, and Security before launch.
- Disclose AI use to customers when AI is a meaningful part of the experience.
- Maintain a complaint and escalation path for AI errors that affect customers.
- Monitor for hallucinations with a sampled-review process, run at least monthly.
- Do not represent AI output as human-generated to customers.
Section 5: Training Data
- Company data may not be used to train external AI models without explicit approval from CTO and Legal.
- If a Tier 1 or Tier 2 tool's terms of service allow training on inputs, that tool may not be used with sensitive data, even if it otherwise qualifies as Tier 2.
- Internal models may be trained on internal data following the data-classification rules in Section 2.
- All training datasets are reviewed for licensing, PII, and compliance before use.
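The dataset review step above can include an automated PII pre-scan before manual checks. A minimal sketch; the two detectors shown are illustrative, and a real review would pair this with a dedicated scanning tool:

```python
import re

# Illustrative PII detectors; add patterns for the identifiers your data holds.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the PII categories detected in a single training record."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

flags = scan_record("Contact alice@example.com for details")
```

Records that come back with any flags should be quarantined for the human licensing and compliance review rather than silently dropped.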
Section 6: Review and Updates
- This policy is reviewed quarterly by the AI Governance Committee (CTO, CISO, Legal, HR, one Product lead).
- Material updates are communicated to all employees within five business days.
- Employees who use Tier 3 tools must complete annual AI-use training.
- Incidents are reported to the AI Governance Committee and documented.
Adoption Checklist
- Customise the approved-tools list to your actual stack.
- Get CTO, CISO, Legal, and HR sign-off before rollout.
- Publish in the employee handbook and require acknowledgement.
- Run a 30-minute company-wide training session at launch.
- Set a calendar reminder for quarterly review.