
AI Policy Template for Companies

Free template for an internal AI use policy. Covers approved tools, data handling, prohibited use, training data, customer-facing AI, and review cadence.

By Ramanath, CTO & Co-Founder at Presenc AI · Last updated: May 15, 2026

Why You Need an Internal AI Policy

By mid-2026 most companies have employees using AI tools daily, whether sanctioned or not. A clear internal AI policy protects sensitive data, sets approved-tool boundaries, and reduces the chance of an embarrassing AI-related incident landing in the news. This template is sized for a typical company of 50 to 5,000 employees and can be cut down or expanded as needed.

Section 1: Approved Tools

List the specific AI tools employees are permitted to use, and the data class each can be used on (a machine-readable sketch follows at the end of this section):

Tier 1 (Public data only, e.g., marketing copy, public research):
  - ChatGPT (Free / Plus)
  - Claude (Free / Pro)
  - Gemini (Free / Advanced)
  - Perplexity (Free / Pro)

Tier 2 (Internal data, no PII or confidential customer info):
  - ChatGPT Team (with data retention disabled)
  - Claude for Work
  - Gemini for Workspace
  - Microsoft Copilot (E3/E5 licences)

Tier 3 (Sensitive internal data, customer PII allowed with role-based access):
  - Approved enterprise instance: [specify]
  - Azure OpenAI deployment: [specify]
  - On-prem LLM: [specify]

Prohibited:
  - DeepSeek, Qwen, and other models hosted outside approved jurisdictions
  - Any AI tool not on this list, without written approval from CTO or CISO
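
For teams that enforce the allowlist programmatically (for example in an egress proxy or DLP rule set), the tiers can be expressed as a small machine-readable registry. A minimal Python sketch; the domain keys here are illustrative placeholders, not canonical identifiers:

```python
from enum import IntEnum

class DataTier(IntEnum):
    """Highest data classification a tool is approved to receive."""
    PUBLIC = 1      # Tier 1: public data only
    INTERNAL = 2    # Tier 2: internal data, no PII
    SENSITIVE = 3   # Tier 3: customer PII with role-based access

# Approved-tool registry mirroring Section 1. Keys are the domains a
# proxy or DLP layer would match on (placeholders -- use your own).
APPROVED_TOOLS: dict[str, DataTier] = {
    "chat.openai.com": DataTier.PUBLIC,
    "claude.ai": DataTier.PUBLIC,
    "gemini.google.com": DataTier.PUBLIC,
    "www.perplexity.ai": DataTier.PUBLIC,
    "copilot.microsoft.com": DataTier.INTERNAL,
    # Tier 3 endpoints are deployment-specific; fill in your own, e.g.
    # "your-org.openai.azure.com": DataTier.SENSITIVE,
}

def is_allowed(domain: str, data_tier: DataTier) -> bool:
    """True if `domain` is approved for data classified at `data_tier`.
    Tools not in the registry fail closed, matching the prohibition on
    unlisted tools above."""
    approved_up_to = APPROVED_TOOLS.get(domain)
    return approved_up_to is not None and data_tier <= approved_up_to
```

For example, `is_allowed("claude.ai", DataTier.INTERNAL)` returns False: the free Claude tier is approved for public data only.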

Section 2: Data Handling Rules

  1. No customer PII in Tier 1 or Tier 2 tools. Names, emails, account numbers, contracts, and similar must only go to Tier 3.
  2. No source code with secrets. Strip API keys, credentials, and proprietary algorithms before prompting any external AI (a scrubbing sketch follows this list).
  3. No HR or legal matters outside Tier 3. Even in Tier 3 tools, HR and legal matters require explicit approval from HR or Legal counsel.
  4. Output review. Treat AI output as a first draft. Verify facts, quotes, citations, and code before using it in customer-facing or production work.
  5. Logging. All Tier 3 AI prompts and responses are logged for audit purposes. Employees consent to this by using Tier 3 tools.
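
Rule 2 can get an automated assist: a pre-prompt scrub that redacts anything matching known credential shapes. A minimal sketch; the patterns below cover a few common formats and complement manual review rather than replace it:

```python
import re

# Common credential shapes; extend with formats specific to your stack.
# Illustrative, not exhaustive -- a scrub like this catches obvious
# leaks, not proprietary algorithms or cleverly embedded secrets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                # AWS access key ID
    re.compile(r"ghp_[0-9A-Za-z]{36}"),             # GitHub personal access token
    re.compile(r"sk-[0-9A-Za-z_-]{20,}"),           # common API-key prefix
    re.compile(r"(?i)bearer\s+[0-9a-z._-]{20,}"),   # bearer tokens in headers
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),  # key=value leaks
]

def scrub(prompt: str) -> tuple[str, int]:
    """Redact anything matching a known secret pattern.
    Returns the scrubbed prompt and the number of redactions made."""
    redactions = 0
    for pattern in SECRET_PATTERNS:
        prompt, n = pattern.subn("[REDACTED]", prompt)
        redactions += n
    return prompt, redactions

clean, n = scrub("curl -H 'Authorization: Bearer abc123def456ghi789jkl'")
print(clean)  # curl -H 'Authorization: [REDACTED]'
print(n)      # 1
```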

Section 3: Prohibited Use

  • Generating content for malicious purposes (phishing, fraud, deepfakes of real people without consent).
  • Using AI to circumvent company policies (HR processes, code review, security review).
  • Using AI to write performance reviews of subordinates or peers.
  • Using AI in any way that violates customer contracts (data residency, no-training clauses, etc.).
  • Using AI to make hiring, firing, or compensation decisions without human review.

Section 4: Customer-Facing AI

  1. Any customer-facing AI feature must be reviewed by Product, Legal, and Security before launch.
  2. Disclose AI use to customers when AI is a meaningful part of the experience.
  3. Maintain a complaint and escalation path for AI errors that affect customers.
  4. Monitor for hallucinations using a sampled-review process, run at least monthly (see the sketch after this list).
  5. Do not represent AI output as human-generated to customers.
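
The sampled review in point 4 can be as lightweight as pulling a random slice of logged interactions each month for human grading. A minimal sketch, assuming interactions are already captured as records (field names and defaults are hypothetical):

```python
import random

def sample_for_review(interactions: list[dict], rate: float = 0.02,
                      minimum: int = 50, seed: int | None = None) -> list[dict]:
    """Pick a random sample of logged AI interactions for human review.
    `rate` is the sampling fraction; `minimum` keeps low-volume months
    from producing samples too small to be meaningful."""
    rng = random.Random(seed)
    k = min(len(interactions), max(minimum, int(len(interactions) * rate)))
    return rng.sample(interactions, k)

# Reviewers grade each sampled item (accurate / minor issue / hallucination)
# and the monthly hallucination rate is reported per Section 6.
```

A 2% rate with a floor of 50 is a starting point, not a standard; tune both to your volume and risk tolerance.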

Section 5: Training Data

  • Company data may not be used to train external AI models without explicit approval from CTO and Legal.
  • If a Tier 1 or Tier 2 tool's terms of service allow training on inputs, that tool may not be used with sensitive data, even if it would otherwise qualify as Tier 2.
  • Internal models may be trained on internal data following the data-classification rules in Section 2.
  • All training datasets are reviewed for licensing, PII, and compliance before use (an automated first-pass sketch follows this list).
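
The PII part of that review can get an automated first pass before human sign-off. A minimal sketch, assuming records are plain strings; pattern checks like these flag only the obvious cases (emails, phone-number shapes, US SSNs) and are a pre-filter, not a substitute for the review itself:

```python
import re

# Obvious PII shapes only; names, addresses, and free-text PII need
# human review or a dedicated detection service.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),   # loose phone-number shape
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN format
}

def flag_pii(records: list[str]) -> list[tuple[int, str]]:
    """Return (record index, PII kind) for every record that trips a pattern."""
    hits = []
    for i, text in enumerate(records):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, kind))
    return hits
```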

Section 6: Review and Updates

  1. This policy is reviewed quarterly by the AI Governance Committee (CTO, CISO, Legal, HR, one Product lead).
  2. Material updates are communicated to all employees within five business days.
  3. Employees with access to Tier 3 tools must complete an annual AI-use training session.
  4. Incidents are reported to the AI Governance Committee and documented.

Adoption Checklist

  1. Customise the approved-tools list to your actual stack.
  2. Get CTO, CISO, Legal, and HR sign-off before rollout.
  3. Publish in the employee handbook and require acknowledgement.
  4. Run a 30-minute company-wide training session at launch.
  5. Set a calendar reminder for quarterly review.

Frequently Asked Questions

How long should an internal AI policy be?

2-4 pages is typical for small and mid-sized companies. Enterprises with regulated workloads (financial services, healthcare) often run to 8-12 pages with appendices on data residency, model bias, and audit logging. Avoid going longer; policies that nobody reads are not enforceable.

Should the policy name specific tools?

Yes. A generic "use approved AI tools" line is unenforceable. List the actual tools by tier so employees know what is allowed where. Update the list every quarter to reflect new tools and changing risk profiles.

Do we need an AI governance committee?

For companies above ~100 employees, yes. The committee does not need to be large — CTO, CISO, Legal, HR, and one Product lead is enough. It owns the quarterly review, incident response, and approval of new tools or new use cases.

How do we enforce the policy?

A combination of (1) acknowledgement-based onboarding (employees sign at hire and annually), (2) DLP / CASB blocking of prohibited tools at the network layer, (3) audit logs on Tier 3 tools, and (4) clear incident response for violations. Pure honour-system policies do not work.
