Step 1: Define Your Audit Scope
Before querying a single AI platform, you need to establish what you're measuring. An effective AI visibility audit begins with three inputs: your target keywords (the topics and categories your customers search), your competitor set (the brands you expect to appear alongside), and your platform list (which AI assistants matter for your audience).
Start by listing 20–30 prompts that a prospective customer might type into an AI assistant. These should span the buying journey — from awareness ("what is [category]?") to consideration ("best [category] tools for [use case]") to decision ("compare [brand A] vs [brand B]"). The specificity of your prompt set determines the quality of your audit; generic prompts yield generic insights.
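For illustration, a prompt set spanning those three stages can be kept as a small set of templates. Everything here is a placeholder to replace with your own category, use case, and brand names:

```python
# Prompt templates spanning the buying journey. The wording and the
# placeholder fields ({category}, {use_case}, {brand_a}, {brand_b})
# are illustrative assumptions -- substitute your own.
AUDIT_PROMPTS = {
    "awareness": [
        "What is {category}?",
        "How does {category} software work?",
    ],
    "consideration": [
        "Best {category} tools for {use_case}",
        "Which {category} platform should a {use_case} team choose?",
    ],
    "decision": [
        "Compare {brand_a} vs {brand_b}",
        "Is {brand_a} better than {brand_b} for {use_case}?",
    ],
}

def render_prompts(category, use_case, brand_a, brand_b):
    """Expand every template into a concrete prompt for one audit run."""
    values = {"category": category, "use_case": use_case,
              "brand_a": brand_a, "brand_b": brand_b}
    return [template.format(**values)
            for stage in AUDIT_PROMPTS.values()
            for template in stage]
```

Keeping prompts as templates makes it easy to reuse the same audit structure across product lines or markets.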
For most B2B brands, the platform list should include at minimum ChatGPT, Claude, Perplexity, and Gemini. B2C brands should add Siri and any vertical-specific assistants relevant to their industry.
Step 2: Run Manual Prompt Tests
With your prompt set defined, systematically query each AI platform. For each prompt, record: whether your brand appears in the response, where it appears (first mention, middle of list, or footnote), how accurately the AI describes your brand, and whether competitors appear alongside you.
Use a consistent format — a spreadsheet works for manual audits. Create columns for the prompt, platform, brand mentioned (yes/no), position, accuracy score (1–5), and competitor mentions. Run each prompt on each platform without being logged into any account to avoid personalization bias.
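A minimal sketch of that spreadsheet structure in code, using the column set described above (the example row values are hypothetical):

```python
import csv
import io

# Column set for the manual audit log, matching the fields above.
FIELDS = ["prompt", "platform", "brand_mentioned", "position",
          "accuracy_score", "competitor_mentions"]

def write_audit_rows(fileobj, rows):
    """Write one audit test result per row to a CSV-formatted file object."""
    writer = csv.DictWriter(fileobj, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

# Example: logging a single recorded result (values are illustrative).
buf = io.StringIO()
write_audit_rows(buf, [{
    "prompt": "best CRM tools for sales teams",
    "platform": "ChatGPT",
    "brand_mentioned": "yes",
    "position": "middle of list",
    "accuracy_score": 4,
    "competitor_mentions": "BrandX; BrandY",
}])
```

A CSV produced this way can be opened directly in any spreadsheet tool, so manual and scripted audits share one format.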
Critical nuance: run each prompt at least three times on each platform. AI responses are non-deterministic, meaning the same prompt can produce different outputs. If your brand appears in one of three runs, that's inconsistent visibility — very different from appearing in all three.
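That consistency check is easy to formalize: given the yes/no outcome of each repeated run, classify visibility as consistent, inconsistent, or absent. A small sketch:

```python
def appearance_rate(runs):
    """Fraction of runs in which the brand appeared (True/False per run)."""
    return sum(runs) / len(runs)

def classify_visibility(runs):
    """Label repeated-run results: consistent, inconsistent, or absent."""
    rate = appearance_rate(runs)
    if rate == 1.0:
        return "consistent"
    if rate == 0.0:
        return "absent"
    return "inconsistent"
```

Recording the rate itself (1/3, 2/3, 3/3) rather than a single yes/no preserves the nuance that non-determinism introduces.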
Step 3: Score Your Visibility Across Six Factors
Raw mention data is useful but insufficient. A proper audit evaluates six distinct visibility factors that together form your GEO score:
| Factor | What It Measures | How to Test |
|---|---|---|
| Knowledge Presence | Does the AI know your brand exists? | Ask "What is [brand]?" directly |
| Semantic Authority | Does the AI associate you with the right topics? | Ask category questions, check if you appear |
| Entity Linking | Can the AI distinguish you from similar names? | Ask about your brand in ambiguous contexts |
| Citations & Mentions | Does the AI cite your content as a source? | Check Perplexity and Gemini source links |
| RAG Fetchability | Can AI crawlers access your site? | Check robots.txt, test live retrieval |
| Contextual Integrity | Is the information about you accurate? | Verify facts, pricing, features in responses |
Rate each factor on a 0–100 scale based on your prompt test results. A brand might score 80 on Knowledge Presence but only 20 on RAG Fetchability if it has blocked AI crawlers.
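The text doesn't prescribe how the six factor scores combine into one overall GEO score; a simple sketch, assuming an equal-weighted average (the `weights` parameter is a hypothetical knob, not part of any defined methodology):

```python
FACTORS = ["knowledge_presence", "semantic_authority", "entity_linking",
           "citations_mentions", "rag_fetchability", "contextual_integrity"]

def geo_score(scores, weights=None):
    """Combine six 0-100 factor scores into one overall 0-100 score.

    Equal weighting is an assumption, not a prescribed formula; pass
    `weights` to emphasize the factors that matter most to your brand.
    """
    weights = weights or {factor: 1.0 for factor in FACTORS}
    total = sum(weights.values())
    return sum(scores[f] * weights[f] for f in FACTORS) / total
```

Whatever combination you choose, keep it fixed across audits so scores stay comparable over time.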
Step 4: Benchmark Against Competitors
An audit in isolation tells you your absolute visibility. Benchmarking against competitors tells you your relative position — which is what actually determines whether customers find you or a rival.
Run the same prompt set again, this time tracking which competitors appear and how often. Calculate share of voice: for each prompt, note which brands are mentioned, then calculate the percentage of category-relevant prompts where each brand appears. If your competitor appears in 80% of "best [category] tool" prompts and you appear in 30%, you have a concrete gap to close.
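The share-of-voice calculation described above translates directly into code; `prompt_results` here is an assumed representation, one set of mentioned brands per prompt:

```python
def share_of_voice(prompt_results, brand):
    """Percent of prompts in which `brand` was mentioned.

    `prompt_results` is a list with one set of mentioned brands
    per category-relevant prompt tested.
    """
    hits = sum(1 for mentioned in prompt_results if brand in mentioned)
    return 100.0 * hits / len(prompt_results)
```

Running this for every brand in your competitor set yields the relative-position picture the benchmark is after.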
Step 5: Identify Root Causes for Gaps
With scores and benchmarks in hand, diagnose why gaps exist. Common root causes include:
- Thin web presence: Few authoritative third-party mentions, limited Wikipedia or directory presence
- Inconsistent entity data: Your brand name, description, or category differs across sources
- Blocked AI crawlers: robots.txt rules preventing GPTBot, ClaudeBot, or PerplexityBot from accessing your content
- Weak content depth: Surface-level content that doesn't establish topical authority
- Competitor dominance: Rivals have invested heavily in PR, structured data, and authoritative content
Each root cause maps to specific remediation strategies. A robots.txt issue is a 10-minute fix; building semantic authority through content and PR is a 6-month campaign.
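For the robots.txt case specifically, Python's standard library can check whether the crawlers named above are blocked. A sketch that parses a robots.txt file you've already fetched:

```python
from urllib.robotparser import RobotFileParser

# User-agent strings for the major AI crawlers mentioned above.
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def ai_crawler_access(robots_txt, path="/"):
    """Map each AI crawler to whether robots.txt permits fetching `path`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {bot: parser.can_fetch(bot, path) for bot in AI_CRAWLERS}
```

For example, a robots.txt that disallows GPTBot but allows everything else would report `GPTBot` as blocked and the other two as allowed. Note this only checks the rules as written; live retrieval can still fail for other reasons (firewalls, bot-detection services), so test actual fetches too.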
Step 6: Automate Ongoing Monitoring
A one-time audit is a snapshot. AI models update their knowledge continuously — Perplexity in real time, ChatGPT and Claude with periodic training updates. Your visibility can shift without warning as models retrain or competitors improve their presence.
This is where Presenc AI transforms the audit from a manual project into a continuous intelligence system. Presenc automates prompt testing across all major AI platforms, tracks your six-factor GEO score over time, monitors competitor visibility, and alerts you to changes. What took days of manual spreadsheet work becomes a live dashboard updated automatically.
Set a cadence for reviewing your AI visibility data — weekly for fast-moving categories, monthly for stable industries. Track trendlines rather than point-in-time scores, and tie visibility changes back to specific actions (content published, PR coverage earned, technical fixes deployed).
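One way to tie that review cadence to alerts: flag when the latest score falls meaningfully below the trendline of earlier reviews. The 10-point threshold here is an arbitrary assumption to tune for your category's volatility:

```python
def visibility_dropped(history, threshold=10):
    """True if the latest score fell at least `threshold` points below
    the average of all earlier reviews. The threshold is a tunable guess."""
    if len(history) < 2:
        return False  # not enough data for a trendline
    baseline = sum(history[:-1]) / len(history[:-1])
    return baseline - history[-1] >= threshold
```

Comparing against the average of prior reviews, rather than just the previous score, smooths out the run-to-run noise that non-deterministic AI responses introduce.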
Step 7: Build Your Remediation Roadmap
Prioritize fixes by impact and effort. Quick wins like unblocking AI crawlers or fixing inconsistent entity data should come first. Then invest in medium-term plays like earning authoritative citations and building topical content clusters. The audit isn't the end — it's the starting point for a systematic GEO strategy.
Document your baseline scores and set targets for each factor. A realistic 90-day goal might be: increase Knowledge Presence from 40 to 65, improve RAG Fetchability from 10 to 90 (by fixing robots.txt), and raise share of voice from 15% to 30% in your top five category prompts.
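Tracking progress against those baseline-to-target spans is straightforward to automate; a sketch using the example figures above as illustrative data (the factor keys are assumptions matching the six-factor table):

```python
def progress_report(baseline, target, current):
    """Percent of the way from baseline toward each factor's target."""
    report = {}
    for factor, goal in target.items():
        span = goal - baseline[factor]
        done = current[factor] - baseline[factor]
        report[factor] = round(100.0 * done / span, 1) if span else 100.0
    return report
```

Reviewing this report at your monitoring cadence shows at a glance which remediation work is paying off and which factors are stalled.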