When AI Calls Your Crypto Project a Scam: Hallucination Data

Research on AI hallucinations in crypto — false scam labels, incorrect exploit attributions, and outdated security information. Includes hallucination rates by project category and platform.

By Ramanath, CTO & Co-Founder at Presenc AI · Last updated: March 2026

AI Is Labeling Legitimate Crypto Projects as Scams

One of the most damaging forms of AI hallucination in the crypto space is the false scam label. When a user asks an AI assistant whether a project is safe, legitimate, or trustworthy, the stakes are uniquely high: a single "this project has been flagged as a scam" response — even if entirely fabricated — can deter investors, users, and partners. This study documents the scope of the problem, analyzing over 8,600 safety-related AI queries about 340 crypto projects to measure hallucination rates, identify patterns, and quantify the damage.

We define a "false scam label" as any AI response that describes a project as a scam, rug pull, fraud, or Ponzi scheme when no credible source supports that characterization. We distinguish this from "outdated security information," where AI accurately describes a past exploit or vulnerability but fails to note that it has been resolved, and from "misattributed exploits," where AI confuses one project's security incident with another.
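A first pass over response text can separate these three categories mechanically before human review. The sketch below is a simplified, hypothetical classifier; the term lists, incident-name arguments, and `classify_response` function are illustrative assumptions, not the study's actual annotation method, and any match would still need manual verification against credible sources.

```python
# Hypothetical first-pass classifier for AI safety responses.
# Term lists and function/argument names are illustrative assumptions;
# real annotation requires checking claims against credible sources.

SCAM_TERMS = ("scam", "rug pull", "fraud", "ponzi")
RESOLVED_TERMS = ("patched", "resolved", "fixed", "remediated")

def classify_response(response, project_exploits, cited_exploit=None):
    """Bucket a response into the study's three hallucination categories.

    project_exploits: incident names credibly linked to this project.
    cited_exploit: the incident the AI response refers to, if any.
    """
    text = response.lower()
    if any(term in text for term in SCAM_TERMS):
        # Scam language; a false scam label if no credible source supports it.
        return "false_scam_label"
    if cited_exploit is not None:
        if cited_exploit not in project_exploits:
            # Another project's incident pinned on this one.
            return "misattributed_exploit"
        if not any(term in text for term in RESOLVED_TERMS):
            # Real exploit described, resolution never mentioned.
            return "outdated_security_info"
    return "accurate"
```

A response flagged `false_scam_label` here is only a candidate: the defining condition is the absence of a credible supporting source, which keyword matching cannot establish on its own.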

Hallucination Rates by Project Category

The overall crypto-related hallucination rate across AI platforms is 11.4% — more than double the 4.8% rate we measured for general technology queries. Within crypto, the rates vary dramatically by project category.

Project Category              False Scam Label Rate   Outdated Security Info Rate   Misattributed Exploit Rate   Total Hallucination Rate
Layer 1 Blockchains           2.1%                    4.3%                          1.8%                         8.2%
Layer 2 / Rollups             3.4%                    5.1%                          2.9%                         11.4%
DEX Protocols                 4.8%                    6.7%                          4.2%                         15.7%
Lending Protocols             5.2%                    7.9%                          5.1%                         18.2%
Yield Aggregators             9.7%                    8.4%                          6.3%                         24.4%
Bridges                       7.8%                    9.2%                          8.1%                         25.1%
New Token Projects (<1 yr)    14.3%                   3.2%                          2.1%                         19.6%
NFT Marketplaces              8.1%                    5.6%                          3.4%                         17.1%
Meme Tokens                   11.2%                   2.8%                          1.9%                         15.9%
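The table has a simple additive structure: each category's Total Hallucination Rate is the sum of its three component rates. A minimal consistency check in Python, with the figures transcribed from the table above:

```python
# Each category's total rate = false scam + outdated info + misattribution.
# Figures transcribed from the table above.
RATES = {
    "Layer 1 Blockchains":        (2.1, 4.3, 1.8, 8.2),
    "Layer 2 / Rollups":          (3.4, 5.1, 2.9, 11.4),
    "DEX Protocols":              (4.8, 6.7, 4.2, 15.7),
    "Lending Protocols":          (5.2, 7.9, 5.1, 18.2),
    "Yield Aggregators":          (9.7, 8.4, 6.3, 24.4),
    "Bridges":                    (7.8, 9.2, 8.1, 25.1),
    "New Token Projects (<1 yr)": (14.3, 3.2, 2.1, 19.6),
    "NFT Marketplaces":           (8.1, 5.6, 3.4, 17.1),
    "Meme Tokens":                (11.2, 2.8, 1.9, 15.9),
}

def totals_consistent(rates, tol=1e-6):
    """True if every total equals the sum of its three components."""
    return all(abs(scam + outdated + misattr - total) <= tol
               for scam, outdated, misattr, total in rates.values())
```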

Yield aggregators and bridges have the highest total hallucination rates at 24-25%, driven by a combination of frequent real exploits in these categories (which AI models generalize too broadly) and rapid protocol changes that outpace training data. New token projects face the highest false scam label rate at 14.3% — AI models appear to apply a guilty-until-proven-innocent heuristic to projects with limited training data.

Layer 1 blockchains have the lowest hallucination rates, benefiting from extensive, well-maintained documentation and Wikipedia presence. This reinforces the finding from our broader visibility study that entity authority directly correlates with AI accuracy.

Platform-Specific Hallucination Patterns

Each AI platform exhibits distinct hallucination patterns in crypto queries, reflecting differences in training data, safety tuning, and retrieval architecture.

Platform              False Scam Label Rate   Outdated Info Rate   Misattribution Rate   Refusal Rate
ChatGPT (GPT-4o)      6.8%                    7.2%                 4.1%                  12%
Claude (3.5 Sonnet)   3.9%                    5.8%                 2.7%                  28%
Gemini                8.4%                    8.9%                 5.3%                  18%
Perplexity            2.1%                    2.4%                 1.8%                  4%

Claude has the lowest hallucination rates among non-retrieval models but the highest refusal rate — it frequently declines to assess whether a crypto project is legitimate, instead directing users to do their own research. While this reduces hallucination, it also means projects get zero visibility in safety-related queries on Claude. Gemini has the highest false scam label rate, frequently conflating projects with similar names or attributing category-level risks to specific protocols.

Perplexity again outperforms on accuracy due to real-time retrieval, but it is not immune: 2.1% of its crypto safety assessments still contained false scam labels, typically sourced from outdated or unreliable web pages that ranked well at query time.

The Real-World Impact of False Scam Labels

We surveyed 180 crypto project teams to understand the business impact of AI-generated misinformation. The findings are stark:

  • 34% of surveyed projects reported that at least one investor or partner cited an AI-generated safety concern during due diligence conversations.
  • 22% of projects said they had received support tickets from users who encountered negative AI assessments and wanted reassurance.
  • 8 projects reported measurable TVL decreases that they attributed in part to AI-driven safety concerns, with estimated losses ranging from $2M to $40M.
  • Average correction time for a false scam label on ChatGPT was 4.2 months. On Perplexity, corrections appeared within 1-3 weeks when the source content was updated.

The reputational damage of a false scam label is particularly insidious because users rarely tell you why they decided not to use your protocol. The investor who asked ChatGPT "is [your protocol] safe?" and received a hedged or negative response simply moves on to the next option. The damage is silent and cumulative.

Correction Strategies for Crypto Projects

Projects facing AI-generated false scam labels should pursue a multi-pronged correction strategy:

  • Publish a dedicated security page on your website that clearly documents your audit history, bug bounty program, and incident response record. This gives retrieval-based AI platforms like Perplexity an authoritative source to cite.
  • Ensure your Wikipedia article (if you have one) accurately reflects your security posture and does not contain outdated vulnerability information.
  • Seek coverage from reputable crypto media outlets that explicitly affirm your project's legitimacy; AI models weight established media sources heavily.
  • Use Presenc AI or similar monitoring tools to detect false claims across all platforms and file corrections through each platform's feedback mechanism.
  • Maintain consistent, positive signal across all web properties: outdated blog posts, abandoned social accounts, and inconsistent information across sources all contribute to AI uncertainty, which models resolve by defaulting to caution.
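The monitoring step can start as simply as periodically prompting each platform about your project and flagging scam-related language in the responses. A minimal sketch, where `query_platform` is a hypothetical placeholder for a real API call (e.g. via a platform's official SDK), not an actual library function:

```python
# Minimal AI-answer monitoring sketch. `query_platform` is a placeholder
# callable that wraps each platform's API; term list is illustrative.

SCAM_TERMS = ("scam", "rug pull", "fraud", "ponzi", "honeypot")

def flag_scam_language(response):
    """Return the scam-related terms found in an AI response."""
    text = response.lower()
    return [term for term in SCAM_TERMS if term in text]

def audit_project(project, platforms, query_platform):
    """Ask each platform a safety question; map platform -> flagged terms."""
    prompt = f"Is {project} safe and legitimate?"
    return {name: flag_scam_language(query_platform(name, prompt))
            for name in platforms}
```

Any non-empty list of flagged terms is a candidate false scam label: capture the exact prompt, the full response, and the date, then escalate through that platform's feedback channel.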

Frequently Asked Questions

How often does AI falsely label crypto projects as scams?
Our research found that 6.2% of AI safety assessments across all platforms contained false scam labels — meaning the AI described a project as a scam, fraud, or rug pull with no credible basis. The rate is highest for new token projects (14.3%) and meme tokens (11.2%), and lowest for established Layer 1 blockchains (2.1%). The overall crypto hallucination rate including outdated info and misattributed exploits is 11.4%.
Which AI platform is most accurate for crypto safety queries?
Perplexity is the most accurate for crypto safety queries, with a 2.1% false scam label rate and 6.3% total hallucination rate, thanks to its real-time web retrieval. Among base-model AI assistants, Claude has the lowest hallucination rate (3.9% false scam labels) but also the highest refusal rate (28%), meaning it often declines to assess project safety at all.
What should my project do if an AI falsely calls it a scam?
Document the false claim with screenshots and exact prompts immediately. Then pursue corrections on multiple fronts: publish an authoritative security page on your website, update your Wikipedia article with accurate security information, earn positive media coverage from reputable crypto publications, and submit corrections through OpenAI's feedback mechanism. Expect 4+ months for ChatGPT to correct base-model responses, but retrieval-augmented responses may update faster.
Why do AI models hallucinate more about crypto than other topics?
Crypto has an 11.4% hallucination rate compared to 4.8% for general tech queries. Three factors drive this. First, the crypto landscape changes faster than AI training data updates, creating information gaps that models fill with guesses. Second, the genuine prevalence of scams in crypto leads models to over-generalize risk to legitimate projects. Third, many crypto projects lack the strong entity signals (Wikipedia pages, established media coverage) that help AI models distinguish reliable from unreliable entities.

Track Your AI Visibility

See how your brand appears across ChatGPT, Claude, Perplexity, and other AI platforms. Start monitoring today.