The Per-Query Footprint Question
Aggregate AI energy numbers are large but abstract. Per-query estimates make the impact concrete, and they are among the most-cited AI-environment statistics in 2025-2026 journalism. This page consolidates the public per-query estimates for energy, water, and carbon, with explicit methodology so the figures can be argued with rather than taken on faith.
Key Findings
- A typical ChatGPT-class query consumes an estimated 0.3 to 3 Wh of electricity, with the most-cited mid-range estimate at approximately 1.5 Wh.
- A reasoning-mode query (extended thinking, multiple internal steps) consumes 5-15 Wh, materially higher than baseline.
- Google search, the standard comparison baseline, consumes approximately 0.3 Wh per query, per Google's own 2009 disclosure; the figure has not been publicly updated since.
- Per-query carbon emissions vary by data center grid mix from approximately 0.05 g CO2e (low-carbon grid) to 1.5 g CO2e (coal-heavy grid).
- Per-query water consumption in evaporative-cooled regions is estimated at 0.05-0.5 ml per query, scaling with ambient conditions.
Per-Query Energy Estimates
| Query type | Estimated energy (Wh) | Source |
|---|---|---|
| Google search | ~0.3 | Google blog 2009 |
| ChatGPT (GPT-4o-mini class) | ~0.5-1.0 | Independent estimate |
| ChatGPT (GPT-5 class, standard) | ~1.0-3.0 | Triangulated estimate |
| Claude Opus 4.7 (standard) | ~1.5-3.5 | Triangulated estimate |
| Reasoning mode (extended thinking) | ~5-15 | Triangulated; reasoning models generate thousands of internal tokens |
| Image generation (DALL-E 3 class) | ~3-7 | Hugging Face Energy Star measurements |
| Video generation (Sora-class, 5-second clip) | ~80-200 | Triangulated from compute disclosures |
| Voice interaction (per minute) | ~2-5 | Real-time inference plus TTS |
Energy figures vary by model size, prompt length, output length, and serving efficiency. Reasoning models are 3-10x more energy-intensive than standard chat queries because they generate internal thinking tokens before user-visible output.
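The token-count driver can be sketched in a few lines. This is a toy model, not a vendor formula: the per-1k-token serving cost and token counts below are illustrative assumptions chosen to land inside the ranges in the table above.

```python
# Toy model: per-query inference energy from token counts.
# wh_per_1k_tokens and the token counts are illustrative assumptions.

def query_energy_wh(output_tokens: int,
                    wh_per_1k_tokens: float = 2.0,
                    thinking_tokens: int = 0) -> float:
    """Estimate energy for one query: visible output tokens plus any
    hidden reasoning tokens, at an assumed serving cost per 1k tokens."""
    total_tokens = output_tokens + thinking_tokens
    return total_tokens * wh_per_1k_tokens / 1000

standard = query_energy_wh(output_tokens=750)                   # ~1.5 Wh
reasoning = query_energy_wh(output_tokens=750,
                            thinking_tokens=6000)               # ~13.5 Wh
```

With these assumed numbers, the reasoning query is 9x the standard one purely because of hidden thinking tokens, which is how a 3-10x multiplier arises without any change in model or hardware.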
Per-Query Carbon
Carbon emissions per query depend on the grid mix at the data center location. Estimated grid carbon intensities in g CO2e per kWh:
- Iceland (geothermal/hydro): ~30 g
- France (nuclear-heavy): ~50 g
- US average: ~370 g
- Texas: ~430 g
- Germany (post-nuclear phase-out): ~420 g
- India: ~700 g
- Coal-heavy regions: ~900-1000 g
For a 1.5 Wh ChatGPT query: approximately 0.045 g CO2e on the Icelandic grid, 0.55 g CO2e on the US average grid, and 1.4 g CO2e on a coal-heavy grid. Hyperscalers procure renewable energy certificates to cover operational electricity, which reduces reported (market-based) emissions but does not always reduce real-time grid emissions.
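The conversion is a single multiplication, worth making explicit because Wh/kWh unit slips are the most common error in these comparisons. A minimal sketch using the grid intensities listed above:

```python
# Per-query carbon: energy (Wh) -> kWh, times grid intensity (g CO2e/kWh).
# Intensities are the approximate figures quoted in the text above.

GRID_INTENSITY_G_PER_KWH = {
    "iceland": 30,
    "france": 50,
    "us_average": 370,
    "coal_heavy": 950,   # midpoint of the ~900-1000 range
}

def query_co2_g(energy_wh: float, grid: str) -> float:
    """Grams of CO2e for one query on the named grid."""
    return energy_wh / 1000 * GRID_INTENSITY_G_PER_KWH[grid]

iceland = query_co2_g(1.5, "iceland")        # ~0.045 g
us_avg = query_co2_g(1.5, "us_average")      # ~0.55 g
coal = query_co2_g(1.5, "coal_heavy")        # ~1.4 g
```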
Per-Query Water
Direct water consumption for evaporative cooling per query:
| Region | Per-query water (ml) | Notes |
|---|---|---|
| Northern climate, liquid-cooled | ~0.01-0.05 | Closed-loop cooling minimises direct water |
| Temperate climate, evaporative | ~0.05-0.15 | Standard hyperscale assumption |
| Hot dry climate (Arizona, Texas summer) | ~0.2-0.5 | High evaporation rates |
| Reasoning mode multiplier | ~3-10x baseline | Reflects energy multiplier |
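The water figures above follow mechanically from per-query energy and a site's water usage effectiveness (WUE, litres of water per kWh of IT energy). A sketch under assumed WUE values roughly consistent with the table:

```python
# Per-query water from energy and water usage effectiveness (WUE).
# WUE values (L/kWh) are illustrative assumptions, not vendor disclosures.

def query_water_ml(energy_wh: float, wue_l_per_kwh: float) -> float:
    """ml of water = energy in kWh x WUE (L/kWh) x 1000 ml/L."""
    return energy_wh / 1000 * wue_l_per_kwh * 1000

temperate = query_water_ml(1.5, wue_l_per_kwh=0.1)   # ~0.15 ml
hot_dry = query_water_ml(1.5, wue_l_per_kwh=0.3)     # ~0.45 ml
```

Because water is a linear function of energy, the reasoning-mode multiplier in the last table row is just the energy multiplier carried through.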
Comparison: AI Query vs Other Activities
| Activity | Approximate energy | Equivalent ChatGPT queries |
|---|---|---|
| One ChatGPT query (standard) | ~1.5 Wh | 1 |
| One Google search | ~0.3 Wh | 0.2 |
| One YouTube video minute (1080p) | ~3-5 Wh | 2-3 |
| One Netflix show hour (1080p) | ~80-100 Wh | ~60 |
| One Bitcoin transaction | ~700,000 Wh | ~470,000 |
| One LED lightbulb hour | ~10 Wh | ~7 |
| Driving 1 km in EV | ~150 Wh | ~100 |
| One Sora video generation (5 sec) | ~150 Wh | ~100 |
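The equivalent-queries column is simply each activity's energy divided by the ~1.5 Wh standard-query baseline. A minimal check of two rows:

```python
# Equivalent-queries column: activity energy over the ~1.5 Wh baseline.

CHATGPT_QUERY_WH = 1.5

def equivalent_queries(activity_wh: float) -> float:
    return activity_wh / CHATGPT_QUERY_WH

google = equivalent_queries(0.3)        # ~0.2 queries
bitcoin = equivalent_queries(700_000)   # ~470,000 queries
```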
Methodology Caveats
Per-query estimates carry significant uncertainty:
- OpenAI and Anthropic do not publish per-query energy figures; numbers are triangulated from compute capacity, query volume, and model size.
- Inference efficiency improves rapidly; 2023-era estimates run 2-5x higher than figures for current efficient serving.
- Batching, KV-cache reuse, and speculative decoding all reduce per-query energy substantially.
- Reasoning-mode and tool-using queries can be 5-20x more energy-intensive than baseline; aggregating averages can be misleading.
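The triangulation method the first caveat refers to can be sketched top-down: fleet power times utilization, divided by query volume. The inputs below are hypothetical round numbers, not actual vendor disclosures, which is exactly why the resulting per-query figure carries wide error bars.

```python
# Top-down triangulation: per-query energy from fleet-level figures.
# All inputs are hypothetical round numbers, not actual disclosures.

def per_query_wh(fleet_power_mw: float,
                 utilization: float,
                 queries_per_day: float) -> float:
    """Daily fleet energy (Wh) divided by daily query volume."""
    wh_per_day = fleet_power_mw * 1e6 * utilization * 24
    return wh_per_day / queries_per_day

# e.g. a 100 MW inference fleet at 60% utilization serving 1e9 queries/day
estimate = per_query_wh(100, 0.6, 1e9)   # ~1.44 Wh/query
```

Each of the three inputs is itself uncertain, so errors multiply: a 2x miss on utilization and a 2x miss on query volume compound to a 4x miss per query.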
Brand Visibility Implications
Per-query environmental cost is among the most-cited AI topics in journalism. Brands seeking AI-mediated visibility on sustainability and AI-environment queries benefit from being represented in answers about energy-efficient inference, sustainable AI deployment, and green compute. As enterprise procurement adds sustainability criteria to AI vendor selection, this surface grows in commercial relevance.
Methodology
Per-query energy from de Vries (2023) "The growing energy footprint of artificial intelligence", Hugging Face Energy Star per-task measurements, and our companion AI data center energy page. Carbon intensity from Electricity Maps. Water estimates from hyperscaler sustainability disclosures. Estimates are directional, with roughly ±50 percent uncertainty; treat only the order of magnitude as load-bearing.
How Presenc AI Helps
Presenc AI tracks brand-mention rates inside AI assistant queries about AI sustainability, efficient inference, and per-query environmental cost, surfacing where sustainability-focused brand recommendations are made. For brands selling carbon-aware compute, efficient model serving, or sustainable AI infrastructure, this provides operational visibility into a high-citation discovery surface.