Contextualising AI assistants' environmental impact
Why individual usage and infrastructure growth are different equations
I’ve written before about the environmental impact of AI and my view that one can (and should) be concerned by the rapid growth in data centres powered predominantly by fossil fuels and reckless infrastructure decisions, without worrying too much about the impact of one’s own individual interactions with an AI assistant.
I’ve also written about how inaccurate or outdated stats about AI’s environmental impact have been repeated and amplified, while corrections get less attention (an extreme recent example: a key water consumption figure in the bestselling book Empire of AI was overstated by a factor of 1,000).
Others have written detailed technical analyses of this topic. Not everyone’s up for digging into the detail though, so I think a simplified comparative view that provides some context for AI assistants’ environmental impact, relative to other activities and industries, could be helpful.
First, some important caveats:
1. It is impossible to get accurate and wholly comparable figures across these different activities and industries. Consequently, these are all best-effort estimates that aim to contextualise AI’s environmental impact in terms of orders of magnitude.
2. The water consumption and carbon footprint of AI assistants and data centres are particularly tricky to pin down, partly because AI companies haven’t been as transparent as they could be in sharing data and partly because it’s a moving target. AI models are getting materially more efficient to train and run, but they are also increasingly performing more compute-intensive tasks in day-to-day operation (e.g. video generation, reasoning, other agentic work).
3. My carbon footprint estimates assume fossil fuels are being used to meet a high proportion of the energy demands of the data centres. These figures could be materially lower if a higher proportion of green energy were used.
4. The AI figures are likely to be overstated but I wanted to err on the side of overestimating rather than underestimating AI’s impact. Google published figures last summer suggesting “the median Gemini Apps text prompt…emits 0.03 grams of carbon dioxide equivalent (gCO2e), and consumes 0.26 milliliters (or about five drops) of water”. My estimates are around 30 times higher than Google’s as I wanted to take account of indirect impacts that Google’s per-prompt figures don’t fully capture (especially the water consumed in electricity generation upstream of the data centre) and the fact that real-world usage varies hugely by grid, cooling method and workload.
5. I have attempted to isolate the water consumption and carbon footprint attributable to data centres’ AI workloads specifically, because a significant proportion (three-quarters, according to JLL Research) of current data centre usage is unrelated to AI.
6. I am not an expert in energy or water usage. I have researched this to the best of my ability but welcome evidence-based challenges to these orders-of-magnitude estimates.
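To make the scale of the adjustment in point 4 concrete, it reduces to a simple multiplication. This is a sketch only: the 30x factor is the round figure stated above, so the resulting per-prompt numbers are illustrative orders of magnitude, not precise measurements.

```python
# Google's published medians for a Gemini Apps text prompt (from the text),
# scaled by the ~30x uplift the author applies to account for indirect
# impacts (e.g. water consumed in upstream electricity generation) and
# real-world variation by grid, cooling method and workload.

GOOGLE_CO2_G_PER_PROMPT = 0.03      # grams CO2-equivalent per median prompt
GOOGLE_WATER_ML_PER_PROMPT = 0.26   # millilitres per median prompt ("five drops")
ADJUSTMENT_FACTOR = 30              # round order-of-magnitude uplift

adjusted_co2_g = GOOGLE_CO2_G_PER_PROMPT * ADJUSTMENT_FACTOR
adjusted_water_ml = GOOGLE_WATER_ML_PER_PROMPT * ADJUSTMENT_FACTOR

print(f"Adjusted estimate: ~{adjusted_co2_g:.1f} gCO2e and "
      f"~{adjusted_water_ml:.1f} ml of water per prompt")
```

On these assumptions, a single prompt lands at roughly 1 gram of CO2-equivalent and 8 millilitres of water, which is the scale at which the comparisons in this piece operate.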
What these graphics don’t show is how rapidly the water and energy consumption of data centres is increasing. That increase is driven not principally by current usage of AI assistants, but by an almighty competitive hedge by the tech giants (supported by the US and Chinese governments) against being caught without the processing power they believe they might need in the future, which is leading them to build out massive new data centres and sign long-term utility contracts.
I believe day-to-day usage of AI assistants is going to account for an increasingly small proportion of AI-related energy and water consumption. I remain concerned about the potential environmental impact of AI video generation and carelessly deployed AI agents, which both require significantly more compute (and therefore energy and water) than conversations with AI assistants.
Rather than individuals boycotting AI assistants, we need governments to require transparency and accountability from AI companies over their energy and water infrastructure strategies, and responsible deployment of more compute-intensive models (good to see Microsoft taking some initiative on the former).
These estimates draw from published academic research, industry reports, and company disclosures where available. For AI assistant usage, I started with Google’s published per-prompt figures for Gemini and adjusted upward to account for indirect impacts (particularly water consumption in electricity generation) and real-world variation across different grids and cooling methods. For comparison activities, I used established lifecycle assessment data where consensus figures exist. Industry-scale estimates extrapolate from data centre industry reports (particularly for the proportion of AI vs non-AI workloads) and published water/energy consumption data.
Where sources conflicted, I prioritised recent peer-reviewed research and original sources over secondary reporting. The goal was to establish defensible orders of magnitude rather than precise calculations.