I’m worried about the environmental impact of AI.
More specifically, I’m worried that the arms race between companies and nations to ‘win’ (or at least not lose) in AI is leading to reckless decision-making around infrastructure development and AI system design and deployment.
Emboldened - and sometimes directly facilitated - by the Trump administration, US tech companies are investing in data centres at an unprecedented level, with AI being a primary driver.
There’s less visibility into Chinese investment in data centres, although it’s fair to assume it is significant (with capacity reportedly being built ahead of demand).
And other nations are increasingly joining the party.
Once up and running (construction itself also has a significant environmental impact), data centres have two major ongoing environmental impacts: carbon emissions from generating the electricity that powers the processors (depending on the energy sources used) and water consumption to cool them.
Whilst there are less environmentally damaging ways of generating that energy and managing that cooling, the speed and magnitude of investment in data centres is not being tempered by any requirement to use them.
Some context
From some of the media coverage, you’d be forgiven for thinking that data centres already accounted for a significant proportion of global CO2 emissions. In practice, most estimates have data centres accounting for less than 1% of total CO2 emissions. A recent report from the International Energy Agency (an autonomous intergovernmental organisation) concluded: “Despite rapid growth, data centres remain a relatively small part of the overall power system, rising from about 1% of global electricity generation today to 3% in 2030, accounting for less than 1% of total global CO2 emissions”.
And AI isn’t the first, or currently the primary, use of data centres. Most of the internet is now hosted in data centres. Every YouTube/Netflix view, TikTok/Instagram/LinkedIn scroll and Zoom/Meet/Teams call makes countless requests to data centres. Right now, your phone is likely carrying out various background tasks, such as checking emails and backing up photos and videos - all via data centres.
However, AI-related data centre usage is growing rapidly and even a low single digit percentage contribution to CO2 emissions is still problematic in the context of the current climate emergency.
Reasons to be hopeful
A year ago, most AI companies were working on the assumption that training larger and larger models (a very energy-intensive process) would continue to yield better results.
Most of those companies have now accepted that scaling isn’t the (only) answer.
Companies like DeepSeek have shown that there are more efficient ways of delivering improvements in model performance (such as Mixture of Experts), and there has been considerable progress in the development of models which can be run locally on consumer hardware, rather than in data centres.
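For the technically curious, here’s a minimal sketch (in Python/NumPy, with made-up dimensions and random weights) of the top-k routing idea at the heart of Mixture of Experts: each token activates only a couple of the experts, so the compute per token scales with the handful of active experts rather than with the model’s total parameter count.

```python
import numpy as np

def top_k_moe(x, experts, gate, k=2):
    """Minimal top-k Mixture of Experts layer (illustrative only).

    x: (d,) token representation
    experts: list of (d, d) matrices, one per expert
    gate: (num_experts, d) router matrix
    """
    logits = gate @ x                    # one router score per expert
    top = np.argsort(logits)[-k:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts only

    # Only the k chosen experts actually run, so per-token compute scales
    # with k / num_experts of the parameters, not with all of them.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, num_experts = 64, 8
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(num_experts)]
gate = rng.standard_normal((num_experts, d)) / np.sqrt(d)
out = top_k_moe(rng.standard_normal(d), experts, gate, k=2)  # 2 of 8 experts active
```

This is why an MoE model can have a huge total parameter count yet use only a fraction of the corresponding compute (and energy) per query.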
Reasons not to be hopeful
However, some other developments threaten to outweigh more efficient model training and operation.
Reasoning models and Deep Research modes are both considerably more energy intensive than regular AI chatbot queries, because they generate far more tokens and make many more calls per task.
The proliferation of AI agents also threatens to add a significant multiplier to the amount of AI inference being carried out.
Is the appropriate response to boycott AI?
I don’t believe so. Having read the analysis of folk like Andy Masley (Director of Effective Altruism DC) and Hannah Ritchie (Senior Researcher at the University of Oxford), and done a fair bit of my own research, I don’t believe boycotting AI is the right response. (It’s also going to get increasingly hard to do, as AI gets woven into the tools and operating systems most of us use every day.)
I won’t attempt to summarise Masley and Ritchie’s insights and instead encourage you to read their articles. A couple of quotes to whet your appetite:
“Preventing ChatGPT searches is a hopelessly useless lever for the climate movement to try to pull…Even if everyone stopped using all AI chatbots, AI’s energy demand wouldn’t change in a noticeable way at all.” (Andy Masley)
“after looking at the data on individual use of LLMs, I have stopped worrying about it and I think you should too” (Hannah Ritchie)
How might we respond?
So, if boycotting isn’t the proportionate response to worrying about the environmental impact of AI, what might be? Here are three suggestions:
1.) Be mindful of the relative impact of different AI uses (and non-AI uses) and don’t be profligate (the AI equivalent of leaving a tap running)
As per Hannah Ritchie’s conclusion, I don’t think it’s necessary to ration one’s day-to-day usage of AI chatbots for environmental reasons (there may be other reasons to do so, but that’s a post for another day).
However, I do think it’s worth considering whether more energy-intensive AI processes (e.g. video generation) are worthwhile and what the alternative/counterfactual is.
As AI becomes an increasingly viable option for production-grade special effects and B-roll, what’s the relative environmental impact of using AI versus other approaches?
How would it compare with the energy usage involved in rendering via traditional CGI? How would it compare to the energy usage involved in getting a production team on location?
At one extreme we might put the infamous ‘smell of napalm in the morning’ scene in Francis Ford Coppola’s 1979 film Apocalypse Now, which reportedly involved setting fire to 1,200 gallons of gasoline.
Most examples won’t be that clear cut (and most of us won’t have the option of professional-grade CGI or a full production crew).
Whilst there’s evidence that my prediction that ‘AI assistants will get better at triage / air-traffic control’ is slowly coming to pass, right now it’s largely on the user to work out which AI model (if any) would be appropriate for a given task.
That’s not really fair - especially when the models have such esoteric names - but my general rule of thumb is to use the simplest tool available and only turn to a more powerful tool if the simpler one isn’t up to the job.
In practice this means using your AI assistant’s ‘everyday’ model for basic tasks and only turning to Reasoning or Deep Research models/modes for tasks which genuinely require that deeper level of ‘reasoning’/research.
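For the programmatically minded, that rule of thumb amounts to a simple escalation loop. This is a hypothetical sketch: the tier names, relative energy costs and the idea of an automated ‘good enough’ check are illustrative assumptions of mine, not any vendor’s actual API or behaviour.

```python
from typing import Callable

TIERS = [
    ("everyday model", 1),        # cheapest: fine for most day-to-day tasks
    ("reasoning model", 10),      # illustrative relative energy cost
    ("deep research mode", 100),  # priciest: many searches and model calls
]

def answer(task: str,
           run: Callable[[str, str], str],
           good_enough: Callable[[str], bool]) -> str:
    """Try each tier from cheapest to most powerful; stop at the first adequate answer."""
    result = ""
    for tier_name, _relative_cost in TIERS:
        result = run(tier_name, task)
        if good_enough(result):
            return result  # no need to spend energy on a bigger model
    return result          # fall back to the most powerful tier's attempt
```

In reality, of course, the ‘good enough’ judgement is the user’s - which is exactly the unfair burden described above.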

2.) Consider carbon-offsetting your AI usage and/or making lifestyle changes which reduce your carbon footprint
If you use AI a lot for energy-intensive activities (e.g. deep research, video generation), you might want to consider carbon-offsetting your AI usage.
I struggled to find a tool that would work this out for me, so I created one and offset my usage via Ecologi.
I’ve used the best estimates various deep research models could find for each activity, but the available data is sparse and much of it conflicts. It’s also clearly a moving target, as models simultaneously become more efficient *and* carry out more energy-intensive processes.
Research published in February suggests the oft-quoted ‘a ChatGPT query uses 10 times the energy of a Google search’ data point may be out of date, with recent estimates putting a standard ChatGPT query pretty much on a par with a Google search query.
I’m sharing a link to my makeshift impact calculator (vibecoded using Create) with the big caveat that the figures are indicative, reflecting rough orders of magnitude of CO2 emissions and water usage. I hope someone more qualified creates a more accurate version of this which I can then link to.
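To show what I mean by ‘indicative’, here’s the shape of the arithmetic such a calculator performs, as a minimal Python sketch. Every figure below is a rough assumption of mine (not the exact values my tool uses): per-query energy varies by model and task, grid carbon intensity varies enormously by country, and water usage per kWh is even less well documented.

```python
# Back-of-envelope AI footprint estimate. All figures are rough, contested
# assumptions chosen to illustrate the arithmetic - not authoritative data.
WH_PER_QUERY = {                 # assumed energy per query, in watt-hours
    "chatbot": 0.3,              # roughly on a par with a web search (recent estimates)
    "deep_research": 30.0,       # illustrative guess: many model calls per task
    "video_generation": 100.0,   # illustrative guess per short generated clip
}
GRID_G_CO2_PER_KWH = 400         # assumed grid carbon intensity; varies hugely by country
LITRES_WATER_PER_KWH = 2.0       # assumed data-centre water use per kWh; also contested

def monthly_footprint(usage: dict[str, int]) -> tuple[float, float]:
    """usage maps activity -> queries per month; returns (kg CO2, litres of water)."""
    kwh = sum(WH_PER_QUERY[activity] * n for activity, n in usage.items()) / 1000
    return kwh * GRID_G_CO2_PER_KWH / 1000, kwh * LITRES_WATER_PER_KWH

co2_kg, water_l = monthly_footprint({"chatbot": 600, "deep_research": 20, "video_generation": 5})
print(f"~{co2_kg:.2f} kg CO2 and ~{water_l:.1f} litres of water per month")
```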
Even better than carbon offsetting is making changes which reduce your carbon footprint. Thinking about the environmental impact of AI relative to other activities has prompted me to make adjustments in other areas of my life. Andy Masley’s shower illustration has stuck with me: “If you want to prompt ChatGPT 40 times, you can just stop your shower 1 second early”.
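The arithmetic behind that comparison is worth seeing once. Assuming roughly 0.3 Wh per prompt and a shower heating water at somewhere around 20-25 kW (both rough assumptions of mine, chosen for illustration), the orders of magnitude do line up:

```python
# Rough arithmetic behind the shower comparison. Both figures are assumptions;
# the point is the order of magnitude, not precision.
wh_per_prompt = 0.3        # assumed energy per chatbot prompt, in watt-hours
shower_power_kw = 24       # assumed rate of heating shower water (very rough)

prompts_energy_wh = 40 * wh_per_prompt                # 12 Wh for 40 prompts
wh_per_shower_second = shower_power_kw * 1000 / 3600  # ~6.7 Wh per second of shower
print(f"{prompts_energy_wh / wh_per_shower_second:.1f} seconds of shower")  # ~1.8
```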
3.) Vote for political parties with progressive green energy policies and lobby your current representatives on this topic
I got very short shrift at a recent talk I gave for suggesting that lobbying one’s representatives was an appropriate response to the challenge of AI’s energy use, but I believe that meaningful impact on this will have to come from governments legislating rather than from individuals abstaining.
If you really care about the environmental impact of AI, I’d suggest directing your attention at government rather than judging individuals making use of it (or, if you must judge, ensure you apply equal or greater judgement to anyone posting on LinkedIn, convening a Zoom call, or eating an avocado).
The UK creative industries have rallied to demand transparency (and control) over training data. We also need to demand greater transparency over the environmental impacts of training and running AI models.
I’ve been wanting to post on this topic for a while but am conscious of my lack of deep knowledge around it. Hopefully this has been a useful introduction. I’d be interested in pointers to other insights on this topic.