We use AI tools every day. They finish our sentences, rewrite our emails, summarise our chaos and hallucinate confidence on demand. They feel weightless, frictionless, almost immaterial: little boxes that simply answer. But are we even remotely aware of the environmental costs hiding behind that convenience?

To understand just how distorted our public picture of AI’s footprint has become, I’m examining Misinformation by Omission: The Need for More Environmental Transparency in AI (Luccioni et al., 2025), one of the latest and clearest pieces of research on this topic. And what it reveals is deeply uncomfortable: the almost total absence of real information. In that silence, myths thrive, assumptions harden and wrong numbers circulate until they are treated as fact.

AI’s Footprint Is Physical

The paper begins with the point the industry works so hard to obscure: AI is not a floating abstraction. It is hardware. It is minerals, manufacturing, data centres, racks of GPUs, kilometres of copper and kilometres more of cooling pipes. For every “intelligent” output, something somewhere heats up, consumes water or emits carbon.

None of this is theoretical. Research already exists mapping hardware emissions, training energy, inference energy, global grid intensity and even the air pollution caused by GPU clusters. We have the tools and methodologies to measure environmental cost. We have previously explored this in How much does a “thank you” cost?, where we covered the ecological footprints of several popular LLMs in real numbers.

What we do not have is disclosure from the people actually running the world’s largest AI systems. Companies that measure everything else (latency, throughput, performance, user retention) suddenly lose the ability to count when the topic turns to climate.

It would be funny if it weren’t catastrophic.

15 Years of Models

To understand this collapse in transparency, the authors reviewed data from 754 notable AI models released between 2010 and early 2025. What they found was not a comforting evolution toward ecological responsibility. Quite the opposite.

Early models, back when deep learning was still a largely academic pursuit, disclosed almost nothing. As environmental scrutiny increased around 2019, things briefly improved. Around 2022, a tiny golden window opened: some teams published energy consumption, some provided enough detail for outsiders to estimate carbon footprints, and open-weight releases made third-party analysis possible.

And then the commercial era hit: ChatGPT and similar models arrived, AI became a trillion-dollar arms race overnight, and transparency collapsed like a wet cardboard box. By 2025, most high-profile models again disclose nothing: no compute used, no carbon produced, no water consumed, no data centre location. The resource footprint of the world’s most influential technology is being wrapped back into secrecy under the mask of “competitive advantage”.

The Reality Check

The authors analyse real-world usage data from OpenRouter, a major API provider, and it is depressingly illustrative. Of the 20 most-used models, only a single one (Meta Llama 3.3 70B) has direct environmental reporting. Three (DeepSeek R1, DeepSeek V3, Mistral Nemo) offer enough technical detail for someone else to estimate their footprint. The remaining 16 provide nothing. That means that when millions of people ask their favourite chatbot for advice, generate text, or feed it tasks 24/7, they have no idea what the environmental cost of those interactions actually is. The industry tells them to “use AI responsibly” while hiding the information required to define “responsible”.

People can’t make sustainable choices when the information required for those choices is kept deliberately out of reach. And in that vacuum, ignorance, false data and misinformation get widely accepted.

Environmental Myths

The paper identifies the three most persistent myths about AI’s environmental costs in circulation. Let’s see what they are and where they come from:

Myth 1: “Training an AI model emits as much CO₂ as 5 cars in their lifetimes.”

This figure came from a respected 2019 study, specifically from a specialised, massively resource-intensive neural architecture search run. It was an extreme case, clearly contextualised in the original paper. Then someone tweeted it. Then journalists lifted the number without reading the methodology. Then headlines transformed it into a universal truth about “AI training”. Eventually it became so widely repeated that it detached entirely from its origin, like a rumour nobody remembers starting. What makes this doubly ridiculous is that modern pre-training runs can far exceed this number, so the myth is both sensational and outdated.

Myth 2: “A ChatGPT query uses 3 watt-hours, 10 times a Google search.”

This one is a Frankenstein’s monster: an Alphabet executive who did not work with OpenAI made an offhand remark to Reuters. A researcher interpreted it generously. Someone else dug up a Google search-energy estimate from 2009. Journalists stitched these together into a comparison that would be at home in a high-school maths project. The authors analysed 100 articles about “ChatGPT energy use” and found that almost all repeated numbers without any discussion of uncertainty or provenance. The public was fed precise-sounding measurements built almost entirely on speculation. If companies published per-query data, this myth wouldn’t have survived a week.
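To see how fragile such stitched-together figures are, here is an illustrative back-of-envelope sketch. Every input (the Wh-per-query guesses, the query volume) is an assumption for demonstration, not a measurement from the paper:

```python
# Illustrative sketch: how a single "Wh per query" guess scales into
# headline-sized annual totals. All inputs below are assumptions.

def annual_energy_kwh(wh_per_query: float, queries_per_day: float,
                      days: int = 365) -> float:
    """Scale a per-query energy estimate up to a yearly total in kWh."""
    return wh_per_query * queries_per_day * days / 1000.0

# The widely repeated (and poorly sourced) 3 Wh figure vs. a lower guess.
for label, wh in [("myth: 3 Wh/query", 3.0), ("alt guess: 0.3 Wh/query", 0.3)]:
    yearly = annual_energy_kwh(wh, queries_per_day=10_000_000)
    print(f"{label}: ~{yearly:,.0f} kWh/year at 10M queries/day")
```

A 10x disagreement in the per-query input becomes a 10x disagreement at fleet scale, which is exactly why unverified per-query numbers are worthless without disclosure.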

Myth 3: “AI can reduce 5-10 percent of global emissions.”

This is the most corporate of the myths, and the most convenient. It comes from a consulting report that offers little more than optimistic anecdotes: no rigorous methodology, no sector breakdown, no modelling of rebound effects, no counterfactual scenarios. But because the number flatters both tech companies and policymakers, it spreads freely. It suggests that AI is not only harmless but actually a climate remedy, and it allows companies to position AI expansion as a form of ecological virtue, a particularly convenient story when actual environmental metrics are kept under lock and key.

Opaque Systems Make Bad Decisions

The paper makes it clear that misinformation carries real consequences. Regulators rely on distorted figures to build policy. Organisations use faulty assumptions to purchase or deploy models. Public perception swings between overestimation and naïve techno-optimism. Companies promise net-zero strategies that conveniently ignore the emissions of their AI pipelines. All of this could be avoided with consistent, verifiable data.

The Necessary, Unglamorous Fix

The authors outline solutions that are neither revolutionary nor optional. Developers must measure and disclose real environmental metrics for training and inference. Organisations must integrate AI’s footprint into their own sustainability accounting. Standards bodies must harmonise methodologies. Policymakers must require AI-specific reporting under existing climate frameworks. And most importantly, emissions must be measured based on the actual grids powering the hardware, not on accounting tricks that make emissions disappear on paper while they accumulate in the atmosphere.
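The grid-based accounting the authors call for reduces to simple arithmetic: emissions are energy consumed multiplied by the carbon intensity of the grid that actually supplied it. A minimal sketch, with illustrative intensity values that are assumptions rather than figures from the paper:

```python
# Sketch of location-based emissions accounting: CO2 follows the carbon
# intensity of the grid actually powering the hardware. The intensity
# values used here are illustrative assumptions, not measurements.

def emissions_kg_co2(energy_kwh: float, grid_g_co2_per_kwh: float) -> float:
    """Convert energy use to kg of CO2 via the local grid's intensity."""
    return energy_kwh * grid_g_co2_per_kwh / 1000.0

energy = 1_000_000  # kWh for a hypothetical training run

# The same workload on different real grids yields very different
# physical emissions; a market-based certificate changes neither number.
low_carbon = emissions_kg_co2(energy, 50)    # e.g. hydro-heavy grid
coal_heavy = emissions_kg_co2(energy, 800)   # e.g. coal-heavy grid

print(f"low-carbon grid: {low_carbon:,.0f} kg CO2")
print(f"coal-heavy grid: {coal_heavy:,.0f} kg CO2")
```

The 16x gap between the two results is the whole point: without knowing which grid powers the hardware, no outside observer can compute the real footprint.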

These aren’t radical demands. They are the bare minimum expected of any mature industry, especially one that claims to be leading humanity into the future while putting adult AI models into children’s toys that spout obscenities.

If AI Wants to Shape Tomorrow, It Must Admit Its Costs Today

None of these steps will happen spontaneously. The industry has already shown that, left to its own devices, it will choose opacity every time. But environmental impact is not something that goes away because someone says “proprietary”. AI may be digital, but its costs are physical: mined minerals, massive training runs, cooling water, electricity grids.

The final message of the paper is brutally simple: as long as AI companies refuse to disclose environmental data, the public conversation will be built on distortions, not evidence. I’m not anti-AI; I want AI that is transparent, used for science and for improving our lives. However, most LLM companies seem to be taking the path of profit, monetising our loneliness and societal problems, as the rise of synthetic harm and AI-driven misinformation makes plain.

Transparency is not a threat to innovation. It is the bare minimum we should expect from any industry whose environmental footprint is already punching above its weight. If AI wants a future, it cannot keep acting like its environmental costs and emissions are someone else’s problem, to be dealt with tomorrow.

It is hard to claim you are building the intelligence of tomorrow when you refuse to admit the cost of running it today.