Today's article comes from the Sage Open journal. The authors are Hassan et al., from King Faisal University in Saudi Arabia. In this paper they wanted to answer what sounds like a simple question: Is AI actually helping, or is it hurting our climate goals?
DOI: 10.1177/21582440251359735
Remember a few years ago when everyone was up in arms about crypto's energy consumption? Bitcoin miners were the villains, burning through enough electricity to power entire countries. And all just to validate their digital transactions. Environmental activists, rightfully, called out the absurdity of wasting massive amounts of energy (much of it derived from fossil fuels) to solve a problem that could be solved in any number of other (lighter-footprint) ways. The criticism was everywhere, and it was loud.
Fast forward to today, and something fascinating has happened. A significant portion of the crypto world has switched to proof-of-stake algorithms instead of proof-of-work. In some cases this has reduced the energy costs by several orders of magnitude. And those same energy-hungry GPUs that were powering crypto mining farms a few years ago have now been repurposed. For what? You guessed it: machine learning.
So now we're looking at much the same (or even more powerful) hardware, running on the same racks, plugged into the same grids, running full tilt yet again. Same energy demands, same fossil fuels, same environmental impact...and sometimes quite a bit more. But now, as if by magic, the outrage has largely evaporated. Now that these processors are training AI models instead of mining Bitcoin, many (many) of the most vocal former critics have fallen silent.
This brings us to the research we're looking at today. In this paper, the authors went searching for hard data on the environmental impact of AI. They wanted an answer to what sounds like a simple question:
Is AI actually helping, or is it hurting our climate goals?
To find out, they analyzed decades of data, tracking AI development alongside carbon emissions, energy policy decisions, and environmental regulations. On today's episode we'll walk through their methodology, and see what they found. Let's dive in.
Most environmental studies assume that technology has a simple, one-directional relationship with emissions. Either a technology is good for the environment or it's bad, and the effect stays consistent. But in this study, the authors suspected (correctly) that AI is different: that it can simultaneously help and hurt the environment in ways that a traditional analysis would miss.

To capture this complexity, they used a statistical approach that treats AI's environmental impact like a two-sided coin. Instead of looking for a single average effect, their method separately tracks instances where AI development reduces emissions and instances where it increases them. This approach reveals patterns that simple correlation analysis would completely overlook.

Think of it like trying to measure the net effect of a new highway. Sometimes highways reduce emissions by improving traffic flow and easing congestion. But sometimes they increase emissions by encouraging more people to drive longer distances. Traditional analysis would just look at the average effect across all highways. This decomposition approach, by contrast, treats congestion reduction and induced demand as separate phenomena that need to be measured independently.
But what do I mean when I say that AI can both hurt and help the environment at the same time? Well, that's because AI (if applied to the right domains) can make us more energy efficient. And this isn't some theoretical abstraction; it's playing out right now, across industries. Machine learning algorithms are optimizing smart grids. Precision agriculture is helping farmers use fewer pesticides and fertilizers while improving crop yields. Route optimization and fuel-efficiency improvements are helping shipping companies cut emissions. AI is enhancing forestry management and driving breakthroughs in materials science. It's having both direct and indirect effects on environmental and sustainability programs as we speak.

All that being said, the computing power required to train and run these systems is massive. Training a single large language model (like the ones behind ChatGPT) can require the energy equivalent of hundreds of households for an entire year. And every time you interact with one, that energy bill grows. The data centers that power most AI applications still (largely) get their energy from fossil fuels. This duality (that the good things and the bad things are both true at the same time) is what makes these kinds of research questions so hard to answer.
Here the authors had to track both positive and negative changes over time. To do it, they created running totals that captured when AI development helped the environment and when it hurt. This decomposition allowed them to estimate separate effects for each type of impact and determine which one dominated.
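The "running totals" idea is easy to see in miniature. The sketch below (my own toy illustration, not the authors' code) splits a series' period-to-period changes into cumulative increases and cumulative decreases, so each side can be analyzed separately:

```python
import numpy as np

def partial_sums(x):
    """Split a series' changes into cumulative positive and negative parts.

    Returns (pos, neg): running totals of the increases and the decreases,
    so that x[t] - x[0] == pos[t] + neg[t] at every t.
    """
    dx = np.diff(x, prepend=x[0])       # period-to-period changes (first is 0)
    pos = np.cumsum(np.maximum(dx, 0))  # running total of the increases
    neg = np.cumsum(np.minimum(dx, 0))  # running total of the decreases
    return pos, neg

# Toy series: a measure that rises, dips, rises, then dips again
x = np.array([10.0, 12.0, 11.0, 15.0, 14.0])
pos, neg = partial_sums(x)
# pos accumulates only the upswings: 0, 2, 2, 6, 6
# neg accumulates only the downswings: 0, 0, -1, -1, -2
```

Regressing emissions on `pos` and `neg` separately (rather than on `x` itself) is what lets a model estimate one coefficient for the "helping" episodes and another for the "hurting" episodes.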
They employed a technique called Wavelet Time Coherence. This is commonly used in fields like signal processing and climate science because it can reveal patterns that change over time. Traditional statistical analysis gives you a snapshot of relationships, but wavelet analysis shows you how those relationships evolve. In this case, it helped the researchers understand whether AI's environmental effects were consistent across different time periods or whether they evolved as the technology matured. This temporal dimension is crucial because the environmental impact of a technology often changes as it scales from laboratory prototypes to widespread deployment.
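To get a feel for what wavelet analysis buys you, here is a rough, numpy-only sketch. It is emphatically not the authors' implementation (real wavelet coherence adds normalization, smoothing, and significance testing); it just shows how a wavelet transform can localize a relationship in time. Two noisy series share a slow cycle only in their second half, and the cross-wavelet magnitude picks that up:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Continuous wavelet transform of x using a complex Morlet wavelet."""
    n = len(x)
    t = np.arange(-n // 2, n // 2)
    out = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Complex sinusoid under a Gaussian envelope, stretched to scale s
        wav = np.exp(1j * w0 * t / s) * np.exp(-((t / s) ** 2) / 2) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wav[::-1]), mode="same")
    return out

# Two toy series that only co-move in their second half
rng = np.random.default_rng(0)
n = 128
t = np.arange(n)
cycle = np.sin(2 * np.pi * t / 16)
x = cycle * (t > n // 2) + 0.3 * rng.standard_normal(n)
y = cycle * (t > n // 2) + 0.3 * rng.standard_normal(n)

scales = np.array([4.0, 8.0, 16.0])
wx, wy = morlet_cwt(x, scales), morlet_cwt(y, scales)
cross = wx * np.conj(wy)  # cross-wavelet: large where both series co-move
early = np.abs(cross[:, : n // 2]).mean()
late = np.abs(cross[:, n // 2 :]).mean()
# late >> early: the method localizes the relationship to the second half,
# which is exactly what a single whole-sample correlation would blur away
```

A plain correlation over the full sample would report one middling number; the wavelet view shows the relationship switching on partway through, which is the kind of evolving pattern the authors were after.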
So what did they find?
Their analysis revealed that AI's environmental effects vary dramatically depending on the timeframe you're looking at. In the early years of AI, the technology was actually associated with emissions reductions. This is likely because early applications focused on big wins: large efficiency improvements for costly processes. But as the analysis walked forward to capture longer time periods, the negative effects began to accumulate and then dominate. In other words: the full environmental costs of AI development took time to materialize, but once they showed up, they swamped the benefits.
This time-dependency helps explain why previous studies on AI's environmental impact have reached such varied conclusions. Studies that focus on immediate effects often see AI as environmentally beneficial, while studies that capture longer-term impacts see the opposite trend. The wavelet analysis shows that both perspectives capture real phenomena, but they're looking at different phases of the technology's lifecycle.
To go further, the authors employed a multidimensional analytical framework. This allowed them to effectively slice the data in multiple directions to expose hidden patterns. Their analysis revealed that AI's environmental footprint is asymmetric. That is: as scale intensifies, the negative impacts are growing disproportionately. This suggests that we might be approaching an inflection point, after which environmental costs spike.
So what can we do about it? Well this is largely a macro (policy) question. Interventions such as carbon pricing tailored to AI's computational intensity, mandatory efficiency standards for data centers, and expansion of green R&D could possibly bend the emissions curve. But without deliberate regulatory and institutional action, AI's environmental costs will likely continue to outweigh its benefits.
Ultimately, this research confronts policymakers with a challenge. The tools to align AI with carbon neutrality do exist, but they need to be sought out, chosen, applied, and enforced, and there is currently little political will to do any of that. And since we're (apparently) nearing a tipping point, those decisions need to happen immediately. Not tomorrow, today. Before the accelerating trajectory of AI locks in irreversible climate costs.
If you want to dive deeper into the econometric methods, the cointegration testing procedures, or the authors' specific policy recommendations, I highly recommend downloading the full paper. The authors provide a number of robustness checks, additional diagnostic tests, and a detailed discussion of how their findings compare to previous research.