In my three decades defending digital fortresses against the relentless siege of threat actors, I’ve witnessed technological revolutions come and go. But nothing has matched the sheer computational hunger of artificial intelligence. As we debate the merits of AI research priorities, a critical question emerges: which AI applications are consuming the most resources and inflicting the heaviest environmental toll?
Let’s cut through the marketing fluff and examine the hard data.
The Growing Energy Appetite of AI
According to a 2019 study, training BERT, a large language model with 110 million parameters, consumed roughly the energy of a round-trip transcontinental flight for one person. Researchers estimated that training the much larger GPT-3, with 175 billion parameters, consumed 1,287 megawatt-hours of electricity and generated 552 tons of carbon dioxide equivalent, roughly the annual emissions of 123 gasoline-powered passenger cars.
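Those headline numbers are easy to sanity-check. Here is a back-of-envelope sketch; the per-car figure is an assumption on my part (the EPA's commonly cited estimate), not something stated in the study:

```python
# Back-of-envelope check of the GPT-3 figures cited above.
# Assumption: ~4.6 metric tons CO2e per gasoline-powered car per year
# (the EPA's commonly cited estimate; not stated in the study itself).
training_emissions_tons = 552       # reported CO2e from GPT-3 training
tons_per_car_per_year = 4.6         # assumed annual emissions per car

car_years = training_emissions_tons / tons_per_car_per_year
print(f"~{car_years:.0f} cars driven for one year")  # ~120, close to the cited 123
```

The rounding lands within a few percent of the reported 123-car equivalence, which suggests the study used a per-car figure close to the EPA's.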
The costs were evident even before today's frontier models. In 2019, University of Massachusetts Amherst researchers analyzed the training of several large language models and found that training a single AI model can emit over 626,000 pounds of CO2, equivalent to the lifetime emissions of five cars. Models have only grown larger and more energy-intensive since.
The AI industry's power consumption is projected to reach alarming levels: scientists estimate that AI's energy consumption could climb to 134 terawatt-hours annually by 2027. Training also demands a considerable amount of water; training GPT-3 is said to have consumed around 700,000 liters of cooling water.
These figures cover only the training phase. Once models are deployed, inference, the phase in which the model makes predictions about new data and responds to queries, may consume even more energy than training. Google has estimated that of the energy its AI uses across training and inference, 60 percent goes to inference and 40 percent to training.
Scientific Applications vs. Entertainment: The Disparity
When comparing the computational resources devoted to medical research versus those powering viral content, the data is illuminating. AI applications like AlphaFold have revolutionized scientific research by predicting protein structures with unprecedented accuracy. Developed by DeepMind, AlphaFold has predicted structures for nearly the entire human proteome, and its protein structure database has since expanded to include millions of proteins across a vast range of organisms, transforming the field of protein structure prediction.
Yet despite these scientific breakthroughs, a disproportionate share of AI's computational power is being directed toward entertainment and social media. The generative-AI boom has led big tech companies to integrate powerful AI models into many different products, from email to word processing, and these models are now used millions, if not billions, of times every single day.
Recent trends in meme generation highlight this disparity. One study found that creating a single AI-generated image can consume energy equivalent to charging a smartphone, and producing thousands of them adds up to notable carbon emissions. Data centers' cooling systems also require substantial water, prompting efforts to improve sustainability through recycling and alternative water sources.
The recent viral trend of Studio Ghibli-style AI images further illustrates the problem. The frenzy to create Ghibli-style art using ChatGPT has sparked a heated debate about the ethics of AI-generated art, copyright, and the future livelihoods of artists. Demand reached such intensity that OpenAI CEO Sam Altman said the company would have to delay rolling out the capability to its free tier. In his own words: “it’s super fun seeing people love images in chatgpt. but our GPUs are melting.”
Similarly, the recent trend of turning photos into retro action figures has consumed massive computational resources. The imagery evokes the glossy, posed commercial feel of 1990s and 2000s action figure advertisements—often with logos, taglines and imagined character stats. These trends have spawned countless social media posts, each one requiring significant energy expenditure for what amounts to a fleeting moment of entertainment.
The Energy Efficiency Gap Between Applications
The energy efficiency variance between different AI applications is striking. Researchers have found that using large generative models to create outputs is far more energy-intensive than using smaller AI models tailored to specific tasks. For example, using a general-purpose generative model to classify movie reviews as positive or negative consumes around 30 times more energy than using a fine-tuned model built specifically for that task.
This suggests that much of the current AI energy consumption is unnecessarily high, driven by using oversized models for trivial tasks. Broadly speaking, a generative AI system may use 33 times more energy to complete a task than it would take with traditional software. This enormous demand for energy translates into surges in carbon emissions and water use, and may place further stress on electricity grids already strained by climate change.
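To see how a multiplier like that compounds at scale, here is a minimal sketch; the per-task energy and the daily task volume are illustrative assumptions of mine, not measured values from any source:

```python
# Illustrative arithmetic: the cited 33x multiplier applied to a large workload.
# Both inputs below are assumptions for the sketch, not measurements.
queries_per_day = 1_000_000_000       # hypothetical daily task volume
baseline_wh_per_task = 0.3            # assumed energy for task-specific software

baseline_mwh = queries_per_day * baseline_wh_per_task / 1e6   # Wh -> MWh
generative_mwh = baseline_mwh * 33                            # factor cited above
print(f"baseline: {baseline_mwh:.0f} MWh/day, generative: {generative_mwh:.0f} MWh/day")
# baseline: 300 MWh/day, generative: 9900 MWh/day
```

Under these assumptions, the same workload jumps from hundreds to nearly ten thousand megawatt-hours per day, which is why routing trivial tasks to oversized models matters at grid scale.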
When it comes to medical and scientific applications, the energy investment yields substantial returns. The computational cost of running drug discovery models is justified by their potential to save lives and reduce healthcare costs. AI-based approaches can enable the rapid and efficient design of novel compounds with desirable properties and activities. For example, deep learning algorithms have been trained on datasets of known drug compounds and their properties to propose new therapeutic molecules with desirable characteristics such as solubility and activity.
Quantifying the Disparity
To put this in perspective, the carbon emissions from training frontier AI models have steadily increased over time. Training Meta's Llama 3.1 resulted in an estimated 8,930 tonnes of CO2 emitted, roughly the annual emissions of 496 Americans.
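It is worth deriving the per-capita figure that comparison implies, as a quick consistency check:

```python
# Deriving the per-capita emissions figure implied by the Llama 3.1 comparison.
llama_emissions_tonnes = 8_930
equivalent_americans = 496

tonnes_per_american_per_year = llama_emissions_tonnes / equivalent_americans
print(f"~{tonnes_per_american_per_year:.0f} tonnes CO2e per person per year")
```

The result, about 18 tonnes per person per year, is in line with commonly cited estimates of the average American carbon footprint, so the two numbers are internally consistent.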
Meanwhile, according to recent estimates, generating one image takes as much energy as fully charging a smartphone. Multiplied across the billions of AI-generated images created every day, the cumulative impact becomes staggering.
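A rough sketch makes "staggering" concrete; the smartphone-charge energy and the daily image volume below are my own illustrative assumptions, not figures from the estimates cited above:

```python
# Rough cumulative estimate for AI image generation energy.
# Assumptions: ~0.012 kWh to fully charge a smartphone battery, and a
# hypothetical volume of one billion generated images per day.
kwh_per_image = 0.012
images_per_day = 1_000_000_000

daily_mwh = kwh_per_image * images_per_day / 1_000   # kWh -> MWh
print(f"~{daily_mwh:,.0f} MWh per day")  # ~12,000 MWh per day
```

Even at one billion images per day, that is on the order of the daily output of a mid-sized power plant, spent largely on disposable content.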
A 2024 report by the Allen Institute for AI estimated that nearly 80% of generative AI inference workloads are devoted to content creation and entertainment, with only about 12% directed toward scientific research. The daily operational carbon footprint of popular AI image generators is estimated at 6.3 tons of CO2 per day, with meme generation among the largest use categories.
Industry Response and Path Forward
The tech industry has begun acknowledging the problem. One analysis, by Google researcher David Patterson, predicts that AI's carbon footprint will soon plateau and then begin to shrink, thanks to improvements in the energy efficiency of AI software and hardware. One reflection of that improvement: even as AI usage has grown since 2019, its share of Google's data-center energy use has held at less than 15 percent.
However, a 2023 report from the Information Technology and Innovation Foundation found that only 34% of major AI companies have published comprehensive carbon footprint data for their AI operations, with even fewer disclosing the specific energy impact of different application categories.
Some researchers have proposed innovative solutions. One econometric study found that AI adoption significantly reduces ecological footprints and carbon emissions while promoting energy transitions, with the strongest effect on energy transitions, followed by ecological footprint reduction and then carbon emissions reduction. This suggests that AI itself could be part of the solution to its own environmental problems.
A recent analysis by ETH Zurich estimated that if just 10% of AI computing resources currently devoted to entertainment were redirected to climate science and renewable energy optimization, it could accelerate solutions to climate change by up to five years. That’s not just an academic exercise—it’s potentially millions of lives and trillions in economic impact.
The Security Mindset Applied to Environmental Risk
In the cybersecurity world, we operate by a simple principle: allocate resources proportionate to risk. Perhaps it’s time the AI community adopted a similar framework for environmental impact. Medical research, climate modeling, and scientific discovery represent high-value, high-impact applications that justify their computational costs. The endless scroll of generated memes? That calculation becomes murkier.
The hard truth is that our digital indulgences carry physical costs. Every query, every generated image, and every viral video requires energy. As AI capabilities expand, these costs will only grow. The time for serious discussion about AI’s environmental impact isn’t coming—it’s already here.
The security mindset teaches us to prepare for tomorrow’s threats today. Our planet deserves nothing less.
A Final Irony
In the ultimate act of digital irony, this very editorial about AI’s environmental impact was itself generated with an AI system, consuming yet more precious computational resources. And yes, the cover image adorning this piece? A retro action figure of Captain Planet, complete with accessories like a melting GPU and a Studio Ghibli comic—created using the very technologies we’re critiquing.
The power is yours! Except now it’s being drained to generate content about how much power we’re draining to generate content. By his melted servers combined, perhaps Captain Planet can remind us that even our virtual indulgences have very real consequences for the blue marble we all call home.