Advanced AI News
Industry Applications

Some AI Prompts Generate 50 Times More Carbon Emissions Than Others

By Advanced AI Editor | June 25, 2025 | 5 Mins Read


(Overearth/Shutterstock)

Whether you use OpenAI’s ChatGPT, Google’s Gemini, or any other chatbot, every prompt you type triggers a chain of activity behind the scenes. A data center somewhere works a little harder, pulling energy from the grid, giving off heat, and adding to a toll on the planet that often goes unnoticed. It’s a quiet environmental cost built into the convenience of everyday AI use.

According to the International Energy Agency (IEA), a simple prompt to ChatGPT consumes 10 times more electricity than a Google search. A separate study published in Frontiers in Communication found that certain advanced AI prompts can generate up to 50 times more CO₂ emissions than others, depending on the model used.

This happens because AI models like ChatGPT process prompts using billions of parameters, requiring far more computation per token. Each token generated involves multiple layers of neural network operations, making it much more energy-intensive than retrieving search results.
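As a back-of-the-envelope illustration of why per-token cost scales with model size, a common rule of thumb (an approximation on our part, not a figure from the study) is that a forward pass costs roughly 2N floating-point operations per generated token for an N-parameter model:

```python
# Rough sketch: per-token compute under the ~2N FLOPs/token rule of thumb.
def flops_per_token(params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2.0 * params

small = flops_per_token(7e9)    # a 7B-parameter model
large = flops_per_token(72e9)   # a 72B-parameter model
print(f"7B model:  {small:.1e} FLOPs/token")
print(f"72B model: {large:.1e} FLOPs/token")
print(f"size ratio: {large / small:.1f}x")
```

By this estimate alone, a 72B-parameter model does roughly ten times the work per token of a 7B one, before any differences in response length are counted.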

The electricity needed to power the world's data infrastructure is only going in one direction, and that is up. The 2024 Report on U.S. Data Center Energy Use, produced by Lawrence Berkeley National Laboratory (LBNL), tracks data center electricity consumption from 2014, with projections through 2028. According to the report, energy demand from data centers has tripled over the past decade and is on course to double, or even triple, again by 2028.

When you type a prompt into an AI, it breaks the text into small pieces called tokens. These tokens are turned into numbers so the system can understand and respond. That process consumes substantial computing power, and the electricity behind it produces CO₂ emissions, a primary driver of global warming, melting ice caps, and increasingly extreme weather.
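The tokenization step can be sketched as follows (real chatbots use subword schemes such as byte-pair encoding; the whitespace split here is a deliberate simplification):

```python
# Toy illustration of turning text into token ids.
def tokenize(text: str) -> list[str]:
    # Real tokenizers split into subwords; we split on whitespace.
    return text.lower().split()

def encode(tokens: list[str], vocab: dict[str, int]) -> list[int]:
    # Assign each previously unseen token the next free integer id.
    return [vocab.setdefault(t, len(vocab)) for t in tokens]

vocab: dict[str, int] = {}
ids = encode(tokenize("How warm is the planet getting"), vocab)
print(ids)  # → [0, 1, 2, 3, 4, 5]
```

Every one of those ids then flows through the model's full stack of neural network layers, which is where the computation, and the energy use, actually happens.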

To shed light on this, the authors of the Frontiers in Communication paper compared several widely used language models, analyzing the CO₂ emissions each generated when responding to a standardized set of prompts.

The researchers compared 14 large language models, ranging in size from 7 billion to 72 billion parameters, and measured the CO₂ emissions each produced when responding to 1,000 standardized questions. Across the board, the process of generating answers came with a measurable carbon footprint. 

Reasoning models stood out for producing an average of 543.5 internal tokens per question, while more concise models used just 37.7. These internal or “thinking” tokens represent the behind-the-scenes steps a model takes before presenting a response, with each one carrying a higher energy cost.  
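Using the averages above, and assuming as a simplification that emissions scale linearly with token count, the token gap alone accounts for a sizable share of the difference:

```python
# Back-of-the-envelope ratio from the study's reported averages.
REASONING_TOKENS = 543.5  # avg. internal tokens/question, reasoning models
CONCISE_TOKENS = 37.7     # avg. internal tokens/question, concise models

ratio = REASONING_TOKENS / CONCISE_TOKENS
print(f"~{ratio:.1f}x more internal tokens per question")
```

That is roughly a 14x gap from token volume alone; the measured emissions gap reached up to 50x because the energy cost per token also varies across models.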

The subject matter was also a key factor. Researchers found that when a prompt required a longer reasoning process, such as questions about abstract algebra or philosophy, the models produced up to six times more CO₂ emissions compared to simpler topics like high school history. More complex questions led to more tokens, more computation, and ultimately a much larger environmental footprint.

“The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences and first author of the study. “We found that reasoning-enabled models produced up to 50 times more CO₂ emissions than concise response models.”

(Shutterstock)

The study revealed a surprising gap in emissions between two similarly sized AI models. DeepSeek R1, which runs on 70 billion parameters, was estimated to produce the same amount of CO₂ as a round-trip flight from London to New York after answering 600,000 questions. Qwen 2.5, which also uses 72 billion parameters, handled nearly three times as many questions with similar accuracy while generating about the same level of emissions.
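A quick per-question comparison makes the gap concrete. The London-New York flight figure below is a common ballpark of about one tonne of CO₂ per passenger, an assumption on our part rather than a number from the study:

```python
# Sketch of the DeepSeek R1 vs Qwen 2.5 comparison from the study.
FLIGHT_CO2_KG = 1000.0          # assumed ~1 tonne CO2 for the round trip
r1_questions = 600_000          # questions until R1 matches the flight
qwen_questions = 3 * r1_questions  # "nearly three times as many"

r1_per_q = FLIGHT_CO2_KG / r1_questions * 1000       # grams per question
qwen_per_q = FLIGHT_CO2_KG / qwen_questions * 1000
print(f"DeepSeek R1: ~{r1_per_q:.2f} g CO2 per question")
print(f"Qwen 2.5:    ~{qwen_per_q:.2f} g CO2 per question")
```

Under these assumptions, similar accuracy comes at roughly a third of the per-question emissions, which is the study's point: model choice matters even at comparable size.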

The researchers were careful to point out that these differences may not be due to the models alone. The results could be shaped by the type of hardware used in the tests, the local energy mix powering the data centers, and other technical variables. In other words, where and how an AI model runs can be just as important as how big or advanced it is.

While the technical factors matter, the researchers also believe users have a role to play in reducing AI’s environmental impact. “Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner explained.

Even small adjustments in how people interact with AI can add up. Across millions of users and queries, those choices have the potential to ease the growing burden on energy systems and data infrastructure.

Dauner also emphasized the importance of transparency in AI usage, noting that clearer information could shape user behavior: “If users know the exact CO₂ cost of their AI-generated outputs, such as casually turning themselves into an action figure, they might be more selective and thoughtful about when and how they use these technologies.”

(S and V Design/Shutterstock)

At the same time, the study makes clear that the responsibility cannot rest with users alone. Developers and companies play a central role in shaping how AI is integrated into products and services. As generative tools continue to be embedded across platforms, often without much scrutiny or clear purpose, there is a growing need to ask whether these integrations actually serve a meaningful function.

With climate and environmental concerns not a clear priority in the current policy landscape, the onus falls largely on the industry itself. Users, developers, and companies will need to take the lead in building and applying AI more thoughtfully, with sustainability in mind.
