Why an AI energy crisis may not unfold how you think: IBM sustainability chief

By Advanced AI Bot | April 16, 2025 | 5 min read


Headlines about AI’s voracious appetite for energy are painting a dystopian picture: a national energy emergency, paralyzed power grids, dormant dishwashers at home, and even the resurrection of the Three Mile Island nuclear facility. It sounds like the script of a tech horror film. But here’s the plot twist—we’ve seen this movie before, and it has a surprisingly elegant ending.

Remember the early 2000s, when data center energy use doubled and everyone predicted that data centers would devour our power grid? That story took an unexpected turn: while computing power skyrocketed 500% between 2010 and 2018, corresponding energy use crept up by just 6%.

The secret wasn’t more power plants; it was smarter design—specifically, energy efficiency. Now we’re about to watch that story unfold again, but with AI in the starring role.

Energy-efficiency innovations are uniquely powerful at fueling growth because their benefits can apply to both existing and future units, lowering both current and future energy demands with one stroke.

The art of energy-efficient AI

Last year, McKinsey shared survey results in which 65% of respondents—nearly double the previous year—said their organizations regularly use gen AI in at least one business function. This year, that figure rose to 71%. Moving that fast, organizations have often been forced to tap whatever infrastructure and models were available, as quickly as possible.

That’s resulted in tales of hastily built data centers fueled by highly polluting natural gas generators and massive, energy-hungry LLMs being used for relatively modest aims. But such outcomes are also expensive, and as companies continue to bear these costs, it is only natural that they will choose more efficient models and shift more workloads onto fit-for-purpose chips.

Chips, connections, and models

AI efficiency innovations are happening on three fronts: chips, connections, and AI architecture itself.

AI-related chips have already improved their energy intensity by over 99% since 2008, and we are continuing to see new advances regularly. In December, MIT researchers demonstrated a fully integrated photonic processor that could enable faster and more energy-efficient deep learning. At IBM, our own researchers developed a brain-inspired prototype that is 25x more energy efficient.

Another area where innovation will reduce AI’s energy needs is the connections between chips. Even as transistors have gotten smaller and allowed a given space to pack more “punch,” chips are only as fast as the connections between them. And today’s most advanced chip circuitry relies on copper-based electrical wires, which means GPUs running AI workloads can spend more than half their time idle, “waiting” for data to process.

In December, we saw the first success in overcoming engineering challenges to replace these wires with optics—each polymer fiber 3x the width of a human hair—that can allow up to 80x more bandwidth. This speed-of-light data transfer unlocks the full potential of a data center and results in 5x less power needed to train a frontier LLM. Unlocking this wasted time from existing stock is like having a bunch of back-ordered GPUs delivered immediately, for free, with no additional energy costs.

Lastly, there are exciting opportunities to redesign AI itself—often spurred forward by open-source AI communities. Techniques like “knowledge distillation” let us create sleeker, more efficient AI models by having them learn from larger ones. Think of it as passing down wisdom through generations. Low-rank adaptation (LoRA) allows us to fine-tune massive models with surgical precision, turning LLMs into more specialized models without the energy costs of rebuilding from scratch.
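
To make the idea concrete, here is a minimal sketch of a LoRA-style low-rank update in Python. It is illustrative only, with made-up layer sizes, and is not IBM's or any vendor's implementation: a frozen pretrained weight matrix is adapted by training two small factor matrices, so fine-tuning touches only a tiny fraction of the parameters.

```python
import numpy as np

# Minimal LoRA-style sketch (hypothetical sizes, illustrative only).
# A frozen weight matrix W is adapted by adding a low-rank product B @ A,
# so fine-tuning trains only r * (d_in + d_out) parameters instead of d_in * d_out.

rng = np.random.default_rng(0)
d_in, d_out, r = 4096, 4096, 8              # hypothetical layer size and LoRA rank

W = rng.standard_normal((d_out, d_in))      # pretrained weights: frozen during fine-tuning
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # starts at zero so the adapted model equals the original

def forward(x):
    # Effective weight is W + B @ A, but the full update is never materialized.
    return W @ x + B @ (A @ x)

y = forward(rng.standard_normal(d_in))

full = d_in * d_out
lora = r * (d_in + d_out)
print(f"trainable params: {lora:,} vs {full:,} ({100 * lora / full:.2f}% of full fine-tuning)")
```

With these (hypothetical) dimensions, the adapter trains well under 1% of the parameters a full fine-tune would touch, which is where the energy savings come from.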

Perhaps the most elegant solution is the mixture-of-experts approach. Instead of using one AI model to handle everything, it breaks tasks into smaller pieces and routes them to specialized mini-models. It’s the difference between powering up an entire office building versus just lighting the room you need.
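
A toy routing sketch shows the intuition; the expert count, sizes, and gating here are hypothetical placeholders, not any production system. A gating function scores the experts and only the top few run for a given input, so most of the model's parameters stay idle.

```python
import numpy as np

# Toy mixture-of-experts routing (hypothetical sizes, illustrative only).
rng = np.random.default_rng(1)
d, n_experts, top_k = 64, 8, 2

gate = rng.standard_normal((n_experts, d))                          # router weights
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]   # one weight matrix per expert

def moe_forward(x):
    scores = gate @ x                               # score each expert for this input
    chosen = np.argsort(scores)[-top_k:]            # route to the top-k experts only
    weights = np.exp(scores[chosen]) / np.exp(scores[chosen]).sum()
    # Only the chosen experts run; the remaining experts stay idle for this input.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, chosen))

y = moe_forward(rng.standard_normal(d))
print(f"active experts per input: {top_k} of {n_experts} "
      f"({100 * top_k / n_experts:.0f}% of expert parameters used)")
```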

Stacking innovations for exponential impact

These are just a handful of the efficiency innovations underway, but they are not “around the edge” improvements.

Take co-packaged optics alone, which can bring 80% energy savings to LLM training—the equivalent of running two small data centers for an entire year. If instead you take several innovations—with chips, connections, and models themselves—and introduce them throughout the world, you can imagine how the energy savings might stack to the equivalent of not just Three Mile Island, but many nuclear power plants—with a fraction of the cost or risk.
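
A quick back-of-envelope calculation shows why stacked savings compound multiplicatively rather than additively. All figures below are hypothetical placeholders except the roughly 80% interconnect saving cited above.

```python
# Back-of-envelope illustration of how independent efficiency gains compound.
# Only the ~80% interconnect figure comes from the text; the rest are hypothetical.
savings = {
    "co-packaged optics (interconnect)": 0.80,   # cited above for LLM training
    "more efficient accelerators": 0.50,         # hypothetical
    "smaller, fit-for-purpose models": 0.60,     # hypothetical
}

remaining = 1.0
for name, s in savings.items():
    remaining *= (1.0 - s)
    print(f"after {name}: {remaining * 100:.1f}% of baseline energy")

print(f"stacked savings: {(1.0 - remaining) * 100:.1f}%")
```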

The last year has been one of AI excitement, adoption, and, yes, massive costs. But foundation models are like reusable rockets. The upfront costs of research, engineering, and more can be staggering, but every additional use of a model amortizes those costs across yet another outcome. And foundation models are a lot more reusable than rockets.

Repeating history

Raising a flag over AI’s energy use makes sense. It identifies an important challenge and can help rally us toward a collective solution. But we should balance the weight of the challenge with the incredible, rapid innovation that is happening.

For businesses, the flag should have two words written on it: Be intentional! At every part of the AI stack. Companies are already moving toward smaller, cheaper, task-specific models, and as innovations are commercialized this will drive down costs and energy use even more.

We should remember what happened with the earlier cycle of computing and energy use—and lend all our support to repeating it.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

Read more:

AI energy demand means innovation must crackle in an unlikely place: Electric utilities

I sold a $1.4B big-data startup to IBM. Here are the dangers of AI energy consumption

AI innovation isn’t a climate threat, it’s our best hope

This story was originally featured on Fortune.com


