IBM’s fault-tolerant quantum computer coming in 2029

By Advanced AI Editor, June 11, 2025

Forgotten in the buzz of WWDC 2025, IBM’s quantum computing announcement might not have been as flashy – but it was quite revealing, not unlike looking through clear liquid glass. IBM’s announcement lays out a remarkably concrete roadmap to deliver Starling, the first large-scale, fault-tolerant quantum computer, by 2029. 

Housed in a new IBM Quantum Data Center in New York, Starling promises to execute some 20,000 times more quantum operations than any system in operation today. That sounds like marketing hyperbole until you unpack IBM’s own comparison: representing Starling’s computational state would require memory equivalent to more than a quindecillion (10^48) of today’s fastest classical supercomputers.
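
That figure follows from simple arithmetic: simulating an n-qubit state on classical hardware means storing 2^n complex amplitudes, so memory demand doubles with every added qubit. Here’s a back-of-the-envelope sketch (my illustration, not IBM’s exact accounting):

```python
# Back-of-the-envelope sketch: bytes needed to store the full state
# vector of an n-qubit system on a classical machine. There are 2**n
# complex amplitudes, each 16 bytes at double precision (complex128).
BYTES_PER_AMPLITUDE = 16

def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (50, 100, 200):
    order = len(str(state_vector_bytes(n))) - 1
    print(f"{n:>3} qubits -> roughly 10^{order} bytes")
```

At a couple hundred entangled qubits, the numbers dwarf the combined memory of every supercomputer on Earth, which is where comparisons like “a quindecillion supercomputers” come from.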

This isn’t IBM’s first quantum roadmap. But it is the first time the company has publicly laid out detailed processor milestones – Loon (2025), Kookaburra (2026), Cockatoo (2027), culminating in Starling (2029) – each testing critical elements of error-corrected, modular architectures built on quantum low-density parity-check (qLDPC) codes, which slash qubit overhead by roughly 90% relative to previous error-correcting codes. In plain terms, IBM is tackling the quantum equivalent of the classical “vacuum tube to transistor” transition, which is nothing short of revolutionary, if you ask me.

Journey from ENIAC to quantum entanglement

To appreciate Starling’s significance, we need to rewind to the dawn of programmable computing. Time for a much-needed history lesson. In 1946, ENIAC, a 30-ton behemoth, calculated artillery tables at just 5,000 additions per second. A few years later, IBM’s 650 drum-memory system, at roughly half a million dollars per unit, became the first mass-market computer, ushering in an era of batch processing for business and science.

IBM’s history is woven from both classical and quantum threads. In the 1950s, IBM’s transistor-based computers replaced vacuum tubes, notably with the IBM 1401, making data processing affordable for mid-sized businesses. A decade later, System/360 pioneered architecture-compatible families, defining the mainframe era.

Fast-forward to 2011, when IBM launched its first 5-qubit quantum processor at Yorktown Heights. Each subsequent leap – 16 qubits, 50 qubits – tested coherence times and basic algorithms but remained “noisy intermediate-scale quantum” (NISQ) devices, limited by error rates and decoherence. It wasn’t until IBM’s Nature-cover paper on quantum LDPC codes that a clear, efficient path to error-corrected logical qubits emerged.

Now, with Starling and beyond, IBM is uniting decades of hardware, control electronics, and algorithm research into a practical vision: 100 million quantum operations on 200 logical qubits, followed by 1 billion ops on 2,000 logical qubits with Blue Jay in 2033. This roadmap is more than ambition – it’s a stepwise de-risking of large-scale quantum computing, much like IBM’s systematic moves in classical eras.

What’s so special about fault tolerance?

Classical hardware – the stuff of all our modern PCs, laptops, and smartphones – is remarkably reliable: bits stay intact and errors are rare. Quantum systems, by contrast, suffer from noise, where stray electromagnetic fields can flip qubit states in microseconds. Fault tolerance, then, is the foundational pillar of quantum computing’s promise: thousands of noisy physical qubits are bundled together into a single logical qubit whose error rate shrinks exponentially as the bundle grows.
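
To see why the bundling helps, consider the standard below-threshold scaling for a distance-d code, where the logical error rate falls roughly as (p/p_th)^((d+1)/2). The sketch below uses generic textbook numbers – the threshold value and scaling form are illustrative assumptions, not IBM’s qLDPC figures:

```python
# Illustrative scaling only: below an error threshold p_th, a
# distance-d code suppresses the logical error rate roughly as
# (p / p_th) ** ((d + 1) / 2). The constants here are hypothetical.
def logical_error_rate(p: float, d: int, p_th: float = 0.01) -> float:
    return (p / p_th) ** ((d + 1) // 2)

p = 0.001  # physical error rate, 10x below the assumed threshold
for d in (3, 7, 11, 15):
    print(f"distance {d:2d}: logical error rate ~ {logical_error_rate(p, d):.0e}")
```

Each step up in code distance buys orders of magnitude of reliability, which is why spending thousands of physical qubits on a single logical qubit is a good trade.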

IBM’s papers detail how qLDPC codes and real-time decoding with classical processors will keep these logical qubits coherent long enough to run meaningful algorithms. It’s the quantum counterpart of error-correcting memory in classical RAM, but orders of magnitude more complex. Achieving this at scale – without insane overhead – was a longstanding hurdle. Starling promises to break that barrier.
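
The classical analogue is easy to sketch. Here is a toy 3-bit repetition code with a parity-check matrix, purely for illustration: LDPC codes generalize the same idea with large, sparse check matrices, and their quantum cousins must also handle phase errors, which this toy ignores.

```python
import numpy as np

# Toy 3-bit repetition code. Each row of H checks that two adjacent
# bits agree; a nonzero syndrome means some bit was flipped.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

def decode(received: np.ndarray) -> np.ndarray:
    syndrome = H @ received % 2
    if syndrome.any():                       # error detected
        majority = int(received.sum() >= 2)  # majority vote corrects one flip
        return np.full(3, majority)
    return received

noisy = np.array([1, 0, 1])  # codeword [1, 1, 1] with its middle bit flipped
print(decode(noisy))         # -> [1 1 1]
```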

The quantum computing road ahead

What’s the need for quantum computing? Think of it this way: classical computing gave us the internet, climate models, and AI that warns us of pandemics. But certain frontiers – like simulating complex molecules for drug discovery or solving massive optimization problems – remain tantalizingly out of reach. This is where quantum computers promise to shine.

IBM Quantum Starling stands as the next great expedition into the unknown of computational limits. If delivered by 2029, it will let chemists model reactions at quantum fidelity, economists optimize vast portfolios, and cryptographers test new protocols. And just as the transistor’s debut reshaped society, so might this first fault-tolerant quantum computer turn a scientific corner.

IBM’s role in all of this is unique: pioneering transistor-era hardware, building mainframes that powered global commerce, and now forging quantum architectures. Few organizations possess the cross-disciplinary mastery to span from vacuum tubes to qubits.

Yet one can’t help but be skeptical of the ambitious timeline. Classical progress took decades per leap; IBM’s quantum research team must compress that into a few short years, lest other players – academia, startups, or global rivals – capture the lead. And IBM isn’t the only tech company chasing quantum computing breakthroughs: Google, Intel, Amazon, Microsoft, and a few others are also in the race.

From ENIAC’s vacuum tubes to the IBM System/360’s transistor logic, from Google’s Willow chip demonstration to today’s IBM Starling blueprint, we’re witnessing a continuum of human ingenuity. IBM’s new roadmap doesn’t just chart a technical path, but aspires to create a computing revolution. As we count down to 2029, the real question isn’t just whether IBM can build Starling – it’s whether we can harness its power responsibly to tackle the world’s most pressing challenges.

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.





Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleMIT bans class president who gave pro-Palestine speech from commencement
Next Article Carnegie Mellon Debuts Initiative to Combine Disparate AI Research — Campus Technology
Advanced AI Editor
  • Website

Related Posts

How India Powers IBM’s Hardware Design Push

July 28, 2025

For Now, AI Helps IBM’s Bottom Line More Than Its Top Line

July 27, 2025

Earnings Shock: Why IBM, Chipotle, and American Airlines Tumbled—and What Comes Next

July 25, 2025
Leave A Reply

Latest Posts

Scottish Museum Group Warns of ‘Policing of Gender’—and More Art News

David Geffen Sued By Estranged Husband for Breach of Contract

Auction House Will Sell Egyptian Artifact Despite Concern From Experts

Anish Kapoor Lists New York Apartment for $17.75 M.

Latest Posts

Cohere to access Canadian AI infrastructure and new clients through partnership with Bell

July 28, 2025

Chinese firm launches open-source AI model, achieves technical breakthroughs via capability integration

July 28, 2025

Litera Expands Kira With Added GenAI Features – Artificial Lawyer

July 28, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

Recent Posts

  • Cohere to access Canadian AI infrastructure and new clients through partnership with Bell
  • Chinese firm launches open-source AI model, achieves technical breakthroughs via capability integration
  • Litera Expands Kira With Added GenAI Features – Artificial Lawyer
  • Qwen 3 Coder vs GPT-4.1: Why Developers Are Making the Switch
  • MIT device could deliver more energy-efficient computing, communications

Recent Comments

  1. binance推薦獎金 on [2407.11104] Exploring the Potentials and Challenges of Deep Generative Models in Product Design Conception
  2. психолог онлайн индивидуально on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  3. GeraldDes on 1-800-CHAT-GPT—12 Days of OpenAI: Day 10
  4. binance sign up on Inclusion Strategies in Workplace | Recruiting News Network
  5. Rejestracja on Online Education – How I Make My Videos

Welcome to Advanced AI News—your ultimate destination for the latest advancements, insights, and breakthroughs in artificial intelligence.

At Advanced AI News, we are passionate about keeping you informed on the cutting edge of AI technology, from groundbreaking research to emerging startups, expert insights, and real-world applications. Our mission is to deliver high-quality, up-to-date, and insightful content that empowers AI enthusiasts, professionals, and businesses to stay ahead in this fast-evolving field.

Subscribe to Updates

Subscribe to our newsletter and never miss our latest news

Subscribe my Newsletter for New Posts & tips Let's stay updated!

LinkedIn Instagram YouTube Threads X (Twitter)
  • Home
  • About Us
  • Advertise With Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
© 2025 advancedainews. Designed by advancedainews.

Type above and press Enter to search. Press Esc to cancel.