Advanced AI News

Power11 processors: IBM promises 99.999 percent uptime

By Advanced AI Editor | July 10, 2025

IBM has presented the eleventh generation of its Power processors for servers running Linux, AIX, or IBM i. The Power11 remains an exotic product compared with x86 CPUs from AMD and Intel as well as ARM alternatives: IBM is not aiming for maximum performance but serves a niche market that demands, among other things, extremely high reliability.

With safeguards at both the chip and server level, IBM promises an availability (uptime) of 99.999 percent. It is the "most fail-safe server in the history of the IBM Power platform," the company writes in its press release.

Same core configuration with more redundancy

Just like its predecessor, the Power10, the Power11 has 16 CPU cores, each with 2 MByte of level 2 cache, plus a total of 128 MByte of level 3 cache. Thanks to eight-way simultaneous multithreading (SMT8), each core can process eight threads simultaneously (128 per chip). The largest Power11 server, the E1180, uses 16 processors, divided into four nodes with four CPUs each.
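The core and thread counts above can be sanity-checked with simple arithmetic (a sketch; all figures are taken from the article, not from IBM documentation):

```python
# Sanity-check IBM's published Power11 core/thread figures.

CORES_PER_CHIP = 16     # physical cores per Power11 chip
SMT_WAYS = 8            # SMT8: eight hardware threads per core

threads_per_chip = CORES_PER_CHIP * SMT_WAYS
print(threads_per_chip)                 # 128 threads per chip, as stated

# Largest server (E1180): 16 processors in total.
CHIPS_E1180 = 16
print(CHIPS_E1180 * threads_per_chip)   # 2048 hardware threads in the full system
```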

In the Power10, IBM deactivated the 16th CPU core to increase production yield. This meant that chips with a lithography defect in one core could still be used.

This is no longer necessary in the Power11, although only 15 cores are active ex works. The 16th core only kicks in as a replacement if another core fails. IBM calls this a spare core.

Table (Image: IBM): IBM's specifications for the Power11 compared with its predecessors Power10 and Power9.

More AI

There are also improvements in AI capabilities. Each CPU core integrates four improved Matrix Math Accelerators (MMAs), which are designed to support various AI algorithms. IBM intends them for running fully trained AI models (inference) in tasks such as fraud detection, text extraction, document analysis, domain matching, pattern recognition, prediction, and image, video, and audio processing.

For more computing power, Power11 servers support IBM’s own AI computing accelerator Spyre, which was previously only intended for mainframes.

DDIMMs for up to 64 TByte RAM

The biggest leap for the Power11 processors is in memory. They support IBM's self-developed DDIMMs, which achieve a higher capacity than typical RDIMMs and are also designed to improve uptime, with additional memory chips and voltage converters that take over in the event of defects. The largest server, the E1180, takes 256 DDIMMs of 256 GByte each for a total of 64 TByte of DDR5 RAM. In principle, the Power11 CPUs can also handle DDR4 modules, but only under strict product-policy conditions for Power10 upgrades.
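The stated maximum capacity checks out (a quick calculation using only the figures from the article):

```python
# Verify the E1180's stated maximum memory: 256 DDIMMs of 256 GByte each.
DIMM_COUNT = 256
GB_PER_DIMM = 256

total_gb = DIMM_COUNT * GB_PER_DIMM
print(total_gb)            # 65536 GByte
print(total_gb // 1024)    # 64 TByte, matching IBM's figure
```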

The connection between RAM and CPU runs over the Open Memory Interface (OMI). That standard has largely been abandoned elsewhere, as Compute Express Link (CXL) gains acceptance in data centers.

Image (IBM): A hand pulls a DDIMM module out of an IBM server. These are the largest DDIMMs, with 256 GByte each, for IBM's Power11 systems.

Again with 7-nanometer technology

IBM is sticking with a 7 nm production process from Samsung’s manufacturing division, albeit in an improved version compared to the Power10 CPUs. In a comparison table, the company states that a Power11 chip is 654 mm² in size and contains around 30 billion transistors.

This means a Power11 processor would be significantly more densely packed than a Power10, which has 18 billion transistors on 602 mm². Elsewhere, however, IBM gives the same key figures for both generations; we have asked the company for clarification.
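The density difference implied by IBM's comparison table is easy to quantify (a sketch based solely on the numbers quoted above):

```python
# Transistor density implied by IBM's comparison table:
# Power11: 30 billion transistors on 654 mm²
# Power10: 18 billion transistors on 602 mm²

power11_density = 30e9 / 654   # transistors per mm²
power10_density = 18e9 / 602

print(round(power11_density / 1e6, 1))   # ~45.9 million transistors/mm²
print(round(power10_density / 1e6, 1))   # ~29.9 million transistors/mm²
print(round(power11_density / power10_density, 2))  # ~1.53x denser
```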

For the first time, IBM wants to offer high-end, mid-range, and entry-level servers, as well as Power Virtual Servers in its cloud, right at the start of a new Power generation. These include the server models E1180, E1150, S1124, and S1122. Delivery is scheduled to begin at the end of July.

(mma)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.



