Advanced AI News
Former Google CEO suggests building data centers in remote locations in case of nation-state attacks to slow down AI

By Advanced AI Bot | April 4, 2025


A new paper co-authored by the former CEO of Google outlines a future in which AI training data centers could be blown up by foreign nations.

Eric Schmidt, along with Scale AI CEO Alexandr Wang and the Center for AI Safety’s Dan Hendrycks, warned that “destabilizing AI developments could rupture the balance of power and raise the odds of great-power conflict.”

The paper lays out the concept of Mutual Assured AI Malfunction (MAIM), modeled on nuclear mutual assured destruction (MAD), where any “aggressive bid for unilateral AI dominance is met with preventive sabotage by rivals.”

This could involve espionage, cyberattacks, or kinetic strikes on data centers and their supporting infrastructure and supply chain, the authors argue.

“Well-placed or blackmailed insiders can tamper with model weights or training data or AI chip fabrication facilities, while hackers quietly degrade the training process so that an AI’s performance when it completes training is lackluster,” they state.

“When subtlety proves too constraining, competitors may escalate to overt cyberattacks, targeting data center chip-cooling systems or nearby power plants in a way that directly – if visibly – disrupts development. Should these measures falter, some leaders may contemplate kinetic attacks on data centers, arguing that allowing one actor to risk dominating or destroying the world are graver dangers, though kinetic attacks are likely unnecessary.”

With kinetic attacks a possibility, the authors suggest building data centers in remote locations to minimize collateral damage. For those looking to damage other nations’ efforts, they recommend first trying the cyber route: “States could also poison data, corrupt model weights and gradients, disrupt software that handles faulty GPUs… training runs are non-deterministic and their outcomes are difficult to predict even without bugs, providing cover to many cyberattacks.”

Drawing on lessons from nuclear weapons, the authors believe some of these attacks could be avoided with greater transparency. Distinguishing destabilizing AI projects from acceptable-use facilities could keep consumer AI data centers from being targeted.

Similarly, AI-assisted inspections could be used to confirm that AI projects “abide by declared constraints without revealing proprietary code or classified material.”

While intangible aspects like algorithms and data are hard to control, semiconductors are physical assets, giving nations power over their production and distribution.

The authors call for better tracking of every high-end AI chip sale, and for more enforcement officers to ensure that chips actually go where they are meant to – instead of, for example, being secretly diverted to China. “To assist enforcement officers, tamper-evident camera feeds from data centers can confirm that declared AI chips remain on-site, exposing any smuggling.”

Any chip that is said to be inoperable or obsolete would have to undergo verified decommissioning, much like the disposal of chemical or nuclear materials, so that it doesn’t get resold on the black market.

The chip industry could build in firmware-level protections, such as having chips deactivate if they are located in the wrong country. Chips could also require periodic authorization, or restrict how many other chips they can be networked with.

While the US could enforce some of these restrictions (something that seems unlikely with the current administration), “the dependence on Taiwan for advanced AI chips presents a critical vulnerability” for America.

“A blockade or invasion may spell the end of the West’s advantage in AI,” the authors said. “To mitigate this foreseeable risk, Western countries should develop guaranteed supply chains for AI chips. Though this requires considerable investment, it is potentially necessary for national competitiveness.”

The Biden-era US CHIPS Act aimed to fund such development, but it is being dismantled by President Trump, who favors tariffs as an incentive structure.

If any nation achieves superintelligence first, the impact could be profound, the authors claim. “Superintelligence is not merely a new weapon, but a way to fast-track all future military innovation. A nation with sole possession of superintelligence might be as overwhelming as the Conquistadors were to the Aztecs.

“If a state achieves a strategic monopoly through AI, it could reshape world affairs on its own terms. An AI-driven surveillance apparatus might enable an unshakable totalitarian regime, transforming governance at home and leverage abroad.”

While the US and China are locked in a great-power struggle, the paper again turns to the Cold War for common ground: keeping dangerous weapons out of the hands of terrorists.

“AI holds the potential to reshape the balance of power. In the hands of state actors, it can lead to disruptive military capabilities and transform economic competition. At the same time, terrorists may exploit its dual-use nature to orchestrate attacks once within the exclusive domain of great powers. It could also slip free of human oversight.”

Avoiding this will require the world’s governments to work together to track and limit AI development.

“The United States cooperated with both the Soviet Union and China on nuclear, biological, and chemical arms not from altruism but from self-preservation. If the US begins to treat advanced AI chips like fissile material, it may likewise encourage China to do the same.”
