AI giants ‘fundamentally unprepared’ for dangers of human level intelligence

By Advanced AI Editor | July 18, 2025

The world’s leading artificial intelligence (AI) companies are hurtling toward the creation of human-level AI – but without a credible safety net.

Top AI developers are “fundamentally unprepared” for the consequences of the very systems they are racing to build, warned the Future of Life Institute (FLI).

In a recent report, the US-based AI safety non-profit revealed that none of the seven major AI labs – including OpenAI, Google DeepMind, Anthropic, xAI and Chinese firms DeepSeek and Zhipu AI – scored higher than a D on its “existential safety” index.

That score reflects how seriously firms are preparing for the possibility of creating artificial general intelligence (AGI) – systems that match or exceed human performance across virtually all intellectual tasks.

Anthropic earned the top grade, albeit just a C+, followed by OpenAI (C) and Google DeepMind (C-).

But no firm received a passing mark in planning for existential risks, which include catastrophic failures where AI could spiral out of human control.

Max Tegmark, FLI co-founder, likened the situation to “building a gigantic nuclear power plant in New York City set to open next week – but there is no plan to prevent it having a meltdown”.

A Google Deepmind spokesperson claimed: “These recent reports don’t take into account all of Google DeepMind’s AI safety efforts, nor all of the industry benchmarks. Our comprehensive approach to AI safety and security extends well beyond what’s captured.”

The criticism lands at a pivotal moment, as AI development surges ahead with increasingly human-like capabilities, driven by breakthroughs in brain-inspired architectures and emotional modelling.

Just last month, researchers at the University of Geneva found that large language models such as ChatGPT 4, Claude 3.5, and Google’s Gemini 1.5 outperformed humans in tests of emotional intelligence.

And yet, these seemingly human qualities mask a deep vulnerability: a lack of transparency, control, and understanding.

FLI’s findings come just months after the AI safety summit in Paris, which called for international cooperation to ensure the safe development of AI.

Since then, powerful new models like xAI’s Grok 4 and Google’s Veo 3 have pushed the boundaries of what AI can do – without, FLI warns, a matching push in risk mitigation.

SaferAI, another watchdog, released its own findings alongside FLI’s, labelling the current safety regimes at top AI companies as “weak to very weak,” and calling the industry’s approach “unacceptable.”

“The companies say AGI could be just a few years away,” Tegmark said. “But they still have no coherent, actionable safety strategy. That should worry everyone.”

AGI, the so-called ‘holy grail’ of AI, has long been seen as decades away. But recent advancements suggest it may be closer than many assumed.

Adding complexity to AI networks – via ‘height’ in addition to width and depth – could reportedly produce more intuitive, stable and humanlike systems.

This design leap, pioneered by researchers at Rensselaer Polytechnic Institute and City University of Hong Kong, uses feedback loops and intra-layer links to mimic the brain’s local neural circuits.

Such changes could move AI beyond transformer architecture, the 2017 breakthrough that gave rise to today’s large language models.

Ge Wang, one of the authors, described the shift as akin to adding a third dimension to a city map: “You’re not just adding more streets or buildings”, he said, “you’re connecting rooms inside the same structure in new ways. That allows for richer, more stable reasoning, closer to how humans think.”
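As a rough illustration of the idea, the sketch below (a toy NumPy example under loose assumptions – not the Rensselaer/City University of Hong Kong design, whose implementation details are not given here) models ‘height’ as a few intra-layer feedback iterations layered on top of an ordinary feed-forward pass:

# Toy sketch: a feed-forward layer augmented with an intra-layer feedback loop.
# "Height" is modelled as extra refinement iterations *within* a layer, on top
# of the usual width (units per layer) and depth (number of stacked layers).
# This is an illustrative assumption, not the published architecture.
import numpy as np

rng = np.random.default_rng(0)

def tall_layer(x, w_in, w_intra, height=3):
    # x       : input vector
    # w_in    : ordinary feed-forward weights
    # w_intra : intra-layer (lateral/feedback) weights
    # height  : number of within-layer refinement steps
    h = np.tanh(w_in @ x)                    # standard feed-forward step
    for _ in range(height):                  # feedback loop inside the layer
        h = np.tanh(w_in @ x + w_intra @ h)  # re-read the input, refine the state
    return h

# Depth 2, widths 8 then 4, height 3 at each layer.
x = rng.normal(size=16)
w1, u1 = rng.normal(size=(8, 16)) * 0.1, rng.normal(size=(8, 8)) * 0.1
w2, u2 = rng.normal(size=(4, 8)) * 0.1, rng.normal(size=(4, 4)) * 0.1

out = tall_layer(tall_layer(x, w1, u1), w2, u2)
print(out.shape)  # (4,)

In this toy reading, width is the number of units per layer, depth the number of stacked layers, and ‘height’ the number of within-layer refinement passes – a loose analogue of the feedback loops and intra-layer links the researchers describe.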

These innovations could drive the next AI revolution, and could also open doors to understanding the human brain itself, with implications for treating neurological disorders and exploring cognition. But with this power comes escalating risk.

The AI firms listed have been approached for comment.