The world’s leading artificial intelligence (AI) companies are hurtling toward the creation of human-level AI – but without a credible safety net.
Top AI developers are "fundamentally unprepared" for the consequences of the very systems they are racing to build, the Future of Life Institute (FLI) has warned.
In a recent report, the US-based AI safety non-profit revealed that none of the seven major AI labs – including OpenAI, Google DeepMind, Anthropic, xAI and Chinese firms DeepSeek and Zhipu AI – scored higher than a D on its "existential safety" index.
That score reflects how seriously firms are preparing for the possibility of creating artificial general intelligence (AGI) – systems capable of matching or exceeding human performance across virtually all intellectual tasks.
Anthropic earned the top overall grade, albeit just a C+, followed by OpenAI (C) and Google DeepMind (C-).
But no firm received a passing mark in planning for existential risks, which include catastrophic failures where AI could spiral out of human control.
Max Tegmark, FLI co-founder, likened the situation to “building a gigantic nuclear power plant in New York City set to open next week – but there is no plan to prevent it having a meltdown”.
A Google DeepMind spokesperson said: "These recent reports don't take into account all of Google DeepMind's AI safety efforts, nor all of the industry benchmarks. Our comprehensive approach to AI safety and security extends well beyond what's captured."
The criticism lands at a pivotal moment, as AI development surges ahead with increasingly human-like capabilities, driven by advances in brain-inspired architectures and emotional modelling.
Just last month, researchers at the University of Geneva found that large language models such as ChatGPT-4, Claude 3.5, and Google's Gemini 1.5 outperformed humans in tests of emotional intelligence.
And yet, these seemingly human qualities mask deep vulnerabilities: a lack of transparency, control, and understanding.
FLI's findings come just months after the AI Action Summit in Paris, which called for international cooperation to ensure the safe development of AI.
Since then, powerful new models such as xAI's Grok 4 and Google's Veo 3 have pushed the boundaries of what AI can do – without, FLI warns, a matching push in risk mitigation.
SaferAI, another watchdog, released its own findings alongside FLI’s, labelling the current safety regimes at top AI companies as “weak to very weak,” and calling the industry’s approach “unacceptable.”
“The companies say AGI could be just a few years away,” Tegmark said. “But they still have no coherent, actionable safety strategy. That should worry everyone.”