Advanced AI News
Google DeepMind

AGI Is Coming, Society’s Not Ready

By Advanced AI Editor · April 24, 2025 · 3 Mins Read


Google DeepMind CEO Demis Hassabis has warned that society is not ready for human-level artificial intelligence (AI), popularly referred to as Artificial General Intelligence (AGI). In an interview with Time, Mr Hassabis was asked what keeps him up at night; he pointed to AGI, which he said is in the final steps of becoming reality.

The 2024 Nobel Prize in Chemistry winner said AI systems capable of human-level cognitive abilities were only five to ten years away.

“For me, it’s this question of international standards and cooperation and also not just between countries, but also between companies and researchers as we get towards the final steps of AGI. And I think we are on the cusp of that. Maybe we are five to 10 years out. Some people say shorter, I wouldn’t be surprised,” said Mr Hassabis.

“It’s a sort of like probability distribution. But it’s coming, either way it’s coming very soon and I’m not sure society’s quite ready for that yet. And we need to think that through and also think about these issues that I talked about earlier, to do with the controllability of these systems and also the access to these systems and ensuring that all goes well,” he added.

Sir Demis Hassabis on what keeps him up at night: As we approach “the final steps toward AGI,” safety still matters — but it’s coordination that haunts him.

“It’s coming… and I’m not sure society’s ready.”

How will countries, companies, and labs align before it’s too late? pic.twitter.com/W0WZRmcaM8

— vitrupo (@vitrupo) April 23, 2025

This is not the first time Mr Hassabis has warned about the perils of AGI. He has previously batted for a UN-like umbrella organisation to oversee AGI's development.

“I would advocate for a kind of CERN for AGI, and by that, I mean a kind of international research focused high-end collaboration on the frontiers of AGI development to try and make that as safe as possible,” said Mr Hassabis in February.

“You would also have to pair it with a kind of an institute like IAEA, to monitor unsafe projects and sort of deal with those. And finally, some kind of supervening body that involves many countries around the world. So a kind of like UN umbrella, something that is fit for purpose for that, a technical UN,” he added.


AGI could destroy humanity

The assessment by the Google executive comes against the backdrop of DeepMind publishing a research paper earlier this month warning that AGI may "permanently destroy humanity".

“Given the massive potential impact of AGI, we expect that it too could pose potential risk of severe harm,” the study highlighted, adding that existential risks that “permanently destroy humanity” are clear examples of severe harm.

What is AGI?

AGI takes AI a step further. While today's AI systems are task-specific, AGI aims to possess intelligence that can be applied across a wide range of tasks, similar to human intelligence. In essence, AGI would be a machine able to understand, learn, and apply knowledge in diverse domains, much like a human being.
