AI leaders warn the technology poses ‘risk of extinction’ like pandemics and nuclear war

By Advanced AI Bot | May 24, 2025 | 3 min read


Hundreds of business leaders and public figures sounded a sobering alarm on Tuesday over what they described as the threat of mass extinction posed by artificial intelligence.

Among the 350 signatories of the public statement are Sam Altman, the chief executive of OpenAI, the company behind the popular conversation bot ChatGPT; and Demis Hassabis, the CEO of Google DeepMind, the tech giant’s AI division.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the one-sentence statement released by the San Francisco-based nonprofit Center for AI Safety.

Supporters of the statement also include a range of public figures, such as musician Grimes, environmental activist Bill McKibben and neuroscience author Sam Harris.

Concern about the risks posed by AI and calls for forceful regulation of the technology have drawn greater attention in recent months in response to major breakthroughs like ChatGPT.

In testimony before the Senate two weeks ago, Altman warned lawmakers: “If this technology goes wrong, it can go quite wrong.”

OpenAI CEO Sam Altman delivers a speech during a meeting at Station F in Paris, May 26, 2023. (Joel Saget/AFP via Getty Images, FILE)

“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he added, suggesting the adoption of licenses or safety requirements necessary for the operation of AI models.

Like other AI-enabled chat bots, ChatGPT can immediately respond to prompts from users on a wide range of subjects, generating an essay on Shakespeare or a set of travel tips for a given destination.

Microsoft launched a version of its Bing search engine in March that offers responses generated by GPT-4, the latest OpenAI model underlying ChatGPT. Rival search company Google announced its own AI chatbot, Bard, in February.

The rise of vast quantities of AI-generated content has raised fears over the potential spread of misinformation, hate speech and manipulative responses.

Hundreds of tech leaders, including billionaire entrepreneur Elon Musk and Apple co-founder Steve Wozniak, signed an open letter in March calling for a six-month pause in the development of AI systems and a major expansion of government oversight.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter said.

In comments last month to Fox News host Tucker Carlson, Musk raised further alarm: “There’s certainly a path to AI dystopia, which is to train AI to be deceptive.”

Tesla CEO Elon Musk in Washington, U.S., Jan. 27, 2023. (Jonathan Ernst/Reuters, FILE)

The statement released on Tuesday included other major backers from the AI industry, including Microsoft Chief Technology Officer Kevin Scott and OpenAI Head of Policy Research Miles Brundage.

Addressing the brevity of the 22-word statement released on Tuesday, the Center for AI Safety said on its website: “It can be difficult to voice concerns about some of advanced AI’s most severe risks.”

“The succinct statement below aims to overcome this obstacle and open up discussion,” the nonprofit added.



