AI industry ‘timelines’ to human-like AGI are getting shorter. But AI safety is getting increasingly short shrift

By Advanced AI Editor, April 15, 2025


Hello and welcome to Eye on AI. In this edition…AGI timelines are getting shorter, but so is the amount of attention AI labs seem to be paying to AI safety…venture capital enthusiasm for OpenAI alums’ startups shows no sign of waning…a way to trace LLM outputs back to their source…and the military looks to LLMs for decision support, alarming humanitarian groups.

“Timelines” is a shorthand term AI researchers use to describe how soon they think we’ll achieve artificial general intelligence, or AGI. While its definition is contentious, AGI is basically an AI model that performs as well as or better than humans at most tasks. Many people’s timelines are getting alarmingly short. Former OpenAI policy researcher Daniel Kokotajlo and a group of forecasters with excellent track records have gotten a lot of attention for authoring a detailed scenario, called AI 2027, that suggests AGI will be achieved in, you guessed it, 2027. They argue this will lead to a sudden “intelligence explosion” as AI systems begin building and refining themselves, rapidly leading to superintelligent AI.

Dario Amodei, the cofounder and CEO of AI company Anthropic, thinks we’ll hit AGI by 2027 too. Meanwhile, OpenAI cofounder and CEO Sam Altman is cagey, trying hard not to be pinned down on a precise year, but he’s said his company “knows how to build AGI”—it is just a matter of executing—and that “systems that start to point to AGI are coming into view.” Demis Hassabis, the Google DeepMind cofounder and CEO, has a slightly longer timeline—five to 10 years—but researchers at his company just published a report saying it’s “plausible” AGI will be developed by 2030.

The implications of short timelines for policy are profound. For one thing, if AGI really is coming in two to five years, it gives all of us—companies, society, and governments—precious little time to prepare. While I have previously predicted AI won’t lead to mass unemployment, my view is predicated on the idea that AGI will not be achieved in the next five years. If AGI does arrive sooner, it could indeed lead to large job losses as many organizations would be tempted to automate roles, and two years is not enough time to allow people to transition to new ones.

If timelines are short, safety should matter more

Another implication of short timelines is that AI safety and security ought to become more important. (The Google DeepMind researchers, in their latest AI safety paper, said AGI could lead to severe consequences, including the “permanent end of humanity.”)

Jack Clark, a cofounder at Anthropic who heads its policy team, wrote in his personal newsletter, Import AI, a few weeks ago that short timelines called for “more extreme” policy actions. These, he wrote, would include increased security at leading AI labs, mandatory pre-deployment safety testing by third parties (moving away from the current voluntary system), and spending more time talking about—and maybe even demonstrating—dangerous misuses of advanced AI models in order to convince policymakers to take stronger regulatory action.

Companies are paying less attention to safety

But, contrary to Clark’s position, even as timelines have shortened, many AI companies seem to be paying less, not more, attention to AI safety. For instance, last week, my Fortune colleague Bea Nolan and I reported that Google released its latest Gemini 2.5 Pro model without a key safety report, in apparent violation of commitments the company had made to the U.S. government in 2023 and at various international AI safety summits. And Google is not alone—OpenAI also released its DeepResearch model without the safety report, called a “system card,” publishing one only months later. The Financial Times also reported this week that OpenAI has been slashing the time it allows both internal and third-party safety evaluators to test its models before release, in some cases giving testers just a few days for evaluations that had previously been allotted weeks or months. Meanwhile, AI safety experts criticized Meta for publishing a system card for its new Llama 4 model family that provided only barebones information on the models’ potential risks.

The reason safety is getting short shrift is clear: Competition between AI companies is intense and those companies perceive safety testing as an impediment to speeding new models to market. The closer AGI appears to be, the more bitterly fought the race to get there first will be.

The U.S. government sees safety as an impediment to beating China

In economic terms, this is a market failure—the commercial incentives of private actors encourage them to do things that are bad for the collective whole. Normally, when there are market failures, it would be reasonable to expect the government to step in. But in this case, geopolitics gets in the way. The U.S. sees AGI as a strategic technology that it wants to obtain before any rival, particularly China. So it is unlikely to do anything that might slow the progress of the U.S. AI labs—even a little bit. (It doesn’t help that AI lab CEOs such as Altman—who once went before Congress and endorsed the idea of government regulation, including possible licensing requirements for leading AI labs, but now says he thinks AI companies can self-regulate on AI safety—are lobbying the government to eschew any legal requirements.)

Of course, having unsafe, uncontrollable AI would be in neither Washington nor Beijing’s interest. So there might be scope for an international treaty. But given the lack of trust between the Trump administration and Xi Jinping, that seems unlikely. It is possible President Trump may yet come around on AI regulation—if there’s a populist outcry over AI-induced job losses or a series of damaging, but not catastrophic, AI-involved disasters. Otherwise, I guess we just have to hope the AI companies’ timelines are wrong.

With that, here’s the rest of this week’s AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news, if you’re interested in learning more about how AI will impact your business, the economy, and our societies (and given that you’re reading this newsletter, you probably are), please consider joining me at the Fortune Brainstorm AI London 2025 conference. The conference is being held May 6–7 at the Rosewood Hotel in London. Confirmed speakers include Mastercard chief product officer Jorn Lambert, eBay chief AI officer Nitzan Mekel, Sequoia partner Shaun Maguire, noted tech analyst Benedict Evans, and many more. I’ll be there, of course. I hope to see you there too. You can apply to attend here.

And if I miss you in London, why not consider joining me in Singapore on July 22–23 for Fortune Brainstorm AI Singapore. You can learn more about that event here.

This story was originally featured on Fortune.com


