The dangers of so-called AI experts believing their own hype

By Advanced AI Editor | July 3, 2025 | 4 min read


Source: New Scientist

Demis Hassabis, CEO of Google DeepMind and a Nobel prizewinner for his role in developing the AlphaFold AI algorithm for predicting protein structures, made an astonishing claim on the 60 Minutes show in April. With the help of AI like AlphaFold, he said, the end of all disease is within reach, “maybe within the next decade or so”. With that, the interview moved on.

To those actually working on drug development and curing disease, this claim is laughable. According to medicinal chemist Derek Lowe, who has worked for decades on drug discovery, Hassabis’s statements “make me want to spend some time staring silently out the window, mouthing unintelligible words to myself”. But you don’t need to be an expert to recognise the hyperbole: the idea that all disease will be ended in around a decade is absurd.

Some have suggested that Hassabis’s remarks are just another example of tech leaders overpromising, perhaps to attract investors and funding. Isn’t this just like Elon Musk making silly forecasts about Martian colonies, or OpenAI’s Sam Altman claiming that artificial general intelligence (AGI) is just around the corner? But while that cynical view may have some validity, it lets these experts off the hook and underestimates the problem.

It is one thing when seeming authorities make grand claims outside their area of expertise (see Stephen Hawking on AI, aliens and space travel). But it might appear as if Hassabis is staying in his lane here. His Nobel citation mentions new pharmaceuticals as a potential benefit of AlphaFold’s predictions, and the algorithm’s release was accompanied by endless media headlines about revolutionising drug discovery.

Likewise, when his fellow 2024 Nobel laureate Geoffrey Hinton, formerly an AI adviser with Google, claimed that the large language models (LLMs) he helped create work in a way that resembles human learning, he seemed to be speaking from deep knowledge. So never mind the cries of protest from those researching human cognition – and, in some cases, working on AI too.

What such instances seem to reveal is that, weirdly, some of these AI experts appear to mirror their products: they are able to produce remarkable results while having an understanding of them that is, at best, skin deep and brittle.

Here is another example: Daniel Kokotajlo, a researcher who quit OpenAI over concerns about its work towards AGI and is now executive director of the AI Futures Project in California, has said: “We’re catching our AIs lying, and we’re pretty sure they knew that the thing they were saying was false.” His anthropomorphic language of knowledge, intentions and deceit shows Kokotajlo has lost sight of what LLMs really are.

The dangers of supposing these experts know best are exemplified in Hinton’s comment in 2016 that, thanks to AI, “people should stop training radiologists now”. Luckily, experts in radiology didn’t believe him, although some suspect a link between his remark and growing concerns from medical students about job prospects in radiology. Hinton has since revised that claim – but imagine how much more force it would have had if he had already been given the Nobel. The same applies to Hassabis’s comments on disease: the idea that AI will do the heavy lifting could engender complacency, when we need the exact opposite, both scientifically and politically.

These “expert” prophets tend to get very little pushback from the media, and I can personally attest that even some smart scientists believe them. Many government leaders also give the impression they have swallowed the hype of tech CEOs and Silicon Valley gurus. But I recommend we start treating their pronouncements like those of LLMs themselves, meeting their superficial confidence with scepticism until fact-checked.

Philip Ball is a science writer based in London. His latest book is How Life Works.

Topics: artificial intelligence/technology