AI model collapse is not what we paid for • The Register

By Advanced AI Editor | May 20, 2025 | 5 min read


Opinion I use AI a lot, but not to write stories. I use AI for search. When it comes to search, AI, especially Perplexity, is simply better than Google.

Ordinary search has gone to the dogs. Maybe as Google goes gaga for AI, its search engine will get better again, but I doubt it. In just the last few months, I’ve noticed that AI-enabled search, too, has been getting crappier.


In particular, I’m finding that when I search for hard data such as market-share statistics or other business numbers, the results often come from bad sources. Instead of stats from 10-Ks, the US Securities and Exchange Commission’s (SEC) mandated annual financial reports for public companies, I get numbers from sites purporting to be summaries of business reports. These bear some resemblance to reality, but they’re never quite right. If I specify I want only 10-K results, it works. If I just ask for financial results, the answers get… interesting.

This isn’t just Perplexity. I’ve done the exact same searches on all the major AI search bots, and they all give me “questionable” results.

Welcome to Garbage In/Garbage Out (GIGO). Formally, in AI circles, this is known as AI model collapse: AI systems trained on their own outputs gradually lose accuracy, diversity, and reliability. This happens because errors compound across successive model generations, distorting data distributions and producing “irreversible defects” in performance. The final result? As a 2024 Nature paper put it, “The model becomes poisoned with its own projection of reality.”

Model collapse is the result of three different factors. The first is error accumulation, in which each model generation inherits and amplifies flaws from previous versions, causing outputs to drift from original data patterns. Next, there is the loss of tail data: In this, rare events are erased from training data, and eventually, entire concepts are blurred. Finally, feedback loops reinforce narrow patterns, creating repetitive text or biased recommendations.

I like how the AI company Aquant puts it: “In simpler terms, when AI is trained on its own outputs, the results can drift further away from reality.”
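The “loss of tail data” mechanism is easy to see in miniature. The following toy simulation (my own illustration, not from any of the papers cited here) repeatedly fits a token-frequency model to a long-tailed corpus and then regenerates the corpus from that fitted model. Because a token that falls to zero count can never be emitted again, the distribution’s support can only shrink across generations: rare events are erased first, exactly as described above.

```python
import random
from collections import Counter

def next_generation(tokens, n):
    # "Train" on the current corpus: fit empirical token frequencies,
    # then generate the next corpus entirely from the fitted model.
    counts = Counter(tokens)
    population = list(counts.keys())
    weights = [counts[t] for t in population]
    return random.choices(population, weights=weights, k=n)

random.seed(42)
# A long-tailed "vocabulary": token i appears with probability ~ 1/i.
vocab = list(range(1, 1001))
zipf_weights = [1 / i for i in vocab]
corpus = random.choices(vocab, weights=zipf_weights, k=2000)

start_support = len(set(corpus))
for _ in range(50):
    corpus = next_generation(corpus, 2000)
end_support = len(set(corpus))

# Support is monotonically non-increasing: once a rare token drops to
# zero count, the fitted model can never emit it again. That one-way
# door is the "loss of tail data" driving model collapse.
print(start_support, end_support)
```

In practice the surviving vocabulary after 50 generations is noticeably smaller than at the start; no amount of further self-training brings the lost tail back.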

I’m not the only one seeing AI results starting to go downhill. In a recent Bloomberg Research study of Retrieval-Augmented Generation (RAG), the financial media giant found that 11 leading LLMs, including GPT-4o, Claude-3.5-Sonnet, and Llama-3-8B, produced harmful results when run against more than 5,000 harmful prompts.

RAG, for those of you who don’t know, enables large language models (LLMs) to pull in information from external knowledge stores, such as databases, documents, and live in-house data stores, rather than relying just on the LLMs’ pre-trained knowledge.
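A bare-bones sketch of that retrieve-then-prompt loop, using a toy bag-of-words cosine retriever (my own illustration; real RAG systems use learned embeddings and a vector database, not word counts):

```python
import math
from collections import Counter

def bow(text):
    # Toy bag-of-words vector; stands in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank the external knowledge store by similarity to the query...
    q = bow(query)
    ranked = sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    # ...and prepend the retrieved passages, so the LLM answers from
    # fresh external data instead of only its pre-trained knowledge.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue rose 12 percent year over year per the 10-K filing.",
    "The cafeteria menu now includes a salad bar.",
    "Operating margin for Q3 was 18 percent, down from 21 percent.",
]
print(build_prompt("What was Q3 revenue growth?", docs))
```

The relevant filings end up in the prompt and the cafeteria memo does not; the LLM then generates its answer conditioned on that injected context. That same injection step is also why RAG widens the attack surface, as the Bloomberg findings below suggest.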

You’d think RAG would produce better results, wouldn’t you? And it does. For example, it tends to reduce AI hallucinations. But, simultaneously, it increases the chance that RAG-enabled LLMs will leak private client data, create misleading market analyses, and produce biased investment advice.

As Amanda Stent, Bloomberg’s head of AI strategy & research in the office of the CTO, explained: “This counterintuitive finding has far-reaching implications given how ubiquitously RAG is used in gen AI applications such as customer support agents and question-answering systems. The average internet user interacts with RAG-based systems daily. AI practitioners need to be thoughtful about how to use RAG responsibly.”

That sounds good, but a “responsible AI user” is an oxymoron. For all the crap about how AI will encourage us to spend more time doing better work, the truth is that AI users write fake papers with bullshit results. This ranges from your kid’s high school report to fake scientific research documents to the infamous Chicago Sun-Times “Best of Summer” feature, which recommended forthcoming novels that don’t exist.

What all this does is accelerate the day when AI becomes worthless. For example, when I asked ChatGPT, “What’s the plot of Min Jin Lee’s forthcoming novel Nightshade Market?” (one of the fake novels), ChatGPT confidently replied, “There is no publicly available information regarding the plot of Min Jin Lee’s forthcoming novel, Nightshade Market. While the novel has been announced, details about its storyline have not been disclosed.”

Once more, and with feeling, GIGO.

Some researchers argue that collapse can be mitigated by mixing synthetic data with fresh human-generated content. What a cute idea. Where is that human-generated content going to come from?

Given a choice between good content that requires real work and study to produce and AI slop, I know what most people will do. It’s not just some kid wanting a B on their book report on John Steinbeck’s The Pearl; it’s businesses eager, they claim, to gain operational efficiency, but really wanting to fire employees to increase profits.

Quality? Please. Get real.

We’re going to invest more and more in AI, right up to the point that model collapse hits hard and AI answers are so bad even a brain-dead CEO can’t ignore it.

How long will it take? I think it’s already happening, but so far, I seem to be the only one calling it. Still, if we believe OpenAI’s leader and cheerleader, Sam Altman, who tweeted in February 2024 that “OpenAI now generates about 100 billion words per day,” and we presume many of those words end up online, it won’t take long. ®


