VentureBeat AI

Researchers warn of ‘catastrophic overtraining’ in LLMs

By Advanced AI Editor | March 30, 2025

A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models.

Researchers from several leading computer science institutions, including Carnegie Mellon University, Stanford University, Harvard University, and Princeton University, have introduced the concept of "Catastrophic Overtraining." They show that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance.

The study, "Overtrained Language Models Are Harder to Fine-Tune," is available on arXiv and was led by Jacob Mitchell Springer, with co-authors Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig and Aditi Raghunathan.

The law of diminishing returns

The research focuses on a surprising trend in modern LLM development: models are pre-trained on ever-expanding pools of data, licensed or scraped from the web and represented to the model as sequences of tokens (the numerical units an LLM actually consumes), yet increasing the token count during pre-training may reduce a model's effectiveness when it is later fine-tuned for specific tasks.
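
To make the token framing concrete, here is a minimal sketch of how text becomes token IDs; the GPT-2 tokenizer from Hugging Face's transformers library is an illustrative choice, not the setup used in the paper.

```python
# Minimal sketch of tokenization using Hugging Face's `transformers`
# library; the GPT-2 tokenizer is illustrative, not the paper's setup.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "More pre-training data may not always lead to better models."
token_ids = tokenizer.encode(text)

print(token_ids)                                   # integer IDs the model consumes
print(tokenizer.convert_ids_to_tokens(token_ids))  # the subword pieces behind them
# A "2.3 trillion token" pre-training run means the model has seen
# roughly 2.3e12 such IDs during pre-training.
```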

The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability.

One of the key findings centers on AI2's open-source OLMo-1B model.

The researchers compared two versions of this model: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens.

Despite being trained on 30% more data, the 3-trillion-token model performed worse after instruction tuning. Specifically, it showed over 2% worse performance on several standard language model benchmarks than its 2.3T-token counterpart, and in some evaluations the degradation reached 3%.

The researchers argue that this decline is not an anomaly but rather a consistent phenomenon they term “Catastrophic Overtraining.”

Understanding sensitivity and forgetting

The paper attributes this degradation to a systematic increase in what they call “progressive sensitivity.” As models undergo extended pre-training, their parameters become more sensitive to changes.

This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations.

The researchers provide evidence that, beyond a certain point in pre-training, any modification—whether structured like fine-tuning or unstructured like adding Gaussian noise—leads to a greater loss of previously learned capabilities.

This sensitivity results in “forgetting,” where the model’s original strengths deteriorate as new training data is introduced.
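
The noise result suggests a simple empirical probe of this fragility. The PyTorch sketch below is an illustrative reconstruction, not the authors' code: it measures how much a model's loss on a held-out batch degrades when Gaussian noise of a fixed scale is added to its weights.

```python
# Illustrative reconstruction (not the authors' code): probe a model's
# sensitivity by perturbing its weights with Gaussian noise and measuring
# the resulting change in loss on a held-out batch.
import copy
import torch

@torch.no_grad()
def perturbation_gap(model, loss_fn, batch, noise_scale=1e-3):
    """Return loss(after noise) - loss(before noise) for one batch."""
    base_loss = loss_fn(model, batch).item()

    noisy = copy.deepcopy(model)
    for p in noisy.parameters():
        # Unstructured perturbation: add Gaussian noise to every weight.
        p.add_(noise_scale * torch.randn_like(p))

    return loss_fn(noisy, batch).item() - base_loss
```

Per the paper's findings, running this probe on checkpoints saved at increasing token budgets should show the gap growing with pre-training length, which is what "progressive sensitivity" describes.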

The study identifies an "inflection point" in pre-training, after which additional training leads to diminishing and eventually negative returns for fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

A wealth of evidence

The team’s analysis spans real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets like Anthropic-HH and TULU, and multimodal fine-tuning using the LLaVA framework.

The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning.

Furthermore, the researchers constructed a theoretical model using linear networks to better understand why overtraining leads to increased sensitivity.

Their analysis confirmed that progressive sensitivity and catastrophic overtraining are mathematically inevitable when pre-training continues indefinitely without proper constraints.

The ultimate takeaway? Model providers and trainers must make trade-offs

The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: while longer pre-training improves the base model’s capabilities, it also increases the risk that fine-tuning will degrade those capabilities.

In practice, attempts to mitigate this effect—such as adjusting fine-tuning learning rates or adding regularization—may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance.
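
As a concrete illustration of the mitigations mentioned above, the sketch below (our assumption, not the paper's recipe) fine-tunes with a reduced learning rate and an L2 penalty that pulls weights back toward their pre-trained values, a scheme sometimes called L2-SP:

```python
# Illustrative sketch of two common mitigations: a lower fine-tuning
# learning rate and a penalty on drift from the pre-trained weights.
# Our reconstruction, not the paper's exact recipe.
import torch

def fine_tune(model, data_loader, loss_fn, lr=1e-5, reg_strength=1e-2, epochs=1):
    # Snapshot the pre-trained weights to regularize against.
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    for _ in range(epochs):
        for batch in data_loader:
            loss = loss_fn(model, batch)
            # Penalize drift from the pre-trained weights (L2-SP style).
            drift = sum(((p - anchor[n]) ** 2).sum()
                        for n, p in model.named_parameters())
            (loss + reg_strength * drift).backward()
            optimizer.step()
            optimizer.zero_grad()
    return model
```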

Thus, for enterprises looking to leverage LLMs in business workflows, if the plan is to fine-tune an open-source model, the lesson from this research is that fine-tuning a smaller model pre-trained on less data is likely to yield a more reliable production model.

The authors acknowledge that further research is needed to understand the factors influencing when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can impact the severity of the phenomenon.

Implications for future LLM and AI model development

The study has significant implications for how organizations and researchers design and train large language models. As the field continues to pursue larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability.

Additionally, the findings may influence how model developers think about resource allocation. Rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining.

