VentureBeat AI

Researchers warn of ‘catastrophic overtraining’ in LLMs

By Advanced AI Bot | March 30, 2025 | 5 min read

A new academic study challenges a core assumption in developing large language models (LLMs), warning that more pre-training data may not always lead to better models.

Researchers from leading computer science institutions, including Carnegie Mellon University, Stanford University, Harvard University and Princeton University, have introduced the concept of “Catastrophic Overtraining.” They show that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance.

The study, “Overtrained Language Models Are Harder to Fine-Tune,” is available on arXiv and was led by Jacob Mitchell Springer, with co-authors Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig and Aditi Raghunathan.

The law of diminishing returns

The research focuses on a surprising trend in modern LLM development: models are pre-trained on ever-expanding pools of data, licensed or scraped from the web and represented to the model as tokens (numerical representations of pieces of text), yet increasing the number of pre-training tokens may reduce how effectively those models can later be fine-tuned for specific tasks.
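
For readers less familiar with the terminology, the sketch below shows what “tokens” look like in practice: text is converted into integer IDs before it is ever fed into pre-training. The GPT-2 tokenizer is used purely as a familiar example, not the tokenizer from the study.

```python
# Minimal illustration of tokenization: text becomes integer token ids.
# The GPT-2 tokenizer is an arbitrary, familiar choice for demonstration.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
ids = tok.encode("Overtrained language models are harder to fine-tune.")
print(ids)              # a short list of integer token ids
print(tok.decode(ids))  # decodes back to the original sentence
```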

The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability.

One of the key findings centers on AI2’s open source OLMo-1B model.

The researchers compared two versions of this model: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens.

Despite being trained on 30% more data, the 3-trillion-token model performed worse after instruction tuning. Specifically, it scored more than 2% lower on several standard language model benchmarks than its 2.3-trillion-token counterpart, and in some evaluations the degradation reached 3%.
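
As a rough, hedged sketch of the kind of comparison involved (not the paper’s actual pipeline), one could load two intermediate OLMo-1B checkpoints from Hugging Face and put each through an identical instruction-tuning and evaluation recipe. The repository ID and revision tags below are assumptions for illustration and would need to be checked against the current allenai/OLMo releases.

```python
# Illustrative sketch only: the repo id and revision tags are hypothetical
# placeholders, not values taken from the paper or the OLMo model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "allenai/OLMo-1B-hf"                    # assumed Hugging Face repo id
REVISIONS = {
    "2.3T tokens": "checkpoint-2300B-tokens",  # hypothetical revision tag
    "3.0T tokens": "checkpoint-3000B-tokens",  # hypothetical revision tag
}

checkpoints = {}
for label, rev in REVISIONS.items():
    tok = AutoTokenizer.from_pretrained(BASE, revision=rev)
    model = AutoModelForCausalLM.from_pretrained(BASE, revision=rev)
    checkpoints[label] = (tok, model)

# Each checkpoint would then be instruction-tuned with the same recipe and
# evaluated on the same benchmarks, so any gap reflects pre-training length.
```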

The researchers argue that this decline is not an anomaly but rather a consistent phenomenon they term “Catastrophic Overtraining.”

Understanding sensitivity and forgetting

The paper attributes this degradation to a systematic increase in what they call “progressive sensitivity.” As models undergo extended pre-training, their parameters become more sensitive to changes.

This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations.

The researchers provide evidence that, beyond a certain point in pre-training, any modification—whether structured like fine-tuning or unstructured like adding Gaussian noise—leads to a greater loss of previously learned capabilities.

This sensitivity results in “forgetting,” where the model’s original strengths deteriorate as new training data is introduced.
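
A minimal sketch of the unstructured-perturbation test described above is shown here, assuming a Hugging Face causal language model and a single held-out sentence; the model ID and noise scale are placeholders rather than values from the paper.

```python
# Sketch: measure how much a fixed Gaussian weight perturbation hurts
# language-modeling loss. The model id and noise scale are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMo-1B-hf"   # assumed; any causal LM checkpoint works
NOISE_STD = 1e-3                  # placeholder perturbation scale

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

batch = tok("The quick brown fox jumps over the lazy dog.", return_tensors="pt")

def lm_loss(m):
    with torch.no_grad():
        return m(**batch, labels=batch["input_ids"]).loss.item()

loss_before = lm_loss(model)

# Add i.i.d. Gaussian noise to every parameter (the "unstructured" case).
with torch.no_grad():
    for p in model.parameters():
        p.add_(torch.randn_like(p) * NOISE_STD)

loss_after = lm_loss(model)
print(f"loss before: {loss_before:.4f}  after noise: {loss_after:.4f}")
# Under progressive sensitivity, the same NOISE_STD should produce a larger
# loss increase for checkpoints that were pre-trained on more tokens.
```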

The study identifies an “inflection point” in pre-training, after which additional training leads to diminishing and even negative returns regarding fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

A wealth of evidence

The team’s analysis spans real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets such as Anthropic-HH and TULU, and multimodal fine-tuning using the LLaVA framework.

The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning.

Furthermore, the researchers constructed a theoretical model using linear networks to understand better why overtraining leads to increased sensitivity.

Their analysis confirmed that progressive sensitivity and catastrophic overtraining are mathematically inevitable when pre-training continues indefinitely without proper constraints.
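
The paper’s formal analysis is not reproduced here, but the setup can be imitated with a toy experiment on a simple linear model: pre-train on one task for varying numbers of steps, fine-tune briefly on a second task, and record how much performance on the first task degrades. All dimensions, learning rates and step counts below are arbitrary illustrative choices, not the paper’s construction.

```python
# Toy sketch (not the paper's construction): does longer "pre-training" on
# task A lead to larger degradation on A after a brief "fine-tune" on task B?
import torch

torch.manual_seed(0)
d, n = 32, 256
X = torch.randn(n, d)
wA, wB = torch.randn(d), torch.randn(d)   # two different linear "tasks"
yA, yB = X @ wA, X @ wB

def mse(w, y):
    return ((X @ w - y) ** 2).mean()

def sgd(w0, y, steps, lr=1e-2):
    w = w0.clone().requires_grad_(True)
    for _ in range(steps):
        loss = mse(w, y)
        loss.backward()
        with torch.no_grad():
            w -= lr * w.grad
            w.grad.zero_()
    return w.detach()

finetune_steps = 20                                  # fixed fine-tuning budget
for pretrain_steps in (50, 500, 5000):
    w_pre = sgd(torch.zeros(d), yA, pretrain_steps)  # "pre-training" on A
    w_ft = sgd(w_pre, yB, finetune_steps)            # brief "fine-tuning" on B
    print(f"pre-train {pretrain_steps:5d} steps | "
          f"task-A loss before FT: {mse(w_pre, yA).item():.4f} | "
          f"after FT: {mse(w_ft, yA).item():.4f}")
```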

The ultimate takeaway? Model providers and trainers must make trade-offs

The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: while longer pre-training improves the base model’s capabilities, it also increases the risk that fine-tuning will degrade those capabilities.

In practice, attempts to mitigate this effect—such as adjusting fine-tuning learning rates or adding regularization—may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance.
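
As a hedged sketch of what such mitigations look like in practice (generic fine-tuning knobs, not a recipe from the paper), one might lower the fine-tuning learning rate and add regularization that discourages drift away from the pre-trained weights:

```python
# Generic mitigation knobs for fine-tuning: a small learning rate, weight
# decay, and an optional penalty on drift from the pre-trained parameters.
# The model id is a placeholder; any causal LM checkpoint could be used.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-hf")  # assumed id

# Conservative optimizer settings for fine-tuning.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-6, weight_decay=0.01)

# Snapshot the pre-trained weights so drift can be penalized during training.
pretrained = {name: p.detach().clone() for name, p in model.named_parameters()}

def drift_penalty(model, strength=1e-4):
    # L2 penalty toward the pre-trained weights, added to the task loss
    # inside the training loop.
    return strength * sum(
        ((p - pretrained[name]) ** 2).sum()
        for name, p in model.named_parameters()
    )
```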

Thus, for enterprises looking to fine-tune an open-source LLM to improve business workflows and outcomes, the lesson from this research is that a smaller model pre-trained on less data is likely to yield a more reliable production model after fine-tuning.

The authors acknowledge that further research is needed to understand the factors influencing when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can impact the severity of the phenomenon.

Implications for future LLM and AI model development

The study has significant implications for how organizations and researchers design and train large language models. As the field continues to pursue larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability.

Additionally, the findings may influence how model developers think about resource allocation. Rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining.
