Advanced AI News
AI Assistants

AI framework tackles LLM agent instability

By Advanced AI Bot | April 24, 2025 | 5 min read


Researchers have introduced RAGEN, an AI framework designed to counter LLM agent instability when handling complex situations.

Training these AI agents presents significant hurdles, particularly when decisions span multiple steps and involve unpredictable feedback from the environment. While reinforcement learning (RL) has shown promise in static tasks like solving maths problems or generating code, its application to dynamic, multi-turn agent training has been less explored.   

Addressing this gap, a collaborative team from institutions including Northwestern University, Stanford University, Microsoft, and New York University has proposed StarPO (State-Thinking-Actions-Reward Policy Optimisation).

StarPO offers a generalised approach for training agents at the trajectory level: it optimises the entire sequence of interactions, not just individual actions.

Accompanying this is RAGEN, a modular system built to implement StarPO. This enables the training and evaluation of LLM agents, particularly focusing on their reasoning capabilities under RL. RAGEN provides the necessary infrastructure for rollouts, reward assignment, and optimisation within multi-turn, stochastic (randomly determined) environments.
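As a rough sketch of what trajectory-level training operates on (this is illustrative, not RAGEN's actual API; `Trajectory`, `env`, and `agent` are assumed names), a multi-turn rollout loop might look like:

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    """One full multi-turn interaction; the whole sequence, not each
    individual action, is what StarPO-style training optimises."""
    states: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    rewards: list = field(default_factory=list)

    @property
    def total_reward(self) -> float:
        return sum(self.rewards)

def rollout(env, agent, max_turns: int = 10) -> Trajectory:
    """Collect a trajectory by letting the agent act until the episode ends."""
    traj = Trajectory()
    state = env.reset()
    for _ in range(max_turns):
        action = agent.act(state)            # may include a reasoning trace
        state, reward, done = env.step(action)
        traj.states.append(state)
        traj.actions.append(action)
        traj.rewards.append(reward)
        if done:
            break
    return traj

class ToyEnv:
    """Trivial three-turn environment, used only to exercise the loop."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3      # (next state, reward, done)

class ToyAgent:
    def act(self, state):
        return "noop"
```

The key point the sketch captures is that reward assignment and optimisation happen over the returned trajectory as a whole, which is what distinguishes this setup from single-step RL.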

Minimalist environments, maximum insight

To isolate the core learning challenges from confounding factors like extensive pre-existing knowledge or task-specific engineering, the researchers tested LLMs using RAGEN in three deliberately minimalistic, controllable symbolic gaming environments:   

Bandit: A single-turn, stochastic task testing risk-sensitive symbolic reasoning. The agent chooses between options (like ‘Phoenix’ or ‘Dragon’ arms) with different, initially unknown, reward profiles.

Sokoban: A multi-turn, deterministic puzzle requiring foresight and planning, as actions (pushing boxes) are irreversible.

Frozen Lake: A multi-turn, stochastic grid navigation task where movement attempts can randomly fail, demanding planning under uncertainty.

These environments allow for clear analysis of how agents learn decision-making policies purely through interaction.   
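The Bandit task, for instance, can be expressed in a few lines. The arm names come from the paper's description; the reward distributions here are illustrative assumptions, not the actual task parameters:

```python
import random

class BanditEnv:
    """Single-turn, stochastic task: two symbolic arms with initially
    unknown reward profiles, testing risk-sensitive reasoning."""
    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        # Hypothetical profiles: 'Dragon' is safe, 'Phoenix' is high-variance.
        self.arms = {
            "Dragon":  lambda: self.rng.gauss(1.0, 0.1),
            "Phoenix": lambda: self.rng.gauss(1.2, 1.0),
        }

    def step(self, arm):
        """One pull, one stochastic reward, then the episode ends."""
        return self.arms[arm](), True   # (reward, done)
```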

Key findings: Stability, rollouts, and reasoning

The study yielded three significant findings concerning the training of self-evolving LLM agents:

The ‘Echo Trap’ and the need for stability

A recurring problem observed during multi-turn RL training was dubbed the “Echo Trap”. Agents would initially improve but then suffer performance collapse, overfitting to locally rewarded reasoning patterns. 

This was marked by collapsing reward variance, falling entropy (a measure of randomness/exploration), and sudden spikes in gradients (indicating training instability). Early signs included drops in reward standard deviation and output entropy.   
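Those early-warning signals are straightforward to monitor during training. A minimal sketch (the threshold values are illustrative assumptions, not figures from the paper):

```python
import math

def reward_std(rewards):
    """Standard deviation of a batch of trajectory rewards."""
    mean = sum(rewards) / len(rewards)
    return math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))

def entropy(probs):
    """Shannon entropy (nats) of the policy's action distribution;
    falling entropy signals collapsing exploration."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def collapse_warning(rewards, probs, std_floor=0.05, entropy_floor=0.1):
    """Flag a batch whose reward variance and exploration have both collapsed."""
    return reward_std(rewards) < std_floor and entropy(probs) < entropy_floor
```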

To combat this, the team developed StarPO-S, a stabilised version of the framework. StarPO-S incorporates:   

Variance-based trajectory filtering: Focusing training on task instances where the agent’s behaviour shows higher uncertainty (higher reward variance), discarding low-variance, less informative rollouts. This improved stability and efficiency.   

Critic incorporation: Using methods like PPO (Proximal Policy Optimisation), which employ a ‘critic’ to estimate value, generally showed better stability than critic-free methods like GRPO (Group Relative Policy Optimisation) in most tests.   

Decoupled clipping and KL removal: Techniques adapted from other research (DAPO) involving asymmetric clipping (allowing more aggressive learning from positive rewards) and removing KL divergence penalties (encouraging exploration) further boosted stability and performance.   
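A per-token sketch of the asymmetric clipping idea, with a wider upper bound so positively-advantaged tokens can pull the policy further, and no KL term at all. The epsilon values are illustrative, not the paper's exact settings:

```python
def clipped_objective(ratio, advantage, eps_low=0.2, eps_high=0.28):
    """PPO-style surrogate with a decoupled (asymmetric) clip range
    and no KL divergence penalty."""
    # Clip the probability ratio asymmetrically: tighter below, looser above.
    clipped = min(max(ratio, 1.0 - eps_low), 1.0 + eps_high)
    # Standard pessimistic minimum over unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)
```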

StarPO-S consistently delayed collapse and improved final task performance compared to vanilla StarPO.   
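The variance-based filtering described above amounts to ranking task prompts by the spread of their rollout rewards and keeping only the most uncertain ones. A minimal sketch, assuming a `keep_frac` cutoff that is an illustrative choice rather than the paper's value:

```python
from statistics import pstdev

def filter_by_variance(groups, keep_frac=0.25):
    """`groups` maps a task prompt to the rewards of its sampled rollouts.
    Retain the top `keep_frac` fraction of prompts by reward std-dev,
    discarding low-variance, less informative ones."""
    ranked = sorted(groups, key=lambda p: pstdev(groups[p]), reverse=True)
    n_keep = max(1, int(len(ranked) * keep_frac))
    return ranked[:n_keep]
```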

Rollout quality is crucial

The characteristics of the ‘rollouts’ (simulated interaction trajectories used for training) significantly impact learning. Key factors identified include:   

Task diversity: Training on a diverse set of initial states (prompts), with multiple responses generated per prompt, aids generalisation. The sweet spot appeared to be moderate diversity, enabling contrast between different outcomes in similar scenarios.   

Interaction granularity: Allowing multiple actions per turn (around 5-6 proved optimal) enables better planning within a fixed turn limit, without introducing the noise associated with excessively long action sequences.   

Rollout frequency: Using fresh, up-to-date rollouts that reflect the agent’s current policy is vital. More frequent sampling (approaching an ‘online’ setting) leads to faster convergence and better generalisation by reducing policy-data mismatch.

Maintaining freshness, alongside appropriate action budgets and task diversity, is key for stable training.   
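The rollout recipe above can be captured as a small configuration object; the defaults follow the reported sweet spots (multiple actions per turn, around 5-6; fresh, near-online sampling), while the field names and the remaining values are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class RolloutConfig:
    n_prompts: int = 8             # moderate task diversity
    responses_per_prompt: int = 4  # contrast outcomes within a scenario
    actions_per_turn: int = 5      # interaction granularity (~5-6 optimal)
    resample_every: int = 1        # refresh rollouts every update (near-online)
```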

Reasoning requires careful reward design

Simply prompting models to ‘think’ doesn’t guarantee meaningful reasoning emerges, especially in multi-turn tasks. The study found:

Reasoning traces helped generalisation in the simpler, single-turn Bandit task, even when symbolic cues conflicted with rewards.   

In multi-turn tasks like Sokoban, reasoning benefits were limited, and the length of ‘thinking’ segments consistently declined during training. Agents often regressed to direct action selection or produced “hallucinated reasoning” if rewards only tracked task success, revealing a “mismatch between thoughts and environment states.”

This suggests that standard trajectory-level rewards (often sparse and outcome-based) are insufficient. 

“Without fine-grained, reasoning-aware reward signals, agent reasoning hardly emerge[s] through multi-turn RL.”

The researchers propose that future work should explore rewards that explicitly evaluate the quality of intermediate reasoning steps, perhaps using format-based penalties or rewarding explanation quality, rather than just final outcomes.   
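One simple form such a reward could take is blending the sparse outcome reward with a format-based bonus for producing a non-empty reasoning trace. The `<think>` tag convention and the weights below are assumptions for illustration, not the paper's proposal in detail:

```python
import re

def shaped_reward(response, task_success, w_outcome=1.0, w_format=0.2):
    """Outcome reward plus a small bonus when the response contains a
    non-empty <think>...</think> segment before its final answer."""
    m = re.search(r"<think>(.+?)</think>", response, re.DOTALL)
    has_reasoning = bool(m and m.group(1).strip())
    return w_outcome * float(task_success) + w_format * float(has_reasoning)
```

Rewarding the mere presence of a trace is of course only a first step; the authors' suggestion of evaluating explanation quality would require a richer signal than this.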

RAGEN and StarPO: A step towards self-evolving AI

The RAGEN system and StarPO framework represent a step towards training LLM agents that can reason and adapt through interaction in complex, unpredictable environments.

This research highlights the unique stability challenges posed by multi-turn RL and offers concrete strategies – like StarPO-S’s filtering and stabilisation techniques – to mitigate them. It also underscores the critical role of rollout generation strategies and the need for more sophisticated reward mechanisms to cultivate genuine reasoning, rather than superficial strategies or hallucinations.

While acknowledging limitations – including the need to test on larger models and optimise for domains without easily verifiable rewards – the work opens “a scalable and principled path for building AI systems” in areas demanding complex interaction and verifiable outcomes, such as theorem proving, software engineering, and scientific discovery.

(Image by Gerd Altmann)

See also: How does AI judge? Anthropic studies the values of Claude

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.



