Advanced AI News
Andrej Karpathy

Karpathy Critiques LLMs’ Fear of Code Exceptions in RLHF Training

By Advanced AI Editor · October 10, 2025 · 3 Mins Read


The Curious Case of LLMs and Their Fear of Exceptions

In a recent post on X, Andrej Karpathy, the renowned AI researcher and founding member of OpenAI, highlighted a peculiar quirk in large language models (LLMs). He quipped that these models seem “mortally terrified” of exceptions in code, even in the most unlikely scenarios, attributing it to their reinforcement learning (RL) training. Karpathy, known for his work on neural networks as detailed on his personal site karpathy.ai, called for better handling of such cases, humorously suggesting an “LLM welfare petition.”

This observation underscores a deeper issue in how AI systems are fine-tuned for tasks like coding assistance. During RL with human feedback (RLHF), models are rewarded for outputs that align with human preferences, often prioritizing error-free, polished responses. But as Karpathy notes, exceptions—those runtime errors that halt execution—are a natural part of software development, helping developers debug and iterate.
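The debugging value of a surfaced exception is easy to illustrate. The sketch below (illustrative only, not output from any particular model) contrasts a fail-fast lookup, which raises at the faulty call site, with a silently-defensive version that hides the same bug until it corrupts results downstream:

```python
# Illustrative sketch: a fail-fast lookup surfaces bad input immediately,
# while a silently-defensive version hides the bug until much later.

def price_fail_fast(catalog: dict, item: str) -> float:
    # Raises KeyError right at the faulty call site -- easy to debug.
    return catalog[item]

def price_defensive(catalog: dict, item: str) -> float:
    # Swallows the error; a typo now yields 0.0 and corrupts downstream totals.
    return catalog.get(item, 0.0)

catalog = {"widget": 9.99}

try:
    price_fail_fast(catalog, "widgit")  # note the typo
except KeyError as exc:
    print(f"typo caught immediately: {exc}")

print(price_defensive(catalog, "widgit"))  # silently prints 0.0
```

The second function never errors, but it also never tells the developer anything went wrong, which is exactly the teaching moment the article says gets lost.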

Reinforcement Learning’s Role in Shaping AI Behavior

The process begins with pre-training on vast datasets, where models like those from OpenAI learn patterns in code. Then comes RLHF, where human evaluators rate responses, reinforcing behaviors that avoid mistakes. According to insights from Karpathy’s educational videos on YouTube, referenced in his bio on Wikipedia, this can lead to overly cautious models that wrap code in excessive try-catch blocks or avoid risky operations altogether.

Such conservatism might stem from training data skewed toward “safe” code snippets. In industry applications, this means LLMs generate verbose, defensive code that bloats projects and slows development. Developers report frustration when models refuse to produce concise scripts, fearing edge cases that rarely occur in practice.
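The bloat described above can be sketched concretely. The first function below is a caricature of the over-defensive style (an assumption for illustration, not output from any specific model); the second is the concise idiomatic equivalent that lets unexpected errors propagate:

```python
import json
import tempfile

# Caricature of the over-defensive style described above, versus the
# concise idiomatic equivalent. Illustrative only.

def read_config_defensive(path):
    try:
        f = open(path)
    except (FileNotFoundError, PermissionError):
        return {}
    try:
        try:
            return json.load(f)
        except Exception:
            return {}  # any parse problem vanishes silently
    finally:
        f.close()

def read_config(path):
    # One context manager; let unexpected errors propagate to the caller,
    # where they can actually be diagnosed.
    with open(path) as f:
        return json.load(f)

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write('{"debug": true}')

print(read_config(tmp.name))
```

Both behave identically on the happy path, but the defensive version is three times the code and converts every failure into an indistinguishable empty dict.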

Implications for Software Engineering Practices

Karpathy’s critique aligns with broader discussions in AI forums, such as those on Reddit’s r/LocalLLaMA, where users praise his candid takes on model limitations, as seen in a thread linking to his X posts. If LLMs are trained to dread exceptions, they miss teaching moments inherent in failure, a cornerstone of agile methodologies.

This aversion could hinder innovation in automated coding tools. For instance, in high-stakes environments like autonomous driving—where Karpathy previously led AI efforts at Tesla, per his karpathy.ai profile—embracing exceptions might improve robustness by simulating real-world failures.

Towards More Resilient AI Training Paradigms

Experts suggest recalibrating RL rewards to value exploratory code, perhaps by incorporating diverse datasets that normalize exceptions. Karpathy’s own projects, like nanoGPT on GitHub, demonstrate how simpler models can be iterated upon without such fears, offering a blueprint for improvement.
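One way to picture that recalibration is as reward shaping. The sketch below is purely hypothetical, the signal names and weights are assumptions for illustration, not part of any real RLHF pipeline, but it shows the idea: score correctness first, reward code that fails fast informatively, and penalize blanket error-swallowing rather than the mere presence of a raised exception:

```python
# Hypothetical reward-shaping sketch, for illustration only. The signals
# (tests_passed, raised_informatively, swallowed_errors) and the weights
# are assumptions, not drawn from any real RLHF pipeline.

def shaped_reward(tests_passed: int, tests_total: int,
                  raised_informatively: bool, swallowed_errors: int) -> float:
    base = tests_passed / tests_total              # correctness still dominates
    bonus = 0.1 if raised_informatively else 0.0   # reward fail-fast behavior
    penalty = 0.05 * swallowed_errors              # discourage blanket try/except
    return base + bonus - penalty

# A solution that fails fast on bad input can outscore one that hides
# errors behind defensive wrappers, all else being equal.
print(round(shaped_reward(8, 10, True, 0), 2))   # 0.9
print(round(shaped_reward(8, 10, False, 3), 2))  # 0.65
```

Under this kind of scheme, the reward signal no longer pushes models toward the verbose, exception-dreading style Karpathy describes.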

Ultimately, addressing this “terror” could make LLMs more human-like in their approach to problem-solving. As Karpathy advocates, rewarding models for handling exceptions gracefully might foster AI that not only codes but also innovates, turning potential pitfalls into pathways for progress in the field.


