Advanced AI News
Andrej Karpathy

Karpathy Proposes New AI Training Method Inspired by Claude’s 17000-Word System Prompt

By Advanced AI Editor · May 12, 2025 · 6 min read


Andrej Karpathy, a leading voice in AI development and former director of AI at Tesla, recently sparked debate with a deceptively simple idea: maybe we’ve been missing an entire paradigm in how large language models (LLMs) learn. His proposal, “System Prompt Learning,” doesn’t involve more data or deeper networks—but rather, a smarter way to guide models using editable instructions that resemble human memory and reasoning.

Andrej Karpathy presenting on stage, known for his work at Tesla and OpenAI.

In a world where AI investment hinges on breakthroughs that push beyond brute-force pretraining and expensive fine-tuning, this idea—drawn from the mechanics behind Claude’s 17,000-word system prompt—raises critical questions about how we scale AI more efficiently and responsibly.

Pretraining, Fine-Tuning… and Then What?

The current AI training stack is dominated by two heavyweight strategies:

Pretraining: LLMs ingest massive amounts of text to develop a general understanding of language and the world.
Fine-tuning: Specific behaviors are reinforced through supervised examples or reinforcement learning from human feedback (RLHF).

Reinforcement Learning from Human Feedback (RLHF) is a multi-stage process used to train AI models, particularly large language models, to better align with human preferences. It involves using human feedback, often by ranking different model outputs, to create a reward model that subsequently guides the AI’s learning through reinforcement learning.
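The reward-modeling step at the heart of RLHF can be illustrated with the pairwise (Bradley-Terry) loss commonly used to train reward models from human rankings. The snippet below is a minimal plain-Python sketch of that loss, not any particular lab's implementation:

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Pairwise (Bradley-Terry) loss for reward-model training:
    the loss is small when the model scores the human-preferred
    output higher than the rejected one, and large otherwise."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A reward model that ranks the preferred answer higher incurs low loss...
good = preference_loss(reward_chosen=2.0, reward_rejected=-1.0)
# ...while a misordered pair is penalized heavily.
bad = preference_loss(reward_chosen=-1.0, reward_rejected=2.0)
```

In full RLHF pipelines this loss is minimized over many ranked pairs to fit a reward model, which then guides policy optimization; the sketch only shows the per-pair objective.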

Both approaches alter the model’s internal parameters. But Karpathy points out a human learning trait that these methods overlook: we often don’t “rewire” our brains when learning. We take notes. We leave ourselves explicit reminders. We adapt by changing our internal instructions, not our core wiring.

System Prompt Learning borrows from this principle. Instead of editing weights with gradients, it suggests editing the model’s system prompt—a persistent set of instructions that shape its behavior across tasks. In this framework, LLMs could, in theory, write, refine, and update their own problem-solving strategies—like keeping a personal notebook.
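As a concrete illustration, here is a minimal sketch of what such a self-edited "notebook" could look like. All class and method names are hypothetical, and the actual LLM call is omitted; the point is only that learned strategies live in editable text rather than in weights:

```python
# Hypothetical sketch of "System Prompt Learning": the agent records
# explicit "when X happens, try Y" rules and folds them into its own
# system prompt, instead of updating model weights.

class PromptLearningAgent:
    def __init__(self, base_prompt: str):
        self.base_prompt = base_prompt
        self.lessons: list[str] = []

    def record_lesson(self, trigger: str, strategy: str) -> None:
        """Store an explicit rule -- the model's 'note to self'."""
        self.lessons.append(f"When {trigger}, {strategy}.")

    def build_prompt(self) -> str:
        """Effective system prompt = base instructions + learned rules."""
        notebook = "\n".join(f"- {lesson}" for lesson in self.lessons)
        return f"{self.base_prompt}\n\nLearned strategies:\n{notebook}"

agent = PromptLearningAgent("You are a careful assistant.")
agent.record_lesson(
    "asked to count letters",
    "spell the word out one character per line first",
)
```

Updating behavior here means editing strings, which is why the approach is attractive compared to gradient updates: the "learning" is inspectable and reversible.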

Claude’s 17,000-Word Manual: The Spark Behind the Shift

Karpathy’s proposal wasn’t purely theoretical. It was triggered by a real-world example: Anthropic’s Claude model, whose system prompt spans nearly 17,000 words. This mega-prompt encodes everything from moral boundaries (e.g., avoiding copyrighted song lyrics) to detailed strategies for answering questions (e.g., how to count the letters in a word like strawberry). The full Claude system prompt has been published online.

Table 1: Claude’s System Prompt Characteristics and Components

Characteristics:

Size: ~16,739 words (110 KB)
Token length: reportedly around 24,000 tokens
Comparison: much larger than OpenAI’s o4-mini system prompt (2,218 words, 15.1 KB)

Key components:

Current information: provides the date and contextual information at conversation start
Behavioral guidelines: instructions for response formatting and interaction style
Role definition: establishes Claude’s identity and operational parameters
Tool definitions: the largest component; instructions for tool usage from MCP servers
Safety parameters: guidance for handling potentially harmful requests
Technical instructions: guidelines for counting words/characters and formatting
Purpose: serves as “settings” for how the LLM interacts with users
Development: periodically updated based on user feedback and design improvements

Rather than hardcoding knowledge into weights—which can be inefficient, inflexible, and costly—Anthropic appears to be using the system prompt as a dynamic instruction set. According to Karpathy, this resembles how humans adjust: by explicitly stating “when X happens, try Y approach.”

This shift reframes system prompts from static behavior guides to living documents—a place where LLMs could store generalized strategies and revise them over time. In effect, it’s a proposal to make AI not just smarter, but more teachable.

Why This Matters for Investors and Builders

The appeal of System Prompt Learning isn’t just academic. It speaks directly to key pain points in current AI deployment:

1. Lower Operational Costs

Fine-tuning a model—especially with RLHF—is expensive and slow. Updating a system prompt, however, is nearly free and instantaneous. If core behaviors can be changed by updating instructions instead of retraining weights, deployment becomes faster and cheaper.

AI Model Update Methods: Fine-tuning/RLHF vs. System Prompt Editing

Fine-tuning / RLHF: high cost and effort (requires compute, data, and ML expertise); long time to implement (days to weeks); updates model weights for task or domain accuracy, but is less flexible after training.
Prompt editing: low cost and effort (mostly prompt design and testing); short time to implement (hours to days); adjusts behavior via instructions, so changes are fast, flexible, and require no retraining.
General notes: costs depend on model size, token counts, and infrastructure, and maintenance is ongoing in both cases. The right choice depends on goals, resources, and required performance, and the two methods can be combined.
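The “nearly free and instantaneous” side of the comparison can be made concrete: if the system prompt lives in ordinary configuration rather than in weights, a behavior change ships by editing a file. A minimal sketch, assuming a file-based prompt store (the file path and versioned contents are illustrative):

```python
# Sketch: the system prompt as hot-swappable config. A behavior change
# is a text edit, with no retraining and no redeploy of the model.
import tempfile
from pathlib import Path

prompt_file = Path(tempfile.gettempdir()) / "system_prompt.txt"
prompt_file.write_text("v1: You are a helpful assistant.")

def load_system_prompt() -> str:
    # Re-read on every request so edits take effect immediately.
    return prompt_file.read_text()

assert load_system_prompt().startswith("v1")

# "Deploying" a behavior change is just rewriting the file.
prompt_file.write_text("v2: You are a helpful assistant. Cite sources.")
```

Real deployments would add versioning, review, and rollback around the prompt store, but the cost asymmetry against retraining remains.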
2. More Agile AI Products

Startups building domain-specific agents (legal bots, medical assistants, customer service tools) need quick iteration. System prompts allow rapid changes without retraining the model, increasing adaptability in production environments.

3. Data Efficiency and Feedback Loops

Traditional fine-tuning requires large datasets. System prompt learning offers a higher-dimensional feedback channel. Instead of optimizing for a scalar reward, it invites richer, textual feedback—closer to how humans give instructions.

What the Experts Are Saying

The idea has drawn mixed reactions across AI circles:

Proponents liken system prompts to a Written Torah—defining base instructions—while new cases adapt and expand through interactive learning, similar to an Oral Torah.
Critics worry about scaling and complexity. As prompts grow, they risk becoming brittle, inconsistent, or contradictory. This could undermine reliability in high-stakes applications.
Some advocate for a hybrid approach: periodic “distillation” of system prompt knowledge into weights, allowing AI to move from explicit to habitual knowledge over time—just as humans do.
Others experiment with memory hierarchies, where models index problem-solving examples and pull them into the prompt context only when needed—combining this with Retrieval-Augmented Generation (RAG) and planning tools.

Retrieval-Augmented Generation (RAG) is an AI architecture designed to improve the answers generated by Large Language Models (LLMs). It works by first retrieving relevant information from external knowledge sources and then feeding this context to the LLM to produce more accurate, relevant, and up-to-date responses.
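A toy version of the memory-hierarchy idea can be sketched as follows, using simple token overlap as a stand-in for real embedding-based retrieval (the scoring function and the example notes are purely illustrative):

```python
# Sketch: index strategy notes and pull only the relevant ones into the
# prompt context, RAG-style. Token overlap substitutes for embeddings.

def score(query: str, note: str) -> int:
    """Count shared words between query and note (crude relevance proxy)."""
    return len(set(query.lower().split()) & set(note.lower().split()))

def retrieve(query: str, notes: list[str], k: int = 1) -> list[str]:
    """Return the k notes most relevant to the query."""
    return sorted(notes, key=lambda n: score(query, n), reverse=True)[:k]

notes = [
    "When counting letters in a word, spell it out character by character.",
    "When asked for song lyrics, decline and summarize the theme instead.",
]
context = retrieve("how many letters are in strawberry", notes)
prompt = "Relevant strategies:\n" + "\n".join(context)
```

The payoff is that the system prompt stays small: strategies are stored outside the prompt and injected only when a task calls for them.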

Despite its promise, some see system prompt learning not as a paradigm shift, but as an incremental evolution. Still, when companies like Anthropic, OpenAI, and Google differ drastically in their system prompt sizes (Claude’s 16,739 words vs. OpenAI’s ~2,218), it’s clear the prompt is becoming a new frontier.

Where This Could Go Next

If LLMs could autonomously write and update their own system prompts—documenting lessons learned, strategies tested, and tasks refined—we may witness the birth of a new AI training architecture:

Self-refining agents that evolve in production by revising their own manuals
Task-specialized models that don’t require extensive retraining for new domains
Semi-automated distillation, where prompt-based knowledge is selectively moved into long-term weights, improving performance without loss of flexibility
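The semi-automated distillation step could start as something as simple as exporting notebook entries into supervised fine-tuning records. The JSONL-style layout below is an assumption modeled on common chat fine-tuning formats, not any specific vendor’s schema:

```python
# Sketch: convert prompt-based strategies into fine-tuning records so the
# explicit knowledge can later be baked into weights. The record layout
# is an assumption based on typical chat fine-tuning JSONL formats.
import json

lessons = [
    ("How many r's are in strawberry?",
     "Spell it out: s-t-r-a-w-b-e-r-r-y. There are 3 r's."),
]

records = [
    json.dumps({
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    })
    for question, answer in lessons
]
```

Each exported record turns one explicit strategy into a training example; a periodic fine-tune over these records would move knowledge from prompt to weights, as the hybrid proposals above suggest.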

This could align well with enterprise needs: models that are interpretable, traceable, and incrementally trainable—with minimal downtime.

A Notebook for Machines

Karpathy’s idea may sound abstract, but it taps into a deep intuition: intelligence isn’t just about what we know—it’s about how we structure that knowledge for use. System Prompt Learning suggests LLMs don’t just need bigger brains—they need better notebooks.

As more AI companies explore this middle ground between pretraining and fine-tuning, expect prompt engineering to evolve into prompt architecture—a discipline of its own. Whether this becomes the next paradigm or a powerful auxiliary remains to be seen.

But one thing is clear: in the race to build smarter, cheaper, and more controllable AI, teaching models how to learn may soon matter more than what they know.


