How the DeepSeek-R1 AI model was taught to teach itself to reason | Explained

By Advanced AI Editor | September 17, 2025 | 6 Mins Read

The story so far: For many decades, one of the great challenges in artificial intelligence (AI) has been teaching machines to reason. Reasoning goes beyond memorising facts or completing sentences. It’s the ability to follow steps, reflect on mistakes, and adjust strategies until the right answer is found.

Humans use reasoning for everything from solving maths problems to writing computer programmes, from negotiating their daily lives to deciding whom to vote for. Large language models (LLMs) such as GPT-4 or DeepSeek-V3 have surprised scientists by showing signs of reasoning when scaled to large sizes. Another method, called chain-of-thought prompting, where the model is nudged to “think step by step”, has also boosted performance.

But both these approaches come with limits. Training models to reason usually demands human-made examples: people show an AI model how to solve problems, and the AI learns to copy the method. This is slow, expensive, and introduces human biases. It also caps the AI’s creativity because the model can’t explore problem-solving methods that humans didn’t think of.

In a paper published in Nature on September 17, the DeepSeek-AI team reported that it was able to teach its model, called simply R1, to reason by asking an ambitious question: what if we allowed the model to teach itself to reason without showing it human examples first? The team found that R1 could develop new forms of reasoning using reinforcement learning, a method of trial and error guided only by rewards for correct answers.

What is reinforcement learning?

The team’s aim was to make the model smarter at maths and coding as well as to uncover how reasoning behaviours might emerge naturally when a machine is given the proper incentives.

DeepSeek researchers began with V3 Base, a large language model similar to other state-of-the-art systems. Instead of using the usual supervised fine-tuning, where humans provide the reasoning steps, they applied ‘group relative policy optimisation’, a reinforcement learning method designed for efficiency.

In this setup, the model, called R1-Zero at first, was asked to solve mathematical and algorithmic problems. For each attempt, it had to produce two parts: a reasoning process inside `<think>…</think>` tags and a final answer inside `<answer>…</answer>` tags. The only reward came from whether the final answer was correct, judged by rule-based systems like answer keys or code compilers. No one told the model how its reasoning should look.
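To make the reward concrete, here is a minimal sketch in Python of what such a rule-based reward could look like. The `<think>`/`<answer>` tag format follows the description above, but the exact-match check against an answer key is a simplification (the real system also used code compilers for programming tasks), so treat the function as illustrative rather than DeepSeek’s actual code.

```python
import re

# Matches a completion of the form <think>…</think><answer>…</answer>,
# the two-part output format described above.
COMPLETION_PATTERN = re.compile(
    r"<think>(?P<think>.*?)</think>\s*<answer>(?P<answer>.*?)</answer>",
    re.DOTALL,
)

def rule_based_reward(completion: str, expected_answer: str) -> float:
    """Reward 1.0 only for a well-formed completion whose answer is correct."""
    match = COMPLETION_PATTERN.fullmatch(completion.strip())
    if match is None:
        return 0.0  # malformed output: missing or misordered tags
    return 1.0 if match.group("answer").strip() == expected_answer else 0.0

# A correct, well-formed attempt earns the full reward; anything else earns none.
assert rule_based_reward("<think>2 + 2 = 4.</think><answer>4</answer>", "4") == 1.0
assert rule_based_reward("The answer is 4.", "4") == 0.0
```

Note that the reward says nothing about the contents of the `<think>` section, which is exactly why the model was free to discover its own reasoning style.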

Over thousands of training steps, the model learned by trial and error. If an answer was wrong, the path that led there was discouraged; if it was right, the path was reinforced. Importantly, the researchers also tracked how the model’s thinking time, i.e. the number of tokens it used in its reasoning section, changed. Strikingly, the model began writing longer and more reflective reasoning chains on its own, sometimes including phrases like “wait” or “let’s try again”, revealing an ability to self-correct.
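The “group relative” part of the optimisation captures this reinforce-or-discourage dynamic: for each problem, the model samples a group of attempts, and each attempt is scored relative to the group’s average reward, so better-than-average attempts are reinforced and worse-than-average ones discouraged. A minimal sketch of that normalisation, assuming scalar rewards like those from the function above:

```python
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    # Attempts scoring above the group mean get positive advantages
    # (their reasoning paths are reinforced); below-mean attempts get
    # negative advantages (their paths are discouraged).
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against a zero spread
    return [(r - mean) / std for r in rewards]

# Four sampled attempts at one problem, of which only the last two were correct:
print(group_relative_advantages([0.0, 0.0, 1.0, 1.0]))
# -> [-1.0, -1.0, 1.0, 1.0]
```

Because advantages are computed within the group itself, no separate learned “critic” model is needed, which is part of what makes the method efficient.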

Was there human intervention?

To address weaknesses such as poor readability and mixing English with Chinese, the team built R1 from R1-Zero. This process included adding incentives for consistently using one language and supervised fine-tuning with both reasoning and non-reasoning data. The final model thus inherited the raw reasoning power of R1-Zero while also becoming easier to use and safer.

The results were striking. On the American Invitational Mathematics Examination (AIME) 2024, a tough competition usually attempted by the strongest high-school students, R1-Zero’s accuracy jumped from just 15.6% at the start of training to 77.9% by the end. With more tuning, it reached 86.7%, surpassing the average performance of human participants.

At a certain stage, R1-Zero began using the word “wait” more often in its reasoning, just as a human might when spotting a mistake. The researchers said this meant the model wasn’t blindly following a path but actively rethinking steps when something seemed off. In effect, reinforcement learning had coaxed the AI into behaviours that resembled reflection and verification, both elements of reasoning.

The final R1 model was even stronger: it performed well not only at maths and coding but also on benchmarks for general knowledge, question answering, and instruction following. Compared to its predecessors, R1 was also more consistent in its choice of language and better aligned with human preferences for helpfulness and safety. When evaluated with frameworks like AlpacaEval 2.0 and Arena-Hard, which test how well a model follows instructions, R1 improved by 25% and 17%, respectively, both considered large gains.

What are the pros and cons of reasoning?

Many large language models, including widely used systems like ChatGPT, demand large amounts of computational resources at test time, i.e. when they are actually answering questions. R1, on the other hand, could adapt how much it “thought” depending on the task’s difficulty. Simple problems were met with short reasoning chains while harder ones led to longer, more elaborate chains. This dynamic allocation avoided expending computing power on questions that didn’t warrant it. However, reinforcement learning itself is energy-intensive.
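Since “thinking time” here is just the token count inside the reasoning section, the adaptive behaviour is easy to measure. A small illustrative sketch, reusing the `COMPLETION_PATTERN` from the earlier example and crude whitespace splitting in place of the model’s real tokeniser:

```python
def thinking_length(completion: str) -> int:
    """Count (whitespace-delimited) tokens spent inside <think>…</think>."""
    match = COMPLETION_PATTERN.fullmatch(completion.strip())
    return 0 if match is None else len(match.group("think").split())

easy = "<think>3 x 3 = 9.</think><answer>9</answer>"
hard = ("<think>Try x = 2... wait, that breaks the second equation. "
        "Let's try again: x = 5 satisfies both.</think><answer>5</answer>")
assert thinking_length(easy) < thinking_length(hard)
```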

Taken together, the findings confirm that reinforcement learning alone, with the right design, can produce reasoning behaviours that were previously thought to require human examples. This could change the way we think about how intelligence might grow in artificial systems. For instance, in the future, researchers could build verifiers that check answers and let the model figure out its own strategies. If the answer to a maths problem, a computer programme, or a factual question can be reliably checked, then reinforcement learning can do the rest. This could speed up progress while reducing human labour and bias.
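Put together, that verifier-driven recipe fits in a few lines. The sketch below is illustrative only: `model.sample` and `model.reinforce` are hypothetical stand-ins for the real sampling and policy-update machinery, and `verify` can be any reliable checker (an answer key for maths, a compiler plus tests for code).

```python
def train_with_verifier(model, problems, verify, steps=1000, group_size=8):
    """Illustrative RL loop: no human-written reasoning traces anywhere."""
    for _ in range(steps):
        for problem in problems:
            # Sample a group of attempts, score each with the verifier,
            # then reinforce relative to the group average (as above).
            attempts = [model.sample(problem) for _ in range(group_size)]
            rewards = [1.0 if verify(problem, a) else 0.0 for a in attempts]
            advantages = group_relative_advantages(rewards)
            model.reinforce(problem, attempts, advantages)
```

The human effort moves from writing worked examples to writing the `verify` function, which is the labour-saving shift described above.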

Indeed, traditional LLM training pipelines lean heavily on large human-labelled datasets: people writing question-answer pairs, reasoning steps, preference judgments, and so on. These datasets are expensive and often assembled under exploitative labour conditions. If machines can be taught to reason using reinforcement learning alone, the demand for human-annotated data can shrink, also reducing the pressure to source cheap labour worldwide. However, the paper acknowledges that tasks without a clear ground truth still rely on human-labelled data for reward models. So human input is not eliminated; its scope merely shrinks to the areas where no reliable verifier can be built.

A model that learns to reason will also demand better reward signals for open-ended tasks like writing, which are difficult to design, as well as stronger safeguards as it becomes capable of generating dangerous or manipulative content. In fact, watching a machine develop reflective behaviour (pausing, checking, revising, etc.) raises questions about how far such systems can go. If reasoning emerges from incentives rather than instructions, could creativity or deeper forms of understanding emerge in the same way?

Time will tell — unless DeepSeek-R1 figures it out first.

Published – September 17, 2025 08:30 pm IST
