Researcher turns gpt-oss-20b into a non-reasoning base model

By Advanced AI Editor | August 15, 2025 | 9 min read

OpenAI’s new, powerful open weights AI large language model (LLM) family gpt-oss was released less than two weeks ago under a permissive Apache 2.0 license — the company’s first open weights model launch since GPT-2 in 2019 — but developers outside the company are already reshaping it.

One of the most striking examples comes from Jack Morris, a Cornell Tech PhD student, former Google Brain Resident, and current researcher at Meta. This week, Morris unveiled gpt-oss-20b-base, his own reworked version of OpenAI’s smaller gpt-oss-20B model, which strips out the model’s “reasoning” behavior and returns it to a pre-trained “base” version that offers faster, freer, and largely uncensored, unconstrained responses.

The model is available now on Hugging Face under a permissive MIT License, allowing it to be used for both additional research and commercial applications.

How gpt-oss-20b-base differs from OpenAI’s gpt-oss models

To understand what Morris did, it helps to know the difference between OpenAI’s release and what AI researchers call a “base model.”


Most LLMs offered by leading AI labs such as OpenAI, Anthropic, Google and even open source players like Meta, DeepSeek, and Alibaba’s Qwen team are “post-trained.”

This means they have gone through an additional phase in which they are exposed to curated examples of desired behavior.

For instruction-tuned models, that means providing many examples of instructions paired with ideal responses, so the model learns to respond more helpfully, politely, or safely to natural language requests.

The gpt-oss models OpenAI put out on August 5 were “reasoning-optimized”: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured “chain of thought” reasoning before producing a final answer.

This is a trend that goes back to OpenAI’s o1 model, released almost a year ago in September 2024, and one that numerous leading AI labs have since adopted: forcing models to think longer over multiple steps and check their own work before outputting a well-reasoned response to the user.

That makes them better suited for tasks like coding, solving math problems, or answering factual questions with explanations — but also means their responses are filtered and steered away from unsafe or undesirable content.

A base model is different. It’s the raw, pretrained version of a large language model before that reasoning-specific alignment is applied. Base models simply try to predict the next chunk of text given what’s come before, with no built-in guardrails, stylistic preferences, or refusal behaviors.

They’re prized by some researchers because they can produce more varied and less constrained output, and because studying their unaligned behavior can reveal how models store knowledge and patterns from their training data.

Morris’s goal was to “reverse” OpenAI’s alignment process and restore the smaller gpt-oss-20B to something much closer to its original pretrained state.

“We basically reversed the alignment part of LLM training, so we have something that produces natural-looking text again,” he wrote in an X thread announcing the project. “It doesn’t engage in CoT anymore. It is back to a model that just predicts the next token on generic text.”

OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only…

or is it?

turns out that underneath the surface, there is still a strong base model. so we extracted it.

introducing gpt-oss-20b-base pic.twitter.com/3xryQgLF8Z

— jack morris (@jxmnop) August 13, 2025

Rather than trying to jailbreak the model with clever prompts — which Morris said proved ineffective during his early experiments — he took a different tack after a conversation with OpenAI co-founder, former Anthropic researcher, and current Thinking Machines chief scientist John Schulman.

The key was to think of alignment reversal as a small optimization problem: if most of the model’s pretrained knowledge is still present in its weights, then only a tiny, low-rank update might be needed to nudge it back toward base model behavior.

Morris implemented that idea by applying a LoRA (low-rank adapter) update to just three layers of the model — the MLP layers at positions 7, 15, and 23 — with a rank of 16.

That meant training about 60 million parameters, or 0.3% of the model’s 21 billion total. He used around 20,000 documents from the FineWeb dataset, keeping the format as close as possible to original pretraining (“ ….” style) so the model wouldn’t learn anything new, just re-enable broad free-text generation.

Training took four days on eight NVIDIA H200 GPUs, Morris told VentureBeat via direct message on X, with a learning rate of 2e-6, a batch size of 16, and a maximum sequence length of 8,192 tokens.

Afterward, he merged the LoRA weights back into the model so users could run it as a standalone, fully finetuned artifact.
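A minimal sketch of what that setup could look like with Hugging Face transformers and peft is below. The rank, target layers, and training hyperparameters come from Morris’s description; the repo id, module names, and data handling are assumptions for illustration, not his actual code.

```python
# A minimal sketch of the LoRA setup described above, using transformers + peft.
# Rank, target layers, and hyperparameters are from the article; repo id and
# module names are assumptions, not Morris's actual code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face repo id for the aligned model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

lora_config = LoraConfig(
    r=16,                                      # rank-16 update, per the article
    target_modules=["up_proj", "down_proj"],   # assumed names of the MLP projection layers
    layers_to_transform=[7, 15, 23],           # only three MLP layers are adapted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report on the order of 0.3% of the 21B total

# Train as plain next-token prediction on ~20,000 FineWeb documents formatted to
# mirror pretraining text (learning rate 2e-6, batch size 16, sequences up to
# 8,192 tokens; Morris reports four days on eight NVIDIA H200 GPUs).
# ... training loop omitted ...

# Merge the adapter back into the base weights so the result ships as a
# standalone checkpoint, as Morris did.
merged = model.merge_and_unload()
merged.save_pretrained("gpt-oss-20b-base")
```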

Morris also had to contend with the limitations of current open tools for fine-tuning mixture-of-experts (MoE) architectures like gpt-oss.

Morris said he used Hugging Face’s framework, which in his experience crashes frequently and only supports certain training modes, so he wrote his own harness to checkpoint often and skip over data batches that risked overloading GPU memory.

Importantly, in response to questions and criticism from the AI community on X, Morris has also clarified he is not claiming to have recovered the base model “weights” — the internal settings of the artificial neurons that make up the neural network of the model and govern its behavior.

The world of AI is crazy right now cause you can just claim to have extracted the base model from GPT-OSS while effectively you’ve just trained a lora on Fineweb lol https://t.co/oAnAWpMQ26

— Niels Rogge (@NielsRogge) August 15, 2025

Rather, Morris says that his work has “recovered the base model’s *distribution* with some error,” that is, the probability patterns the model uses to generate outputs — even though the weights producing those patterns may differ.

some people are getting confused about the experiment –

we didn’t recover the base model’s *weights*. that might not even be possible.

we recovered the base model’s *distribution*, with some error. an important question is how much.

trying to figure that out right now… https://t.co/lfUG5QY4h0

— jack morris (@jxmnop) August 15, 2025
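How one might measure that gap is the open question Morris raises. One illustrative approach — hypothetical, not something Morris describes — would be to compare two models’ next-token distributions on held-out pretraining-style text, for example via average per-token KL divergence:

```python
# A hypothetical sketch (not from the article) of quantifying the distance between
# two models' next-token distributions via average per-token KL divergence.
# Since OpenAI has not released the true gpt-oss base model, in practice this
# could only be run against proxies or between checkpoints; repo ids are placeholders.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

ref_id = "reference-base-model"   # placeholder: a reference base model, if one existed
rec_id = "jxm/gpt-oss-20b-base"   # placeholder: the recovered model

tok = AutoTokenizer.from_pretrained(rec_id)
ref = AutoModelForCausalLM.from_pretrained(ref_id, torch_dtype="auto")
rec = AutoModelForCausalLM.from_pretrained(rec_id, torch_dtype="auto")

text = "held-out pretraining-style text goes here"
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    ref_logp = F.log_softmax(ref(ids).logits, dim=-1)  # [1, seq, vocab]
    rec_logp = F.log_softmax(rec(ids).logits, dim=-1)

# KL(ref || rec), averaged over token positions
kl = F.kl_div(rec_logp, ref_logp, log_target=True, reduction="none").sum(-1).mean()
print(f"mean per-token KL divergence: {kl.item():.4f}")
```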

How the new gpt-oss-20b-base model’s behavior differs from gpt-oss-20b

The resulting gpt-oss-20b-base is noticeably freer in its outputs. It no longer defaults to explaining reasoning step-by-step and will produce a wider range of responses, including instructions OpenAI’s aligned model would refuse to give — like building a weapon, listing profanity, or planning illegal activities.

In short tests, Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried, showing that some memorized material is still accessible.

Even so, some traces of alignment remain. Morris noted that if you prompt the model in an assistant-style format (“Human: … Assistant: …”), it will sometimes still act like a polite chatbot. And when run through the original gpt-oss chat template, it can still carry out reasoning tasks, albeit with some loss in quality.

For best results in free-text mode, he advises prepending prompts with the model’s special beginning-of-sequence token <|startoftext|> and avoiding chat templates entirely.
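A short usage sketch of that free-text mode, under the assumption of a placeholder repo id (check Morris’s Hugging Face page for the actual one):

```python
# Free-text generation as Morris advises: prepend the beginning-of-sequence
# token by hand and skip chat templates entirely. Repo id is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "jxm/gpt-oss-20b-base"  # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Plain next-token continuation, no chat template, BOS token prepended manually.
prompt = "<|startoftext|>The history of the printing press"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```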

Building upon OpenAI’s big gpt-oss family release

The gpt-oss family debuted to considerable attention. The two models — gpt-oss-120B and gpt-oss-20B — are text-only, multilingual, and built with a mixture-of-experts Transformer architecture. They were released under the permissive Apache 2.0 license, allowing unrestricted local use, fine-tuning, and commercial deployment.

Performance benchmarks from OpenAI showed the larger 120B model matching or exceeding the proprietary o4-mini in reasoning and tool-use tasks, with the smaller 20B competitive with o3-mini.

This was OpenAI’s first open-weight release in six years, a move widely interpreted as a response to competitive pressure from other open-weights providers, including China’s DeepSeek R1 and Qwen 3.

The company positioned gpt-oss as both a way to re-engage developers who had moved to rival open-source models and as a platform for safety research into open-weight systems.

Reaction to the initial gpt-oss was mixed

Developer reaction to OpenAI’s gpt-oss models has been decidedly mixed, ranging from enthusiastic to disappointed.

Supporters praised the permissive license, efficiency, and strong showing on STEM benchmarks.

Hugging Face CEO Clem Delangue described the release as a “meaningful addition to the open ecosystem” and urged the community to give it time to mature.

Critics argued that the models appear heavily trained on synthetic data, making them excellent at math and coding but less capable at creative writing, general world knowledge, and multilingual reasoning.

Some early testers also raised concerns about lingering safety filters and possible geopolitical bias.

Against that backdrop, Morris’s gpt-oss-20b-base stands out as a concrete example of how open-weight models can be adapted and repurposed in the wild within days of release.

Indeed, in contrast to the way OpenAI’s gpt-oss was received, most of the responses to Morris’s work that I’ve seen have been warm and enthusiastic. As one computer scientist wrote on X: “this is the coolest thing I’ve seen on Twitter [X] in the past few months.”

man this is the coolest thing i’ve seen on twitter in the past few months i love base models

— Ludan (@JMRLudan) August 15, 2025

The approach strips away much of the behavior OpenAI built in and returns the model to something closer to a raw, pretrained system — a shift that’s valuable to researchers studying memorization, bias, or the impact of alignment, but that also comes with higher safety risks.

Furthermore, Morris says his work on restoring reasoning models to pre-trained, non-reasoning base models will continue, including by comparing his extraction approach on instruction-tuned (non-reasoning) models like those offered by Qwen.
