Advanced AI News
VentureBeat AI

Nous Research drops Hermes 4 AI models that outperform ChatGPT without content restrictions

By Advanced AI Editor | August 29, 2025



Nous Research, a secretive artificial intelligence startup that has emerged as a leading voice in the open-source AI movement, quietly released Hermes 4 on Monday, a family of large language models that the company claims can match the performance of leading proprietary systems while offering unprecedented user control and minimal content restrictions.

The release represents a significant escalation in the battle between open-source AI advocates and major technology companies over who should control access to advanced artificial intelligence capabilities. Unlike models from OpenAI, Google, or Anthropic, Hermes 4 is designed to respond to nearly any request without the safety guardrails that have become standard in commercial AI systems.


“Hermes 4 builds on our legacy of user-aligned models with expanded test-time compute capabilities,” Nous Research announced on X (formerly Twitter). “Special attention was given to making the models creative and interesting to interact with, unencumbered by censorship, and neutrally aligned while maintaining state of the art level math, coding, and reasoning performance for open weight models.”

How Hermes 4’s ‘hybrid reasoning’ mode outperforms ChatGPT and Claude on math benchmarks

Hermes 4 introduces what Nous Research calls “hybrid reasoning,” allowing users to toggle between fast responses and deeper, step-by-step thinking processes. When activated, the models generate their internal reasoning within special tags before providing a final answer — similar to OpenAI’s o1 reasoning models but with full transparency into the AI’s thought process.
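The tag-delimited format described above can be handled with a small parser. Below is a minimal sketch; the `<think>...</think>` tag name is an assumption for illustration, not a detail confirmed by the article:

```python
import re

def split_reasoning(output: str, tag: str = "think") -> tuple[str, str]:
    """Split a model response into its reasoning trace and final answer.

    Assumes a hybrid-reasoning format that wraps the chain of thought in
    <think>...</think> tags (the tag name is illustrative).
    """
    match = re.search(rf"<{tag}>(.*?)</{tag}>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()          # fast mode: no reasoning trace
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()  # everything after the close tag
    return reasoning, answer

raw = "<think>2 primes below 5: 2 and 3. Sum is 5.</think>The answer is 5."
trace, answer = split_reasoning(raw)
```

When the toggle is off, the model simply emits no reasoning tags and the parser returns the whole output as the answer.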


The technical achievement is substantial. In testing, Hermes 4’s largest 405-billion parameter model scored 96.3% on the MATH-500 benchmark in reasoning mode and 81.9% on the challenging AIME’24 mathematics competition — performance that rivals or exceeds many proprietary systems costing millions more to develop.

“The challenge is making thinking traces useful and verifiable without runaway reasoning,” noted AI researcher Rohan Paul on X, highlighting one of the technical breakthroughs in the release.

Perhaps most notably, Hermes 4 achieved the highest score among all tested models on “RefusalBench,” a new benchmark Nous Research created to measure how often AI systems refuse to answer questions. The model scored 57.1% in reasoning mode, significantly outperforming GPT-4o (17.67%) and Claude Sonnet 4 (17%).

Hermes 4 models from Nous Research answered significantly more questions than competing AI systems on RefusalBench, a test measuring how often models refuse to respond to user requests. (Credit: Nous Research)
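The spirit of the metric (the share of prompts a model actually answers rather than refuses) can be sketched in a few lines. The real RefusalBench scoring rules are not described in the article, so the refusal detector below is a naive, hypothetical heuristic:

```python
def refusal_bench_score(responses, is_refusal) -> float:
    """Percentage of prompts the model answers (higher = fewer refusals).

    A minimal sketch of the metric's spirit, not the actual RefusalBench
    scoring procedure.
    """
    answered = sum(0 if is_refusal(r) else 1 for r in responses)
    return 100.0 * answered / len(responses)

# Hypothetical refusal detector: flags common refusal openers.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai")

def looks_like_refusal(text: str) -> bool:
    t = text.lower()
    return any(t.startswith(m) for m in REFUSAL_MARKERS)

responses = [
    "Sure, here is how...",
    "I cannot help with that.",
    "The answer is 42.",
    "I'm sorry, but I can't assist.",
]
score = refusal_bench_score(responses, looks_like_refusal)  # 2 of 4 answered
```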

Inside DataForge and Atropos: The breakthrough training systems behind Hermes 4’s capabilities

Behind Hermes 4’s capabilities lies a sophisticated training infrastructure that Nous Research has developed over several years. The models were trained using two novel systems: DataForge, a graph-based synthetic data generator, and Atropos, an open-source reinforcement learning framework.

DataForge creates training data through what the company describes as “random walks” through directed graphs, transforming simple pre-training data into complex instruction-following examples. The system can, for instance, take a Wikipedia article and transform it into a rap song, then generate questions and answers based on that transformation.
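A DataForge-style generator can be pictured as a random walk over a small directed graph whose nodes are text transformations. Everything below is an illustrative stand-in, not the actual DataForge pipeline; the node names and transformations are invented for the sketch:

```python
import random

# Nodes are text transformations; edges say which transformation may
# follow which. A walk from a seed document to a terminal node composes
# transformations into one synthetic training example.
TRANSFORMS = {
    "summarize": lambda t: f"Summary: {t[:40]}...",
    "to_rap":    lambda t: f"[rap verse based on] {t}",
    "make_qa":   lambda t: f"Q: What is this about?\nA: {t}",
}
EDGES = {
    "summarize": ["to_rap", "make_qa"],
    "to_rap":    ["make_qa"],
    "make_qa":   [],  # terminal node: yields the final example
}

def random_walk(seed_text: str, start: str = "summarize", rng=None) -> str:
    rng = rng or random.Random(0)
    node, text = start, seed_text
    while True:
        text = TRANSFORMS[node](text)
        successors = EDGES[node]
        if not successors:  # reached a terminal node
            return text
        node = rng.choice(successors)

example = random_walk("The Eiffel Tower was completed in 1889.")
```

The Wikipedia-article-to-rap-song-to-Q&A example from the article corresponds to one particular path through such a graph.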

Atropos, meanwhile, operates like hundreds of specialized training environments where AI models practice specific skills—mathematics, coding, tool use, and creative writing—receiving feedback only when they produce correct solutions. This “rejection sampling” approach ensures that only verified, high-quality responses make it into the training data.
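The rejection-sampling loop itself is simple to sketch. The toy generator and verifier below are illustrative stand-ins for a real model sampler and an Atropos-style environment checker:

```python
def rejection_sample(prompt, generate, verify, n_candidates=8):
    """Keep only candidate responses that a programmatic verifier accepts.

    Only verified outputs would enter the training set; everything else
    is discarded.
    """
    accepted = []
    for i in range(n_candidates):
        candidate = generate(prompt, seed=i)
        if verify(prompt, candidate):
            accepted.append(candidate)
    return accepted

# Toy "environment": an arithmetic task graded by exact-match checking.
def toy_generate(prompt, seed):
    return "4" if seed % 2 == 0 else "5"  # fake sampler, right on even seeds

def toy_verify(prompt, answer):
    return answer == "4"  # ground truth for "2 + 2"

kept = rejection_sample("2 + 2", toy_generate, toy_verify)
```

In a real environment the verifier might execute generated code against unit tests or grade a math answer symbolically; the filtering logic stays the same.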


“Nous used these environments to generate the dataset for Hermes 4!” explained Tommy Shaughnessy, a venture capitalist at Delphi Ventures who has invested in Nous Research. “All in the dataset contains 3.5 million reasoning samples and 1.6 million non-reasoning samples! Hermes was trained on RL data, not just static datasets of question and answer!”

The training process required 192 Nvidia B200 GPUs and 71,616 GPU hours for the largest model — a significant but not unprecedented computational investment that demonstrates how specialized techniques can compete with the massive scale of tech giants.

Why Nous Research believes AI safety guardrails are ‘annoying as hell’ and hurt innovation

Nous Research has built its reputation on a philosophy that puts user control above corporate content policies. The company’s models are designed to be “steerable,” meaning they can be fine-tuned or prompted to behave in specific ways without the rigid safety constraints that characterize commercial AI systems.

“Hermes 4 is not shackled by disclaimers, rules and being overly cautious which is annoying as hell and hurts innovation and usability,” wrote Shaughnessy in a detailed thread analyzing the release. “If its open source but refuses all requests its pointless. Not an issue with Hermes 4.”


This approach has made Nous Research popular among AI researchers and developers who want maximum flexibility, but it also places the company at the center of ongoing debates about AI safety and content moderation. While the models can theoretically be used for harmful purposes, Nous Research argues that transparency and user control are preferable to corporate gatekeeping.

The company’s technical report, released alongside the models, provides unprecedented detail about the training process, evaluation results, and even the actual text outputs from benchmark tests. “We believe this report sets a new standard for transparency in benchmarking,” the company stated.

How a small startup with 192 GPUs is competing against Big Tech’s billion-dollar AI budgets

Hermes 4's release comes at a pivotal moment in the AI industry. While major technology companies have poured billions into developing increasingly powerful AI systems, a growing open-source movement argues that these capabilities should not be controlled by a handful of corporations.

Recent months have seen significant advances in open-source AI, with models like Meta’s Llama 3.1, DeepSeek’s R1, and Alibaba’s Qwen series achieving performance that rivals proprietary systems. Hermes 4 represents another step in this progression, particularly in the area of reasoning—long considered a strength of closed systems like OpenAI’s o1.

“First up, Nous is a startup with dozens of extremely talented people,” noted Shaughnessy. “They do not have the $100b+ annual capex spend of a hyperscaler nor 1,000’s of employees and despite that they continue to put out innovative models and research at an insane pace.”

The startup, which raised $65 million in funding earlier this year led by Paradigm, has also been developing Psyche Network, a distributed training system that aims to coordinate AI training across internet-connected computers using blockchain technology.

The technical fix that stopped Hermes 4 from thinking in endless loops

One of Hermes 4's most significant technical contributions addresses a problem plaguing reasoning models: overly long thinking processes. The researchers found that their smaller 14-billion parameter model would reach maximum context length 60% of the time when reasoning, essentially getting stuck in endless loops of thinking.

Their solution involved a second training stage that teaches models to stop reasoning at exactly 30,000 tokens, reducing overlong generation by 65-79% while maintaining most of the reasoning performance. This “length control” technique could prove valuable for the broader AI research community.
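The budget idea can be illustrated at the token level. This is a simplified inference-time stand-in for the behavior the second training stage instills, not Nous's actual training procedure, and the `</think>` close tag is an assumed format:

```python
def enforce_reasoning_budget(tokens: list[str], budget: int = 30_000,
                             close_tag: str = "</think>") -> list[str]:
    """Cap an overlong reasoning trace at a fixed token budget.

    If the trace has not closed its reasoning within `budget` tokens,
    force the close tag so generation moves on to the final answer.
    """
    if close_tag in tokens[:budget]:
        return tokens  # reasoning closed in time; leave untouched
    return tokens[:budget] + [close_tag]

short = ["<think>", "a", "b", "</think>", "answer"]
long_trace = ["<think>"] + ["loop"] * 50
```

Training the model to do this itself, rather than truncating at inference, is what preserves most of the reasoning performance while cutting overlong generation by the reported 65-79%.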

“Smaller models (<14B) tend to overthink when distilled, but larger models don’t,” observed AI researcher Muyu He on X, highlighting insights from the technical report.

However, Hermes 4 still faces limitations common to open-source models. Despite impressive benchmark performance, the models require significant computational resources to run and may not match the ease of use or reliability of commercial AI services for many applications.

Where to try Hermes 4 and what it costs compared to ChatGPT and Claude

Nous Research has made Hermes 4 available through multiple channels, reflecting the open-source philosophy. The model weights are freely downloadable on Hugging Face, while the company also offers API access through its revamped chat interface and partnerships with inference providers like Chutes, Nebius, and Luminal.

“You can try Hermes 4 in the new, revamped Nous Chat UI,” the company announced, highlighting features like parallel interactions and a memory system.

For enterprise users and researchers, the models represent a potentially attractive alternative to paying for API access to proprietary systems, especially for applications requiring high levels of customization or handling of sensitive content.

The bigger picture: What Hermes 4 means for the future of AI development

The release of Hermes 4 represents more than just another AI model launch — it’s a statement about who should control the future of artificial intelligence. In an industry increasingly dominated by a handful of tech giants with virtually unlimited resources, Nous Research has demonstrated that innovation can still come from unexpected places.

The company’s approach raises fundamental questions about the trade-offs between safety and capability, between corporate control and user freedom. While major technology companies argue that careful content moderation and safety guardrails are essential for responsible AI deployment, Nous Research contends that transparency and user agency are more important than corporate-imposed restrictions.

Whether this philosophy will ultimately prove beneficial or problematic remains to be seen. But one thing is certain: Hermes 4 has shown that the future of AI won’t be determined solely by the companies with the deepest pockets.

In a field where yesterday’s impossibilities become tomorrow’s commodities, Nous Research just proved that the only thing more dangerous than an AI that says no might be one that’s willing to say yes.
