Microsoft’s BitNet shows what AI can do with just 400MB and no GPU

By Advanced AI Editor, April 27, 2025

What just happened? Microsoft has introduced BitNet b1.58 2B4T, a new type of large language model engineered for exceptional efficiency. Unlike conventional AI models that rely on 16- or 32-bit floating-point numbers to represent each weight, BitNet uses only three discrete values: -1, 0, or +1. This approach, known as ternary quantization, allows each weight to be stored in just 1.58 bits. The result is a model that dramatically reduces memory usage and can run far more easily on standard hardware, without requiring the high-end GPUs typically needed for large-scale AI.
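To make the idea concrete, here is a minimal NumPy sketch of absmean-style ternary quantization, the general scheme described in the BitNet papers; the exact thresholds and scaling used in the released model are Microsoft's own, so treat the details below as illustrative assumptions rather than the production recipe.

```python
import numpy as np

def ternarize(weights: np.ndarray):
    """Quantize a float weight matrix to {-1, 0, +1} plus one scale factor.

    Absmean-style rule (illustrative): divide by the mean absolute weight,
    then round and clip each entry to the nearest value in {-1, 0, +1}.
    """
    scale = np.mean(np.abs(weights)) + 1e-8                    # per-matrix scale
    ternary = np.clip(np.round(weights / scale), -1, 1).astype(np.int8)
    return ternary, scale                                      # dequantize as ternary * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = ternarize(w)
print(q)              # every entry is -1, 0, or +1
print(np.log2(3))     # ≈ 1.58: bits of information in a three-valued weight
```

Storing three possible states carries log2(3) ≈ 1.58 bits of information per weight, which is where the "b1.58" in the model's name comes from.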

The BitNet b1.58 2B4T model was developed by Microsoft’s General Artificial Intelligence group and contains two billion parameters – internal values that enable the model to understand and generate language. To compensate for its low-precision weights, the model was trained on a massive dataset of four trillion tokens, roughly equivalent to the contents of 33 million books. This extensive training allows BitNet to perform on par with – or in some cases, better than – other leading models of similar size, such as Meta’s Llama 3.2 1B, Google’s Gemma 3 1B, and Alibaba’s Qwen 2.5 1.5B.

In benchmark tests, BitNet b1.58 2B4T demonstrated strong performance across a variety of tasks, including grade-school math problems and questions requiring common sense reasoning. In certain evaluations, it even outperformed its competitors.

What truly sets BitNet apart is its memory efficiency. The model requires just 400MB of memory, less than a third of what comparable models typically need. As a result, it can run smoothly on standard CPUs, including Apple’s M2 chip, without relying on high-end GPUs or specialized AI hardware.
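As a rough sanity check (back-of-the-envelope arithmetic, not a figure from Microsoft), two billion ternary weights at about 1.58 bits each land close to that 400MB number:

```python
# Illustrative arithmetic: memory for the ternary weights alone.
params = 2_000_000_000                 # ~2B parameters
bits_per_weight = 1.58                 # log2(3) for a three-valued weight
print(f"{params * bits_per_weight / 8 / 1e6:.0f} MB")   # ≈ 395 MB

# The same weights stored as 16-bit floats, for comparison:
print(f"{params * 16 / 8 / 1e6:.0f} MB")                # ≈ 4000 MB
```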

This level of efficiency is made possible by a custom software framework called bitnet.cpp, which is optimized to take full advantage of the model’s ternary weights. The framework ensures fast and lightweight performance on everyday computing devices.

Standard AI libraries such as Hugging Face's Transformers cannot deliver the model's efficiency gains, which makes the custom bitnet.cpp framework essential. Available on GitHub, the framework is currently optimized for CPUs, with support for other processor types planned in future updates.

The idea of reducing model precision to save memory isn't new; researchers have long explored model compression. However, most past attempts involved converting full-precision models after training, often at the cost of accuracy. BitNet b1.58 2B4T takes a different approach: it is trained from the ground up using only three weight values (-1, 0, and +1). This allows it to avoid many of the performance losses seen in earlier methods.

This shift has significant implications. Running large AI models typically demands powerful hardware and considerable energy, factors that drive up costs and environmental impact. Because BitNet relies on extremely simple computations – mostly additions instead of multiplications – it consumes far less energy.
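A small sketch of why the arithmetic gets so cheap: with weights restricted to -1, 0, and +1, a dot product reduces to adding activations where the weight is +1, subtracting where it is -1, and skipping zeros. Real kernels such as those in bitnet.cpp use packed, vectorized representations rather than this naive Python loop, so the code below is only an illustration of the principle.

```python
import numpy as np

def ternary_matvec(W_ternary: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Matrix-vector product where every weight is -1, 0, or +1.

    Each output is a sum of activations at +1 positions minus a sum at -1
    positions; no per-weight multiplications are needed.
    """
    out = np.empty(W_ternary.shape[0], dtype=x.dtype)
    for i, row in enumerate(W_ternary):
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

W = np.random.choice([-1, 0, 1], size=(3, 8)).astype(np.int8)
x = np.random.randn(8).astype(np.float32)
print(ternary_matvec(W, x))
print(W @ x)   # ordinary multiply-accumulate gives the same result
```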

Microsoft researchers estimate it uses 85 to 96 percent less energy than comparable full-precision models. This could open the door to running advanced AI directly on personal devices, without the need for cloud-based supercomputers.

That said, BitNet b1.58 2B4T does have some limitations. It currently supports only specific hardware and requires the custom bitnet.cpp framework. Its context window – the amount of text it can process at once – is smaller than that of the most advanced models.

Researchers are still investigating why the model performs so well with such a simplified architecture. Future work aims to expand its capabilities, including support for more languages and longer text inputs.