
Chinese researchers unveil MemOS, the first ‘memory operating system’ that gives AI human-like recall

By Advanced AI Editor | July 9, 2025 | 9 min read | Source: VentureBeat AI



A team of researchers from leading institutions including Shanghai Jiao Tong University and Zhejiang University has developed what they’re calling the first “memory operating system” for artificial intelligence, addressing a fundamental limitation that has hindered AI systems from achieving human-like persistent memory and learning.

The system, called MemOS, treats memory as a core computational resource that can be scheduled, shared, and evolved over time — much like how traditional operating systems manage CPU and storage resources. The research, published July 4th on arXiv, demonstrates significant performance improvements over existing approaches, including a 159% boost in temporal reasoning tasks compared to OpenAI’s memory systems.

“Large Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI), yet their lack of well-defined memory management systems hinders the development of long-context reasoning, continual personalization, and knowledge consistency,” the researchers write in their paper.

AI systems struggle with persistent memory across conversations

Current AI systems face what researchers call the “memory silo” problem — a fundamental architectural limitation that prevents them from maintaining coherent, long-term relationships with users. Each conversation or session essentially starts from scratch, with models unable to retain preferences, accumulated knowledge, or behavioral patterns across interactions. This creates a frustrating user experience where an AI assistant might forget a user’s dietary restrictions mentioned in one conversation when asked about restaurant recommendations in the next.

While some solutions like Retrieval-Augmented Generation (RAG) attempt to address this by pulling in external information during conversations, the researchers argue these remain “stateless workarounds without lifecycle control.” The problem runs deeper than simple information retrieval — it’s about creating systems that can genuinely learn and evolve from experience, much like human memory does.

“Existing models mainly rely on static parameters and short-lived contextual states, limiting their ability to track user preferences or update knowledge over extended periods,” the team explains. This limitation becomes particularly apparent in enterprise settings, where AI systems are expected to maintain context across complex, multi-stage workflows that might span days or weeks.
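
To make the contrast concrete, here is a minimal Python sketch, not MemOS code and with all names (stateless_rag_turn, PersistentMemory) invented for illustration, showing why a retrieval-only turn is stateless while even a trivial store that survives across sessions can retain what a user said last time.

    # Illustrative contrast only -- not the MemOS implementation.
    import json
    from pathlib import Path

    def stateless_rag_turn(query, retrieve, llm):
        """Retrieval-augmented turn: context is rebuilt from scratch,
        and nothing learned here outlives the call."""
        context = retrieve(query)
        return llm(f"{context}\n\nUser: {query}")

    class PersistentMemory:
        """Toy cross-session store: facts written in one session are readable in the next."""
        def __init__(self, path="memory.json"):
            self.path = Path(path)
            self.facts = json.loads(self.path.read_text()) if self.path.exists() else []

        def remember(self, fact):
            self.facts.append(fact)
            self.path.write_text(json.dumps(self.facts))

        def recall(self, query, k=3):
            # naive keyword overlap stands in for real retrieval and lifecycle control
            overlap = lambda f: len(set(f.lower().split()) & set(query.lower().split()))
            return sorted(self.facts, key=overlap, reverse=True)[:k]

    memory = PersistentMemory()
    memory.remember("User is vegetarian and avoids peanuts")
    print(memory.recall("suggest a restaurant for dinner"))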

New system delivers dramatic improvements in AI reasoning tasks

MemOS introduces a fundamentally different approach through what the researchers call “MemCubes” — standardized memory units that can encapsulate different types of information and be composed, migrated, and evolved over time. These range from explicit text-based knowledge to parameter-level adaptations and activation states within the model, creating a unified framework for memory management that previously didn’t exist.
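
As a rough sketch of what such a unit might look like in code, the dataclass below pairs a typed payload with the lifecycle metadata a scheduler would need. The field names (cube_id, provenance, and so on) are assumptions made for illustration, not the schema the paper defines.

    # Hypothetical "MemCube"-style unit; field names are illustrative, not MemOS's schema.
    import time
    import uuid
    from dataclasses import dataclass, field
    from enum import Enum

    class MemoryType(Enum):
        PLAINTEXT = "plaintext"    # explicit text-based knowledge
        PARAMETER = "parameter"    # parameter-level adaptation (e.g. an adapter delta)
        ACTIVATION = "activation"  # cached activation/KV state

    @dataclass
    class MemCube:
        payload: bytes                      # the memory content itself
        mem_type: MemoryType                # which kind of memory this cube holds
        cube_id: str = field(default_factory=lambda: str(uuid.uuid4()))
        created_at: float = field(default_factory=time.time)
        last_accessed: float = field(default_factory=time.time)
        access_count: int = 0
        provenance: dict = field(default_factory=dict)  # origin session, author, permissions

        def touch(self):
            """Record a read so a scheduler can promote hot cubes and retire cold ones."""
            self.last_accessed = time.time()
            self.access_count += 1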

Testing on the LOCOMO benchmark, which evaluates memory-intensive reasoning tasks, MemOS consistently outperformed established baselines across all categories. The system achieved a 38.98% overall improvement compared to OpenAI’s memory implementation, with particularly strong gains in complex reasoning scenarios that require connecting information across multiple conversation turns.

“MemOS (MemOS-0630) consistently ranks first in all categories, outperforming strong baselines such as mem0, LangMem, Zep, and OpenAI-Memory, with especially large margins in challenging settings like multi-hop and temporal reasoning,” according to the research. The system also delivered substantial efficiency improvements, with up to 94% reduction in time-to-first-token latency in certain configurations through its innovative KV-cache memory injection mechanism.
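
The latency gain can be illustrated with the prefix-caching pattern from the Hugging Face transformers library: encode a standing "memory" prefix once, keep its key/value cache, and reuse it on later requests so the model does not re-process the prefix. This sketch shows the general mechanism only; it is not the MemOS injection code, and the memory text is invented.

    # General KV-cache reuse pattern (Hugging Face transformers), for illustration only.
    import copy
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    memory_prefix = "Known user facts: vegetarian, prefers concise answers.\n"

    # Encode the standing memory prefix once and keep its key/value cache around.
    prefix_inputs = tok(memory_prefix, return_tensors="pt")
    with torch.no_grad():
        prefix_cache = model(**prefix_inputs, use_cache=True).past_key_values

    # Later request: inject the cached prefix state so only the new tokens are prefilled.
    full_inputs = tok(memory_prefix + "Suggest a quick dinner idea.", return_tensors="pt")
    out = model.generate(**full_inputs,
                         past_key_values=copy.deepcopy(prefix_cache),  # generation extends the cache in place
                         max_new_tokens=30)
    print(tok.decode(out[0], skip_special_tokens=True))

The cache is copied before each reuse because generation appends to it; a production system would manage that lifecycle rather than recomputing or deep-copying per request.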

These performance gains suggest that the memory bottleneck has been a more significant limitation than previously understood. By treating memory as a first-class computational resource, MemOS appears to unlock reasoning capabilities that were previously constrained by architectural limitations.

The technology could reshape how businesses deploy artificial intelligence

The implications for enterprise AI deployment could be transformative, particularly as businesses increasingly rely on AI systems for complex, ongoing relationships with customers and employees. MemOS enables what the researchers describe as “cross-platform memory migration,” allowing AI memories to be portable across different platforms and devices, breaking down what they call “memory islands” that currently trap user context within specific applications.

Consider the current frustration many users experience when insights explored in one AI platform can’t carry over to another. A marketing team might develop detailed customer personas through conversations with ChatGPT, only to start from scratch when switching to a different AI tool for campaign planning. MemOS addresses this by creating a standardized memory format that can move between systems.
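
A hedged sketch of what such a portable format could look like: memories are exported into a small, versioned JSON envelope by one tool and read back by another. The envelope fields and the "portable-memory/0.1" tag are invented for illustration and are not the interchange format the researchers specify.

    # Illustrative export/import of a portable memory envelope (format invented here).
    import json

    def export_memories(memories, owner):
        envelope = {
            "format": "portable-memory/0.1",   # hypothetical version tag
            "owner": owner,
            "memories": [
                {
                    "kind": m.get("kind", "plaintext"),
                    "content": m["content"],
                    "source_app": m.get("source_app", "unknown"),
                }
                for m in memories
            ],
        }
        return json.dumps(envelope, indent=2)

    def import_memories(blob):
        envelope = json.loads(blob)
        if not envelope.get("format", "").startswith("portable-memory/"):
            raise ValueError("unrecognized memory envelope")
        return envelope["memories"]

    # A persona built in one assistant travels to another instead of being rebuilt:
    blob = export_memories(
        [{"kind": "plaintext",
          "content": "Persona A: budget-conscious, mobile-first shopper",
          "source_app": "chat-assistant"}],
        owner="marketing-team",
    )
    print(import_memories(blob)[0]["content"])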

The research also outlines potential for “paid memory modules,” where domain experts could package their knowledge into purchasable memory units. The researchers envision scenarios where “a medical student in clinical rotation may wish to study how to manage a rare autoimmune condition. An experienced physician can encapsulate diagnostic heuristics, questioning paths, and typical case patterns into a structured memory” that can be installed and used by other AI systems.

This marketplace model could fundamentally alter how specialized knowledge is distributed and monetized in AI systems, creating new economic opportunities for experts while democratizing access to high-quality domain knowledge. For enterprises, this could mean rapidly deploying AI systems with deep expertise in specific areas without the traditional costs and timelines associated with custom training.

Three-layer design mirrors traditional computer operating systems

The technical architecture of MemOS reflects decades of learning from traditional operating system design, adapted for the unique challenges of AI memory management. The system employs a three-layer architecture: an interface layer for API calls, an operation layer for memory scheduling and lifecycle management, and an infrastructure layer for storage and governance.
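
The separation of concerns can be sketched as three small classes, one per layer. Class and method names here are assumptions made for illustration, not the project's actual module layout.

    # Structural sketch of the three layers; names are illustrative, not MemOS modules.
    class InfrastructureLayer:
        """Storage and governance: where memory physically lives, plus access control."""
        def __init__(self):
            self._store = {}

        def put(self, key, value):
            self._store[key] = value

        def get(self, key):
            return self._store.get(key)

    class OperationLayer:
        """Scheduling and lifecycle: decides what is kept, consolidated, or evicted."""
        def __init__(self, infra):
            self.infra = infra

        def write(self, key, value):
            # lifecycle policy (TTL, deduplication, consolidation) would hook in here
            self.infra.put(key, value)

        def read(self, key):
            return self.infra.get(key)

    class InterfaceLayer:
        """API surface that applications and models call into."""
        def __init__(self, ops):
            self.ops = ops

        def remember(self, key, text):
            self.ops.write(key, text)

        def recall(self, key):
            return self.ops.read(key)

    memos_like = InterfaceLayer(OperationLayer(InfrastructureLayer()))
    memos_like.remember("diet", "vegetarian")
    print(memos_like.recall("diet"))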

The system’s MemScheduler component dynamically manages different types of memory — from temporary activation states to permanent parameter modifications — selecting optimal storage and retrieval strategies based on usage patterns and task requirements. This represents a significant departure from current approaches, which typically treat memory as either completely static (embedded in model parameters) or completely ephemeral (limited to conversation context).
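
A toy version of such a policy, under assumed thresholds and tier names (activation, parameter, plaintext) that are not taken from the paper: route each memory to a storage tier based on how recently and how often it is used.

    # Hypothetical tiering policy; thresholds and tier names are assumptions.
    import time

    def choose_tier(access_count, last_accessed, now=None):
        now = time.time() if now is None else now
        idle_seconds = now - last_accessed
        if idle_seconds < 300:        # touched within the last five minutes: keep hot
            return "activation"       # reusable activation/KV state
        if access_count >= 50:        # stable, frequently reused knowledge
            return "parameter"        # candidate for parameter-level consolidation
        return "plaintext"            # archive as retrievable text

    print(choose_tier(access_count=3,  last_accessed=time.time() - 60))     # activation
    print(choose_tier(access_count=80, last_accessed=time.time() - 3600))   # parameter
    print(choose_tier(access_count=2,  last_accessed=time.time() - 86400))  # plaintext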

“The focus shifts from how much knowledge the model learns once to whether it can transform experience into structured memory and repeatedly retrieve and reconstruct it,” the researchers note, describing their vision for what they call “Mem-training” paradigms. This architectural philosophy suggests a fundamental rethinking of how AI systems should be designed, moving away from the current paradigm of massive pre-training toward more dynamic, experience-driven learning.

The parallels to operating system development are striking. Just as early computers required programmers to manually manage memory allocation, current AI systems require developers to carefully orchestrate how information flows between different components. MemOS abstracts this complexity, potentially enabling a new generation of AI applications that can be built on top of sophisticated memory management without requiring deep technical expertise.

Researchers release code as open source to accelerate adoption

The team has released MemOS as an open-source project, with full code available on GitHub and integration support for major AI platforms including HuggingFace, OpenAI, and Ollama. This open-source strategy appears designed to accelerate adoption and encourage community development, rather than pursuing a proprietary approach that might limit widespread implementation.

“We hope MemOS helps advance AI systems from static generators to continuously evolving, memory-driven agents,” project lead Zhiyu Li commented in the GitHub repository. The system currently supports Linux platforms, with Windows and macOS support planned, suggesting the team is prioritizing enterprise and developer adoption over immediate consumer accessibility.

The open-source release strategy reflects a broader trend in AI research where foundational infrastructure improvements are shared openly to benefit the entire ecosystem. This approach has historically accelerated innovation in areas like deep learning frameworks and could have similar effects for memory management in AI systems.

Tech giants race to solve AI memory limitations

The research arrives as major AI companies grapple with the limitations of current memory approaches, highlighting just how fundamental this challenge has become for the industry. OpenAI recently introduced memory features for ChatGPT, while Anthropic, Google, and other providers have experimented with various forms of persistent context. However, these implementations have generally been limited in scope and often lack the systematic approach that MemOS provides.

The timing of this research suggests that memory management has emerged as a critical competitive battleground in AI development. Companies that can solve the memory problem effectively may gain significant advantages in user retention and satisfaction, as their AI systems will be able to build deeper, more useful relationships over time.

Industry observers have long predicted that the next major breakthrough in AI wouldn’t necessarily come from larger models or more training data, but from architectural innovations that better mimic human cognitive capabilities. Memory management represents exactly this type of fundamental advancement — one that could unlock new applications and use cases that aren’t possible with current stateless systems.

The development represents part of a broader shift in AI research toward more stateful, persistent systems that can accumulate and evolve knowledge over time — capabilities seen as essential for artificial general intelligence. For enterprise technology leaders evaluating AI implementations, MemOS could represent a significant advancement in building AI systems that maintain context and improve over time, rather than treating each interaction as isolated.

The research team indicates they plan to explore cross-model memory sharing, self-evolving memory blocks, and the development of a broader “memory marketplace” ecosystem in future work. But perhaps the most significant impact of MemOS won’t be the specific technical implementation, but rather the proof that treating memory as a first-class computational resource can unlock dramatic improvements in AI capabilities. In an industry that has largely focused on scaling model size and training data, MemOS suggests that the next breakthrough might come from better architecture rather than bigger computers.
