A new technique from Zhejiang University and Alibaba Group gives large language model (LLM) agents a dynamic memory, making them more efficient and effective at complex tasks. The technique, called Memp, provides agents with a “procedural memory” that is continuously updated as they gain experience, much like how humans learn from practice.
Memp creates a lifelong learning framework where agents don’t have to start from scratch for every new task. Instead, they become progressively better and more efficient as they encounter new situations in real-world environments, a key requirement for reliable enterprise automation.
The case for procedural memory in AI agents
LLM agents hold promise for automating complex, multi-step business processes. In practice, though, these long-horizon tasks can be fragile. The researchers point out that unpredictable events like network glitches, user interface changes or shifting data schemas can derail the entire process. For current agents, this often means starting over every time, which can be time-consuming and costly.
Meanwhile, many complex tasks, despite surface differences, share deep structural commonalities. Instead of relearning these patterns every time, an agent should be able to extract and reuse its experience from past successes and failures, the researchers point out. This calls for a dedicated "procedural memory": in humans, the long-term memory responsible for skills such as typing or riding a bike, which become automatic with practice.
Current agent systems often lack this capability. Their procedural knowledge is typically hand-crafted by developers, stored in rigid prompt templates or embedded within the model’s parameters, which are expensive and slow to update. Even existing memory-augmented frameworks provide only coarse abstractions and don’t adequately address how skills should be built, indexed, corrected and eventually pruned over an agent’s lifecycle.
Consequently, the researchers note in their paper, “there is no principled way to quantify how efficiently an agent evolves its procedural repertoire or to guarantee that new experiences improve rather than erode performance.”
How Memp works
Memp is a task-agnostic framework that treats procedural memory as a core component to be optimized. It consists of three key stages that work in a continuous loop: building, retrieving, and updating memory.
Memories are built from an agent’s past experiences, or “trajectories.” The researchers explored storing these memories in two formats: as verbatim, step-by-step actions, or as higher-level, script-like abstractions distilled from those actions. For retrieval, when given a new task the agent searches its memory for the most relevant past experience. The team experimented with different methods, such as vector search that matches the new task’s description against past queries, or keyword extraction to find the best fit.
The most critical component is the update mechanism. Memp introduces several strategies to ensure the agent’s memory evolves. As an agent completes more tasks, its memory can be updated by simply adding the new experience, filtering for only successful outcomes or, most effectively, reflecting on failures to correct and revise the original memory.
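To make the loop concrete, here is a minimal Python sketch of what a build-retrieve-update cycle could look like. It is an illustration only, not the authors’ implementation: the Trajectory and MemoryStore classes, the keyword-overlap retrieval (standing in for vector search) and the revise hook are all assumptions made for the example.

```python
# Minimal sketch of a build/retrieve/update loop in the spirit of Memp.
# All names (Trajectory, MemoryStore, distill, revise) are illustrative,
# not the authors' actual API.
from dataclasses import dataclass, field


@dataclass
class Trajectory:
    task: str          # natural-language task description
    steps: list[str]   # verbatim actions, or a distilled script
    success: bool


@dataclass
class MemoryStore:
    entries: list[Trajectory] = field(default_factory=list)

    def build(self, traj: Trajectory, distill=None):
        """Store a trajectory, optionally distilled into a script-like abstraction."""
        if distill:
            traj = Trajectory(traj.task, distill(traj.steps), traj.success)
        self.entries.append(traj)

    def retrieve(self, task: str, k: int = 1) -> list[Trajectory]:
        """Keyword-overlap retrieval; a real system would use vector search."""
        query = set(task.lower().split())
        scored = sorted(
            self.entries,
            key=lambda t: len(query & set(t.task.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def update(self, traj: Trajectory, revise=None):
        """Keep successes as-is; reflect on failures and store a corrected version."""
        if traj.success:
            self.build(traj)
        elif revise:  # e.g. an LLM call that critiques and rewrites the failed steps
            self.build(revise(traj))


# Usage: seed the store with one experience, then reuse it on a similar task.
store = MemoryStore()
store.build(Trajectory("heat an apple and put it on the table",
                       ["find apple", "microwave apple", "place on table"], True))
print(store.retrieve("heat an egg and put it on the table")[0].steps)
```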

This focus on dynamic, evolving memory places Memp within a growing field of research aimed at making AI agents more reliable for long-term tasks. The work parallels other efforts, such as Mem0, which consolidates key information from long conversations into structured facts and knowledge graphs to ensure consistency. Similarly, A-MEM enables agents to autonomously create and link “memory notes” from their interactions, forming a complex knowledge structure over time.
However, co-author Runnan Fang highlights a critical distinction between Memp and other frameworks.
“Mem0 and A-MEM are excellent works… but they focus on remembering salient content within a single trajectory or conversation,” Fang commented to VentureBeat. In essence, they help an agent remember “what” happened. “Memp, by contrast, targets cross-trajectory procedural memory.” It focuses on “how-to” knowledge that can be generalized across similar tasks, preventing the agent from re-exploring from scratch each time.
“By distilling past successful workflows into reusable procedural priors, Memp raises success rates and shortens steps,” Fang added. “Crucially, we also introduce an update mechanism so that this procedural memory keeps improving — after all, practice makes perfect for agents too.”
Overcoming the ‘cold-start’ problem
While the concept of learning from past trajectories is powerful, it raises a practical question: How does an agent build its initial memory when there are no perfect examples to learn from? The researchers address this “cold-start” problem with a pragmatic approach.
Fang explained that developers can first define a robust evaluation metric instead of requiring a perfect “gold” trajectory upfront. This metric, which can be rule-based or even another LLM, scores the quality of an agent’s performance. “Once that metric is in place, we let state-of-the-art models explore within the agent workflow and retain the trajectories that achieve the highest scores,” Fang said. This process rapidly bootstraps an initial set of useful memories, allowing a new agent to get up to speed without extensive manual programming.
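A hedged sketch of that bootstrap, reusing the hypothetical MemoryStore from the earlier example: run a strong model several times per task, score each rollout with the evaluation metric, and keep only the highest-scoring trajectories. The run_agent, score_trajectory and keep_threshold names are illustrative choices, not the paper’s code.

```python
# Cold-start bootstrap: explore with a strong model, score each rollout,
# retain only the best trajectories as the agent's initial procedural memory.
def bootstrap_memory(tasks, run_agent, score_trajectory, store,
                     n_rollouts=5, keep_threshold=0.8):
    for task in tasks:
        rollouts = [run_agent(task) for _ in range(n_rollouts)]  # exploration phase
        best = max(rollouts, key=score_trajectory)               # metric: rules or an LLM judge
        if score_trajectory(best) >= keep_threshold:             # keep only high-quality runs
            store.build(best)
    return store
```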
Memp in action
To test the framework, the team implemented Memp on top of powerful LLMs like GPT-4o, Claude 3.5 Sonnet and Qwen2.5, evaluating them on complex tasks like household chores in the ALFWorld benchmark and information-seeking in TravelPlanner. The results showed that building and retrieving procedural memory allowed an agent to distill and reuse its prior experience effectively.
During testing, agents equipped with Memp not only achieved higher success rates but became much more efficient. They eliminated fruitless exploration and trial-and-error, leading to a substantial reduction in both the number of steps and the token consumption required to complete a task.

One of the most significant findings for enterprise applications is that procedural memory is transferable. In one experiment, procedural memory generated by the powerful GPT-4o was given to a much smaller model, Qwen2.5-14B. The smaller model saw a significant boost in performance, improving its success rate and reducing the steps needed to complete tasks.
According to Fang, this works because smaller models often handle simple, single-step actions well but falter when it comes to long-horizon planning and reasoning. The procedural memory from the larger model effectively fills this capability gap. This suggests that knowledge can be acquired using a state-of-the-art model, then deployed on smaller, more cost-effective models without losing the benefits of that experience.
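The transfer itself can be pictured as nothing more exotic than retrieving memories written by the larger model and prepending them to the smaller model’s prompt. The sketch below assumes the hypothetical MemoryStore from earlier and a generic call_small_model function; it is not the paper’s code.

```python
# Illustrative transfer: procedural memories built by a strong model are
# retrieved and injected into a smaller model's prompt as reusable procedures.
def solve_with_transferred_memory(task, strong_model_store, call_small_model):
    examples = strong_model_store.retrieve(task, k=2)  # memories written by the larger model
    procedure_hints = "\n\n".join(
        f"Similar task: {t.task}\nSteps:\n" + "\n".join(t.steps) for t in examples
    )
    prompt = (
        "Reuse the following procedures where they apply.\n\n"
        f"{procedure_hints}\n\nNew task: {task}\nPlan and execute step by step."
    )
    return call_small_model(prompt)  # e.g. a smaller, cheaper model
```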
Toward truly autonomous agents
By equipping agents with memory-update mechanisms, the Memp framework allows them to continuously build and refine their procedural knowledge while operating in a live environment. The researchers found this endowed the agent with a “continual, almost linear mastery of the task.”
However, the path to full autonomy requires overcoming another hurdle: Many real-world tasks, such as producing a research report, lack a simple success signal. To continuously improve, an agent needs to know if it did a good job. Fang says the future lies in using LLMs themselves as judges.
“Today we often combine powerful models with hand-crafted rules to compute completion scores,” he notes. “This works, but hand-written rules are brittle and hard to generalize.”
An LLM-as-judge could provide the nuanced, supervisory feedback needed for an agent to self-correct on complex, subjective tasks. This would make the entire learning loop more scalable and robust, marking a critical step toward building the resilient, adaptable and truly autonomous AI workers needed for sophisticated enterprise automation.
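As a rough illustration of that idea, an LLM-as-judge completion score could be as simple as prompting a model to grade the agent’s trajectory and parsing a number back. The judge_llm callable, rubric and 0-to-1 scale below are assumptions made for the example, not a published recipe.

```python
# Hedged sketch of an LLM-as-judge score for tasks without a crisp success signal.
# judge_llm is a placeholder for any chat-model call that returns text.
def llm_judge_score(task, trajectory_steps, final_output, judge_llm) -> float:
    prompt = (
        "You are grading an AI agent.\n"
        f"Task: {task}\n"
        "Actions taken:\n" + "\n".join(trajectory_steps) + "\n"
        f"Final output: {final_output}\n"
        "Rate how well the task was completed on a scale from 0 to 1. "
        "Reply with only the number."
    )
    try:
        return float(judge_llm(prompt).strip())
    except ValueError:
        return 0.0  # treat unparsable replies as failures
```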