Memory Is Not the Bottleneck: Cost-Efficient Continual Learning via Weight Space Consolidation
Dongkyu Cho and 4 other authors
Abstract: Continual learning (CL) has traditionally emphasized minimizing exemplar memory usage, assuming that memory is the primary bottleneck. However, in modern computing environments, particularly those involving large foundation models, memory is inexpensive and abundant, while GPU time constitutes the main cost. This paper re-examines CL under a more realistic setting with sufficient exemplar memory, where the system can retain a representative portion of past data. We find that, under this regime, stability improves due to reduced forgetting, but plasticity diminishes as the model becomes biased toward prior tasks and struggles to adapt to new ones. Notably, even simple baselines like naive replay can match or exceed the performance of state-of-the-art methods at a fraction of the computational cost. Building on this insight, we propose a lightweight yet effective method called Weight Space Consolidation, which operates directly in the model's weight space via two core mechanisms: (1) rank-based parameter resets to recover plasticity, and (2) weight averaging to enhance stability. Our approach outperforms strong baselines across class-incremental learning with image classifiers and continual instruction tuning with large language models, while requiring only one-third to one-fourth of the training cost. These findings challenge long-standing CL assumptions and establish a new, cost-efficient baseline for real-world continual learning systems where exemplar memory is no longer the limiting factor.
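The abstract names two weight-space mechanisms but does not spell out their details. The sketch below is a rough PyTorch-style illustration under assumptions made for this example: parameters are ranked by how little they drifted from the previous task's weights and the least-changed fraction is reset, and a running weight average is kept for stability. The names rank_based_reset, update_weight_average, and reset_fraction are hypothetical, not the paper's API.

    # Illustrative sketch only: the ranking criterion (per-parameter drift),
    # the reset fraction, and the uniform running average are assumptions for
    # this example, not the paper's exact recipe.
    import torch

    @torch.no_grad()
    def rank_based_reset(model, prev_state, reset_fraction=0.1):
        # Rank parameters by absolute drift from the previous-task weights and
        # reset the least-changed fraction back to those weights, freeing
        # capacity to adapt to the new task (plasticity).
        for name, p in model.named_parameters():
            prev = prev_state[name]
            drift = (p - prev).abs()
            k = max(1, int(reset_fraction * drift.numel()))
            threshold = drift.flatten().kthvalue(k).values
            mask = drift <= threshold
            p[mask] = prev[mask]

    @torch.no_grad()
    def update_weight_average(model, avg_state, n_models):
        # Maintain a running average of weights across checkpoints/tasks to
        # enhance stability; the averaged weights can be loaded for evaluation.
        for name, p in model.named_parameters():
            avg_state[name].mul_(n_models / (n_models + 1)).add_(p, alpha=1.0 / (n_models + 1))
        return n_models + 1

    # Typical use (hypothetical): snapshot weights after finishing a task,
    #   prev_state = {n: p.detach().clone() for n, p in model.named_parameters()}
    # call rank_based_reset(model, prev_state) before training on the next task,
    # and keep avg_state updated with update_weight_average during training.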
Submission history
From: Dongkyu Cho
[v1] Tue, 11 Feb 2025 05:40:52 UTC (223 KB)
[v2] Thu, 20 Mar 2025 20:55:12 UTC (228 KB)
[v3] Tue, 20 May 2025 20:59:50 UTC (516 KB)