Fine, I'll Merge It Myself: A Multi-Fidelity Framework for Automated Model Merging, by Guinan Su and 1 other authors
Abstract: Reasoning capabilities represent a critical frontier for large language models (LLMs), but developing them requires extensive proprietary datasets and computational resources. Model merging offers a promising way to supplement these capabilities efficiently by combining multiple models without retraining. However, current merging approaches rely on manually designed strategies for choosing merging hyperparameters, limiting the exploration of potential model combinations and requiring significant human effort. We propose an Automated Model Merging Framework that enables fine-grained exploration of merging strategies while reducing costs through multi-fidelity approximations. We support both single- and multi-objective optimization and introduce two novel search spaces: layerwise fusion (LFS) and depth-wise integration (DIS). Evaluating across a range of benchmarks, we find that the search autonomously discovers 1) merges that further boost single-objective performance, even on tasks the model has already been finetuned on, and 2) merges that optimize multi-objective frontiers across tasks. Effective merges are found with limited compute, e.g., in fewer than 500 search steps.
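The abstract does not spell out the merging operator or the search procedure, so the following is only a minimal sketch of what a layer-wise fusion candidate and a multi-fidelity filtering loop might look like, assuming linear interpolation of matching parameters with one coefficient per transformer layer; all function and parameter names here are illustrative, not taken from the paper.

```python
# Illustrative sketch only: the paper's actual LFS/DIS operators and search
# procedure are not specified in the abstract.
import re
import torch

def layerwise_merge(state_dict_a, state_dict_b, layer_coeffs, default=0.5):
    """Merge two state dicts; layer_coeffs maps layer index -> weight on model A."""
    merged = {}
    for name, param_a in state_dict_a.items():
        param_b = state_dict_b[name]
        # Extract the transformer layer index from the parameter name, if any.
        match = re.search(r"layers\.(\d+)\.", name)
        alpha = layer_coeffs.get(int(match.group(1)), default) if match else default
        merged[name] = alpha * param_a + (1.0 - alpha) * param_b
    return merged

def multi_fidelity_search(candidates, evaluate, budgets=(64, 256, 1024), keep=0.5):
    """Successive-halving-style filter: score every candidate at a cheap budget
    (e.g., few evaluation examples), keep the top fraction, then re-score the
    survivors at progressively higher fidelity."""
    pool = list(candidates)
    for budget in budgets:
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[: max(1, int(len(scored) * keep))]
    return pool[0]
```

In this sketch, a search procedure (e.g., Bayesian optimization or evolutionary search) would propose candidate per-layer coefficient vectors, build the corresponding merged checkpoints with `layerwise_merge`, and rely on the cheap low-budget evaluations to discard weak candidates before spending full-fidelity compute.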
Submission history
From: Guinan Su
[v1] Thu, 6 Feb 2025 12:47:25 UTC (3,467 KB)
[v2] Wed, 25 Jun 2025 14:44:30 UTC (3,317 KB)