The Multi-SpatialMLLM framework enhances MLLMs with multi-frame spatial understanding through depth perception, visual correspondence, and dynamic perception, achieving significant gains on multi-frame reasoning tasks.
Multi-modal large language models (MLLMs) have rapidly advanced in visual
tasks, yet their spatial understanding remains limited to single images,
leaving them ill-suited for robotics and other real-world applications that
require multi-frame reasoning. In this paper, we propose a framework to equip
MLLMs with robust multi-frame spatial understanding by integrating depth
perception, visual correspondence, and dynamic perception. Central to our
approach is the MultiSPA dataset, a novel, large-scale collection of more than
27 million samples spanning diverse 3D and 4D scenes. Alongside MultiSPA, we
introduce a comprehensive benchmark that tests a wide spectrum of spatial tasks
under uniform metrics. Our resulting model, Multi-SpatialMLLM, achieves
significant gains over baselines and proprietary systems, demonstrating
scalable, generalizable multi-frame reasoning. We further observe multi-task
benefits and early indications of emergent capabilities in challenging
scenarios, and showcase how our model can serve as a multi-frame reward
annotator for robotics.
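As an illustration of the reward-annotator use case mentioned above, the sketch below shows one way a multi-frame spatial MLLM could be queried to score task progress between two robot frames. The prompt wording, the `ask_mllm` callable, and the 0-to-1 scoring scheme are assumptions made for illustration, not the interface described in the paper.

```python
import re
from typing import Callable

def annotate_reward(frame_paths: list[str],
                    task: str,
                    ask_mllm: Callable[[list[str], str], str]) -> float:
    """Score task progress between consecutive frames with a multi-frame MLLM.

    `ask_mllm` is a hypothetical callable that takes a list of image paths and
    a text prompt and returns the model's text answer.
    """
    prompt = (
        f"Task: {task}\n"
        "Compare the two frames and rate how much progress was made toward "
        "completing the task, as a number between 0 and 1. "
        "Answer with the number only."
    )
    answer = ask_mllm(frame_paths, prompt)
    match = re.search(r"\d*\.?\d+", answer)   # extract the first number in the reply
    score = float(match.group()) if match else 0.0
    return min(max(score, 0.0), 1.0)          # clamp to [0, 1]

# Usage with a stub model that always reports partial progress.
if __name__ == "__main__":
    stub = lambda images, prompt: "0.6"
    print(annotate_reward(["t0.png", "t1.png"], "pick up the red block", stub))
```

In practice the stub would be replaced by a call to the trained Multi-SpatialMLLM model, and the scalar it returns could feed directly into a policy-learning or trajectory-ranking pipeline.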