MiDashengLM is an open audio-language model trained with general audio captions for efficient and comprehensive audio understanding, offering faster time-to-first-token and higher throughput than comparable models.
Current approaches to large audio language models (LALMs) often rely on
closed data sources or proprietary models, limiting their generalization and
accessibility. This paper introduces MiDashengLM, a novel open audio-language
model designed for efficient and comprehensive audio understanding through
general audio captions drawn from our novel ACAVCaps training dataset.
MiDashengLM exclusively relies on publicly available pretraining and supervised
fine-tuning (SFT) datasets, ensuring full transparency and reproducibility. At
its core, MiDashengLM integrates Dasheng, an open-source audio encoder,
specifically engineered to process diverse auditory information effectively.
Unlike previous works that primarily focus on Automatic Speech Recognition (ASR)
based audio-text alignment, our strategy centers on general audio captions,
fusing speech, sound, and music information into a single textual representation
that holistically describes complex audio scenes. Lastly, MiDashengLM achieves
up to a 4x speedup in time-to-first-token (TTFT) and up to 20x higher
throughput compared to similar models. Checkpoints are
available online at https://huggingface.co/mispeech/midashenglm-7b and
https://github.com/xiaomi-research/dasheng-lm.
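
For readers who want to try the released checkpoint, a minimal loading sketch is shown below. It assumes the Hugging Face repository ships remote code loadable through the standard transformers AutoModelForCausalLM/AutoProcessor interface; the exact classes and any audio preprocessing or generation calls are assumptions and should be checked against the model card.

```python
# Minimal sketch: loading the released MiDashengLM checkpoint with transformers.
# Assumes the repository provides remote code compatible with AutoModelForCausalLM
# and AutoProcessor; class choices here are illustrative, not confirmed by the paper.
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "mispeech/midashenglm-7b"

# trust_remote_code=True lets transformers pull the model-specific classes
# bundled with the checkpoint repository.
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```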