Enhancing Multimodal Unified Representations for Cross Modal Generalization
by Hai Huang and 9 other authors
Abstract: To enhance the interpretability of multimodal unified representations, many studies have focused on discrete unified representations. These efforts typically start with contrastive learning and gradually extend to the disentanglement of modal information, achieving solid multimodal discrete unified representations. However, existing research often overlooks two critical issues: 1) quantizing discrete representations by Euclidean distance ignores the differing importance of individual feature dimensions, resulting in redundant representations after quantization; 2) each modality has unique characteristics, so a uniform alignment approach does not fully exploit these traits. To address these issues, we propose Training-free Optimization of Codebook (TOC) and Fine and Coarse cross-modal Information Disentangling (FCID). These methods refine the unified discrete representations from pretraining and perform fine- and coarse-grained information disentanglement tailored to the specific characteristics of each modality, achieving significant performance improvements over previous state-of-the-art models. The code is available at this https URL.
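To make the first issue concrete, the sketch below shows standard vector quantization with plain Euclidean distance: each continuous feature is mapped to its nearest codeword, with every dimension weighted equally. This is the baseline behavior the abstract criticizes, not TOC itself; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def euclidean_quantize(features, codebook):
    """Assign each feature vector to its nearest codeword by Euclidean distance.

    features: (N, D) continuous vectors; codebook: (K, D) discrete codewords.
    Plain Euclidean distance treats all D dimensions as equally important --
    the behavior the paper identifies as a source of redundant codes.
    """
    # Pairwise distances between every feature and every codeword: (N, K)
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    indices = dists.argmin(axis=1)          # nearest-codeword index per feature
    return indices, codebook[indices]       # discrete codes and quantized vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codewords of dimension D=4
features = rng.normal(size=(5, 4))   # N=5 feature vectors
idx, quantized = euclidean_quantize(features, codebook)
```

A dimension-aware refinement (as TOC targets) would reweight or select dimensions before this nearest-neighbor lookup, rather than treating all of them uniformly.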
Submission history
From: Hai Huang [view email]
[v1] Fri, 8 Mar 2024 09:16:47 UTC (4,966 KB)
[v2] Sat, 17 May 2025 09:14:04 UTC (7,235 KB)
[v3] Sun, 1 Jun 2025 05:09:27 UTC (6,490 KB)