XGeM: A Multi-Prompt Foundation Model for Multimodal Medical Data Generation
Daniele Molino, Francesco Di Feola, Eliodoro Faiella, Deborah Fazzini, Domiziana Santucci, Linlin Shen, Valerio Guarrasi, Paolo Soda
Abstract: The adoption of Artificial Intelligence in medical imaging holds great promise, yet it remains hindered by challenges such as data scarcity, privacy concerns, and the need for robust multimodal integration. While recent advances in generative modeling have enabled high-quality synthetic data generation, existing approaches are often limited to unimodal, unidirectional synthesis and therefore lack the ability to jointly synthesize multiple modalities while preserving clinical consistency. To address this challenge, we introduce XGeM, a 6.77-billion-parameter multimodal generative model designed to support flexible, any-to-any synthesis between medical data modalities. XGeM constructs a shared latent space via contrastive learning and introduces a novel Multi-Prompt Training strategy, enabling conditioning on arbitrary subsets of input modalities. This design allows the model to adapt to heterogeneous clinical inputs and generate multiple outputs jointly, preserving both semantic and structural coherence. We extensively validate XGeM. First, we benchmark it against five competitors on MIMIC-CXR, a state-of-the-art dataset for multi-view chest X-ray and radiological report generation. Second, we perform a Visual Turing Test with expert radiologists to assess the realism and clinical relevance of the generated data, ensuring alignment with real-world scenarios. Finally, we show how XGeM can support key medical data challenges such as anonymization, class imbalance, and data scarcity, underscoring its utility as a foundation model for medical data synthesis. Project page is at this https URL.
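To make the shared-latent-space and Multi-Prompt Training ideas concrete, below is a minimal illustrative sketch, not the authors' implementation: it assumes hypothetical per-modality encoders (ModalityEncoder), an InfoNCE-style contrastive alignment loss, and a multi_prompt_condition helper that samples a random non-empty subset of the available modality latents as the conditioning signal. All names, dimensions, and the averaging-based fusion are assumptions for illustration only.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Hypothetical encoder mapping one modality (e.g. frontal CXR features,
    report embeddings) into a shared, unit-normalized latent space."""

    def __init__(self, in_dim: int, latent_dim: int = 512):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.GELU(), nn.Linear(1024, latent_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)


def contrastive_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE-style loss aligning latents of two modalities
    that belong to the same study (matched pairs are the diagonal)."""
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


def multi_prompt_condition(latents: dict) -> torch.Tensor:
    """Sample a random non-empty subset of available modalities and fuse
    their latents (here by simple averaging, an assumption) into one
    conditioning vector, so training covers arbitrary input combinations."""
    names = random.sample(list(latents), k=random.randint(1, len(latents)))
    return torch.stack([latents[n] for n in names], dim=0).mean(dim=0)


if __name__ == "__main__":
    # Toy feature dimensions; real inputs would come from image/text backbones.
    encoders = {
        "frontal": ModalityEncoder(2048),
        "lateral": ModalityEncoder(2048),
        "report": ModalityEncoder(768),
    }
    batch = {"frontal": torch.randn(4, 2048), "lateral": torch.randn(4, 2048), "report": torch.randn(4, 768)}
    latents = {name: enc(batch[name]) for name, enc in encoders.items()}

    align = contrastive_loss(latents["frontal"], latents["report"])
    cond = multi_prompt_condition(latents)  # conditioning signal for a downstream generative decoder
    print(align.item(), cond.shape)
```

In this sketch, subset sampling during training is what lets a single model handle any-to-any generation at inference time: whichever modalities a clinical site actually has can be fused into the same conditioning vector without retraining.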
Submission history
From: Daniele Molino
[v1] Wed, 8 Jan 2025 16:53:56 UTC (1,094 KB)
[v2] Thu, 9 Jan 2025 08:42:56 UTC (1,094 KB)
[v3] Thu, 3 Jul 2025 07:57:05 UTC (7,291 KB)