We introduce MRMR, the first expert-level multidisciplinary multimodal
retrieval benchmark requiring intensive reasoning. MRMR contains 1,502 queries
spanning 23 domains, with positive documents carefully verified by human
experts. Compared to prior benchmarks, MRMR introduces three key advancements.
First, it challenges retrieval systems across diverse areas of expertise,
enabling fine-grained model comparison across domains. Second, queries are
reasoning-intensive, with images requiring deeper interpretation, such as
diagnosing microscopy slides. We further introduce Contradiction Retrieval, a
novel task requiring models to identify conflicting concepts. Finally, queries
and documents are constructed as image-text interleaved sequences. Unlike
earlier benchmarks restricted to single images or unimodal documents, MRMR
offers a realistic setting with multi-image queries and mixed-modality corpus
documents. We conduct an extensive evaluation of 4 categories of multimodal
retrieval systems and 14 frontier models on MRMR. The text embedding model
Qwen3-Embedding with LLM-generated image captions achieves the highest
performance, highlighting substantial room for improving multimodal retrieval
models. Although latest multimodal models such as Ops-MM-Embedding perform
competitively on expert-domain queries, they fall short on reasoning-intensive
tasks. We believe that MRMR paves the way for advancing multimodal retrieval in
more realistic and challenging scenarios.