CHARTOM: A Visual Theory-of-Mind Benchmark for Multimodal Large Language Models, by Shubham Bharti and 7 other authors
Abstract: We introduce CHARTOM, a visual theory-of-mind benchmark for multimodal large language models. CHARTOM consists of specially designed data visualization charts. Given a chart, a language model must not only comprehend the chart correctly (the FACT question) but also judge whether the chart will mislead a human reader (the MIND question). Answering both questions correctly has significant societal benefits. We detail the construction of the CHARTOM benchmark, including its calibration on human performance. We benchmark leading LLMs as of late 2024 – including GPT, Claude, Gemini, Qwen, Llama, and Llava – on the CHARTOM dataset and find that the benchmark is challenging for all of them, suggesting room for future large language models to improve.
Submission history
From: Shubham Kumar Bharti
[v1] Mon, 26 Aug 2024 17:04:23 UTC (1,774 KB)
[v2] Fri, 9 May 2025 19:55:14 UTC (1,900 KB)