ConsistencyChecker: Tree-based Evaluation of LLM Generalization Capabilities
Zhaochen Hong and 2 other authors
Abstract: Evaluating consistency in large language models (LLMs) is crucial for ensuring reliability, particularly in complex, multi-step interactions between humans and LLMs. Traditional self-consistency methods often miss subtle semantic changes in natural language and functional shifts in code or equations, which can accumulate over multiple transformations. To address this, we propose ConsistencyChecker, a tree-based evaluation framework designed to measure consistency through sequences of reversible transformations, including machine translation tasks and AI-assisted programming tasks. In our framework, nodes represent distinct text states, while edges correspond to pairs of inverse operations. Dynamic, LLM-generated benchmarks ensure a fair assessment of the model's generalization ability and eliminate benchmark leakage. Consistency is quantified by similarity across different depths of the transformation tree. Experiments on eight models from various families and sizes show that ConsistencyChecker can distinguish the performance of different models. Notably, our consistency scores, computed entirely without using WMT paired data, correlate strongly (r > 0.7) with the WMT 2024 auto-ranking, demonstrating the validity of our benchmark-free approach. Our implementation is available at: this https URL.
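The tree construction and scoring described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the callables apply_transform, invert_transform, and similarity are assumptions standing in for LLM calls that perform a reversible operation (e.g. translate to French and back) and for an embedding-based text similarity.

# Minimal sketch of the ConsistencyChecker idea described above (assumptions noted in the lead-in).
from itertools import product

def build_tree(root_text, transform_pairs, depth, apply_transform, invert_transform):
    """Expand a transformation tree: each node is a text state, each edge
    applies a pair of inverse operations (a forward transform, then its inverse)."""
    levels = [[root_text]]
    for _ in range(depth):
        next_level = []
        for text, (fwd, inv) in product(levels[-1], transform_pairs):
            intermediate = apply_transform(text, fwd)        # e.g. en -> fr
            recovered = invert_transform(intermediate, inv)  # e.g. fr -> en
            next_level.append(recovered)
        levels.append(next_level)
    return levels

def consistency_score(levels, similarity):
    """Average similarity between the root text and the states at each depth."""
    root = levels[0][0]
    per_depth = [
        sum(similarity(root, t) for t in level) / len(level)
        for level in levels[1:]
    ]
    return sum(per_depth) / len(per_depth)

In this sketch, a higher score means the model's outputs drift less from the original text as reversible transformations accumulate, which is the intuition behind quantifying consistency over tree depth.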
Submission history
From: Zhaochen Hong
[v1] Sat, 14 Jun 2025 07:18:33 UTC (198 KB)
[v2] Tue, 17 Jun 2025 08:11:59 UTC (332 KB)