RuleArena: A Benchmark for Rule-Guided Reasoning with LLMs in Real-World Scenarios, by Ruiwen Zhou and 6 other authors
Abstract: This paper introduces RuleArena, a novel and challenging benchmark designed to evaluate the ability of large language models (LLMs) to follow complex, real-world rules in reasoning. Covering three practical domains (airline baggage fees, NBA transactions, and tax regulations), RuleArena assesses LLMs' proficiency in handling intricate natural language instructions that demand long-context understanding, logical reasoning, and accurate mathematical computation. Two key attributes distinguish RuleArena from traditional rule-based reasoning benchmarks: (1) it extends beyond standard first-order logic representations, and (2) it is grounded in authentic, practical scenarios, providing insights into the suitability and reliability of LLMs for real-world applications. Our findings reveal several notable limitations in LLMs: (1) they struggle to identify and apply the appropriate rules, frequently becoming confused by similar but distinct regulations; (2) they cannot consistently perform accurate mathematical computations, even when they correctly identify the relevant rules; and (3) their overall performance on the benchmark is poor. We also observe a significant performance boost when LLMs are provided with external tools for oracle math and logic operations. These results highlight significant challenges and promising research directions in advancing LLMs' rule-guided reasoning capabilities in real-life applications. Our code and data are publicly available on this https URL.
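To make the tool-augmented finding concrete, here is a minimal, hypothetical Python sketch (not the RuleArena harness; all rule names, values, and function names below are illustrative assumptions) of the division of labor the abstract describes: the LLM is responsible only for selecting the applicable rule, while an exact "oracle" tool performs the arithmetic.

```python
# Hypothetical sketch of oracle-tool-assisted rule application.
# The rules and fees here are made up for illustration; a real benchmark
# instance would draw them from actual airline policy documents.

from dataclasses import dataclass

@dataclass
class BaggageRule:
    name: str
    weight_limit_kg: float    # bags at or under this weight incur no overweight fee
    fee_per_extra_kg: float   # fee charged per kilogram over the limit

# Illustrative rule set; similar-looking rules are exactly what the abstract
# reports LLMs confusing with one another.
RULES = {
    "economy": BaggageRule("economy", weight_limit_kg=23.0, fee_per_extra_kg=15.0),
    "business": BaggageRule("business", weight_limit_kg=32.0, fee_per_extra_kg=15.0),
}

def oracle_fee(rule: BaggageRule, bag_weight_kg: float) -> float:
    """Oracle math tool: computes the fee exactly, so the model only has to
    pick the correct rule rather than also carry out the arithmetic."""
    excess = max(0.0, bag_weight_kg - rule.weight_limit_kg)
    return excess * rule.fee_per_extra_kg

# In a tool-augmented setup the model would return a rule name (e.g. via a
# function call); here the selection is hard-coded to keep the sketch runnable.
selected = RULES["economy"]                        # stand-in for the LLM's rule choice
print(oracle_fee(selected, bag_weight_kg=27.5))    # exact result: 4.5 kg over * 15.0 = 67.5
```

Under this setup, computation errors are eliminated by construction, so any remaining failures isolate rule identification, which is consistent with the performance boost the paper reports.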
Submission history
From: Ruiwen Zhou
[v1] Thu, 12 Dec 2024 06:08:46 UTC (327 KB)
[v2] Fri, 30 May 2025 17:43:10 UTC (330 KB)