Simple and Provable Scaling Laws for the Test-Time Compute of Large Language Models
by Yanxi Chen and 4 other authors
Abstract: We propose two simple, principled, and practical algorithms that enjoy provable scaling laws for the test-time compute of large language models (LLMs). The first is a two-stage knockout-style algorithm: given an input problem, it first generates multiple candidate solutions and then aggregates them via a knockout tournament to produce the final output. Assuming that the LLM can generate a correct solution with non-zero probability and does better than random guessing when comparing a pair of correct and incorrect solutions, we prove theoretically that the failure probability of this algorithm decays to zero exponentially or by a power law (depending on the specific way of scaling) as its test-time compute grows. The second is a two-stage league-style algorithm, where each candidate is evaluated by its average win rate against multiple opponents, rather than being eliminated upon a loss to a single opponent. Under analogous but more robust assumptions, we prove that its failure probability also decays to zero exponentially with more test-time compute. Both algorithms require only a black-box LLM and nothing else (e.g., no verifier or reward model) for a minimalistic implementation, which makes them appealing for practical applications and easy to adapt to different tasks. Through extensive experiments with diverse models and datasets, we validate the proposed theories and demonstrate the outstanding scaling properties of both algorithms.
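To make the two aggregation schemes concrete, here is a minimal Python sketch of both two-stage algorithms as described in the abstract. The helpers `generate(problem)` (returns one candidate solution) and `compare(problem, a, b)` (asks the LLM which of two candidates is better and returns the preferred one) are hypothetical stand-ins for calls to a black-box LLM; all names and parameters below are illustrative assumptions, not the paper's reference implementation.

```python
import random

def knockout(problem, generate, compare, n_candidates=8, k_comparisons=3):
    """Two-stage knockout: generate candidates, then aggregate via a tournament.

    Each pairing is decided by a majority vote over k_comparisons independent
    LLM comparisons; repeating comparisons is what drives the decay of the
    failure probability as test-time compute grows.
    """
    candidates = [generate(problem) for _ in range(n_candidates)]
    while len(candidates) > 1:
        random.shuffle(candidates)
        next_round = []
        # Pair adjacent candidates; an odd one out advances automatically.
        for a, b in zip(candidates[::2], candidates[1::2]):
            wins_a = sum(compare(problem, a, b) == a for _ in range(k_comparisons))
            next_round.append(a if 2 * wins_a > k_comparisons else b)
        if len(candidates) % 2 == 1:
            next_round.append(candidates[-1])
        candidates = next_round
    return candidates[0]

def league(problem, generate, compare, n_candidates=8, n_opponents=4, k_comparisons=3):
    """Two-stage league: pick the candidate with the highest average win rate
    against a sample of opponents, rather than eliminating on a single loss."""
    candidates = [generate(problem) for _ in range(n_candidates)]

    def win_rate(c):
        opponents = [o for o in candidates if o is not c]
        opponents = random.sample(opponents, min(n_opponents, len(opponents)))
        wins = sum(
            compare(problem, c, o) == c
            for o in opponents
            for _ in range(k_comparisons)
        )
        return wins / (len(opponents) * k_comparisons)

    return max(candidates, key=win_rate)
```

Note that both routines treat the LLM purely as a black box (no verifier or reward model), and test-time compute scales through `n_candidates`, `n_opponents`, and `k_comparisons`.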
Submission history
From: Yanxi Chen
[v1] Fri, 29 Nov 2024 05:29:47 UTC (348 KB)
[v2] Fri, 7 Feb 2025 07:08:29 UTC (1,852 KB)
[v3] Thu, 15 May 2025 14:06:27 UTC (1,034 KB)