AGI-Elo: How Far Are We From Mastering A Task?
Shuo Sun and 10 other authors
Abstract: As the field progresses toward Artificial General Intelligence (AGI), there is a pressing need for more comprehensive and insightful evaluation frameworks that go beyond aggregate performance metrics. This paper introduces a unified rating system that jointly models the difficulty of individual test cases and the competency of AI models (or humans) across vision, language, and action domains. Unlike existing metrics that focus solely on models, our approach allows for fine-grained, difficulty-aware evaluations through competitive interactions between models and tasks, capturing both the long-tail distribution of real-world challenges and the competency gap between current models and full task mastery. We validate the generalizability and robustness of our system through extensive experiments on multiple established datasets and models across distinct AGI domains. The resulting rating distributions offer novel perspectives and interpretable insights into task difficulty, model progression, and the outstanding challenges that remain on the path to achieving full AGI task mastery.
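To make the core idea concrete, here is a minimal sketch of an Elo-style update in which a model "competes" against a test case: the model gains rating when it solves a case it was expected to fail, and the case's rating rises when models fail on it. The K-factor, the 400-point logistic scale, and all function names are illustrative assumptions, not the paper's exact parameterization.

```python
# Minimal sketch of an Elo-style model-vs-test-case rating update,
# assuming the standard logistic Elo formulation. The K-factor and the
# 400-point scale are assumptions, not the paper's exact parameters.

def expected_score(model_rating: float, case_rating: float) -> float:
    """Probability that the model solves ('beats') the test case."""
    return 1.0 / (1.0 + 10 ** ((case_rating - model_rating) / 400.0))

def update(model_rating: float, case_rating: float,
           solved: bool, k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one model-vs-test-case interaction."""
    expected = expected_score(model_rating, case_rating)
    outcome = 1.0 if solved else 0.0
    delta = k * (outcome - expected)
    # The model's rating rises when it outperforms expectation; the test
    # case's rating rises (it looks harder) when models fail on it.
    return model_rating + delta, case_rating - delta

# Example: an evenly matched model fails the case, so the case's
# difficulty rating rises by the same amount the model's rating drops.
m, c = update(1500.0, 1500.0, solved=False)
print(round(m, 1), round(c, 1))  # 1484.0 1516.0
```

Iterating such updates over many model-case interactions yields the rating distributions the paper analyzes: per-case difficulty ratings expose the long tail of hard cases, while per-model ratings quantify the gap to full task mastery.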
Submission history
From: Shuo Sun
[v1] Mon, 19 May 2025 08:30:13 UTC (2,335 KB)
[v2] Sat, 24 May 2025 05:25:10 UTC (2,335 KB)