AGITB: A Signal-Level Benchmark for Evaluating Artificial General Intelligence, by Matej Šprogar
Abstract: Despite remarkable progress in machine learning, current AI systems continue to fall short of true human-like intelligence. While Large Language Models (LLMs) excel in pattern recognition and response generation, they lack genuine understanding, an essential hallmark of Artificial General Intelligence (AGI). Existing AGI evaluation methods fail to offer a practical, gradual, and informative metric. This paper introduces the Artificial General Intelligence Test Bed (AGITB), comprising twelve rigorous tests that form a signal-processing-level foundation for the potential emergence of cognitive capabilities. AGITB evaluates intelligence through a model's ability to predict binary signals across time without relying on symbolic representations or pretraining. Unlike high-level tests grounded in language or perception, AGITB focuses on core computational invariants reflective of biological intelligence, such as determinism, sensitivity, and generalisation. The test bed assumes no prior bias, operates independently of semantic meaning, and ensures unsolvability through brute force or memorisation. While humans pass AGITB by design, no current AI system has met its criteria, making AGITB a compelling benchmark for guiding and recognising progress toward AGI.
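To make the evaluation setting more concrete, the sketch below illustrates, in Python, the kind of signal-level protocol the abstract describes: a model receives a binary input vector at each time step and must predict the next one, and a property such as determinism can be checked by comparing two identically initialised models on the same signal. This is a minimal illustrative sketch, not the authors' reference implementation; the class and function names (EchoModel, determinism_test) and the 8-bit signal width are assumptions introduced here for illustration only.

```python
# Hypothetical sketch of a signal-level prediction check in the spirit of AGITB.
# All names and parameters are illustrative assumptions, not the paper's API.

import random
from typing import Callable, List

Signal = List[int]  # one binary input vector (e.g. 8 bits) per time step


class EchoModel:
    """Toy predictor: guesses that the next input repeats the current one."""

    def __init__(self) -> None:
        self.last: Signal = []

    def predict(self, signal: Signal) -> Signal:
        # Receive the current binary signal, return a prediction of the next one.
        prediction = self.last if self.last else [0] * len(signal)
        self.last = signal
        return prediction


def determinism_test(make_model: Callable[[], EchoModel],
                     sequence: List[Signal]) -> bool:
    """Two freshly created models driven by the same binary signal sequence
    must emit identical predictions at every time step."""
    a, b = make_model(), make_model()
    return all(a.predict(s) == b.predict(s) for s in sequence)


if __name__ == "__main__":
    random.seed(0)
    seq = [[random.randint(0, 1) for _ in range(8)] for _ in range(100)]
    print("determinism:", determinism_test(EchoModel, seq))
```

The toy EchoModel would fail the benchmark's harder criteria (e.g. generalisation and resistance to memorisation); it is shown only to indicate how a predictor could be exercised on raw binary signals without symbolic representations or pretraining.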
Submission history
From: Matej Šprogar
[v1] Sun, 6 Apr 2025 10:01:15 UTC (11 KB)
[v2] Sun, 13 Apr 2025 10:03:26 UTC (12 KB)
[v3] Fri, 9 May 2025 11:25:57 UTC (13 KB)