Excited to present KUMO, a generative evaluation benchmark for LLMs. Unlike static benchmarks, KUMO dynamically generates diverse, multi-turn reasoning tasks with controllable difficulty—avoiding data leakage and ensuring trustworthy evaluation.
📄 Paper: https://arxiv.org/pdf/2504.02810
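For intuition, here is roughly what "generating a multi-turn task with controllable difficulty" can look like. This is a minimal Python sketch of the general idea only, not KUMO's actual pipeline or API: the function names (`generate_task`, `run_episode`), the truth-table task structure, and the difficulty knob (`num_candidates`) are all illustrative assumptions.

```python
import random

def generate_task(num_candidates=4, num_actions=6, seed=None):
    """Hypothetical sketch: build one multi-turn deduction task.

    A hidden ground truth is drawn from `candidates`; each action reveals a
    result that is consistent with only some candidates. Difficulty grows
    with `num_candidates` (more hypotheses to eliminate).
    """
    rng = random.Random(seed)
    candidates = [f"outcome_{i}" for i in range(num_candidates)]
    truth = rng.choice(candidates)
    # Each action maps every candidate to a deterministic result; observing a
    # result lets the solver prune inconsistent hypotheses.
    actions = {
        f"action_{j}": {c: rng.choice(["positive", "negative"]) for c in candidates}
        for j in range(num_actions)
    }
    return candidates, actions, truth

def run_episode(policy, num_candidates=4, max_turns=8, seed=None):
    """Multi-turn loop: the policy probes actions, sees results, then commits
    to a final answer. Reward is 1.0 for the correct answer, else 0.0."""
    candidates, actions, truth = generate_task(num_candidates, seed=seed)
    observations = []
    for _ in range(max_turns):
        choice = policy(candidates, actions, observations)
        if choice in candidates:                          # final guess
            return 1.0 if choice == truth else 0.0
        observations.append((choice, actions[choice][truth]))  # deterministic feedback
    return 0.0

# Trivial baseline: probe every action once, then guess any consistent candidate.
def brute_force_policy(candidates, actions, observations):
    probed = {a for a, _ in observations}
    for a in actions:
        if a not in probed:
            return a
    consistent = [c for c in candidates
                  if all(actions[a][c] == r for a, r in observations)]
    return consistent[0]

print(run_episode(brute_force_policy, num_candidates=4, seed=0))  # prints the episode reward
```

Because tasks are sampled fresh from a parameterized generator rather than drawn from a fixed test set, there is nothing to memorize, and difficulty can be dialed up or down per evaluation run.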
Why KUMO?
✅ 95%+ correlation with SOTA reasoning benchmarks—synthetic but realistic!
✅ Avoids test-set contamination (no risk of pre-training data leaks).
✅ Controllable difficulty & domain diversity for fine-grained evaluation.
Key Findings:
1️⃣ Simple vs. Complex Reasoning: LLMs outperform undergraduates on easy tasks, but only deep-thinking (reasoning-focused) models match human performance on hard ones.
2️⃣ Universal Difficulty Metric: KUMO can standardize difficulty across benchmarks (LiveBench-Reason ≈ KUMO-Hard).
3️⃣ Domain Matters! Model performance varies widely across fields (medical, gaming, etc.)—knowledge structure is key.
4️⃣ Generalization Challenge: Fine-tuning on expert trajectories fails to transfer once KUMO generates new tasks, demanding strong out-of-distribution, cross-domain, and cross-difficulty generalization.
🌐 Beyond KUMO: Generative evaluation is the future! Our earlier work on agent evaluation (https://arxiv.org/pdf/2310.08367) also shows how dynamic benchmarks can transform evaluation into a science.
💡 Join Us! KUMO is open-source with RL-friendly reward signals.
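On the RL point: because each generated task comes with a known answer (which is what makes the reward signal "RL-friendly"), episodes can be scored programmatically with no human labeling. The loop below is only a rough sketch of how such a reward could be consumed; it builds on the hypothetical `generate_task`/`run_episode` sketch above, and `update_policy` is a placeholder for your actual trainer, not part of KUMO.

```python
def collect_rewards(policy, num_episodes=32, num_candidates=8):
    """Roll out freshly generated tasks and score each episode automatically."""
    return [run_episode(policy, num_candidates=num_candidates)
            for _ in range(num_episodes)]

def rl_training_loop(policy, update_policy, steps=100):
    for step in range(steps):
        rewards = collect_rewards(policy)   # new tasks every step: nothing to memorize
        update_policy(policy, rewards)      # e.g. a policy-gradient update (placeholder)
        print(f"step {step}: mean reward = {sum(rewards) / len(rewards):.2f}")
```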