See examples and results at: https://leililab.github.io/HardTests/
RLVR is not just about RL; it's just as much about VR!
Particularly for LLM coding, good verifiers (tests) are hard to get!
In our latest work, we ask 3 questions: How good are current tests? How do we get better tests? How much does test quality matter?
Current tests are BAD. Some are too easy to expose inefficient programs. Others lack special judge functions for checking program outputs and so mistake a right program for a wrong one. Together, these flaws create LOTS of false positives/negatives. So what do we do?
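To make the second failure mode concrete, here is a minimal hypothetical sketch (not code from our pipeline): a problem where multiple answers are valid, so a naive exact-match checker wrongly rejects a correct program, while a special judge accepts any valid answer. All names here (`solve`, `exact_match_check`, `special_judge`) are illustrative.

```python
# Hypothetical problem: return ANY pair of indices (i, j), i < j,
# such that a[i] + a[j] == target.

def solve(a, target):
    """A correct solution; it may return a different valid pair
    than the reference solution did."""
    seen = {}
    for j, x in enumerate(a):
        if target - x in seen:
            return (seen[target - x], j)
        seen[x] = j
    return None

def exact_match_check(output, reference):
    # Naive checker: compares against a single stored reference answer.
    # Rejects any other valid pair -> false negative.
    return output == reference

def special_judge(a, target, output):
    # Special judge: accepts ANY pair that satisfies the problem statement.
    i, j = output
    return 0 <= i < j < len(a) and a[i] + a[j] == target

a, target = [1, 2, 3, 4], 5
out = solve(a, target)        # this implementation returns (1, 2)
reference = (0, 3)            # the reference solution returned another valid pair

print(exact_match_check(out, reference))   # False: correct program marked wrong
print(special_judge(a, target, out))       # True: special judge accepts it
```

The same asymmetry runs the other way for the first failure mode: tiny inputs let an inefficient brute-force program pass within the time limit, producing false positives.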
We propose HardTestGen, an LLM-based test synthesis pipeline that gets you much better tests than the ones that people often use, such as TACO. With that, we curate a problem set with 47k competition problems and good tests. But why should you care?
We run post-training experiments in 3 scenarios — teacher-distillation, self-distillation, and RL — to study when good tests matter. It turns out test quality matters little for teacher-distillation, but a great deal for self-distillation and RL.
Our problem set is now available at https://huggingface.co/datasets/sigcp/hardtests_problems, with the synthesis code and synthetic tests coming soon.