Large language models demonstrate remarkable reasoning capabilities but often
produce unreliable or incorrect responses. Existing verification methods are
typically model-specific or domain-restricted, requiring significant
computational resources and lacking scalability across diverse reasoning tasks.
To address these limitations, we propose VerifiAgent, a unified verification
agent that integrates two levels of verification: meta-verification, which
assesses completeness and consistency in model responses, and tool-based
adaptive verification, where VerifiAgent autonomously selects appropriate
verification tools based on the reasoning type, including mathematical,
logical, or commonsense reasoning. This adaptive approach ensures both
efficiency and robustness across different verification scenarios. Experimental
results show that VerifiAgent outperforms baseline verification methods (e.g.,
deductive verifier, backward verifier) across all reasoning tasks. Additionally,
it can further enhance reasoning accuracy by leveraging feedback from
verification results. VerifiAgent can also be effectively applied to inference
scaling, achieving better results with fewer generated samples and lower cost
than existing process reward models in the mathematical reasoning domain. Code
is available at https://github.com/Jiuzhouh/VerifiAgent