KBQA-o1: Agentic Knowledge Base Question Answering with Monte Carlo Tree Search
Haoran Luo and 9 other authors
Abstract: Knowledge Base Question Answering (KBQA) aims to answer natural language questions over a large-scale structured knowledge base (KB). Despite advances with large language models (LLMs), KBQA still faces challenges: weak KB awareness, an imbalance between effectiveness and efficiency, and heavy reliance on annotated data. To address these challenges, we propose KBQA-o1, a novel agentic KBQA method with Monte Carlo Tree Search (MCTS). It introduces a ReAct-based agent process for stepwise logical form generation with KB environment exploration. Moreover, it employs MCTS, a heuristic search method driven by policy and reward models, to balance the performance and search space of agentic exploration. With heuristic exploration, KBQA-o1 generates high-quality annotations for further improvement via incremental fine-tuning. Experimental results show that KBQA-o1 outperforms previous low-resource KBQA methods with limited annotated data, raising the GrailQA F1 of Llama-3.1-8B to 78.5%, compared with 48.5% for the previous state-of-the-art method using GPT-3.5-turbo. Our code is publicly available.
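The abstract does not spell out the search procedure, but a generic MCTS loop guided by a policy model (proposing next logical-form steps) and a reward model (scoring completed forms) can be sketched as follows. This is only an illustrative toy, not the paper's implementation: the `policy_actions` and `reward` functions, the placeholder action set (JOIN, AND, ARGMAX), and the depth-3 termination rule are all assumptions made for the sketch.

```python
# Minimal, hypothetical sketch of MCTS-guided stepwise logical form generation.
# Policy/reward functions and the action set below are illustrative placeholders.
import math
import random
from dataclasses import dataclass, field


@dataclass
class Node:
    state: tuple                 # partial logical form as a tuple of steps
    parent: "Node" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0


def policy_actions(state):
    """Placeholder policy model: propose candidate next steps for the logical form."""
    if len(state) >= 3:
        return []                # treat depth-3 forms as terminal in this toy example
    return [state + (a,) for a in ("JOIN", "AND", "ARGMAX")]


def reward(state):
    """Placeholder reward model: score a completed logical form in [0, 1]."""
    return random.random()


def ucb(child, parent_visits, c=1.4):
    """Upper confidence bound used to trade off exploitation and exploration."""
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)


def mcts(root_state=(), iterations=100):
    root = Node(state=root_state)
    for _ in range(iterations):
        # 1. Selection: descend via UCB while the node is fully expanded.
        node = root
        while node.children and len(node.children) == len(policy_actions(node.state)):
            node = max(node.children, key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: add one unexplored child proposed by the policy model.
        untried = [s for s in policy_actions(node.state)
                   if s not in [ch.state for ch in node.children]]
        if untried:
            child = Node(state=random.choice(untried), parent=node)
            node.children.append(child)
            node = child
        # 3. Simulation: roll out with the policy model to a terminal logical form.
        state = node.state
        while policy_actions(state):
            state = random.choice(policy_actions(state))
        r = reward(state)
        # 4. Backpropagation: propagate the reward-model score up to the root.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most-visited first step as the preferred partial logical form.
    return max(root.children, key=lambda ch: ch.visits).state


if __name__ == "__main__":
    print(mcts())
```

In practice the policy model would be the fine-tuned LLM proposing KB-grounded actions and the reward model would score candidate logical forms, but those components are beyond what the abstract specifies.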
Submission history
From: Haoran Luo
[v1] Fri, 31 Jan 2025 06:59:49 UTC (2,099 KB)
[v2] Tue, 27 May 2025 08:36:37 UTC (2,102 KB)