A reinforcement learning-guided training paradigm enhances large language models’ reasoning efficiency and performance for multi-hop questions by interleaving thinking and answering.
Long chain-of-thought (CoT) significantly enhances the reasoning capabilities of
large language models (LLMs). However, extensive reasoning traces lead to
inefficiencies and an increased time-to-first-token (TTFT). We propose a novel
training paradigm that uses reinforcement learning (RL) to guide reasoning LLMs
to interleave thinking and answering for multi-hop questions. We observe that
models inherently possess the ability to perform interleaved reasoning, which
can be further enhanced through RL. We introduce a simple yet effective
rule-based reward to incentivize correct intermediate steps, which guides the
policy model toward correct reasoning paths by leveraging intermediate signals
generated during interleaved reasoning. Extensive experiments conducted across
five diverse datasets and three RL algorithms (PPO, GRPO, and REINFORCE++)
demonstrate consistent improvements over traditional think-answer reasoning,
without requiring external tools. Specifically, our approach reduces TTFT by
over 80% on average and improves Pass@1 accuracy by up to 19.3%. Furthermore,
our method, trained solely on question answering and logical reasoning
datasets, exhibits strong generalization ability to complex reasoning datasets
such as MATH, GPQA, and MMLU. Additionally, we conduct an in-depth analysis that
reveals several valuable insights into conditional reward modeling.
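To make the interleaved format and the rule-based intermediate reward concrete, the sketch below parses alternating think/answer segments from a rollout and scores intermediate answers against ground-truth sub-answers. The tag names, the exact-match criterion, the weights, and the conditional scheme (intermediate credit granted only when the final answer is correct) are illustrative assumptions, not the paper's exact specification.

```python
# Minimal sketch (assumed details, not the authors' implementation):
# parse an interleaved <think>/<answer> rollout and compute a rule-based
# reward that credits correct intermediate answers conditioned on the
# final answer being correct.
import re
from typing import List

THINK_ANSWER_PATTERN = re.compile(
    r"<think>(.*?)</think>\s*<answer>(.*?)</answer>", re.DOTALL
)

def parse_interleaved(rollout: str) -> List[str]:
    """Return the sequence of <answer> spans emitted between thinking segments."""
    return [ans.strip() for _, ans in THINK_ANSWER_PATTERN.findall(rollout)]

def normalize(text: str) -> str:
    """Light normalization so exact matching is not broken by case or spacing."""
    return re.sub(r"\s+", " ", text.strip().lower())

def rule_based_reward(
    rollout: str,
    gold_sub_answers: List[str],
    gold_final_answer: str,
    final_weight: float = 1.0,        # assumed weight for the final answer
    intermediate_weight: float = 0.5,  # assumed weight for intermediate credit
) -> float:
    """Conditional rule-based reward: the final answer earns full credit, and
    correct intermediate answers earn partial credit only when the final
    answer is also correct (an assumed conditional scheme)."""
    answers = parse_interleaved(rollout)
    if not answers:
        return 0.0  # malformed rollout: no <answer> spans found

    final_pred, intermediate_preds = answers[-1], answers[:-1]
    final_correct = normalize(final_pred) == normalize(gold_final_answer)

    reward = final_weight if final_correct else 0.0
    if final_correct and gold_sub_answers:
        hits = sum(
            any(normalize(p) == normalize(g) for p in intermediate_preds)
            for g in gold_sub_answers
        )
        reward += intermediate_weight * hits / len(gold_sub_answers)
    return reward

if __name__ == "__main__":
    rollout = (
        "<think>The film was directed by X.</think><answer>X</answer>"
        "<think>X was born in 1970.</think><answer>1970</answer>"
    )
    print(rule_based_reward(rollout, gold_sub_answers=["X"], gold_final_answer="1970"))
```

Conditioning the intermediate credit on final-answer correctness is one way to avoid rewarding rollouts whose intermediate answers happen to match but whose final conclusion is wrong; the abstract's reference to conditional reward modeling points to this family of design choices.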