Recent advances in reinforcement learning (RL) have strengthened the
reasoning capabilities of vision-language models (VLMs). However, enhancing
policy exploration to more effectively scale test-time compute remains
underexplored in VLMs. In addition, VLMs continue to struggle with imperfect
visual perception, which in turn degrades the subsequent reasoning process. To
this end, we propose NoisyRollout, a simple yet effective RL approach that
mixes trajectories from both clean and moderately distorted images to introduce
targeted diversity in visual perception and the resulting reasoning patterns.
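As a rough illustration (not the paper's actual implementation), the hybrid rollout step might look like the following Python sketch. The `policy.generate` helper, the choice of additive Gaussian noise as the distortion, and all parameter names are assumptions for exposition:

```python
import numpy as np

def distort(image: np.ndarray, alpha: float) -> np.ndarray:
    # Moderate image distortion; additive Gaussian noise is one plausible choice.
    noise = np.random.randn(*image.shape) * alpha
    return np.clip(image + noise, 0.0, 1.0)

def hybrid_rollouts(policy, image, prompt, n_clean: int, n_noisy: int, alpha: float):
    # Sample trajectories from both the clean and the distorted view of the
    # same input. All rollouts answer the same prompt and can be scored and
    # advantaged together as one group, so no extra optimization passes
    # are required.
    clean = [policy.generate(image, prompt) for _ in range(n_clean)]
    noisy_image = distort(image, alpha)
    noisy = [policy.generate(noisy_image, prompt) for _ in range(n_noisy)]
    return clean + noisy
```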
Without additional training cost, NoisyRollout enhances the exploration
capabilities of VLMs by incorporating a vision-oriented inductive bias.
Furthermore, NoisyRollout employs a noise annealing schedule that gradually
reduces distortion strength over training, so the model benefits from noisy
exploration signals early on while training remains stable and scalable in
later stages.
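A minimal sketch of one possible annealing schedule follows; the polynomial decay form and the default values of `alpha0` and `power` are illustrative assumptions, not the paper's reported schedule:

```python
def annealed_alpha(step: int, total_steps: int,
                   alpha0: float = 0.3, power: float = 2.0) -> float:
    # Decay the distortion strength from alpha0 toward zero as training
    # progresses, so early rollouts see noisy images and late rollouts
    # see (nearly) clean ones.
    progress = min(step / max(total_steps, 1), 1.0)
    return alpha0 * (1.0 - progress) ** power
```

Driving the distortion strength toward zero keeps late-stage rollouts close to the clean input distribution, which is one way to realize the stability the schedule is meant to provide.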
With just 2.1K training samples, NoisyRollout achieves state-of-the-art
performance among open-source RL-tuned models on 5 out-of-domain benchmarks
spanning both reasoning and perception tasks, while maintaining comparable or
even better in-domain performance.