Recent advances in reinforcement learning (RL)-based post-training have led to
notable improvements in large language models (LLMs), particularly in enhancing
their reasoning capabilities to handle complex tasks. However, most existing
methods treat the training data as a unified whole, overlooking the fact that modern
LLM training often involves a mixture of data from diverse distributions—varying
in both source and difficulty. This heterogeneity introduces a key challenge: how
to adaptively schedule training across distributions to optimize learning efficiency.
In this paper, we present a principled curriculum learning framework grounded in
the notion of distribution-level learnability. Our core insight is that the magnitude
of policy advantages reflects how much a model can still benefit from further
training on a given distribution. Based on this, we propose a distribution-level
curriculum learning framework for RL-based LLM post-training, which leverages
the Upper Confidence Bound (UCB) principle to dynamically adjust sampling
probabilities for different distributions. This approach prioritizes distributions with
either high average advantage (exploitation) or low sample count (exploration),
yielding an adaptive and theoretically grounded training schedule. We instantiate
our curriculum learning framework with GRPO as the underlying RL algorithm and
demonstrate its effectiveness on logic reasoning datasets spanning multiple difficulty
levels and sources. Experiments show that our framework significantly improves
convergence speed and final performance, highlighting the value of distribution-aware curriculum strategies in LLM post-training. Code: https://github.com/ZhentingWang/DUMP.
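To make the scheduling idea concrete, the sketch below shows one way a UCB-based distribution scheduler of this kind could be written in Python. It is a minimal illustration under stated assumptions, not the paper's released implementation (see the linked repository for that): the class name `UCBDistributionScheduler`, its methods, the exploration coefficient, and the softmax temperature are all hypothetical, and the "learnability" signal is taken to be the running mean of absolute advantages (e.g., GRPO group-relative advantages) observed on each distribution.

```python
import math
import random
from collections import defaultdict


class UCBDistributionScheduler:
    """Tracks a UCB score per data distribution and converts the scores into
    sampling probabilities for choosing which distribution the next training
    batch is drawn from. (Hypothetical sketch; names are illustrative.)"""

    def __init__(self, distribution_names, exploration_coef=1.0, temperature=1.0):
        self.names = list(distribution_names)
        self.c = exploration_coef
        self.temperature = temperature
        self.counts = defaultdict(int)      # number of batches drawn per distribution
        self.adv_sums = defaultdict(float)  # cumulative mean |advantage| per distribution
        self.total = 0                      # total batches drawn so far

    def ucb_score(self, name):
        # Exploitation: running mean of absolute advantages seen on this distribution.
        # Exploration: standard UCB bonus that shrinks as the distribution is sampled more.
        mean_abs_adv = self.adv_sums[name] / self.counts[name]
        bonus = self.c * math.sqrt(math.log(self.total + 1) / self.counts[name])
        return mean_abs_adv + bonus

    def sampling_probs(self):
        # Distributions not yet sampled get priority so each is tried at least once.
        unseen = [n for n in self.names if self.counts[n] == 0]
        if unseen:
            return {n: (1.0 / len(unseen) if n in unseen else 0.0) for n in self.names}
        # Otherwise turn UCB scores into sampling probabilities via a softmax.
        scores = {n: self.ucb_score(n) / self.temperature for n in self.names}
        max_s = max(scores.values())
        exps = {n: math.exp(s - max_s) for n, s in scores.items()}
        z = sum(exps.values())
        return {n: e / z for n, e in exps.items()}

    def pick_distribution(self):
        probs = self.sampling_probs()
        return random.choices(self.names, weights=[probs[n] for n in self.names], k=1)[0]

    def update(self, name, batch_abs_advantages):
        # Feed back the absolute advantages computed on rollouts drawn from `name`,
        # e.g., the |group-relative advantages| produced by a GRPO step.
        self.counts[name] += 1
        self.total += 1
        self.adv_sums[name] += sum(batch_abs_advantages) / max(len(batch_abs_advantages), 1)
```

In a training loop, one would alternate between `pick_distribution()` to decide where the next batch comes from, a GRPO update on rollouts from that distribution, and `update()` with the resulting absolute advantages, so that distributions with high remaining learnability (large advantages) or few observations keep receiving probability mass.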