WorldVLA is an autoregressive action world model that integrates vision-language-action (VLA) and world models in a single framework; the two components enhance each other through shared understanding and generation, and an attention mask strategy further improves action chunk generation.
We present WorldVLA, an autoregressive action world model that unifies action
and image understanding and generation. Our WorldVLA integrates a
Vision-Language-Action (VLA) model and a world model in a single framework. The
world model predicts future images by leveraging both action and image
understanding, with the purpose of learning the underlying physics of the
environment to improve action generation. Meanwhile, the action model generates
subsequent actions based on image observations, which aids visual understanding
and in turn benefits the visual generation of the world model. We
demonstrate that WorldVLA outperforms standalone action and world models,
highlighting the mutual enhancement between the world model and the action
model. In addition, we find that the performance of the action model
deteriorates when generating sequences of actions in an autoregressive manner.
This phenomenon can be attributed to the model’s limited generalization
capability for action prediction, leading to the propagation of errors from
earlier actions to subsequent ones. To address this issue, we propose an
attention mask strategy that selectively masks prior actions during the
generation of the current action, which yields a significant performance
improvement in the action chunk generation task.
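To illustrate the idea, the following is a minimal sketch, not the paper's implementation, of how such an attention mask could be built for a decoder-only transformer. The sequence layout (observation tokens followed by fixed-size action blocks), the block sizes, and the function name are assumptions for illustration: each action's tokens keep causal attention to the observation tokens and to themselves, but attention to tokens of earlier actions in the chunk is blocked.

```python
# Sketch of an action-chunk attention mask: actions attend to observations,
# not to previously generated actions in the chunk (assumed layout).
import torch


def build_action_chunk_mask(num_obs_tokens: int,
                            num_actions: int,
                            tokens_per_action: int) -> torch.Tensor:
    """Return a boolean mask of shape [T, T] where True means attention is allowed."""
    T = num_obs_tokens + num_actions * tokens_per_action
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
    # Block attention from each action's tokens to the tokens of earlier actions.
    for i in range(num_actions):
        i_start = num_obs_tokens + i * tokens_per_action
        i_end = i_start + tokens_per_action
        for j in range(i):
            j_start = num_obs_tokens + j * tokens_per_action
            j_end = j_start + tokens_per_action
            mask[i_start:i_end, j_start:j_end] = False
    return mask


# Example: 4 observation tokens and a chunk of 3 actions with 2 tokens each.
print(build_action_chunk_mask(4, 3, 2).int())
```

Under this masking, errors in an earlier predicted action cannot propagate through attention into later actions of the same chunk, which is the stated motivation for the strategy.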