The ultimate goal of embodied-agent research is to create collaborators that can
interact with humans, not mere executors that passively follow instructions.
This requires agents to communicate, coordinate, and adapt their actions based
on human feedback. Recently, advances in Vision-Language-Action (VLA) models have
offered a path toward this goal. However, most current VLA-based embodied agents operate in a one-way
mode: they receive an instruction and execute it without feedback. This
approach fails in real-world scenarios where instructions are often ambiguous.
In this paper, we address this problem with the Ask-to-Clarify framework. Our
framework first resolves ambiguous instructions by asking clarifying questions in a
multi-turn dialogue. It then generates low-level actions end-to-end.
Specifically, the Ask-to-Clarify framework consists of two components: a
vision-language model (VLM) for collaboration and a diffusion model for action.
We also introduce a connection module that generates conditions for the
diffusion model from the output of the VLM. This module adjusts the observation
according to the instruction to produce reliable conditions.
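As a rough illustration of this data flow, a minimal sketch is given below; the module name ConnectionModule, the FiLM-style modulation, and all dimensions are illustrative assumptions rather than the exact design.

```python
# Minimal sketch: conditioning a diffusion action head on VLM output through a
# connection module. Names, dimensions, and the FiLM-style modulation are assumptions.
import torch
import torch.nn as nn

class ConnectionModule(nn.Module):
    """Turns VLM output features into conditions for the diffusion action component."""
    def __init__(self, vlm_dim=1024, obs_dim=512, cond_dim=256):
        super().__init__()
        self.to_scale = nn.Linear(vlm_dim, obs_dim)  # instruction-dependent scale
        self.to_shift = nn.Linear(vlm_dim, obs_dim)  # instruction-dependent shift
        self.proj = nn.Linear(obs_dim, cond_dim)     # projected condition for the diffusion head

    def forward(self, vlm_feat, obs_feat):
        # Adjust the observation features according to the instruction encoded by the VLM.
        adjusted_obs = obs_feat * (1 + self.to_scale(vlm_feat)) + self.to_shift(vlm_feat)
        return self.proj(adjusted_obs)

# Toy usage with random tensors standing in for the VLM and vision encoders.
vlm_feat = torch.randn(1, 1024)  # pooled VLM hidden state for the clarified instruction
obs_feat = torch.randn(1, 512)   # encoded current observation
cond = ConnectionModule()(vlm_feat, obs_feat)
print(cond.shape)  # torch.Size([1, 256]); fed to the diffusion action component
```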
We train our framework with a two-stage knowledge-insulation strategy. First,
we fine-tune the collaboration component on ambiguity-solving dialogue data so
it learns to handle ambiguity. Then, we integrate the action component while
keeping the collaboration component frozen. This preserves the interaction
abilities while the diffusion model is fine-tuned to generate actions. The
training strategy ensures that our framework first asks clarifying questions and
then generates actions.
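The knowledge-insulation step in the second stage can be pictured as follows; the modules are toy stand-ins (plain linear layers) and the squared-error loss is a placeholder for the diffusion denoising objective, so only the parameter-freezing pattern mirrors the strategy described here.

```python
# Sketch of stage two: freeze the collaboration component, train only the
# connection module and the action component. All modules here are toy stand-ins.
import torch
import torch.nn as nn

collab = nn.Linear(16, 16)      # stands in for the collaboration VLM (tuned in stage one)
connection = nn.Linear(16, 8)   # stands in for the connection module
action_head = nn.Linear(8, 4)   # stands in for the diffusion action component

for p in collab.parameters():
    p.requires_grad = False     # knowledge insulation: interaction ability is preserved

optim = torch.optim.AdamW(
    list(connection.parameters()) + list(action_head.parameters()), lr=1e-4
)

x = torch.randn(2, 16)          # pretend VLM inputs
with torch.no_grad():
    vlm_feat = collab(x)        # frozen forward pass through the collaboration component
pred = action_head(connection(vlm_feat))
loss = pred.pow(2).mean()       # placeholder for the diffusion denoising loss
optim.zero_grad()
loss.backward()
optim.step()                    # only the connection module and action head are updated
```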
During inference, a signal detector acts as a router that switches our
framework between asking questions and taking actions.
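One possible shape of this routing loop is sketched below; the READY_SIGNAL marker, the callables vlm_chat, act, get_observation, and ask_user, and the max_turns cap are hypothetical stand-ins for how the signal detector could be wired.

```python
# Sketch of inference-time routing: keep asking clarifying questions until the
# dialogue signals readiness, then switch to generating actions. All names are
# hypothetical stand-ins; the actual signal detector may work differently.
READY_SIGNAL = "<act>"  # assumed marker emitted once the instruction is disambiguated

def run_episode(vlm_chat, act, get_observation, ask_user, max_turns=5):
    """vlm_chat(obs, history) -> reply text; act(obs, history) -> low-level action."""
    history = []
    for _ in range(max_turns):
        obs = get_observation()
        reply = vlm_chat(obs, history)
        if READY_SIGNAL in reply:           # signal detector: switch from asking to acting
            return act(obs, history)
        history.append(("agent", reply))    # otherwise, keep the clarification dialogue going
        history.append(("user", ask_user(reply)))
    return act(get_observation(), history)  # fall back to acting after max_turns
```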
We evaluate the Ask-to-Clarify framework on 8 real-world tasks, where it
outperforms existing state-of-the-art VLAs. The results suggest that our
proposed framework, along with the training strategy, provides a path toward
collaborative embodied agents.