AntiGrounding: Lifting Robotic Actions into VLM Representation Space for Decision Making, by Wenbo Li and 3 other authors
Abstract: Vision-Language Models (VLMs) encode knowledge and reasoning capabilities for robotic manipulation within high-dimensional representation spaces. However, current approaches often project this knowledge into compressed intermediate representations, discarding important task-specific information such as fine-grained spatial or semantic details. To address this, we propose AntiGrounding, a new framework that reverses the instruction grounding process. It lifts candidate actions directly into the VLM representation space, renders their trajectories from multiple views, and uses structured visual question answering for instruction-based decision making. This enables zero-shot synthesis of optimal closed-loop robot trajectories for new tasks. We also propose an offline policy refinement module that leverages past experience to enhance long-term performance. Experiments in both simulation and real-world environments show that our method outperforms baselines across diverse robotic manipulation tasks.
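The abstract describes a decision loop: candidate actions are lifted into the VLM's input space by rendering their trajectories from multiple camera views, and a structured visual question answering query scores each rendering against the instruction. The sketch below illustrates that loop in Python; it is not the authors' implementation, and all helpers (sample_candidate_actions, render_trajectory, vlm_vqa_score, select_action) are hypothetical placeholders standing in for the paper's components.

```python
# Minimal sketch of the AntiGrounding-style decision loop, under the assumption
# that each candidate action is rendered onto multi-view images and scored by a
# VLM via structured VQA. All names below are illustrative placeholders.
import random
from typing import List, Sequence


def sample_candidate_actions(n: int) -> List[List[float]]:
    """Hypothetical sampler: each candidate is a short end-effector waypoint vector."""
    return [[random.uniform(-1.0, 1.0) for _ in range(6)] for _ in range(n)]


def render_trajectory(action: Sequence[float], view: str) -> bytes:
    """Hypothetical renderer: overlay the candidate trajectory on the camera image
    from the given viewpoint and return the rendered image as bytes."""
    return f"{view}:{action}".encode()


def vlm_vqa_score(images: List[bytes], instruction: str) -> float:
    """Hypothetical structured-VQA call: ask a VLM how well the rendered trajectory
    satisfies the instruction. A random stand-in score is returned here."""
    return random.random()


def select_action(instruction: str,
                  views: Sequence[str] = ("front", "side", "top"),
                  n_candidates: int = 16) -> List[float]:
    """One closed-loop decision step: render every candidate from all views,
    score the renderings with the VLM, and return the best-scoring action."""
    candidates = sample_candidate_actions(n_candidates)
    scored = []
    for action in candidates:
        renders = [render_trajectory(action, v) for v in views]
        scored.append((vlm_vqa_score(renders, instruction), action))
    return max(scored, key=lambda item: item[0])[1]


if __name__ == "__main__":
    best = select_action("put the red block in the bowl")
    print("chosen action waypoints:", best)
```

In a real system the chosen action would be executed and the loop repeated with fresh observations, which is what makes the trajectory synthesis closed-loop.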
Submission history
From: Wenbo Li
[v1] Sat, 14 Jun 2025 07:11:44 UTC (44,580 KB)
[v2] Tue, 24 Jun 2025 10:01:18 UTC (44,579 KB)