With the development of multimodal reasoning models, Computer Use Agents
(CUAs), akin to Jarvis from “Iron Man”, are becoming a reality. GUI
grounding is a core component for CUAs to execute actual actions, similar to
mechanical control in robotics, and it directly determines the success or
failure of the system. Grounding selects actions such as clicking and typing
and predicts their parameters, such as the coordinates of a click, as
illustrated in the sketch below. Current end-to-end grounding models still
achieve less than 65\% accuracy on challenging benchmarks like
ScreenSpot-pro and UI-Vision, indicating they are far from ready for
deployment.
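To make this concrete, here is a minimal, purely illustrative Python sketch of what a grounding model must emit at each step: an action type plus its parameters. The schema and field names are hypothetical, chosen for illustration, and are not the Phi-Ground interface.

\begin{verbatim}
# Hypothetical action schema for one GUI grounding step (illustration only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class GroundingAction:
    action: str                 # e.g., "click" or "type"
    x: float                    # normalized click x-coordinate in [0, 1]
    y: float                    # normalized click y-coordinate in [0, 1]
    text: Optional[str] = None  # payload for "type" actions

# A grounding model maps (screenshot, instruction) to such an action,
# e.g., clicking a button at 87% of screen width, 4% of screen height:
step = GroundingAction(action="click", x=0.87, y=0.04)
\end{verbatim}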
being ready for deployment. % , as a single misclick can result in unacceptable
consequences. In this work, we conduct an empirical study on the training of
grounding models, examining details from data collection to model training.
Ultimately, we develop the Phi-Ground model family, which achieves
state-of-the-art performance across all five grounding benchmarks for models
under 10B parameters in agent settings. In the end-to-end model setting, our
model still achieves state-of-the-art results, with scores of \textbf{43.2} on
ScreenSpot-pro and \textbf{27.2} on UI-Vision. We believe that the
various details discussed in this paper, along with our successes and failures,
not only clarify the construction of grounding models but also benefit other
perception tasks. Project homepage:
\href{https://zhangmiaosen2000.github.io/Phi-Ground/}{https://zhangmiaosen2000.github.io/Phi-Ground/}