New Delhi: In a major leap for edge robotics, Google DeepMind has introduced Gemini Robotics On-Device, a new AI model that enables robots to function without needing an internet connection. This development brings greater autonomy, speed, and data privacy to real-world robotics, especially in locations where connectivity is limited or restricted.
Carolina Parada, head of robotics at Google DeepMind, described the release as a practical shift toward making robots more independent. “It’s small and efficient enough to run directly on a robot,” she told The Verge. “I would think about it as a starter model or as a model for applications that just have poor connectivity.”
Despite being a more compact version of its cloud-based predecessor, the on-device variant is surprisingly robust. “We’re actually quite surprised at how strong this on-device model is,” Parada added, pointing to its effectiveness even with minimal training.
The model can perform tasks almost immediately after deployment and requires only 50 to 100 demonstrations to learn new ones. Initially developed using Google’s ALOHA robot, it has since been adapted to other robotic systems including Apptronik’s Apollo humanoid and the dual-armed Franka FR3.
Tasks such as folding laundry or unzipping bags can now be executed entirely on-device, without the round-trip latency of cloud processing. This is a key differentiator from other advanced systems like Tesla's Optimus, which still rely on cloud connectivity for processing.
The local processing aspect is a highlight for sectors that prioritize data security, such as healthcare or sensitive industrial settings. “When we play with the robots, we see that they’re surprisingly capable of understanding a new situation,” Parada noted, emphasizing the model’s flexibility and adaptability.
However, Google acknowledges some trade-offs. Unlike the cloud-based Gemini Robotics suite, the on-device model lacks built-in semantic safety tools. Developers are encouraged to implement safety mechanisms independently, using APIs like Gemini Live and integrating with low-level robotic safety systems. “With the full Gemini Robotics, you are connecting to a model that is reasoning about what is safe to do, period,” said Parada.
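Since the on-device model ships without built-in semantic safety tools, that filtering burden falls on developers. A minimal sketch of the pattern in Python follows; every name in it (`SafetyGate`, `Command`, `is_action_safe`, the example commands and limits) is hypothetical and not part of any Google API. In a real deployment, the semantic check might query an external reasoning model such as Gemini Live, and the numeric limits would come from the robot's low-level safety system.

```python
# Hypothetical developer-side safety gate for an on-device robot policy.
# All names here are illustrative, not drawn from Google's APIs.
from dataclasses import dataclass

@dataclass
class Command:
    action: str   # e.g. "fold", "unzip", "move_arm"
    target: str   # object or location the action applies to
    speed: float  # commanded speed, arbitrary units

class SafetyGate:
    """Filters model-proposed commands before they reach the actuators."""

    def __init__(self, blocked_actions, max_speed):
        self.blocked_actions = set(blocked_actions)
        self.max_speed = max_speed

    def is_action_safe(self, cmd: Command) -> bool:
        # Placeholder semantic check: a real system would consult an
        # external reasoning model or a curated policy here.
        if cmd.action in self.blocked_actions:
            return False
        # Low-level limit check, analogous to hardware safety interlocks.
        return cmd.speed <= self.max_speed

    def filter(self, commands):
        return [c for c in commands if self.is_action_safe(c)]

gate = SafetyGate(blocked_actions={"cut"}, max_speed=1.0)
proposed = [
    Command("fold", "laundry", 0.5),
    Command("cut", "cable", 0.2),       # rejected: semantically blocked
    Command("move_arm", "shelf", 2.0),  # rejected: exceeds speed limit
]
approved = gate.filter(proposed)  # only the "fold" command survives
```

The design point this illustrates is the separation Parada describes: a semantic layer that reasons about *what* is safe to do, sitting above low-level limit checks that enforce *how* motions execute.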
This announcement follows Google’s recent launch of the AI Edge Gallery, an Android-based app that lets users run generative AI models offline using the compact Gemma 3 1B model. Much like Gemini Robotics On-Device, this app focuses on privacy-first, low-latency experiences using frameworks like TensorFlow Lite and open-source models from Hugging Face.
Together, these launches signal Google’s broader move to decentralize AI, bringing high-performance intelligence directly to user devices—be it phones or robots.