
Google DeepMind has unveiled Gemini Robotics On-Device, a new artificial intelligence model that runs directly on robotic hardware without needing a constant internet connection. The announcement was made Tuesday in an official blog post by Carolina Parada, senior director and head of robotics at Google DeepMind.
Unlike cloud-reliant systems, this new model is designed to operate entirely locally, making it a valuable tool in real-world environments where low latency matters and network connectivity is unreliable or unavailable.
Gemini Robotics On-Device builds on the original Gemini Robotics model introduced in March. This latest version is tailored for bi-arm robots, offering lower-latency responses and robust task handling, even in network-restricted environments.
According to Google, the model generalizes well across different tasks and environments. It was shown performing dexterous actions like unzipping bags and handling previously unseen objects, and it reportedly adapts to new tasks with as few as 50 to 100 demonstrations.
Parada wrote, “Gemini Robotics On-Device achieves strong visual, semantic and behavioral generalization… while operating directly on the robot.”
Is the AI model adaptable to different robot bodies?
While initially trained for ALOHA robots, Google confirmed that the On-Device model has been successfully adapted for Franka FR3 and Apptronik’s Apollo humanoid. Even with these different designs, the AI model was able to follow natural language commands and complete precise tasks like folding dresses and performing belt assembly operations.
According to Google, this flexibility showcases how the same AI brain can be transferred across different robotic platforms with minimal adjustment.
To help developers experiment and fine-tune the model, Google is also rolling out the Gemini Robotics SDK. The toolkit allows users to test the AI in MuJoCo, a physics simulator built for robotic modeling. Access to the SDK is currently available to those enrolled in Google’s trusted tester program.
With this setup, developers can train the AI on custom tasks using real or simulated demonstrations. Google notes that the system is built to “support rapid experimentation” and can improve performance through fine-tuning.
How safe is it?
Google says safety is top of mind. The model is part of a larger safety-first initiative overseen by DeepMind’s Responsible Development & Innovation (ReDI) team and the Responsibility & Safety Council.
These groups ensure that every stage, from instruction processing to physical action, undergoes thorough testing to prevent unsafe behaviors. Safety benchmarks and “red-teaming” are recommended before deploying the model in real-world use.
How will this new AI model impact the industry?
With competitors like NVIDIA, Hugging Face, and South Korea’s RLWRLD also exploring robotics, Google’s move reinforces its position at the forefront of physical-world AI deployment. The introduction of Gemini Robotics On-Device may accelerate progress in autonomous machines for use in homes, factories, and even disaster zones.