Google DeepMind has announced a breakthrough in robotics that could change the way machines interact with the world. The company’s latest AI upgrades give robots the ability to take on far more complex tasks while also tapping into digital tools like Google Search to guide their actions. Beyond that, the new system enables robots to share what they learn with other machines, regardless of their design or configuration.
The advancement comes through the launch of Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, the newest iterations of DeepMind’s robotics-focused AI models. The “ER” in the latter stands for embodied reasoning: the model’s ability to analyze a situation, make decisions, and act in the physical world with foresight. These updates build on the original Gemini Robotics models that debuted in March of this year.
Carolina Parada, head of robotics at Google DeepMind, explained the significance of this leap. She noted that robots can now perform practical household tasks such as separating laundry into dark and light colors or even packing a suitcase based on the weather in London. “With this update, we’re now moving from one instruction to actually genuine understanding and problem-solving for physical tasks,” Parada said, as quoted by The Verge.
One of the most striking new features is the robots’ ability to consult the web for guidance. For example, if a robot is asked to sort waste into recyclables, compost, and trash, it can search online for location-specific recycling rules before deciding how to proceed. Previously, robots were adept at executing single, predefined instructions but lacked the adaptability to reason through multi-step problems.
Here’s how the system works in practice: given a command, the robot first uses Gemini Robotics-ER 1.5 to interpret its surroundings and, if needed, consult digital resources like Google Search. That model then distills what it gathers into clear, step-by-step natural-language instructions for Gemini Robotics 1.5, which carries out the plan in the real world. This division of labor lets the robot both understand a problem and act on it intelligently.
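To make that division of labor concrete, here is a minimal sketch in Python of the orchestrator-executor pattern the article describes. Every name in it (EmbodiedReasoner, ActionModel, web_search, and the canned recycling plan) is a hypothetical stand-in rather than DeepMind’s actual API; it only illustrates how a reasoning model could turn one command into language steps for an action model to execute.

```python
# Illustrative sketch only: these classes are assumptions made for this
# article, not DeepMind's published API. The point is the split between a
# reasoning model that plans in language and an action model that executes.

from dataclasses import dataclass, field


@dataclass
class EmbodiedReasoner:
    """Stand-in for Gemini Robotics-ER 1.5: perceives, consults tools, plans."""

    def web_search(self, query: str) -> str:
        # Placeholder for a digital-tool call such as Google Search.
        return "glass and cardboard are recyclable; food scraps are compost"

    def plan(self, command: str) -> list[str]:
        # A real system would query the model; we hard-code the article's
        # recycling example to show the shape of the output: language steps.
        if "sort waste" in command:
            rules = self.web_search("local recycling rules")
            return [
                f"Apply the local rules: {rules}",
                "Place recyclables in the blue bin",
                "Place compost in the green bin",
                "Place everything else in the trash",
            ]
        return [command]


@dataclass
class ActionModel:
    """Stand-in for Gemini Robotics 1.5: turns one instruction into motion."""

    log: list[str] = field(default_factory=list)

    def execute(self, instruction: str) -> None:
        # A real action model would emit motor commands; we just record steps.
        self.log.append(f"executing: {instruction}")


def run(command: str) -> list[str]:
    reasoner, actor = EmbodiedReasoner(), ActionModel()
    for step in reasoner.plan(command):  # orchestrator produces language steps
        actor.execute(step)              # executor acts on each step in turn
    return actor.log


if __name__ == "__main__":
    for line in run("sort waste into recyclables, compost, and trash"):
        print(line)
```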
But perhaps the most transformative aspect lies in how these models allow robots to share skills. DeepMind engineers demonstrated that a task learned by an ALOHA2 robot, equipped with dual mechanical arms, could also be successfully performed by an Apptronik Apollo humanoid robot.
“This enables two things for us,” explained Google DeepMind engineer Kanishka Rao. “One is to control very different robots — including a humanoid — with a single model. And secondly, skills that are learned on one robot can now be transferred to another robot.”
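Rao’s description suggests a structure worth sketching: one high-level skill expressed in language, with embodiment-specific adapters underneath. DeepMind has not published how this works internally, so the Embodiment interface below is purely an assumption used to illustrate the idea of a single model driving very different robots.

```python
# Hypothetical sketch of "one model, many bodies". The adapter interface is
# an assumption for illustration; it is not DeepMind's architecture.

from abc import ABC, abstractmethod


class Embodiment(ABC):
    """Anything that can turn a language-level step into motion."""

    @abstractmethod
    def act(self, step: str) -> str: ...


class Aloha2(Embodiment):
    def act(self, step: str) -> str:
        return f"ALOHA2 dual arms: {step}"


class ApolloHumanoid(Embodiment):
    def act(self, step: str) -> str:
        return f"Apollo humanoid: {step}"


def perform(skill: list[str], body: Embodiment) -> None:
    # The same language-level skill drives whichever body is plugged in.
    for step in skill:
        print(body.act(step))


fold_shirt = ["grasp the shirt collar", "fold the sleeves inward", "fold in half"]
perform(fold_shirt, Aloha2())          # a skill learned on one robot...
perform(fold_shirt, ApolloHumanoid())  # ...runs unchanged on another
```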
The implications are vast. A future where one robot’s experience can instantly enhance the capabilities of countless others could accelerate the pace at which robots become useful in homes, factories, and even hospitals. For now, these upgrades show how far AI has come from robots limited to simple commands, pushing them into the realm of genuine understanding and collaboration.