Advanced AI News
Google DeepMind

Google DeepMind unveils new robotics AI model that can sort laundry

By Advanced AI Editor | September 26, 2025 | 4 mins read

Google DeepMind has unveiled artificial intelligence models that further advance reasoning capabilities in robotics, enabling robots to solve harder problems and complete more complicated real-world tasks, such as sorting laundry and recycling rubbish.

The company’s new robotics models, called Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, are designed to help robots complete multi-step tasks by “thinking” before they act, as part of the tech industry’s push to make the general-purpose machines more useful in the everyday world.

According to Google DeepMind, a robot trained using its new model was able to plan how to complete tasks that might take several minutes, such as folding laundry into different baskets based on colour.
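The "thinking before acting" behaviour described above can be sketched as a plan-then-execute loop. This is a minimal, hypothetical illustration: the hard-coded task decomposition and the `plan`/`execute` helpers are stand-ins invented here, whereas in Gemini Robotics-ER the plan would be produced by the reasoning model itself.

```python
# Toy sketch of a "think, then act" loop: produce a full multi-step
# plan first, then execute the steps one at a time.

def plan(task: str) -> list[str]:
    """Stand-in for the reasoning model: break a task into sub-steps."""
    plans = {
        "sort laundry by colour": [
            "pick up next garment",
            "classify garment colour",
            "place garment in matching basket",
        ],
    }
    # Unknown tasks fall back to a single-step plan.
    return plans.get(task, [task])

def execute(step: str) -> str:
    """Stand-in for the low-level action model executing one step."""
    return f"done: {step}"

def run(task: str) -> list[str]:
    # Think first (full plan), then act step by step.
    return [execute(step) for step in plan(task)]

results = run("sort laundry by colour")
```

The point of the split is that a multi-minute task survives as an explicit plan, rather than the robot reacting to one instruction at a time.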

The development comes as tech groups, including OpenAI and Tesla, are racing to integrate AI models into robots in the hope that they could transform a range of industries, from healthcare to manufacturing.

“Models up to now were able to do really well at doing one instruction at a time,” said Carolina Parada, senior director and head of robotics at Google DeepMind. “We’re now moving from one instruction to actually genuine understanding and problem solving for physical tasks.” 

In March, Google DeepMind unveiled the first iteration of these models, which took advantage of the company’s Gemini 2.0 system to help robots adapt to new situations, respond quickly to verbal instructions or changes in their environment, and manipulate objects with dexterity.

While that version was able to reason how to complete tasks, such as folding paper or unzipping a bag, the latest model can follow a series of instructions and also use tools such as Google search to help it solve problems.

In one demonstration, a Google DeepMind researcher asked the robot to pack a beanie into her bag for a trip to London. The robot informed the researcher that rain was forecast for several days of the trip, and so packed an umbrella into the bag as well.

The robot was also able to sort rubbish into the appropriate recycling bins by first using online tools to work out that it was based in San Francisco, and then searching the web for the city’s recycling guidelines.

Gemini Robotics 1.5 is a vision-language-action model, which combines several different inputs and translates them into actions. Such systems are able to learn about the world through data downloaded from the internet.
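The interface of a vision-language-action model can be sketched roughly as follows. This is a hypothetical toy, not Gemini Robotics 1.5: the `Observation`/`Action` types and the brightness-based policy are invented here purely to show the shape of the mapping, in which a camera frame plus a language instruction go in and low-level motor commands come out.

```python
# Hypothetical sketch of a vision-language-action (VLA) interface:
# multiple input modalities in, a low-level action out.
from dataclasses import dataclass

@dataclass
class Observation:
    image: list[list[int]]   # camera frame (toy greyscale grid, 0-255)
    instruction: str         # natural-language command

@dataclass
class Action:
    joint_deltas: list[float]  # target change for each joint

def vla_policy(obs: Observation) -> Action:
    """Stand-in policy; a real VLA model is a trained neural network."""
    # Toy logic: mean brightness of the frame scales the motion magnitude.
    flat = [px for row in obs.image for px in row]
    scale = sum(flat) / (255 * len(flat)) if flat else 0.0
    n_joints = 6
    return Action(joint_deltas=[scale] * n_joints)

act = vla_policy(Observation(image=[[128, 128], [128, 128]],
                             instruction="pick up the beanie"))
```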

Ingmar Posner, professor of applied artificial intelligence at the University of Oxford, said learning from this kind of internet scale data could help robotics reach a “ChatGPT moment”.

But Angelo Cangelosi, co-director of the Manchester Centre for Robotics and AI, cautioned against describing what these robots are doing as real thinking. “It’s just discovering regularities between pixels, between images, between words, tokens, and so on,” he said.

Another development in Google DeepMind’s new system is a technique called “motion transfer”, which allows one AI model to take skills that were designed for a specific type of robot body, such as robotic arms, and transfer them to another, such as a humanoid robot.


Traditionally, getting robots to move around a space and take action has required meticulous planning and coding, and that training was often specific to a particular type of robot, such as robotic arms. The “motion transfer” breakthrough could help solve a major bottleneck in AI robotics development: the lack of sufficient training data.
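One way to picture motion transfer is a shared skill representation plus a per-embodiment adapter. The sketch below is a hypothetical toy under that assumption: real motion transfer in Gemini Robotics 1.5 is learned, whereas here a skill expressed as an end-effector command is mapped into each body’s joint space by a trivial hand-written adapter, just to show why the same recorded skill can drive two very different robots.

```python
# Toy sketch of "motion transfer": one skill, recorded in a shared
# end-effector space, retargeted to bodies with different joint counts.

def make_adapter(n_joints: int):
    """Return an adapter mapping a 3-D end-effector delta to joint deltas."""
    def adapt(effector_delta: list[float]) -> list[float]:
        # Spread the total commanded motion evenly over the joints
        # (a real adapter would solve inverse kinematics or be learned).
        return [sum(effector_delta) / n_joints] * n_joints
    return adapt

skill_step = [0.1, 0.0, 0.2]          # one step of a skill, shared space
arm = make_adapter(n_joints=7)        # robotic arm embodiment
humanoid = make_adapter(n_joints=23)  # humanoid embodiment

arm_cmd = arm(skill_step)
humanoid_cmd = humanoid(skill_step)
```

The design point is that data collected on one embodiment is not wasted on another, which is exactly the data bottleneck the technique targets.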

“Unlike large language models that can be trained on the entire vast internet of data, robotics has been limited by the painstaking process of collecting real [data for robots],” said Kanishka Rao, principal software engineer of robotics at Google DeepMind.

The company said it still needed to overcome a number of hurdles in the technology. This included creating the ability for robots to learn skills by watching videos of humans doing tasks.

It also said robots needed to become more dexterous as well as reliable and safe before they could be rolled out into environments where they interact with humans. 

“One of the major challenges of building general robots is that things that are intuitive for humans are actually quite difficult for robots,” said Rao.

Source link
