Google debuts AI model for robotics, challenging Meta, OpenAI

By Julia Love and Davey Alba | Bloomberg

Alphabet Inc.’s artificial intelligence lab is debuting two new models focused on robotics, which will help developers train robots to respond to unfamiliar scenarios — a longstanding challenge in the field.

Research unit Google DeepMind will release Gemini Robotics, a new branch of its flagship AI model aimed at developing robots that are more dexterous and interactive, it said Tuesday. Another model, Gemini Robotics-ER, specializes in spatial understanding, and will help robot makers build new programs using Gemini’s reasoning capabilities.

By applying Gemini to robots, Google is moving closer to developing “general purpose robotics” that can field a variety of tasks, DeepMind engineer Kanishka Rao said in a media briefing. “Our worlds are super messy and dynamic and rich, and I think a general purpose intelligent robot needs to be able to deal with that messiness.”

The Silicon Valley dream of building robots that can perform tasks on par with humans is attracting renewed attention and investment. Meta Platforms Inc., Tesla Inc. and OpenAI have ramped up their work on robotics, and startups are in talks to raise funding at sky-high valuations.

In a pre-taped demonstration on Tuesday, Google researchers showed how robots running on their technology responded to simple commands. One robot, standing before a smattering of letter tiles, spelled “Ace” after a trainer asked it to make a word.

Engineers also set out a miniature toy basketball court in the lab. Another robot, when asked to perform a dunk, pressed a small plastic ball through the hoop.


“The team was really excited when we first saw the robot dunk the basketball,” Rao said. “It’s because the robot has never ever seen anything related to basketball. It’s getting this general concept, understanding of what a basketball net looks like and what the word ‘slam dunk’ means from Gemini and is able to connect those to actually accomplish the task in the physical world.”

Google has a somewhat tortured history in robotics. More than a decade ago, the company acquired at least eight robotics companies in pursuit of cofounders Larry Page and Sergey Brin’s goal of building consumer-oriented robots with the help of machine learning. Over the years, those efforts coalesced within Google X, Alphabet’s moonshot lab, which in 2021 spun out a unit called Everyday Robots that specialized in robots handling daily tasks like sorting trash. About two years later, Alphabet announced it would shut down the unit as part of its sweeping 2023 budget cuts.

Still, Alphabet never fully exited the robotics business. At the time, the company said it would consolidate some of the technology and team into existing robotics efforts. Now, the company appears to be rebooting these efforts under the banner of generative AI.


During the briefing, Google stressed that the work was in an “early exploration” phase. Vikas Sindhwani, a DeepMind research scientist, said the Gemini models had been developed with a “strong understanding of common sense safety” in physical environments. He said Google plans to deploy the robots gradually, starting at safe distances from humans and becoming more interactive and collaborative over time as safety performance improves.

Google said it would start exploring Gemini’s robotics capabilities with companies in the field, including Apptronik, which it is partnering with to develop humanoid robots. Other partners testing its Gemini Robotics-ER model include Agile Robots and Boston Dynamics — which Alphabet acquired in 2013, then later sold to SoftBank Group Corp.

More stories like this are available on bloomberg.com

©2025 Bloomberg L.P.
