Google’s Gemini-Powered Robot Navigates Offices, Follows Complex Commands

Google DeepMind has unveiled a chatbot-powered robot capable of navigating an office environment and following complex verbal and visual instructions, demonstrating the potential for large language models to enable more intelligent and useful physical machines.

Gemini chatbot upgrade enables advanced robot capabilities: Google DeepMind’s robot leverages the latest version of the company’s Gemini large language model to understand commands and navigate its surroundings:

  • The robot can parse complex verbal instructions like “Find me somewhere to write” and lead a person to an appropriate location, such as a whiteboard.
  • Gemini’s ability to handle video and text input, combined with pre-recorded video tours of the office, allows the robot to reason about its environment and navigate accurately (a simplified sketch of this idea follows the list below).
  • When given a command like “Where did I leave my coaster?”, the robot proved up to 90% reliable at finding the correct location.
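For readers who want a concrete picture of the "tour frames plus text" idea, the sketch below shows one way a multimodal Gemini call could pick a destination from labelled walkthrough images. It uses the public google-generativeai Python SDK, but the frame paths, labels, prompt, and overall scheme are illustrative assumptions, not DeepMind's actual pipeline.

    # Illustrative only: ask a multimodal Gemini model which labelled tour
    # frame best satisfies a user's request. The SDK calls are real, but the
    # frames, labels, and prompt are invented for this example.
    import google.generativeai as genai
    from PIL import Image

    genai.configure(api_key="YOUR_API_KEY")           # assumes the public Gemini SDK
    model = genai.GenerativeModel("gemini-1.5-flash")

    # Frames captured during a walkthrough of the office, tagged by location.
    tour_frames = {
        "kitchen": "tour/kitchen.jpg",
        "whiteboard_area": "tour/whiteboard.jpg",
        "lounge": "tour/lounge.jpg",
    }

    def locate(request: str) -> str:
        """Return the label of the tour location that best fits the request."""
        parts = [
            "You are helping a robot navigate an office. Given the labelled "
            "frames below, reply with only the label of the best place to "
            f"fulfil this request: {request!r}"
        ]
        for label, path in tour_frames.items():
            parts.append(f"Frame label: {label}")
            parts.append(Image.open(path))
        return model.generate_content(parts).text.strip()

    print(locate("Find me somewhere to write"))       # expected: whiteboard_area

In DeepMind's demonstration the model reasons over a full video tour rather than a few labelled stills, but the division of labour is the same: the language model answers the "where should I go?" question.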

Integrating language models with robotics algorithms: Google’s helper robot combines the Gemini language model with an algorithm that generates specific actions for the robot to take in response to commands and its visual input (a minimal sketch of this split appears after the bullets below).

  • This integration of natural language processing and robotics enables more intuitive human-robot interaction and greatly improves the robot’s usability.
  • Researchers plan to test the system on different types of robots and believe Gemini will be able to handle even more complex questions that require contextual understanding.
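The two-layer design described above, a language model for high-level goal selection and a separate algorithm for low-level actions, can be sketched as follows. Every class and method name here is hypothetical and chosen purely for illustration; the coverage does not specify how DeepMind's action generator actually works.

    # Hypothetical sketch of the two-layer design: a language model picks a
    # goal location, and a separate low-level policy turns it into motion.
    # All names here are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        x: float
        y: float

    # Semantic labels mapped to metric coordinates, e.g. built from the tour.
    OFFICE_MAP = {
        "whiteboard_area": Waypoint(4.2, 1.5),
        "kitchen": Waypoint(10.0, -3.0),
    }

    def locate(request: str) -> str:
        """Stand-in for the multimodal query in the earlier sketch; a real
        system would call the vision-language model here."""
        return "whiteboard_area"

    class NavigationPolicy:
        """Stand-in for a low-level planner/controller."""
        def go_to(self, goal: Waypoint) -> None:
            # A real implementation would plan a path and stream velocity
            # commands to the robot base; this sketch only logs the intent.
            print(f"Driving to ({goal.x:.1f}, {goal.y:.1f})")

    def handle_command(request: str, policy: NavigationPolicy) -> None:
        label = locate(request)          # high-level reasoning (language model)
        goal = OFFICE_MAP.get(label)
        if goal is None:
            print(f"Unknown location suggested by the model: {label}")
            return
        policy.go_to(goal)               # low-level action generation

    handle_command("Find me somewhere to write", NavigationPolicy())

One design motivation for such a split is that the high-level reasoner can stay the same while the low-level policy is adapted to a different robot body, which is consistent with the researchers' plan to test the system on other types of robots.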

A growing trend in AI-powered robotics research: Google DeepMind’s demonstration is part of a larger movement in both academia and industry to explore how large language models can enhance the capabilities of physical machines.

  • The recent International Conference on Robotics and Automation featured nearly two dozen papers on using vision-language models in robotics.
  • Startups like Physical Intelligence and Skild AI have raised significant funding to develop robots with general problem-solving abilities by combining large language models with real-world training.

Analyzing Deeper: While the Google DeepMind robot showcases impressive navigation and reasoning skills, it operates within the controlled environment of an office space. Adapting this technology to more complex and unpredictable real-world settings will likely present additional challenges. Moreover, as language models become increasingly integral to robotics, ensuring the safety, reliability, and transparency of these systems will be crucial. Nonetheless, the rapid advancements in AI-powered robotics hint at a future where intelligent machines can more seamlessly assist and collaborate with humans in various domains.

Source: Google DeepMind's Chatbot-Powered Robot Is Part of a Bigger Revolution (Wired)
