New AI Lets Robots Develop Self-Knowledge and Adapt Like Living Beings

By simply watching themselves move, robots can now self-model, adapt to damage, and keep functioning—paving the way for smarter, more autonomous machines that require less human intervention.

A robot observes its reflection in a mirror, learning its own morphology and kinematics for autonomous self-simulation. The process highlights the intersection of vision-based learning and robotics, where the robot refines its movements and predicts its spatial motion through self-observation. Credit: Jane Nisselson/Columbia Engineering

A new study from researchers at Columbia Engineering reveals that robots can teach themselves about the structure and movement of their bodies by watching their motions with a camera. Equipped with this knowledge, the robots could not only plan their actions but also overcome damage to their bodies.

"Like humans learning to dance by watching their mirror reflection, robots now use raw video to build kinematic self-awareness," says study lead author Yuhang Hu, a doctoral student at the Creative Machines Lab at Columbia University, directed by Hod Lipson, James and Sally Scapa Professor of Innovation and chair of the Department of Mechanical Engineering. "Our goal is a robot that understands its own body, adapts to damage, and learns new skills without constant human programming."

Most robots learn to move in simulations. Once they can move in these virtual environments, they are released into the physical world, where they can continue to learn. "The better and more realistic the simulator, the easier it is for the robot to make the leap from simulation into reality," explains Lipson. 

However, creating a good simulator is an arduous process that typically requires skilled engineers. The researchers taught a robot how to create a simulator simply by watching its own motion through a camera. "This ability not only saves engineering effort but also allows the simulation to continue and evolve with the robot as it undergoes wear, damage, and adaptation," Lipson says.

In the new study, the researchers developed a way for robots to autonomously model their own 3D shapes using a single regular 2D camera. This breakthrough was driven by three brain-mimicking AI systems known as deep neural networks. These inferred 3D motion from 2D video, enabling the robot to understand and adapt to its own movements. The new system could also identify alterations to the robots' bodies, such as a bend in an arm, and help them adjust their motions to recover from this simulated damage. The researchers detailed their findings in the journal Nature Machine Intelligence.
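To make the idea concrete, below is a minimal, purely illustrative sketch of the kind of vision-to-kinematics mapping the article describes: a neural network that takes a single 2D camera frame of the robot and regresses an estimate of its 3D joint positions. This is not the authors' code. The study uses three coordinated deep networks, while this sketch collapses the idea into one network; the architecture, image size, joint count, and supervision signal are all simplifying assumptions, and the class name FrameToKinematics is hypothetical.

```python
# Illustrative sketch only (not the published method): map one RGB camera frame of the
# robot to estimated 3D joint positions, trained here against dummy targets.
import torch
import torch.nn as nn

class FrameToKinematics(nn.Module):
    """Predict 3D positions of N robot joints from one RGB frame (assumed 128x128)."""
    def __init__(self, num_joints: int = 6):
        super().__init__()
        # Small convolutional encoder that turns the image into a 128-dim feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Regress an (x, y, z) coordinate for each joint.
        self.head = nn.Linear(128, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 3, H, W) -> predictions: (batch, num_joints, 3)
        return self.head(self.encoder(frames)).view(-1, self.num_joints, 3)

if __name__ == "__main__":
    model = FrameToKinematics(num_joints=6)
    frames = torch.rand(8, 3, 128, 128)   # a batch of camera frames (dummy data)
    target = torch.rand(8, 6, 3)          # placeholder 3D joint positions (dummy data)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = nn.functional.mse_loss(model(frames), target)
    loss.backward()
    optimizer.step()
    print("predicted joint positions:", model(frames).shape)  # torch.Size([8, 6, 3])
```

In the study's framing, a learned mapping like this serves as the robot's self-model: once the robot can predict how its body sits in 3D space from ordinary video, it can use that model to plan motions and to notice when its predictions stop matching reality, for example after an arm is bent.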

Such adaptability might prove useful in a variety of real-world applications. For example, "imagine a robot vacuum or a personal assistant bot that notices its arm is bent after bumping into furniture," Hu says. "Instead of breaking down or needing repair, it watches itself, adjusts how it moves, and keeps working. This could make home robots more reliable: no constant reprogramming required."

Another scenario might involve a robot arm getting knocked out of alignment at a car factory. "Instead of halting production, it could watch itself, tweak its movements, and get back to welding, cutting downtime and costs," Hu says. "This adaptability could make manufacturing more resilient."

Teaching Robots to Build Simulations of Themselves

As we hand over more critical functions to robots, from manufacturing to medical care, we need these robots to be more resilient. "We humans cannot afford to constantly baby these robots, repair broken parts and adjust performance. Robots need to learn to take care of themselves, if they are going to become truly useful," says Lipson. "That's why self-modeling is so important." 

The ability demonstrated in this study is the latest in a series of projects that the Columbia team has released over the past two decades, where robots are learning to become better at self-modeling using cameras and other sensors. 

In 2006, the research team's robots could use observations to create only simple, stick-figure-like simulations of themselves. About a decade ago, the robots began creating higher-fidelity models using multiple cameras. In this study, the robot was able to create a comprehensive kinematic model of itself using just a short video clip from a single regular camera, akin to looking in a mirror. The researchers call this newfound ability "kinematic self-awareness."

"We humans are intuitively aware of our body; we can imagine ourselves in the future and visualize the consequences of our actions well before we perform those actions in reality," explains Lipson. "Ultimately, we would like to imbue robots with a similar ability to imagine themselves because once you can imagine yourself in the future, there is no limit to what you can do."
