
California Robot Is Teaching Itself to Walk Like a Human Toddler

Darwin's baby steps speak to what many researchers believe will be the greatest leap in robotics.
[Image: Darwin | Credit: UC Berkeley Robot Learning Lab]

Will robots soon be able to teach themselves ... everything?

There's a robot in California teaching itself to walk. Its name is Darwin, and like a toddler, it teeters back and forth, trying and falling, and then trying again before getting it right — all in a UC Berkeley lab.

But it's not actually Darwin doing all this. It's a neural network designed to mimic the human brain.

Darwin's baby steps speak to what many researchers believe will be the greatest leap in robotics — a kind of general machine learning that allows robots to adapt to new situations rather than respond to narrow programming.


Developed by Pieter Abbeel and his team at UC Berkeley's Robot Learning Lab, the neural network that allows Darwin to learn is not programmed to perform any specific function, like walking or climbing stairs. The team is using what's called "reinforcement learning" to try to make the robot adapt to situations as a human child would.

Like a child's brain, reinforcement learning relies on trial and error.

"Imagine learning a new skill, like how to ride a bike," said John Schulman, a Ph.D. candidate in computer science at UC Berkeley in Abbeel's group. You're going to fall a lot, but then, "after some practice, you figure it out."

Related: 'Father of Robotics' Joseph Engelberger Dies at Age 90

Robots are pretty good at walking on flat ground, but anytime a variable is introduced, like a step or a slope, they often can't adapt.

Earlier this year, at the DARPA Robotics Challenge, some of the most high-tech robots in the world navigated an obstacle course designed to mimic real-world disaster sites, like Fukushima. Nearly all of them failed, prompting a parade of GIFs on the Internet depicting falling robots.

In typical structured settings, like factories, robots are programmed to repeat the same function over and over again, said Sergey Levine, another scientist working with Abbeel. For complex environments that might change, robots need to be more sophisticated and able to adapt, he said.

To enable robots to adapt, the team at UC Berkeley is developing learning methods that aren't tied to any specific behavior.

"We've started looking at much less restrictive representations," Levine said. "We are basically not telling the robot anything about doing the task." Instead, they are using large neural networks that are general purpose. "It's kind of like the difference between a circuit built for one specific job," he explained, "and a general-purpose computer."

This approach enables the team to explore other functionalities, as well.

Related: Researchers Aim to Teach Robots How and When to Say 'No' to Humans

"There's very little in these algorithms that's specific to [locomotion]," Levine said. "In reality, these methods are really designed from the ground up to be general." They aren't aimed at walking, or grasping, or doing the dishes — but can be applied to all of those things.

Less restrictive technology is also apt to make robots cheaper to build.

"Right now if, for example, you have a company that builds robots, for every piece of hardware that you build, you also have to figure out how you are going to manually control it," Levine said. If a robot can learn on its own, the manual inputs needed for it to function would be fewer, thereby making it less costly to make.

In real scenarios, it's difficult to anticipate every situation in advance, and nearly impossible to program for all of them, said Martial Hebert, a professor at The Robotics Institute at Carnegie Mellon University. "The grand challenge is to be able to teach robots how to do end-to-end tasks."

In an ideal world, a robot will be able to learn simply by demonstration, with no need for expensive, time-consuming programming, Hebert said. "It will be much easier to configure them."
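Behavior cloning is one simple version of learning by demonstration: record what the demonstrator did in each situation, then fit a model that imitates it. A minimal sketch, using random stand-in data where recorded demonstrations would go:

```python
# Learning from demonstration, reduced to its simplest form:
# supervised fitting of a policy to (observation, action) pairs.
# The data below is a random placeholder for real demonstrations.

import numpy as np

rng = np.random.default_rng(1)
obs = rng.normal(size=(200, 10))          # 200 demonstrated states
demo_W = rng.normal(size=(10, 4))         # stands in for the human
acts = obs @ demo_W                       # the demonstrator's actions

# Fit a linear policy that reproduces the demonstrated behavior.
W, *_ = np.linalg.lstsq(obs, acts, rcond=None)

new_obs = rng.normal(size=10)
print("imitated action:", new_obs @ W)    # act like the demonstrator
```

A linear fit is a drastic simplification — real systems use deep networks and far richer data — but the configuration step it replaces is exactly the manual programming Hebert describes.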

That, in turn, could help lower the purchase price for robots, making them accessible to everyday consumers — which right now they aren't. Boston Dynamics' Atlas robot, used by several DARPA teams in the Challenge, carries a price tag of over $1 million.

Related: NASA Teams Up With Universities to Prep Robots for Space Exploration

"To get robots into our everyday environments will require equipping them with the ability to deal with a very large range of variation," Abbeel said. "My belief is that the most practical way to equip robots with such skills is to equip them with the ability to learn."

The scientists at UC Berkeley hope to move closer to a world where robots are autonomous, nimbly performing many functions typically done by humans. In the future, robots may be able to provide care for the elderly, conduct rescue efforts, clean up in disaster areas and even deliver mail, Schulman said.

There are still many situations that will need remote human control, like for operations that need to be executed very precisely, Hebert said. But the recent research suggests a new direction for the robotics field. "It's moving away from pre-programming of robots and toward robots that are more and more able to generalize from example," he said.

Abbeel's team is working to drive that shift. "More work is necessary to move these results from simulation to the real world, but I think eventually this research will have a very big impact on robotics," Schulman said. "It might be the path to actual humanoid robots, like Star Wars' C-3PO."