George the robot is playing hide-and-seek with scientist Alan Schultz. George whirrs and hides behind a post until he’s found.
Then a bit later, he hunts for and finds Schultz hiding.
If that sounds childish, consider that Schultz is working his way up to teaching the robot to play Capture the Flag.
What’s so impressive about robots playing children’s games?
For a robot to find a place to hide, and then hunt for its human playmate, is a new level of interaction between humans and machines. The machine must take cues from people and behave accordingly.
This is the beginning of a real robot revolution: giving robots some humanity.
“Robots in the human environment, to me that’s the final frontier,” said Cynthia Breazeal, director of the Robotic Life group at the Massachusetts Institute of Technology. “The human environment is as complex as it gets; it pushes the envelope.”
Robotics is moving from software and gears operating remotely — Mars, the bottom of the ocean or assembly lines — to finally working with, beside and even on people.
“Robots have to understand people as people,” Breazeal said. “Right now, the average robot understands people like a chair: It’s something to go around.”
The researchers who are injecting humanity into robotics are creating robots that can connect with humans in a more “thoughtful” way. They are building robot receptionists and robot physical therapists. They are finishing work on Huggable, a teddy bear robot line that will help monitor the mental and physical health of sick children for only a few thousand dollars apiece. Robots are coaxing autistic kids out of their shells. And there’s a cute penguin robot, Mel, that makes eye contact with people and nods when they talk.
We will first see these robots in the most human-oriented fields — those that require special care in dealing with the elderly, the young and the disabled.
That’s why George’s game is important. As a machine, George is not a breakthrough. He’s an off-the-shelf robot reprogrammed at the Navy Center for Applied Research in Artificial Intelligence, which Schultz directs.
George moves on a bulky red wheeled base, his binoculars gazing around the room below a computer screen with an animated face — complete with blinking blue eyes. What’s different is the way he interacts with people.
“George, go hide,” Schultz orders the robot in a cluttered room at the naval research lab. George’s “head” rotates several times. Computer code zips by on the monitor as George “thinks.”
Finally, George announces in a mechanical, definitely non-human voice: “I will hide now.”
He ducks behind some boxes and declares: “I made it to the goal.”
Schultz finds George easily. George has a harder time spotting Schultz, but eventually succeeds.
For a child, this is nothing, but for a robot this should lead to a lot.
“We have only scratched the surface,” said Sebastian Thrun, the Stanford Artificial Intelligence Lab director who won the Defense Department’s Grand Challenge for a self-driving robot car through the desert last year. He predicted that 10 years from now robots will roam the health care system and that in our homes, multi-armed robots will be doing the cleaning. “There will be a lot of personalized devices,” he says.
That’s a big switch. The latest commercial home robots — the $280 vacuuming iRobot Roomba, with more than 2 million of the disc-shaped devices sold, and its floor-cleaning cousins — are designed to work best when people leave the room. But the promise of robots for scientists is represented by Rosie, the vacuuming robot of “The Jetsons” cartoon series, who dutifully works as Jane blithely walks by.
“If Rosie is going to be around and in your face, it would be good if the interaction is natural and easy,” says Rod Brooks, director of MIT’s artificial intelligence lab.
So after spending decades tinkering with wiring, some roboticists — in a field usually dominated by male techno-geeks — did the unthinkable. They put aside their hardware and software and studied how humans think, work together and communicate, so they could apply the lessons to robots.
The new field of human-robot interaction was born. Unlike the rest of robotics, many of its leaders are women. It has social scientists, language specialists, medical doctors and even ethicists who wonder if putting robots into places like nursing homes is the right thing to do.
That’s a big change from 50 years ago, when the field of artificial intelligence was created at a conference at Dartmouth College. The experts focused on puzzles and chess and skipped over concepts such as perception: a sense of where you are, what’s around you and how to interact.
“They all thought perception was easy — a 2-year-old could do that — but smart people play chess,” said Brooks, co-founder of iRobot Corp. “They all missed it and Hollywood missed it. The stuff a 2-year-old could do, that’s the hard stuff.”
One preschooler-type skill, the ability to take someone else’s perspective, “turned out to be a very important capability that we needed on our robots so that they could really work comfortably with humans,” said Schultz.
Thus, Schultz hopes in the next year or so to have a robot that could, like an old-time movie detective working a case, tail a person unseen as that person walks through the naval research lab campus.
Similarly, researchers are working on teaching robots language reasoning — not just dumping a dictionary into a database — along with gestures and eye contact, so the machines can understand the many ways people communicate. At NASA, astronauts are working with Schultz and a spacewalking prototype called Robonaut to make machines understand when an astronaut points to something and says “there.”
We as humans understand that, but getting robots to put those cues together is proving to be a big leap, he said. And then there are subtle cues that humans pick up without even knowing it, such as nods and eye contact.
Research scientist Candy Sidner at the Mitsubishi Electric Research Lab in Cambridge, Mass., found that people respond better to more animated robots — those that nod, move and point. So she developed Mel, a pointing, nodding penguin robot. You nod at Mel, Mel nods back.
“It’s absolutely very compelling. People tell me, ‘I like Mel because he’s really kind of cute,’” Sidner said.
How should a robot look? There’s debate on that. On one extreme are the stroke-therapy robots of MIT scientists Neville Hogan and Hermano Igo Krebs. Those look like exercise machines with video game screens. They guide the arms and legs of paralyzed stroke patients through physical therapy, and the patients often don’t even realize the machines are robots.
On the other end of the spectrum are David Hanson of Dallas and Osaka University professor Hiroshi Ishiguro, whose robots look creepily human. Ishiguro’s robot, Geminoid, looks just like Ishiguro.
Such resemblances run into what roboticists call the “uncanny valley.” The idea is that people respond better to robots the more closely they resemble humans — up to a point. If the resemblance is too good, people “are weirded out,” Sidner said. At that point, acceptance plummets. That’s why Sidner prefers her penguin robot.
Sherry Turkle at MIT worries about robots that seem too human.
“We’re cheap dates,” she says. “If an entity makes eye contact with you, if an entity reaches toward you in friendship, we believe there is somebody there ... But that doesn’t mean that there is. That just means that our Darwinian buttons are being pushed.”
Turkle, who directs the MIT Initiative on Technology and Self, fears people will be subconsciously tricked into giving robots more credit than they deserve. Her point is that when you are sick, hurt, or elderly, “you really do want a person,” not a robot.
Unfortunately, there’s a shortage of people working in nursing homes and caring for old people and the disabled, said Maja Mataric, director of the University of Southern California’s Center for Robotics and Embedded Systems. The average stroke victim gets 39 minutes of active exercise a day when six hours a day is needed, she said, so robots can free up the few nurses for more nurturing activities.
Mataric adjusts her robots’ personalities to fit the needs of stroke patients — nurturing buddy or goal-pushing coach.
And low-functioning autistic children actually seem to relate better to robots than to humans, Mataric said. “You’ll see a child smile that has never smiled before. No one knows why it happens.”
The scientists trying to engineer robots to work with humans are learning more than they expected. They have a new appreciation for our own unique abilities.
Said Deb Roy, director of MIT’s Cognitive Machines Group: “It’s not until you try to build a machine that does the same task (that people do) ... that you realize how incredibly hard it is.”