With her sparkling blue eyes, wispy eyelashes and demure smile, Hertz is the center of attention wherever she goes.
If you’re lucky enough to meet her, try to ignore the tangle of wires slinking from behind her face. If you speak with her, talk slowly and loudly. And no matter what you say, don’t be offended if she looks at you blankly and repeatedly asks, “What did you say?”
Hertz isn’t really a she, but rather an it: an animated robot head built in about nine months by self-described “sculptor roboticist” David Hanson.
Hanson and other robot makers believe social robots will one day serve a variety of functions: tutor, companion, even security guard.
But should they look human?
A friendly face in F'rubber
Hanson, who has worked as a designer, sculptor, and robotics developer for Walt Disney Imagineering, Universal Studios and MTV, thinks precise human looks are a must if people are going to effectively communicate with robots.
As with his previous project, K-bot, Hanson sculpted Hertz to resemble his girlfriend. It’s sheathed in a high-tech polymer of Hanson’s own invention called “f’rubber,” which resembles human skin. The face is embedded with tiny motors, so Hertz can smile, frown or wrinkle its forehead.
For now, Hertz is a face mounted to a wooden stool, its disembodied brain a laptop computer. It has no arms, legs or body, although Hanson is planning those enhancements someday.
Hertz’s eyes contain video cameras, enabling it to lock onto a human face and follow it around, provided the person doesn’t move too quickly or stray beyond its limited field of vision. That tracking, along with its limited speech skills, is the extent of Hertz’s abilities.
Despite the embryonic state of his work, Hanson insists he’s on to something.
“Most people doing social robots believe that human faces will turn people off and will disturb them. I think that’s ridiculous,” Hanson said. “The human face is perhaps the most natural paradigm for us to interact with.”
Most experts disagree. They cite one of the principles of social robotics, the so-called “Uncanny Valley” theory.
First described by pioneering Japanese roboticist Masahiro Mori, the theory goes like this: Humans have a positive psychological reaction to robots that look somewhat like humans. But if a robot is made to look very realistic yet somehow isn’t quite right (it has an odd smile, or it doesn’t blink, for example), it seems grotesque instead of comforting.
“Our experience has shown that people quickly lose the suspension of disbelief needed to interact with these creations once they start interacting with them for any length of time, because the artificial intelligence is not capable of producing human-level behavior,” said Reid Simmons, a researcher at Carnegie Mellon University’s Robotics Institute. “I strongly believe that this problem would be exacerbated by having a more humanly realistic robot.”
Science fiction has long taken different approaches to imbuing robots with personal appeal.
In “Star Wars,” the blinks, blurps and beeps of R2-D2 were enough to give the trashcan-shaped machine a wide range of human emotions such as fear and excitement. There was the strikingly human, but emotionally clueless, Data from “Star Trek: The Next Generation.” And in 2001’s “Artificial Intelligence: AI,” unblinking robot boy David Swinton yearned to become real so his flesh-and-blood mother would love him.
Hanson aside, most of today’s roboticists are taking Mori’s theory into consideration.
Sony Corp.’s QRIO robot looks like a young boy in a spacesuit, but Sony researchers say they didn’t want to make it too similar to a human.
“If your design is too close to human form, at a certain point it becomes just too uncanny,” Toshitada Doi, head of Sony’s Intelligent Dynamics Research Institute, says on Sony’s Web site.
Others include GRACE, short for Graduate Robot Attending a ConferencE, built by Simmons and researchers at several other schools. GRACE’s “face” is a flat-screen television capable of displaying a range of emotions.
Kismet, a product of the Massachusetts Institute of Technology’s Humanoid Robotics Group, has exaggerated, fuzzy eyebrows, big blue eyes and floppy ears, but its face is mostly metal and plastic.
25 years from now?
Inventor and author Ray Kurzweil thinks Hanson’s work is significant because realistic facial movement will play an important role in the way future androids respond to humans.
First, however, robots will have to become significantly more intelligent, able to gauge the expressions of the people they encounter. Kurzweil estimates that we’ll begin to see this human level of artificial intelligence around 2029. Until then, he believes less-realistic robots will be more successful.
“If a robot has a face that is not human, then we are more accepting of less-than-human behavior, as we would with an animal or doll,” he said. “Intelligence significantly below that of normal humans stands out more with a robot that looks strikingly human. This creates the impression of a human with impaired intelligence, which may strike some as disturbing.”
Aliens and humans
For now, Hanson is taking a semester off from pursuing his doctoral thesis at the University of Texas at Dallas so he can tinker with his bots.
Hanson built Hertz mostly in his apartment, funding the work largely with student loans. Last summer he formed a company, Human Emulation Robotics, in hopes of raising venture capital.
“This is like a first step,” he said. “This looks like a monster because it is a severed head. But once you get used to it, it’s not. I haven’t proven that it’s not disturbing yet, but I have shown that it is captivating.”
No matter what, we can expect future social robots to be more alien than human, said Will Wright, creator of The Sims video games and a robot enthusiast.
“The fact is, I will share much more evolutionary history, and hence, brain circuitry and behavior, with my cat than I ever will with a machine intelligence,” he said. “The AIs we will be inventing soon will almost certainly be the first true alien intelligences humans will meet.”