Image: Face animation
Animation software turns a graphic representation of a face or a figure into textured artwork -- but it still requires motion capture and pixel-by-pixel retouching to create a realistic virtual actress.
By Alan Boyle, Science editor
msnbc.com

The animators behind Gollum, the little green man of “The Lord of the Rings,” have taken one more giant leap for computer-generated movie stars. But scientists say it will be many more years — perhaps even decades — before computers can create convincing actors and actresses from whole cloth.

Gollum's performance in the latest “Rings” release, “The Two Towers,” takes advantage of the best graphics that money can buy. He’s already the subject of Oscar buzz, and the Los Angeles Times calls him “perhaps the most believable computer-generated character in a movie.”

But Gollum’s soul by no means sprang from a computer: The foundation was laid during countless hours of filming with real-life actor Andy Serkis, who provided Gollum’s expressions and gestures as well as his voice. Each scene with Gollum was shot three times: once with Serkis on location, once without him and once in a studio with Serkis wearing a motion-capture suit.

Using sophisticated computer graphics programs, legions of digital artists then turned Serkis’ image into that of the pointy-eared, saucer-eyed, scene-stealing creature — and digitally blended him into the film.

Such graphics have to be painstakingly fine-tuned by human hands, said University of Geneva Professor Nadia Magnenat-Thalmann, who founded MIRALab to do research into the science of computer animation.

“For me, I’m not so impressed if I see a fantastic sequence, because I know it’s been retouched pixel by pixel,” she said.

Magnenat-Thalmann, who holds degrees in psychology, biology and quantum physics, has been working for 20 years toward a vision of creating virtual humans from scratch.

“What I would wish is to be working in 3-D with 3-D glasses, and then I tell my (virtual) actors, ‘Do this, go there, take a glass, smile.’ This is where we would like to go,” she said. “It is going much more slowly than what I expected 20 years ago.”

Sketching the background
The interaction between live-action and animated characters goes much further back, of course, to such milestones as the 1945 musical “Anchors Aweigh.” That film involved hand-painting 10,000 frames to make it look as if Gene Kelly were dancing with Jerry the Mouse.

Image: Serkis and Gollum
To film the part of Gollum in "The Two Towers," Andy Serkis wore a close-fitting motion-capture suit outfitted with sensors. Computers tracked the coordinates of the sensors as Serkis moved, and animators mapped his movements onto a digital model of Gollum, seen superimposed in this composite image.
Then there’s 1988’s “Who Framed Roger Rabbit” — in which toons fraternized with a film-noir detective played by Bob Hoskins (and in which the faux femme fatale Jessica Rabbit uttered the immortal words “I’m not bad ... I’m just drawn that way”).

Herds of virtual dinosaurs were whipped up for the “Jurassic Park” movies of the 1990s, and this year, the movie “Simone” was founded on the premise that a virtual actress could look realistic enough to fool even her co-stars. (Despite all the hype, Simone was actually played by live-action actress Rachel Roberts, with an assist from special effects.)

You could argue that we’re already well into the era of digital characters, since they routinely serve as extras and supporting characters in movies ranging from “Titanic” to “Star Wars” and “Harry Potter.”

“It’s a question of degree,” said Rick Parent, an Ohio State University computer science professor who literally wrote the book on computer-generated animation. “We’ve had synthetic characters, synthetic stunt doubles and synthetic crowd scenes.”

“Final Fantasy: The Spirits Within,” based on the wildly successful video game franchise, ushered in a new era in computer animation. Although the film used traditional motion-capture techniques to track how the characters should move, all that information was turned into a wall-to-wall digital movie. The film demonstrated the power of computer-generated graphics, as well as their limitations.

“It cost $20 million just to simulate the hair,” Magnenat-Thalmann said. The film, which opened to poor reviews, reportedly lost in the neighborhood of $80 million.

Today and tomorrow
Today, computer animation is an expensive, time-consuming proposition: Magnenat-Thalmann estimated that it would take about 30 seconds of processor time to do the calculations for one digital frame, featuring a virtual character with her “physical-based hair” blowing in the wind. That doesn’t sound like all that much, until you consider that you need 25 frames per second of finished film, and that doing the calculations is just the beginning of the process.
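
To put those numbers in perspective, here is a rough back-of-the-envelope calculation using the figures quoted in the article; the 90-minute running time and the single processor are assumptions added purely for illustration:

```python
# Rough estimate of raw processor time for a fully simulated film,
# based on the article's figures (30 seconds per frame, 25 frames per second).
# The 90-minute running time is an assumed value for illustration only.

SECONDS_PER_FRAME = 30       # Magnenat-Thalmann's per-frame estimate
FRAMES_PER_SECOND = 25       # frames needed per second of finished film
FILM_MINUTES = 90            # assumed feature length

frames = FILM_MINUTES * 60 * FRAMES_PER_SECOND        # 135,000 frames
processor_seconds = frames * SECONDS_PER_FRAME        # 4,050,000 seconds
processor_days = processor_seconds / (60 * 60 * 24)   # roughly 47 days

print(f"{frames:,} frames -> about {processor_days:.0f} days of single-processor time")
```

In other words, even before any artist touches a frame, a single processor would grind away for weeks on the calculations alone, which is part of why studios spread the work across many machines.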

Computer animators use some tricks of the trade to streamline the job. “In most figure animation projects, people use tight-fitting or rigid clothing so they don’t have to worry about the folds,” Parent explained.

Skin folds and complex joints are also particularly difficult to simulate. “That’s why you don’t see bare shoulders, because of the skin folds. And that’s also why everybody’s pretty much 20 or 25 years old,” Parent said.

But as computer horsepower increases, researchers expect the efficiency and quality of digital animation to rival those of live-action moviemaking.

The key challenges for the future include creating fully digital models of the human figure and face, so that the software itself can figure out the mechanics of a character’s movement rather than following the lead of motion-capture data. Researchers are also working on giving virtual actors more autonomy, so that they “know” how to walk or open a door without having to be programmed step by step. And they’re studying the details of how clothes move — what Parent calls “cloth dynamics.”
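
The phrase “cloth dynamics” hides a lot of machinery. One common textbook approach, sketched here as an illustration rather than a description of any studio’s actual pipeline, treats fabric as a grid of point masses connected by springs and advances the simulation in tiny time steps:

```python
# Minimal mass-spring cloth step (illustrative sketch only).
# Each cloth particle feels gravity plus spring forces from its neighbors.

import math

GRAVITY = -9.8       # m/s^2, applied along the y axis
STIFFNESS = 500.0    # spring constant, an arbitrary illustrative value
MASS = 0.01          # kilograms per particle
DT = 1.0 / 600.0     # small time step for numerical stability

def step(positions, velocities, springs):
    """Advance one semi-implicit Euler step.

    positions, velocities: dicts mapping particle id -> [x, y, z]
    springs: list of (i, j, rest_length) tuples connecting particles i and j
    """
    forces = {i: [0.0, MASS * GRAVITY, 0.0] for i in positions}
    for i, j, rest in springs:
        delta = [positions[j][k] - positions[i][k] for k in range(3)]
        length = math.sqrt(sum(c * c for c in delta)) or 1e-9
        magnitude = STIFFNESS * (length - rest)   # Hooke's law: force grows with stretch
        for k in range(3):
            f = magnitude * delta[k] / length
            forces[i][k] += f
            forces[j][k] -= f
    for i in positions:
        for k in range(3):
            velocities[i][k] += (forces[i][k] / MASS) * DT
            positions[i][k] += velocities[i][k] * DT
```

Production cloth solvers add collision handling, damping and far more robust integration on top of this, which is part of what makes the problem so demanding.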

Parent said progress in virtual characters would likely be evolutionary rather than revolutionary. Robert Zemeckis, who directed “Who Framed Roger Rabbit,” is at work right now on a movie version of “The Polar Express,” in which every scene is to be motion-captured, digitized and populated with virtual renderings based on live actors’ performances.

A future movie could theoretically turn back time on an actor in a flashback scene, Parent said. “You capture the motion of Jack Nicholson and map his motion onto the synthetic animation of a younger Jack Nicholson,” he said.
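
In practice, that kind of mapping is typically done on joint angles rather than raw positions, because angles transfer between bodies of different proportions. Here is a toy two-dimensional sketch of the idea; the bone lengths and angles are made-up values for illustration:

```python
# Toy motion retargeting in 2-D (illustrative sketch only):
# the same captured joint angles drive two skeletons with different bone lengths.

import math

def forward_kinematics(bone_lengths, joint_angles_deg):
    """Return joint positions of a simple planar chain (e.g. shoulder, elbow, wrist)."""
    x, y, angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, joint_deg in zip(bone_lengths, joint_angles_deg):
        angle += math.radians(joint_deg)
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

captured_angles = [40.0, 55.0]    # joint angles recovered from motion capture (made up)
actor_bones = [0.33, 0.29]        # the captured actor's arm segments, in meters
character_bones = [0.30, 0.26]    # the synthetic character's proportions

print(forward_kinematics(actor_bones, captured_angles))       # the actor's pose
print(forward_kinematics(character_bones, captured_angles))   # same motion, new body
```

The digital double moves the same way the actor did even though its limbs are a different length -- which is essentially what would let an older actor drive a younger-looking model.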

But could realistic virtual stars someday be built from the ground up? “I think in terms of a figure that has really convincing animation — skin folds and hair and clothes — we’re still 10 years away from something fully extensible like that,” Parent said.

Magnenat-Thalmann said it’s difficult to sell movie studios on the idea of using digital-born characters rather than motion-capture hybrids like Gollum.

“As long as motion-capture designers give better results, why use these things that are not so perfect?” she said.

She thinks it could take 50 years for her vision of virtual stars to become a reality.

“Maybe I will have died,” she said, “but we will have this one day, surely.”

© 2013 msnbc.com
