
A big move for motion capture

At left, a subject wears an array of 20 strategically placed cameras, facing outward to monitor apparent motion in the environment. At right, the data from all those cameras can be interpreted to produce an animated figure in virtual space. (CMU / DRP)

Motion-capture animation is all the rage in moviemaking: Without it, there's no Gollum in "The Lord of the Rings," no aliens in "Avatar," no intelligent chimps in "Rise of the Planet of the Apes." But it's an expensive proposition: You need to place special dots all over the actors whose motion you want to capture, then have them do their thing in front of precisely calibrated cameras hooked up to a sophisticated computer system, inside a closed stage with controlled lighting.

Now all that could change, thanks to a new system that relies on cameras looking out from the actor's body, rather than cameras looking in at the actor.

"This could be the future of motion capture," Takaaki Shiratori, a postdoctoral associate at Disney Research, Pittsburgh, says in a news release about the technique. "I think anyone will be able to do motion capture in the not-so-distant future."

Shiratori presented a paper about the inside-out approach to motion capture, which relies on a computer-vision technique known as "structure from motion," or SfM, today at the ACM SIGGRAPH 2011 conference in Vancouver, Canada. The method builds on two decades of research at Carnegie Mellon University and Disney's research facility in Pittsburgh.

In traditional motion capture, cameras focus on dots that are placed at strategic locations on a body suit worn by the actors. Computer software renders an animated image — the chimpanzee or the alien, for example — so that its movements conform to the positions of the dots. That animation can be substituted for the actor's image in a computer-rendered composite.
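The geometry behind that marker tracking boils down to triangulation: each calibrated camera that sees a dot defines a ray in space, and the dot's 3-D position is the point closest to all of those rays. Here is a minimal sketch of that least-squares step (the camera positions and sightlines are made up for illustration, not taken from any real studio setup):

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3-D point closest to a set of camera rays.

    Each ray i is origins[i] + t * directions[i]. Solving
    sum_i (I - d_i d_i^T)(p - o_i) = 0 gives the point p that
    minimizes the total squared distance to all rays.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projects onto the plane perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two hypothetical cameras both sighting a marker at (1, 2, 3)
marker = np.array([1.0, 2.0, 3.0])
cams = [np.array([0.0, 0.0, 0.0]), np.array([5.0, 0.0, 0.0])]
rays = [marker - c for c in cams]  # ideal, noise-free sightlines
print(triangulate(cams, rays))    # ≈ [1. 2. 3.]
```

With noisy real-world measurements the rays would not intersect exactly, and the same least-squares solution gives the best compromise point.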

"In 'Avatar,' motion capture was used to animate characters riding on direhorses and flying on the back of mountain banshees," Shiratori and his colleagues write in the paper. "To capture realistic motions for such scenes, the actors rode horses and robotic mock-ups in an expansive motion capture studio requiring a large number of cameras."

In the SfM version of motion capture, 20 lightweight cameras are mounted on the limbs and the trunk of each actor, looking out into the environment. As the actor moves, the video from each camera is compared with reference images of the surroundings to estimate that camera's position and orientation, and those poses are translated into the movements of the animated figure in a virtual 3-D environment. No studio needed.
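One way to picture the per-frame computation: if a body-mounted camera recognizes the same environment features it saw from a reference pose, the rigid motion between the two views can be recovered from the matched points. The sketch below uses the classic Kabsch (orthogonal Procrustes) fit on matched 3-D points; the feature coordinates and the motion are invented for illustration, and the actual paper solves a harder image-based version of this problem:

```python
import numpy as np

def rigid_motion(ref_pts, cur_pts):
    """Kabsch algorithm: best-fit rotation R and translation t
    such that cur_pts ≈ ref_pts @ R.T + t."""
    ref_c = ref_pts.mean(axis=0)
    cur_c = cur_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# Hypothetical environment features seen from the reference pose
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

# Simulate the camera moving: rotate 30 degrees about z, then translate
theta = np.deg2rad(30)
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0,              0,             1]])
t0 = np.array([0.5, -0.2, 1.0])
cur = ref @ R0.T + t0

R, t = rigid_motion(ref, cur)  # recovers R0 and t0
```

Chaining these per-frame pose estimates across all 20 cameras, subject to the constraint that the cameras ride on a single articulated skeleton, is what yields the animated figure.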

The good news is that the technique can be used to capture a sequence of movements in an outdoor setting, with no boundaries on the range of movement. A video released by the researchers shows how the software builds a virtual space, sort of like the data-point cloud created by the Kinect motion-detection game controller, and tracks an actor as he moves through the space.

"Our approach will continue to benefit from consumer trends that are driving cameras to become cheaper, smaller, faster and more pervasive," the researchers write.

The bad news is that rendering the imagery currently calls for a huge amount of computational firepower. The researchers say it takes an entire day to process just one minute of motion-capture data, and the final results aren't quite as good as what's achievable through traditional methods. But as Gollum said in "The Lord of the Rings" movie, "Patience! Patience, my love." The researchers hope that precioussss innovations will soon be within their grasp.

"Future work will include efforts to find computational shortcuts, such as performing many of the steps simultaneously through parallel processing," the team reports.
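The reason parallelism is attractive is that much of the per-frame work, such as feature matching, is independent from frame to frame. A toy sketch of that idea in Python (the `process_frame` body is a stand-in for real image processing, not the team's pipeline):

```python
from concurrent.futures import ThreadPoolExecutor

def process_frame(frame_id):
    # Stand-in for the real per-frame work (feature matching, pose
    # solving). Because frames don't depend on each other, they can
    # be handed to separate workers and processed simultaneously.
    return frame_id * frame_id

# Fan eight frames out across four workers
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_frame, range(8)))

print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

Real speedups for image workloads would come from multiple processes or GPUs rather than threads, but the structure — map independent frames to workers, collect results in order — is the same.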


In addition to Shiratori, collaborators on "Motion Capture From Body-Mounted Cameras" include Hyun Soo Park, Yaser Sheikh and Jessica K. Hodgins of Carnegie Mellon University and Leonid Sigal of Disney Research, Pittsburgh. Hodgins is a DRP director as well as a CMU professor.

Connect with the Cosmic Log community by "liking" the log's Facebook page or following @b0yle on Twitter. You can also add me to your Google+ circle, and check out "The Case for Pluto," my book about the controversial dwarf planet and the search for new worlds.