
Moving closer to a 'Matrix'-style virtual world

Some sophisticated new projects are showing just how far we’ve come toward creating an “I can’t believe it’s not real” virtual world.

What if a computer could make you a picture-perfect glass of milk, let you feel the tension as it pulled an ant’s leg from another room, and chat you up with the charisma of Oprah Winfrey? No one machine can do all three — yet. But some sophisticated new projects are showing just how far we’ve come toward creating an “I can’t believe it’s not real” virtual world.

Last month, Brookhaven National Laboratory computer scientist Michael McGuigan told New Scientist magazine he believed a “Matrix”-style virtual world, in which one cannot always distinguish between what’s real and what’s not, could be up and running in just a few years. His optimism derived in part from the impressive ramp-up in processing speed he obtained with the lab’s BlueGene/L supercomputer while running a conventional ray-tracing software program that mimics the effect of natural light.

The result? An eye-fooling virtual beam.

Henrik Wann Jensen, an associate professor of computer science and engineering at the University of California at San Diego, is among those leading the charge toward more powerful algorithms that yield, say, a convincing fog-shrouded lighthouse or a frosty glass of 2 percent milk. Best of all, the convergence of speed and power means those virtual stand-ins don’t necessarily require a room-sized supercomputer to produce them.

“Now is a pretty exciting time in graphics,” Jensen says. “We’ve reached a level now where we can make very realistic images: five to 10 hours to make images more or less perfect, where people say, ‘Wow, that’s a photograph!’ ”

Maintaining the same illusion for real-time animation isn’t as far along, largely due to its enormous appetite for computing power. But that limitation is quickly falling by the wayside, Jensen says, with the aid of muscular new graphics hardware such as Intel’s Larrabee chip and programming tools such as Nvidia’s CUDA technology.

Pushing the envelope
Jensen is attacking the problem of limited power from the other end by cutting the computational cost of the graphics algorithms known as ray tracing and photon mapping. Ray tracing follows a beam of light through a virtual environment, mimicking how the beam would interact with its surroundings. Photon mapping is essentially the reverse, and together the two algorithms fit into what Jensen calls global illumination, a framework for simulating how light makes a spoon appear bent in a glass of water, cuts through the smoke swirling around a stage spotlight, or pierces thick fog as a lighthouse beam.
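The heart of ray tracing can be sketched in a few lines of code. The fragment below is a toy illustration rather than anything from Jensen’s or Brookhaven’s software: it fires a single ray from a virtual eye at a sphere and shades the hit point with simple diffuse lighting, with every scene value made up for the example.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is assumed unit-length, so a = 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def shade(point, normal, light_pos):
    """Diffuse (Lambertian) shading: brightness falls off with the angle to the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    length = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / length for x in to_light]
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

# One ray from the eye through the middle of the image, aimed at a single sphere.
eye, direction = (0.0, 0.0, 0.0), (0.0, 0.0, -1.0)
center, radius, light = (0.0, 0.0, -5.0), 1.0, (5.0, 5.0, 0.0)

t = intersect_sphere(eye, direction, center, radius)
if t is not None:
    hit = [e + t * d for e, d in zip(eye, direction)]
    normal = [(h - c) / radius for h, c in zip(hit, center)]
    print("pixel brightness:", round(shade(hit, normal, light), 3))
```

A full renderer repeats that calculation for every pixel, and recursively for reflections, refractions and shadows, which is where the appetite for computing power comes from.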

“In many ways, we’re just taking the physics of nature and trying to simulate that,” he says, but in a streamlined way that uses far less power. Instead of counting all the photons associated with a light source, Jensen’s algorithms start with a question: If you place a set of eyes at a specific spot in a scene, what would they see? Previous methods sampled photons here and there across a light source, but Jensen’s technique maps the relevant photons along the light’s entire pathway, letting the rendering software follow the light around a scene and determine how much is absorbed, reflected or scattered by other objects.
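The second pass of photon mapping, the gather step, boils down to a density estimate: when a viewing ray lands on a surface, the renderer looks up the stored photons near that point and spreads their energy over the area they cover. The snippet below is a deliberately simplified illustration of that idea, using a flat list of photons, a fixed search radius and invented values; a real photon map would use a fast spatial data structure and weight each photon by the surface’s reflectance.

```python
import math

# Each stored photon: (position, power carried). All values are illustrative only.
photons = [
    ((0.1, 0.0, 0.0), 0.02),
    ((0.0, 0.2, 0.0), 0.02),
    ((0.3, 0.1, 0.0), 0.02),
    ((2.0, 2.0, 0.0), 0.02),   # far away: should not contribute
]

def radiance_estimate(hit_point, photons, radius=0.5):
    """Sum the power of photons within `radius` of the hit point and divide by
    the area of the disc they cover (the classic photon-map density estimate)."""
    total_power = 0.0
    for position, power in photons:
        dist2 = sum((p - h) ** 2 for p, h in zip(position, hit_point))
        if dist2 <= radius * radius:
            total_power += power
    return total_power / (math.pi * radius * radius)

print("estimated brightness at the hit point:",
      round(radiance_estimate((0.0, 0.0, 0.0), photons), 4))
```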

For the first “Shrek” movie, filmmakers told Jensen a scene with the Gingerbread Man and a glass of milk was one of the most difficult to produce. “They didn’t think of milk as a medium like fog,” Jensen says, and consequently used the wrong technology to simulate how light interacted with it.

“Shrek 2” incorporated some of Jensen’s early work to improve upon its milk simulation. Last August, at the annual SIGGRAPH computer graphics conference in San Diego, Jensen and colleagues went even further by showing how to mix water, protein, fat globules and vitamin B2 in different concentrations to yield a realistic glass of skim milk, whole milk, or even cream. (He is often called “the milk guy” at these conferences.)
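The underlying idea is that milk is a participating medium, like fog, whose bulk optical behavior follows from its ingredients: each component contributes scattering and absorption roughly in proportion to its concentration. The sketch below illustrates only that mixing step; the per-component coefficients and recipes are placeholder numbers for the example, not the measured values Jensen’s group derives from the physics of the particles themselves.

```python
# Placeholder per-unit-concentration coefficients (per mm), for illustration only.
# Real values are derived from particle sizes and refractive indices, not guessed.
COMPONENTS = {
    #              scattering  absorption
    "fat":         (8.0,       0.002),
    "protein":     (3.0,       0.001),
    "riboflavin":  (0.0,       0.050),   # vitamin B2 contributes mostly absorption
}

def milk_coefficients(concentrations):
    """Combine per-component coefficients, weighted by concentration, into the
    bulk scattering and absorption coefficients of the mixture."""
    sigma_s = sum(COMPONENTS[name][0] * c for name, c in concentrations.items())
    sigma_a = sum(COMPONENTS[name][1] * c for name, c in concentrations.items())
    return sigma_s, sigma_a

# Made-up recipes: more fat means more scattering, which is what makes cream look thick.
skim  = {"fat": 0.1, "protein": 3.4, "riboflavin": 0.2}
whole = {"fat": 3.5, "protein": 3.3, "riboflavin": 0.2}

for name, recipe in (("skim", skim), ("whole", whole)):
    s, a = milk_coefficients(recipe)
    print(f"{name} milk: scattering={s:.2f}/mm, absorption={a:.4f}/mm")
```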

The movie and gaming industries are both pushing the envelope, Jensen says, and a virtual world that can quickly change yet look very realistic at the same time is on the horizon. “We need just a little bit more time and slightly better computer power to get it to the level where you’d think, ‘Wow! This is absolutely amazing!’”

Even so, creating a realistic human face that can talk in real time remains a big obstacle, and Jensen is working on improving the look of virtual skin. “We are, as humans, just trained to look at faces,” he says. “In a virtual environment, where you use a computer-generated figure that doesn’t look quite right, it’s a distraction from the action of the movie.”

Virtual Oprah
Another prerequisite for a convincing virtual world is the ability to carry on a conversation with a computer-generated counterpart that easily reacts to non-verbal cues, such as tone of voice and facial expression, says Roddy Cowie, a professor of psychology at Queen’s University Belfast in Northern Ireland. Most research has focused on basic comprehension, but Cowie argues that non-linguistic cues are critical for supporting comprehensible human-computer dialogue, “like a trellis that can support roses and ivy and all the rest.”

Computers have a rather shallow ability to infer, and the need to read signals indicating interest, concern and other emotions further complicates the problem. “You need to know whether someone is interested in the conversation, or has completely switched off,” Cowie says.

Enter Virtual Oprah.

“We noticed that chat show hosts like Oprah seemed to be very good at conversation that bypassed most of the linguistic complexities. They had a repertoire of phrases that kept people talking, and even raised the intensity of their conversation, without much reference to linguistic details,” Cowie says. The same holds true for people trying to communicate with a foreign speaker: “You can keep conversation going for a long time if you can make a few noises they understand, in the right tone of voice.”

A system known as Sensitive Artificial Listener, or SAL, uses that observation as a trellis. Equipped with stock phrases and “soft skills” such as nods, smiles, tones of voice and sensitivity to nonverbal cues, the system, modeled on the conversational style of hosts like Oprah, offers a framework aimed at keeping a conversation going for more than half an hour.
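A toy version of the idea is easy to sketch: read off a rough estimate of how engaged the speaker is and pick a stock response that keeps them talking or lifts the intensity. The code below is a deliberately simplified illustration of that selection logic, not the SAL or SEMAINE architecture; the engagement score and phrase lists are invented for the example.

```python
import random

# Invented stock phrases, grouped by the rough state the listener detects.
STOCK_PHRASES = {
    "engaged":      ["Really? Tell me more.", "And then what happened?"],
    "flat":         ["Mm-hm... how did that make you feel?",
                     "That sounds like a lot to deal with."],
    "switched_off": ["Let's talk about something else. What's been on your mind lately?"],
}

def pick_response(engagement):
    """Choose a stock phrase from a crude engagement score in [0, 1].
    In a real system the score would come from tone of voice and facial cues."""
    if engagement > 0.7:
        state = "engaged"
    elif engagement > 0.3:
        state = "flat"
    else:
        state = "switched_off"
    return random.choice(STOCK_PHRASES[state])

print(pick_response(0.9))   # keeps an animated speaker going
print(pick_response(0.1))   # tries to re-engage someone who has tuned out
```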

Teaching a computer what to say and how to say it could prove a boon for educational software that recognizes when a pupil is having difficulty. Ditto for computer-based systems marketed as companions or lifestyle coaches. “If a machine is going to share large, sensitive parts of a person’s life, it had better have some sensitivity,” says Cowie, part of a European consortium hoping to do just that through its SEMAINE project (Sustained Emotionally coloured Machine-human Interaction using Nonverbal Expression).

Eventually, the project’s SAL brainchild could lead to an empathetic avatar nodding sympathetically and saying all the right things while you hash out your latest sob story. But what if you really need a virtual hug?

Simulated touch
Ralph Hollis, a research professor in the Robotics Institute at Carnegie Mellon University in Pittsburgh, isn’t about to promise that breakthrough just yet. His laboratory team, however, has harnessed a technology called magnetic levitation to create one of the most sensitive haptic, or touch-based, interfaces in the world.

The magnetic levitation device resembles a shallow bowl welded into a half-sphere; magnetic fields hold the bowl, known as the flotor, hovering above its base. A handle inside the flotor can be moved freely to control the position and orientation of a virtual object on a computer display. As that virtual object encounters obstacles, signals flow to six electrical coils embedded in the flotor, producing ultra-fine haptic feedback.
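In outline, the control loop for such a device is simple even though the hardware is not: read the flotor’s position, move the virtual object accordingly, and when the object penetrates something, push back with a force proportional to the penetration. The sketch below shows that loop in a heavily simplified, one-wall form; the stiffness value and the `read_flotor_position` and `set_coil_force` functions are invented stand-ins, not the real device interface.

```python
# A minimal, one-dimensional sketch of a haptic rendering loop.
# The device interface functions below are hypothetical placeholders.

STIFFNESS = 2000.0   # N/m, made-up value: how "hard" the virtual wall feels
WALL_Z = 0.0         # the virtual wall sits at z = 0

def read_flotor_position():
    """Placeholder for reading the levitated flotor's position from its sensors."""
    return -0.001     # pretend the user pushed 1 mm past the wall

def set_coil_force(force):
    """Placeholder for commanding the coils to exert `force` newtons on the flotor."""
    print(f"commanded feedback force: {force:.1f} N")

def haptic_step():
    z = read_flotor_position()
    penetration = WALL_Z - z          # how far the virtual tool is inside the wall
    force = STIFFNESS * penetration if penetration > 0 else 0.0
    set_coil_force(force)

haptic_step()   # a real loop would repeat this on the order of 1,000 times per second
```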

How sensitive? For one demonstration, researchers affixed an ant to a microscope slide and used a sewing needle connected to a magnetic levitation device to tug on one of the ant’s legs. That subtle tension was readily felt by someone using another haptic device in a separate room.

The high resolution, Hollis says, comes from avoiding the mechanics of robotic arms used to simulate touch. Most arms move up and down, forward and backward and left and right — enough to control a point in a three-dimensional space. But controlling both the position and orientation of an object requires all six degrees of freedom, including roll, pitch and yaw, quickly adding expense and complexity to mechanical structures.  “The fidelity of the interaction suffers from having all these motors and links and cables and gears and especially in the case of having six degrees of freedom,” he says. “So that’s a quandary.”
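The difference between three and six degrees of freedom shows up directly in the bookkeeping: a point needs only x, y and z, while a rigid tool also needs roll, pitch and yaw, and the feedback becomes a torque as well as a force. The snippet below is a simplified illustration of that distinction, with invented spring constants and a small-angle shortcut for orientation; it is not how Hollis’s system is actually implemented.

```python
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    """Position (meters) plus orientation (radians): all six degrees of freedom."""
    x: float; y: float; z: float
    roll: float; pitch: float; yaw: float

def spring_wrench(device: Pose6DOF, target: Pose6DOF, k_pos=500.0, k_rot=2.0):
    """Pull the tool toward a target pose: a force on the position error and a
    torque on the orientation error (made-up gains, small-angle approximation)."""
    force = tuple(k_pos * (t - d) for t, d in
                  [(target.x, device.x), (target.y, device.y), (target.z, device.z)])
    torque = tuple(k_rot * (t - d) for t, d in
                   [(target.roll, device.roll), (target.pitch, device.pitch),
                    (target.yaw, device.yaw)])
    return force, torque

f, t = spring_wrench(Pose6DOF(0.01, 0, 0, 0.1, 0, 0), Pose6DOF(0, 0, 0, 0, 0, 0))
print("force:", f, "torque:", t)
```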

The new device sidesteps the problem with magnetic fields and only one moving part, letting users experience the same touch sensation they’d get from running a finger along a rough tabletop. The new system also boasts better simulations of hard contact, such as a three-dimensional virtual object hitting an appropriately hard virtual wall. “For most systems, that impact feels mushy, it feels like you’re hitting a block of foam,” Hollis says.

The downside to all this refinement is a limited range of motion, analogous to a mouse that can slide its cursor only a short distance across the screen. Hollis and his colleagues have been able to magnify that motion in the virtual world by a factor of 30, but simulating entire arm movements would be impractical (sorry, no hugs). Instead, Hollis says the range of motion could be enormously useful for finger-focused activities like microsurgery — or dentistry.
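Magnifying a small physical workspace into a larger virtual one is, at its simplest, a scale factor applied to the device’s displacement. The few lines below illustrate that mapping using the factor of 30 mentioned above; the device travel is an assumed number for the example, not a published specification.

```python
SCALE = 30.0               # virtual motion per unit of physical motion (from the article)
DEVICE_RANGE_M = 0.012     # assumed usable flotor travel, about 1.2 cm, for illustration

def to_virtual(device_displacement_m):
    """Map a small flotor displacement onto a much larger virtual displacement."""
    return SCALE * device_displacement_m

print("full device travel covers", to_virtual(DEVICE_RANGE_M), "m of virtual space")
```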

“We think our device is well-matched to the range of motion of doing virtual dentistry,” he says. “We have the ability to simulate contact with very hard surfaces, like tooth enamel, as well as with softer gum tissue.” A spin-off company, called Butterfly Haptics, will begin producing the haptic devices this summer, although Hollis says specific applications, whether for dentistry or brain surgery training, will be up to individual buyers to develop.

Nevertheless, the new agility means that in just a few years, an avatar could convincingly tell a depressed dentist in training, “I feel your pain,” and then wince as the dental scaler hits her virtual gums yet again.