By Bryn Nelson
msnbc.com columnist
updated 11/10/2008 9:02:55 AM ET

One software program can merge multiple camera shots to eliminate that half-dazed look that always seems to afflict one person in every group photo. Another can combine pictures taken with and without a flash to pair your smiling face with a properly illuminated nighttime scene. Yet another program can pull out crystal-clear details by shifting the focus after you've snapped a picture. And who wouldn't want a camera that can sharpen an otherwise blurry shot of a fast-moving subject, like a speeding bicyclist?

With a few exceptions, none of these options have yet made their way into widely available cameras. But the fast-moving field of computational photography and an open-source “Frankencamera” built by Marc Levoy and his graduate students at Stanford University are suggesting how shutterbugs in the near future may be able to swap in and out the features they want with little more trouble than snapping together LEGO parts.

“You could think of them as LEGO Mindstorms cameras,” said Levoy, referring to LEGO’s advanced building sets that have become a favorite tool for engineers and robotics experts. “With hardware and software and algorithms, there’s just no end to what we can do with cameras and computational photography. I think it’s going to be very exciting.”

The annual SIGGRAPH conferences (short for Special Interest Group on GRAPHics and Interactive Techniques) convened by the Association for Computing Machinery have been particularly fertile ground for the field. Among the innovations debuting at this year's conference was a technique called motion-invariant photography, introduced by Bill Freeman and colleagues at MIT's Computer Science and Artificial Intelligence Lab.

“Counterintuitively but elegantly, if you move the camera during the exposure in a particular way, you can make an image that is all blurred, but in a way that’s easy to de-blur,” Freeman said.

The method works well for clarifying objects that are moving horizontally (think racing bicyclists), and Freeman is working on extending it to motion in two dimensions.

The technique, he said, could be used in a software program like Adobe Photoshop or added to a camera as a peripheral.

“The beauty of it is that it’s a simple de-blur, so you could do it in the camera,” he said. And by doing something new with both the camera and the computation, “that opens up a whole new design space.”
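
For the curious, that "simple de-blur" can be sketched in a few lines of Python. The example below applies standard Wiener deconvolution with a blur kernel that is known in advance, which is exactly the property motion-invariant photography is designed to guarantee; the box-shaped kernel, the names and the synthetic test image are illustrative assumptions, not Freeman's actual implementation.

    import numpy as np

    def wiener_deblur(blurred, kernel, noise_power=1e-3):
        """Recover a sharp image from `blurred` given a known blur kernel."""
        # Pad the kernel to the image size and center it so the spectra align.
        k = np.zeros_like(blurred, dtype=np.float64)
        kh, kw = kernel.shape
        k[:kh, :kw] = kernel
        k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))

        # Wiener deconvolution: X = conj(K) * B / (|K|^2 + noise_power).
        K = np.fft.fft2(k)
        B = np.fft.fft2(blurred.astype(np.float64))
        return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + noise_power)))

    # A normalized 1 x 15 horizontal box kernel stands in for the known,
    # motion-invariant blur; the real technique yields a specific kernel.
    kernel = np.ones((1, 15)) / 15.0

    # Tiny demo: blur a synthetic image with the kernel, then recover it.
    sharp = np.zeros((64, 64))
    sharp[24:40, 24:40] = 1.0
    pad = np.zeros_like(sharp)
    pad[:1, :15] = kernel
    pad = np.roll(pad, (0, -7), axis=(0, 1))
    blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(pad)))
    recovered = wiener_deblur(blurred, kernel)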

Finding a sharper image
Another emerging feature, known as digital refocusing, lets a photographer snap a single group shot of well-wishers lined up along a hallway and then bring each face down the line into sharp focus, one after another, after the picture is taken.

Similarly, refocusing could rescue a portrait of someone in front of a window if, say, you captured the Venetian blinds in exquisite detail while your girlfriend's face remained a fuzzy blob.

With digital refocusing, developed by Levoy in collaboration with fellow Stanford computer scientist Pat Hanrahan and former graduate student Ren Ng, photographers first capture the necessary data with a micro-lens array built into the camera between the photo sensor and the main lens. Then after the fact, they can refocus the shot by changing which pixels are added together.

“Or you could imagine doing different computations that would put everything into focus,” Levoy said. “You can get a nice artistic effect, but sometimes you just want everyone sharp.” (Ng’s Refocus Imaging, Inc. has some sample photos.)
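
A toy version of that add-the-right-pixels step is easy to write down. The sketch below assumes a four-dimensional light field array, one small image per micro-lens viewpoint, and synthesizes a refocused photo by shifting and averaging the views; the array layout and the alpha focus parameter are assumptions made for illustration, not the design of Ng's camera.

    import numpy as np

    def refocus(light_field, alpha):
        """Synthesize a photo focused at a depth set by `alpha` (0 keeps
        the original focus). light_field: array of shape (U, V, H, W),
        one small image per micro-lens viewpoint."""
        U, V, H, W = light_field.shape
        out = np.zeros((H, W))
        for u in range(U):
            for v in range(V):
                # Shift each sub-aperture view in proportion to its
                # position in the lens array, then add them together.
                dy = int(round(alpha * (u - U // 2)))
                dx = int(round(alpha * (v - V // 2)))
                out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
        return out / (U * V)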

Another common theme in computational photography has been to combine the benefits of flash and no-flash photography to create a composite image in which both the foreground and background are properly lit.

A no-flash photograph may capture a night scene well, while hiding your foreground subject in shadows. A flash version lights up his face — and his reflection in the window, obscuring the interesting street scene behind him. But an intelligent combination of the two? Priceless.
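
A crude version of that combination takes only a few lines. The sketch below keeps the no-flash frame's ambient lighting and fills in its darkest regions from the flash frame; published flash/no-flash methods are far more sophisticated (joint bilateral filtering, for instance), and the file names and brightness cutoff here are arbitrary placeholders.

    import numpy as np
    import cv2

    no_flash = cv2.imread("no_flash.jpg").astype(np.float64)
    flash = cv2.imread("flash.jpg").astype(np.float64)

    # Per-pixel weight from no-flash brightness: the darker the ambient
    # frame, the more the composite leans on the flash frame.
    gray = no_flash.mean(axis=2, keepdims=True) / 255.0
    w = np.clip(gray / 0.3, 0.0, 1.0)  # 0.3 cutoff is an arbitrary choice

    composite = w * no_flash + (1.0 - w) * flash
    cv2.imwrite("composite.jpg", composite.astype(np.uint8))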

“If you think about traditional photography, it’s taking the light in a scene and sampling it and converting it into an image that hopefully looks like what you saw when you were standing there,” said Aseem Agarwala, a senior research scientist at San Jose, Calif.-based Adobe Systems.

But what if the light was too low, or you moved the camera during the shot, or your subject had her eyes closed during the one-sixtieth of a second that your camera actually sampled the scene?

“In computational photography,” he said, “we’ve put a computer between the sampling process and the final output.” The result is an ever-growing range of features that can sample far more information and yield impressive images under challenging conditions.

A moment in time — or not
A commonly cited innovation, an interactive digital photomontage technique that Agarwala and colleagues at the University of Washington and Microsoft produced about four years ago, has since made its way into Photoshop Elements 6.0 as the Photomerge feature.

Say you're at a party with your friends and family. At least one person is guaranteed to be making a strange face in the group photo, especially with a camera that samples only one-sixtieth of a second of the celebration. Why not sample more widely and take the best part of each image? That is the basis of the photomontage technique.

“In the end, you’re depicting a moment in time that never actually existed, but in a sense it’s more real because it’s how you remembered it,” Agarwala said.

What constitutes reality could be vigorously debated, of course, but a properly rendered montage arguably makes a better keepsake.

The same software, Agarwala said, could theoretically be included on a camera itself. “The idea is to get beyond the limitation of photography, which is that it samples one moment in time.”
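
A drastically simplified stand-in for the photomontage idea can be sketched as follows: for each pixel, copy from whichever aligned frame is locally sharpest. The published interactive technique instead relies on user guidance, graph cuts and gradient-domain blending, so the snippet below only illustrates the pick-the-best-source principle.

    import numpy as np
    import cv2

    def naive_montage(frames):
        """frames: list of aligned BGR images of identical shape."""
        stack = np.stack(frames)  # (N, H, W, 3)
        sharpness = []
        for f in frames:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
            # Smooth the sharpness map so source choices stay locally coherent.
            sharpness.append(cv2.GaussianBlur(lap, (31, 31), 0))
        best = np.argmax(np.stack(sharpness), axis=0)  # winning frame per pixel
        h, w = best.shape
        return stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]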

Similarly, a photographer could take a barrage of photos of the Grand Canyon or some other tourist attraction over a period of time, and then use a function that chooses the most probable color for each pixel to seamlessly erase the sporadic tourists ruining the view.

That concept has since been incorporated into the Photoshop Elements Scene Cleaner function, but could likewise be added to a camera in the future.
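
One plausible reading of "most probable color for each pixel" is a per-pixel median over the aligned stack: passers-by occupy any given pixel in only a few frames, so the median recovers the empty scene. The sketch below assumes the frames are already registered, and the file paths are placeholders.

    import glob
    import numpy as np
    import cv2

    frames = [cv2.imread(p).astype(np.float64)
              for p in sorted(glob.glob("shots/*.jpg"))]
    clean = np.median(np.stack(frames), axis=0)
    cv2.imwrite("scene_without_tourists.jpg", clean.astype(np.uint8))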

'Frankencameras' see the light
What isn't happening yet, Stanford's Levoy said, is enough research on techniques that require modifying the cameras themselves or changing their internal programming.

Now, photographers can set the exposure and aperture and take the picture. “They can’t typically focus the camera, change the zoom, they can’t do anything in real time.”

If they could, what would such a camera look like?

Some camera manufacturers may start incorporating more features on their own. Casio’s EX-F1 digital camera, which can take a rapid-fire burst of photos at 60 frames per second, has been among the few to do so thus far by allowing users to combine multiple photos into a final image.

“It aligns and merges them together to take a sharp picture in a dark room that you normally couldn’t do without a flash,” Levoy said. Ditto for a panning image of a subject in motion, like an in-focus action shot of a speeding motorcyclist.
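
The align-and-merge step Levoy describes can be approximated with off-the-shelf tools. The sketch below registers each frame of a burst to the first using a translation-only fit and then averages them to suppress noise; translation-only alignment is a simplifying assumption, not a description of how the EX-F1 works.

    import numpy as np
    import cv2

    def merge_burst(frames):
        """Average a burst of BGR frames after translation-only alignment."""
        base = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY).astype(np.float32)
        acc = frames[0].astype(np.float64)
        for f in frames[1:]:
            g = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
            warp = np.eye(2, 3, dtype=np.float32)
            # Estimate the shift that registers this frame to the first.
            _, warp = cv2.findTransformECC(base, g, warp, cv2.MOTION_TRANSLATION)
            acc += cv2.warpAffine(f, warp, (f.shape[1], f.shape[0]),
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        return (acc / len(frames)).astype(np.uint8)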

Alternatively, researchers may be able to find a better platform for all the bells and whistles. If cell phone cameras continue to improve optically, Levoy said, “they could begin to destroy the bottom of the camera market.”

His lab’s own research with an early prototype dubbed “Frankencamera” may help that trend along. The composite of camera parts and an over-sized viewfinder could help the research community find more flexible platforms able to incorporate the flood of new features being envisioned.

“Our goal is to try to create a platform for students in computational photography courses within two years,” Levoy said.

The platform would boast an open-source, Linux-based operating system to boost creativity and productivity. “The key idea here is that we want it programmable completely, right down to how it moves the focus and zoom motors.”

Changing digital photography
Agarwala said he's also begun working on an open-source prototype of his own, though it is still little more than a camera linked to a laptop. Like Levoy, he believes a move toward such platforms could spur the creation of multiple cell phone-like applications.

As one example, Agarwala cited an add-on that could help with rephotography. The technique, commonly used for before-and-after historical photos, tries to replicate the exact conditions of an original photograph at the same site, only much later in time.

“It’s a really great way to document things like a glacier melting for global warming,” he said. “But it’s actually really hard to do. There are six degrees of freedom that you have to search across. In theory, an algorithm can do it much better.”

As an application on a camera, a rephotography aid could analyze the original photo and instruct the photographer how to shift the camera to best match the view.
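
No such product exists yet, but the core pose-estimation step can be sketched with standard computer-vision tools: match features between the historical photo and the current viewfinder frame, then recover the relative rotation and direction of travel. Everything in the function below, from its name to the guessed focal length and principal point, is hypothetical.

    import numpy as np
    import cv2

    def pose_hint(reference, current, focal=1000.0, center=(640.0, 360.0)):
        """Relative pose between two 8-bit grayscale views; the focal
        length and principal point are uncalibrated guesses."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(reference, None)
        k2, d2 = sift.detectAndCompute(current, None)
        matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        E, _ = cv2.findEssentialMat(p1, p2, focal=focal, pp=center,
                                    method=cv2.RANSAC)
        # R, t say how to rotate and which way to move toward the old view.
        _, R, t, _ = cv2.recoverPose(E, p1, p2, focal=focal, pp=center)
        return R, t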

The coming changes may not be limited to cameras. Levoy said a revolution is underway in scientific imaging as well. His lab is working with the technique of light field imaging, which captures the whole field of incoming light, including detailed directional information about that light.

The extra data could allow a viewer to refocus an image well after it has been captured, through either a camera or a microscope, so that a magnified object might easily be viewed from different angles.

All told, the innovations may fundamentally change what it means to take a digital picture. With multiple ways of enhancing an image rapidly coming online, Freeman said, “the photographic record is no longer the final thing, it’s just data.”
