
Now showing: Movie clips from your mind

Study subjects were shown movie trailers, left, and neuroscientists were able to reconstruct them, right, using brain activity data and a library of random YouTube clips. (Image: Jack Gallant / Source: Discovery Channel)

What if scientists could peer inside your brain and then reconstruct what you were thinking, playing the images back like a video?

Science and technology are not even remotely at that point yet, but a new study from the University of California, Berkeley marks a significant, if blurry, step in that direction.

"Using our particular modeling framework, we can actually infer very fast dynamic events that happen in the brain," said Jack Gallant, a neuroscience professor at the University of California Berkeley who worked on the study, which was published today in the journal Current Biology.

To try to read the brain, the scientists had people watch compilations of clips from Hollywood movie trailers while lying still inside a functional magnetic resonance imaging (fMRI) machine. The machine took scans as the subjects watched the compilation 10 times, totaling around two hours.

Part of the challenge is that while an individual neuron can fire hundreds of times a second, fMRI machines measure slow blood-flow changes, providing only sluggish snapshots of those fast neural events. To translate the snapshots into dynamic information, the scientists created a model, or "dictionary," using a special reconstruction algorithm.
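To see why that translation step is needed, here is a minimal sketch, not the study's actual algorithm: the blood-oxygen (BOLD) signal fMRI records is, roughly, the fast neural activity smeared out by a slow hemodynamic response, so events seconds apart arrive blurred together. The gamma-shaped response and all numbers below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the study's model): fMRI's BOLD signal is
# approximately fast neural activity convolved with a slow hemodynamic
# response function (HRF). Recovering dynamics means undoing this blur.

dt = 0.1                       # seconds per sample (assumed)
t = np.arange(0, 30, dt)

# A simple gamma-shaped HRF peaking around 5 s, a common approximation.
hrf = (t ** 5) * np.exp(-t)
hrf /= hrf.sum()

# Fast neural events: brief bursts at 2 s and 4 s.
neural = np.zeros_like(t)
neural[int(2 / dt)] = 1.0
neural[int(4 / dt)] = 1.0

# The measured BOLD signal smears those bursts across many seconds,
# which is why pulling fast events out of fMRI snapshots is hard.
bold = np.convolve(neural, hrf)[: len(t)]
print(f"Neural bursts at 2 s and 4 s; BOLD peaks near {t[bold.argmax()]:.1f} s")
```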

For the reconstructed brain videos, the team drew from a separate library of 18 million seconds of YouTube video clips selected at random. If the brain activity measurements are good and the model is accurate, their process should find the clip from the library most similar to the video actually viewed.
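In code, that matching step might look something like the hedged sketch below, not the team's actual pipeline: an encoding model predicts the brain response each library clip would evoke, and the reconstruction keeps the clips whose predictions best correlate with the response actually observed. Every name, size, and count here is invented for illustration.

```python
import numpy as np

# Hypothetical illustration of clip matching. An encoding model has
# already predicted the brain response each library clip would evoke;
# we rank clips by how well those predictions match the observed scan.

rng = np.random.default_rng(0)
n_clips, n_voxels = 1000, 500                          # toy sizes
predicted = rng.standard_normal((n_clips, n_voxels))   # model predictions
observed = predicted[42] + 0.5 * rng.standard_normal(n_voxels)  # noisy scan

def zscore(x, axis=-1):
    """Standardize so the dot product below is a correlation."""
    return (x - x.mean(axis, keepdims=True)) / x.std(axis, keepdims=True)

# Correlate the observed response with every clip's predicted response.
scores = zscore(predicted) @ zscore(observed) / n_voxels
best = np.argsort(scores)[::-1][:100]   # top candidate clips
print("True clip ranked first:", best[0] == 42)
# A reconstruction would then blend the frames of these top clips.
```

Blending many near-matching clips, rather than betting on a single winner, would also explain why the reconstructions come out soft and ghostly rather than sharp.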

The result was a set of blurry, ghostly continuous videos approximating what the subjects were watching.

"You're reconstructing a movie that they saw using other movies that they didn't actually see," Gallant said. This counterintuitive approach is key for proving the decoder works.

In a previous study, Gallant and his colleagues had shown still black-and-white photos to subjects while they were in an fMRI machine. The latest research was propelled by a computational method developed by Shinji Nishimoto, a postdoctoral researcher in Gallant's lab and lead author on the Current Biology article. His method allowed the neuroscientists to recover dynamic brain activity from the fMRI scans.

"This provides you a new can opener that allows you to look at a lot of problems in human cognitive neuroscience that weren't really accessible before," Gallant said of their model. Next, he says the UC Berkeley team would like to create a decoder for semantic information from a higher level visual area of the brain.

If they're successful, it would make the video reconstructions far more accurate.

Gallant wants to be clear about his lab's research goal. "We're trying to understand how the brain works," he said. "We're not trying to build a brain-decoding device."

Even if they wanted to build one, it would require a revolution in imaging technology that could measure the brain better than fMRI can. At this point there's nothing like that on the horizon, he said.

Michael Mozer is a professor in the University of Colorado, Boulder's Department of Computer Science and Institute of Cognitive Science. He specializes in building models of the brain that help psychologists study how people form memories, how learning can be optimized and how people forget information.

In the past he's seen neuroscientists decode basic distinctions from brain activity, such as whether a person was looking at a face or a house. Mozer said he's impressed by the UC Berkeley mathematical model's capacity to read dynamic events despite the brain scanner's coarse representation of neural activity.

"There's an ability to pull out a lot more information from fMRI activity patterns than one might have thought," he said of the study. "It's going to impress people like me who are just skeptical that there was that much information you could read out of the brain."