Now Showing: Movie Clips From Your Mind

Source: Discovery Channel

What if scientists could peer inside your brain and then reconstruct what you were thinking, playing the images back like a video?

Science and technology are not even remotely at that point yet, but a new study from the University of California, Berkeley marks a significant, if blurry, step in that direction.

"Using our particular modeling framework, we can actually infer very fast dynamic events that happen in the brain," said Jack Gallant, a neuroscience professor at the University of California, Berkeley who worked on the study, which was published today in the journal Current Biology.


To try to read the brain, the scientists showed people compilations of YouTube clips from Hollywood movie trailers while they were inside a functional magnetic resonance imaging (fMRI) machine. The machine took scans as the subjects watched the compilation 10 times, totaling around two hours.

The scientists created a new computer model to decode the brain imaging data they collected, including general movement, shapes, and colors. They were able to translate the data into actual videos by matching the brain scans with the closest moving images from the giant database of random Internet video clips.
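The matching step described above can be sketched in miniature. This is a hypothetical illustration of the general idea only, not the authors' actual model: assume the decoder turns brain activity into a feature vector, rank library clips by how well their features correlate with it, and average the frames of the best matches into one blurry reconstruction. All names, feature choices, and the toy library here are invented for illustration.

```python
# Hypothetical sketch: reconstruct a frame by blending the library clips
# whose feature vectors best correlate with the decoded brain features.
# None of these names or values come from the actual study.

def correlation(a, b):
    # Pearson correlation between two equal-length feature vectors.
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    da = [x - mean_a for x in a]
    db = [y - mean_b for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0

def reconstruct(predicted_features, clip_library, top_k=3):
    # Rank every clip by similarity to the decoded features, then
    # average the pixel values of the top matches into one frame --
    # the averaging is what makes the result ghostly and blurred.
    ranked = sorted(
        clip_library,
        key=lambda clip: correlation(predicted_features, clip["features"]),
        reverse=True,
    )
    best = ranked[:top_k]
    n_pixels = len(best[0]["frame"])
    return [sum(clip["frame"][i] for clip in best) / len(best)
            for i in range(n_pixels)]

# Toy library: each clip has a feature vector and a flattened frame.
library = [
    {"features": [1.0, 0.0, 0.0], "frame": [0.9, 0.1, 0.1, 0.1]},
    {"features": [0.9, 0.1, 0.0], "frame": [0.8, 0.2, 0.1, 0.1]},
    {"features": [0.0, 1.0, 0.0], "frame": [0.1, 0.9, 0.1, 0.1]},
    {"features": [0.0, 0.0, 1.0], "frame": [0.1, 0.1, 0.9, 0.1]},
]

decoded = [0.95, 0.05, 0.0]  # pretend output of the fMRI decoder
frame = reconstruct(decoded, library, top_k=2)
```

With these toy numbers, the two clips whose features point in nearly the same direction as the decoded vector are blended, so the output frame is a soft average of their pixels rather than any single clip the subject "saw."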

The result was a set of blurry, ghostly continuous videos approximating what the subjects were watching.

The model struggled to reconstruct videos showing objects and abstractions because even though it was random, the YouTube library was skewed toward videos of people. Gallant says a much larger library of random clips than the 18 million seconds they used would likely yield clearer results, but for the foreseeable future the blur is here to stay.

"You're reconstructing a movie that they saw using other movies that they didn't actually see," Gallant said. This counterintuitive approach is key for proving the decoder works.

In a previous study, Gallant and his colleagues had shown still black-and-white photos to subjects while they were in an fMRI machine. The latest research was propelled by a computational method developed by Shinji Nishimoto, a postdoctoral researcher in Gallant's lab and lead author on the Current Biology article. His method allowed the neuroscientists to recover dynamic brain activity from the fMRI scans.

"This provides you a new can opener that allows you to look at a lot of problems in human cognitive neuroscience that weren't really accessible before," Gallant said of their model. Next, he says, the UC Berkeley team would like to create a decoder for semantic information from a higher-level visual area of the brain.

If they're successful, it would make the video reconstructions far more accurate.

Gallant wants to be clear about his lab's research goal. "We're trying to understand how the brain works," he said. "We're not trying to build a brain-decoding device."


Even if they wanted to build one, it would require a revolution in imaging technology that could measure the brain far better than fMRI. At this point there's nothing like that on the horizon, he said.

Michael Mozer is a professor in the University of Colorado, Boulder's Department of Computer Science and Institute of Cognitive Science. He specializes in building models of the brain to help psychologists study memory formation, learning optimization, and how people forget information.

In the past he's seen neuroscientists decode basic things from the brain such as differentiating between a face and a house. Mozer said he's impressed by the UC Berkeley mathematical model's capacity to read dynamic events despite the coarse representation of neuron activity from the brain scanner.

"There's an ability to pull out a lot more information from fMRI activity patterns than one might have thought," he said of the study. "It's going to impress people like me who are just skeptical that there was that much information you could read out of the brain."