How to make movies of what the brain sees

September 23, 2011 by Amara D. Angelica

Brainstorm: recording and playing back actual experiences of people (credit: MGM)

Remember the movie Brainstorm? Imagine watching someone’s dream, or tapping directly into the mind of a coma patient. University of California, Berkeley scientists claim they have finally realized this classic futuristic-movie “mind reading” trope.

They’re using functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct people’s dynamic visual experiences.

So far, the technology can only reconstruct movie clips you’ve already viewed. But they claim that the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories. “This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study. “We are opening a window into the movies in our minds.”

Communicating with comatose patients

Eventually, the researchers say, the technology could allow us to see into the minds of people who cannot communicate verbally, such as stroke victims, coma patients, and people with neurodegenerative diseases. It may also lay the groundwork for a brain-machine interface, so people with cerebral palsy or paralysis, for example, can guide computers with their minds.

How it works

The Final Cut: Set in a world with memory implants, Robin Williams plays a cutter, someone with the power of final edit over people’s recorded histories (credit: Lions Gate Entertainment)

Gallant and fellow researchers previously recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to accurately predict which picture the subject was looking at.

In their latest experiment, the researchers report, they have solved a much harder problem: decoding the brain signals generated by moving pictures.

“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. “In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”

The Cell: A therapist enters the mind of a serial killer (credit: New Line Cinema)

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still, inside the MRI scanner, for hours at a time.

They watched two separate sets of Hollywood movie trailers while fMRI measured blood flow through the visual cortex, the part of the brain that processes visual information.

On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or voxels. “We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity,” Nishimoto said.
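The study itself fits motion-energy models per voxel; as a rough sketch of the general idea (the function names, feature matrices, and plain ridge solver below are illustrative assumptions, not the authors’ code), a per-voxel linear encoding model might look like this:

```python
import numpy as np

# Illustrative sketch only -- names, shapes, and the ridge solver are
# assumptions, not the paper's motion-energy model.
# X: (n_seconds, n_features) shape/motion features extracted from the movie
# Y: (n_seconds, n_voxels)  fMRI activity, one column per voxel
def fit_voxel_models(X, Y, ridge=1.0):
    """Fit one regularized linear model per voxel in closed form:
    W = (X'X + ridge*I)^-1 X'Y; column v of W predicts voxel v."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_features), X.T @ Y)

def predict_activity(X_new, W):
    """Predict every voxel's response to a new movie's features."""
    return X_new @ W  # (n_seconds_new, n_voxels)
```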

Strange Days: An ex-cop who now deals with data-discs containing recorded memories and emotions receives a disc that contains the memories of a murderer (credit: Lightstorm Entertainment)

Reconstructing brain movies

Brain activity recorded while the subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm.

The reconstruction was done by feeding 18 million seconds of random YouTube video into the computer program, so that it could predict the brain activity that each clip would most likely evoke in each subject.

Finally, the 100 clips whose predicted brain activity was most similar to the activity the subject’s brain actually produced were merged to produce a blurry yet continuous reconstruction of the original movie.
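Continuing the sketch above, the matching-and-averaging step the article describes could look roughly like this (the shapes, names, and Pearson-correlation scoring are assumptions, not the paper’s implementation):

```python
import numpy as np

def reconstruct_second(measured, library_features, library_frames, W, top_k=100):
    """Rank every library clip by how well its *predicted* activity matches
    the *measured* activity for one second of viewing, then average the
    frames of the top_k best-matching clips (all names are illustrative).

    measured:         (n_voxels,) fMRI pattern for one second
    library_features: (n_clips, n_features) features of the YouTube library
    library_frames:   (n_clips, height, width) representative frames
    """
    predicted = library_features @ W                  # (n_clips, n_voxels)
    # Pearson correlation of each predicted pattern with the measured one
    scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
    top = np.argsort(scores)[-top_k:]                 # indices of best matches
    return library_frames[top].mean(axis=0), scores   # blurry averaged frame
```

Averaging the best matches, rather than keeping only the single best clip, is what makes the result blurry yet continuous.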

Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, the researchers said. (That’s apparently why most previous attempts to decode brain activity have been limited to static images.)

“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto said.
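A minimal sketch of that idea, assuming a standard two-gamma hemodynamic response function (HRF) approximation rather than the paper’s own fitted hemodynamic model:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical two-gamma hemodynamic response function -- a textbook
    approximation assumed here, not the paper's fitted model."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def neural_to_bold(neural, tr=1.0):
    """Second stage: convolve the fast underlying neural signal with the
    slow HRF to get the sluggish blood-flow (BOLD) signal fMRI measures."""
    kernel = hrf(np.arange(0.0, 30.0, tr))  # ~30 s covers the full HRF
    return np.convolve(neural, kernel)[: len(neural)]
```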

Ultimately, Nishimoto said, “We need to know how the brain works in naturalistic conditions. For that, we need to first understand how the brain works while we are watching movies.”

Ref.: Shinji Nishimoto et al., Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, Current Biology, 2011. DOI: 10.1016/j.cub.2011.08.031

This video is organized as follows: the movie that each subject viewed while in the magnet is shown at upper left, and reconstructions for three subjects are shown in the three rows at bottom. All the reconstructions were obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. (In brief, the algorithm processes each of the 18 million clips through the brain model and identifies the clips that would have produced brain activity as similar to the measured activity as possible. The clips used to fit the model, the clips used to test the model, and the clips used to reconstruct the stimulus were entirely separate.)

The reconstruction at far left is the Average High Posterior (AHP), obtained by simply averaging over the 100 most likely movies in the reconstruction library. The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions.

These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of the brain activity data recorded from each subject.
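The caption’s two reconstruction types differ only in how many of the ranked library clips they keep. A minimal sketch, reusing the illustrative `scores` ranking from the earlier reconstruction example (these helper names are hypothetical):

```python
import numpy as np

def map_reconstruction(scores, library_frames):
    """Maximum a Posteriori (MAP): the single most likely library clip."""
    return library_frames[np.argmax(scores)]

def ahp_reconstruction(scores, library_frames, k=100):
    """Average High Posterior (AHP): average of the k most likely clips."""
    top = np.argsort(scores)[-k:]
    return library_frames[top].mean(axis=0)
```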