Neurocomputation of Music, Faces and Belly Laughs
Pattern-classifiers interpret fMRI data
Peek inside the skull of a couch potato watching reruns on TV and you’ll see non-stop patterns of blood flow throughout the brain. If you learn to pick out which activity patterns match up with, say, a good belly laugh, then you might be on your way to reading the viewer’s internal experiences. Recently, experts from a variety of fields competed to glean subjective perceptions like humor from functional MRIs of TV viewers. They were surprisingly successful.
“Our goal is to know how the brain represents information,” says Walter Schneider, PhD, professor of psychology at the University of Pittsburgh and principal investigator of the Experience Based Cognition group, which sponsored the competition. “In theory, if we can understand the information in the activity of somebody’s brain, then we can understand what they perceived.”
In the competition, 40 teams of researchers from nine countries developed pattern-classification methods for interpreting fMRI data. They used training data derived from three volunteers watching scenes from two episodes of “Home Improvement” 14 times each—once in an MRI scanner and 13 times while reporting their perceptions. The teams tested their methods on fMRI images of the volunteers watching a third set of scenes from the TV show. The goal was to decipher each individual’s brain activation patterns and then describe his or her TV-watching experience in a way that would closely match the volunteer’s real-time impressions. Winners were announced in June at the Organization for Human Brain Mapping meeting in Florence, Italy.
Overall, predictions were remarkably accurate, Schneider says. The easiest patterns to pick out in the fMRI data were those that occurred when volunteers heard background music. The top group’s prediction for music perception was “almost right on top” of the volunteers’ own ratings, he says, with an average correlation of 0.84. Patterns for faces, language, and environmental sounds were also generally easy to detect, and some groups excelled at identifying when the volunteers recognized specific actors in the scenes. On the other hand, nearly all groups stumbled at figuring out when food was visible on the screen. Perhaps the mere sight of food doesn’t evoke strong signals in the brain, Schneider says, “although one subject did skip lunch, and we got better responses for him.”
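The scoring described here, comparing a classifier's predicted time course against the volunteers' own ratings, amounts to computing a Pearson correlation. A minimal sketch with hypothetical ratings (the data below are invented for illustration, not from the competition):

```python
import numpy as np

def score_prediction(predicted, reported):
    """Pearson correlation between a classifier's predicted ratings
    and a volunteer's self-reported ratings."""
    return float(np.corrcoef(predicted, reported)[0, 1])

# Hypothetical per-scene "music present" ratings on a 0-1 scale.
reported  = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.1])
predicted = np.array([0.8, 0.2, 0.9, 0.1, 0.6, 0.3])
print(score_prediction(predicted, reported))
```

A correlation near 1.0 means the prediction tracks the reported experience closely; the winning team's 0.84 average for music was measured this way.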
The top group, led by Sriharsha Veeramachaneni, PhD, a researcher at the Center for Scientific and Technological Research at the Istituto Trentino Di Cultura in Italy (ITC-IRST) with a background in computer engineering, built a model with recurrent neural networks. Despite knowing “practically nothing” about analyzing brain images, Veeramachaneni says, the researchers soon realized they could treat these signals as generic data for the purposes of the competition.
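The appeal of a recurrent network for this task is that it carries a hidden state from one fMRI frame to the next, so a prediction at each timepoint can draw on what came before. The sketch below is a toy Elman-style forward pass over synthetic data, with invented dimensions and untrained random weights; it is not the winning team's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 50 voxel features per frame, 8 hidden units.
n_vox, n_hid = 50, 8
W_in  = rng.standard_normal((n_hid, n_vox)) * 0.1  # input -> hidden
W_rec = rng.standard_normal((n_hid, n_hid)) * 0.1  # hidden -> hidden (the recurrence)
w_out = rng.standard_normal(n_hid) * 0.1           # hidden -> predicted rating

def predict_ratings(fmri_frames):
    """Emit one predicted perceptual rating (0-1) per fMRI timepoint,
    with the hidden state carrying temporal context between frames."""
    h = np.zeros(n_hid)
    ratings = []
    for x in fmri_frames:
        h = np.tanh(W_in @ x + W_rec @ h)
        ratings.append(1 / (1 + np.exp(-w_out @ h)))  # squash to (0, 1)
    return np.array(ratings)

frames = rng.standard_normal((20, n_vox))  # 20 timepoints of synthetic data
print(predict_ratings(frames).shape)
```

In a real system the weights would be trained against the volunteers' reported ratings; here they simply show the data flow.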
The second-place team, led by Denis Chigirev, a physics doctoral student at Princeton University, concentrated on extensive preprocessing of the data across space and time—an approach that reflects the group’s perspective. “Physicists pay careful attention to what is signal and what is noise,” Chigirev says. “We wanted to let the signal tell us what to do.”
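One standard way to "let the signal tell you what to do" with fMRI time series is to strip out slow scanner drift and normalize each voxel before classification. The following is a generic illustration of that kind of temporal cleanup, assuming a simple linear drift; it is not a description of the Princeton team's specific pipeline:

```python
import numpy as np

def preprocess(voxel_ts):
    """Remove a linear drift from one voxel's time series, then z-score
    it so voxels with different baselines become comparable."""
    t = np.arange(len(voxel_ts))
    slope, intercept = np.polyfit(t, voxel_ts, 1)   # fit the drift
    detrended = voxel_ts - (slope * t + intercept)  # subtract it
    return (detrended - detrended.mean()) / detrended.std()

rng = np.random.default_rng(1)
raw = 0.05 * np.arange(100) + rng.standard_normal(100)  # noise plus drift
clean = preprocess(raw)
print(clean.mean(), clean.std())
```

After this step the series has mean zero and unit variance, so a classifier sees only the fluctuations, not the drift.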
Alexis Battle, a computer science doctoral student at Stanford University, led the third-place group, which explicitly modeled correlations in the dataset. “We thought about the relationships in the data that we could exploit,” Battle says. “We chose to encode the relationships in a formal probabilistic framework.”
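The idea of encoding relationships probabilistically can be illustrated with two labels that tend to co-occur, such as faces and language during dialogue scenes: a joint prior over the pair lets strong evidence for one label sharpen the estimate for the other. The prior table and probabilities below are invented for illustration and are not the Stanford team's actual model:

```python
import numpy as np

# Hypothetical joint prior over two perceptual features.
# Rows: faces absent/present; columns: language absent/present.
joint_prior = np.array([[0.35, 0.05],
                        [0.10, 0.50]])

def combine(p_faces, p_language):
    """Fuse two independent classifier outputs with the joint prior
    and return the updated marginal probabilities for each label."""
    likelihood = np.array(
        [[(1 - p_faces) * (1 - p_language), (1 - p_faces) * p_language],
         [p_faces * (1 - p_language),       p_faces * p_language]])
    posterior = joint_prior * likelihood
    posterior /= posterior.sum()
    return posterior[1].sum(), posterior[:, 1].sum()

# A weak "faces" signal (0.6) is boosted by a strong "language" signal (0.9).
print(combine(0.6, 0.9))
```

Because the prior says the two features usually appear together, the combined estimate for faces climbs well above the 0.6 that the face classifier reported on its own.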
Schneider is already “playing matchmaker” to facilitate new multidisciplinary collaborations next year. According to Daphne Koller, PhD, professor of computer science at Stanford University and principal investigator for Battle’s team, “The fMRI field is at the point that genomics was 10 years ago. There’s a tremendous opportunity now for us to integrate computational methods with the understanding that’s being developed by the brain scientists.”