Scientists able to zoom in and out as the brain processes sound
August 18, 2014
Researchers at Johns Hopkins have developed a new technique for watching auditory processing in the brains of mice, observing brain cells light up as the animals listened to tones and to one another’s calls.
The results, which represent a step toward better understanding how our own brains process language, appear online July 31 in the journal Neuron.
In the past, researchers often studied sound processing in various animal brains by poking tiny electrodes into the auditory cortex, the part of the brain that processes sound.
They then played tones and observed the response of nearby neurons, laboriously repeating the process over a gridlike pattern to figure out where the active neurons were. Those studies suggested that neurons tuned to similar frequencies are arranged in orderly bands across the auditory cortex.
Confusing zoomed-in views
More recently, a technique called two-photon microscopy has allowed researchers to zoom in on tiny patches of the living mouse brain, observing activity in unprecedented detail. This newer approach has suggested that the precise arrangement of frequency bands might be an illusion.
However, “you could lose your way within the zoomed-in views afforded by two-photon microscopy and not know exactly where you are in the brain,” says David Yue, M.D., Ph.D., a professor of biomedical engineering and neuroscience at the Johns Hopkins University School of Medicine. Yue led the study along with Eric Young, Ph.D., also a professor of biomedical engineering and a researcher in Johns Hopkins’ Institute for Basic Biomedical Sciences.
To get the bigger picture, John Issa, a graduate student in Yue’s lab, used a mouse genetically engineered to produce a molecule that glows green in the presence of calcium.
Since calcium levels rise in neurons when they become active, neurons in the mouse’s auditory cortex glow green when activated by various sounds.
Issa used a two-photon microscope to peer into the brains of live mice as they listened to sounds, watching which neurons lit up in response and piecing together a global map of a given mouse’s auditory cortex.
“With these mice, we were able to both monitor the activity of individual populations of neurons and zoom out to see how those populations fit into a larger organizational picture,” he says.
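The mapping step described above boils down to finding each neuron’s preferred, or “best,” frequency: the tone that evokes its strongest fluorescence response. The sketch below is a hypothetical illustration of that idea, not the authors’ analysis code; the response values are simulated.

```python
# Hypothetical sketch: estimating each neuron's "best frequency" from
# fluorescence responses -- the basic step in building a tonotopic map.
# All response values here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Tone frequencies presented to the mouse (kHz), low to high.
tone_freqs_khz = np.array([4, 8, 16, 32, 64])

# responses[i, j]: mean fluorescence change (dF/F) of neuron i to tone j.
# Simulate 6 neurons, each strongly tuned to one tone, plus baseline noise.
true_tuning = np.array([0, 0, 1, 2, 3, 4])       # index of each neuron's preferred tone
responses = rng.normal(0.05, 0.02, size=(6, 5))  # small baseline fluctuations
responses[np.arange(6), true_tuning] += 0.5      # strong response at the tuned tone

# Best frequency per neuron: the tone evoking the largest response.
best_freq = tone_freqs_khz[np.argmax(responses, axis=1)]
print(best_freq)  # [ 4  4  8 16 32 64]
```

In the real experiment, plotting each neuron’s best frequency at its cortical location is what reveals whether nearby neurons are cotuned and whether preferred frequencies follow a tonotopic axis.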
Improving cochlear implants, other hearing aids
With these advances, Issa and the rest of the research team were able to see the tidy tone bands identified in earlier electrode studies. In addition, the new imaging platform quickly revealed more sophisticated properties of the auditory cortex, particularly as mice listened to the chirps they use to communicate with each other.
“Understanding how sound representation is organized in the brain is ultimately very important for better treating hearing deficits,” Yue says. “We hope that mouse experiments like this can provide a basis for figuring out how our own brains process language and, eventually, how to help people with cochlear implants and similar interventions hear better.”
Yue notes that the same approach could also be used to understand other parts of the brain as they react to outside stimuli, such as the visual cortex and the parts of the brain responsible for processing stimuli from limbs.
This work was supported by the Robert J. Kleberg, Jr. and Helen C. Kleberg Foundation, the National Institute of Neurological Disorders and Stroke, the National Institutes of Health’s Medical Scientist Training Program, and the National Institute on Deafness and Other Communication Disorders.
Abstract of Neuron paper
- High-sensitivity mode of transcranial imaging of cortex in unanesthetized mice
- Spectral organization of auditory cortex under widefield imaging is highly regular
- Neighboring neurons in AI are appreciably cotuned
- Increased spectral integration is observed in neurons of AII
Spatial patterns of functional organization, resolved by microelectrode mapping, comprise a core principle of sensory cortices. In auditory cortex, however, recent two-photon Ca2+ imaging challenges this precept, as the traditional tonotopic arrangement appears weakly organized at the level of individual neurons. To resolve this fundamental ambiguity about the organization of auditory cortex, we developed multiscale optical Ca2+ imaging of unanesthetized GCaMP transgenic mice. Single-neuron activity monitored by two-photon imaging was precisely registered to large-scale cortical maps provided by transcranial widefield imaging. Neurons in the primary field responded well to tones; neighboring neurons were appreciably cotuned, and preferred frequencies adhered tightly to a tonotopic axis. By contrast, nearby secondary-field neurons exhibited heterogeneous tuning. The multiscale imaging approach also readily localized vocalization regions and neurons. Altogether, these findings cohere electrode and two-photon perspectives, resolve new features of auditory cortex, and offer a promising approach generalizable to any cortical area.