Researchers identify components of speech recognition pathway in humans

June 24, 2011

Neuroscientists at Georgetown University Medical Center (GUMC) have defined, for the first time, three distinct processing stages the human brain uses to identify sounds such as speech, and discovered that they match the stages previously identified in non-human primates.

With the help of 13 human volunteers who spent time in a functional MRI machine, the researchers showed that both human and non-human primates process speech along two parallel pathways, each of which runs from lower- to higher-functioning neural regions. These pathways, dubbed the “what” and “where” streams, are roughly analogous to those the brain uses to process sight, but lie in different regions. The “where” stream localizes a sound; the “what” stream identifies it.

The researchers identified three distinct areas in the human “what” pathway corresponding to those seen in non-human primates; only two had been recognized in previous human studies. The first and most basic is the “core,” which analyzes tones at the level of simple frequencies. The second, the “belt,” wraps around the core and integrates several tones that lie close to each other, “like buzz sounds,” the researchers said. The third, the “parabelt,” responds to speech sounds such as vowels, which are essentially complex bursts of multiple frequencies.

The discovery could offer important insights into what goes wrong when someone has difficulty speaking, a process that involves hearing one’s own voice-generated sounds, or difficulty understanding the speech of others, the researchers said.

Ref.: M. Chevillet, M. Riesenhuber, J. P. Rauschecker, Functional Correlates of the Anterolateral Processing Hierarchy in Human Auditory Cortex, Journal of Neuroscience, 2011; 31 (25): 9345 [DOI: 10.1523/JNEUROSCI.1448-11.2011]