Human voices play a fundamental role in social communication, and areas of the adult "social brain" show specialization for processing voices and their emotional content (superior temporal sulcus, inferior prefrontal cortex, premotor cortical regions, amygdala, and insula). However, it is unclear when this specialization develops. Functional magnetic resonance imaging (fMRI) studies suggest that the infant temporal cortex does not differentiate speech from music or backward speech, but a prior functional near-infrared spectroscopy study revealed preferential activation for human voices in 7-month-olds, in a more posterior location of the temporal cortex than in adults. However, the brain networks involved in processing nonspeech human vocalizations in early development remain unknown. To address this issue, in the present fMRI study, 3- to 7-month-olds were presented with adult nonspeech vocalizations (emotionally neutral, emotionally positive, and emotionally negative) and nonvocal environmental sounds. Infants displayed significant differential activation in the anterior portion of the temporal cortex, similarly to adults. Moreover, sad vocalizations modulated the activity of brain regions involved in processing affective stimuli, such as the orbitofrontal cortex and insula. These results suggest remarkably early functional specialization for processing the human voice and negative emotions.