Concept: Wernicke's area
Lateralized brain regions subserve functions such as language and visuospatial processing. It has been conjectured that individuals may be left-brain dominant or right-brain dominant based on personality and cognitive style, but neuroimaging data have not provided clear evidence of whether such phenotypic differences in the strength of left-dominant or right-dominant networks exist. We evaluated whether strongly lateralized connections covaried within the same individuals. Data were analyzed from publicly available resting-state scans for 1011 individuals between the ages of 7 and 29. For each subject, functional lateralization was measured for each pair of 7266 regions covering the gray matter at 5-mm resolution as a difference in correlation before and after inverting images across the midsagittal plane. The difference in gray matter density between homotopic coordinates was used as a regressor to reduce the effect of structural asymmetries on functional lateralization. Nine left- and 11 right-lateralized hubs were identified as peaks in the degree map from the graph of significantly lateralized connections. The left-lateralized hubs included regions from the default mode network (medial prefrontal cortex, posterior cingulate cortex, and temporoparietal junction) and language regions (e.g., Broca's area and Wernicke's area), whereas the right-lateralized hubs included regions from the attention control network (e.g., lateral intraparietal sulcus, anterior insula, area MT, and frontal eye fields). Left- and right-lateralized hubs formed two separable networks of mutually lateralized regions. Connections involving only left- or only right-lateralized hubs showed positive correlation across subjects, but only for connections sharing a node.
Lateralization of brain connections appears to be a local rather than global property of brain networks, and our data are not consistent with a whole-brain phenotype of greater "left-brained" or greater "right-brained" network strength across individuals. Small increases in lateralization with age were seen, but no gender differences were observed.
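The per-connection lateralization measure described above (correlation between two regions' time series, minus the correlation obtained after reflecting the volume across the midsagittal plane) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the convention of passing pre-extracted time series from the flipped image are assumptions.

```python
import numpy as np

def lateralization(ts_a, ts_b, ts_a_flip, ts_b_flip):
    """Functional lateralization of the connection A-B:
    correlation of the original time series minus the correlation
    of the time series extracted at the same coordinates after the
    volume is mirrored across the midsagittal plane (i.e., the
    homotopic counterparts of A and B). Hypothetical helper."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    r_flip = np.corrcoef(ts_a_flip, ts_b_flip)[0, 1]
    return r - r_flip
```

A connection whose strength is identical in both hemispheres yields a value near zero; strongly left- or right-lateralized connections yield large positive or negative values, which is what allows the degree map of significantly lateralized connections to be built.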
A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech depends not only on acoustic characteristics but also on listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
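The cerebro-acoustic phase locking described above can be approximated by a standard phase-locking value (PLV) computed in the 4-7 Hz band where the speech envelope carries most power. This is a generic SciPy sketch under those assumptions, not the authors' MEG pipeline; `phase_locking` is a hypothetical helper.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_locking(neural, envelope, fs, band=(4.0, 7.0)):
    """Phase-locking value between a neural signal and a speech
    envelope: band-pass both signals (default 4-7 Hz), take the
    instantaneous phase via the Hilbert transform, and measure the
    consistency of the phase difference. 1 = perfect locking, 0 = none."""
    sos = butter(4, band, btype="band", fs=fs, output="sos")
    ph_n = np.angle(hilbert(sosfiltfilt(sos, neural)))
    ph_e = np.angle(hilbert(sosfiltfilt(sos, envelope)))
    return np.abs(np.mean(np.exp(1j * (ph_n - ph_e))))
```

A constant lag between brain response and envelope still yields a PLV near 1, which is why the measure indexes entrainment rather than simple simultaneity.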
Age is one of the most salient aspects in faces and of fundamental cognitive and social relevance. Although face processing has been studied extensively, brain regions responsive to age have yet to be localized. Using evocative face morphs and fMRI, we segregate two areas extending beyond the previously established face-sensitive core network, centered on the inferior temporal sulci and angular gyri bilaterally, both of which process changes of facial age. By means of probabilistic tractography, we compare their patterns of functional activation and structural connectivity. The ventral portion of Wernicke’s understudied perpendicular association fasciculus is shown to interconnect the two areas, and activation within these clusters is related to the probability of fiber connectivity between them. In addition, post-hoc age-rating competence is found to be associated with high response magnitudes in the left angular gyrus. Our results provide the first evidence that facial age has a distinct representation pattern in the posterior human brain. We propose that particular face-sensitive nodes interact with additional object-unselective quantification modules to obtain individual estimates of facial age. This brain network processing the age of faces differs from the cortical areas that have previously been linked to less developmental but instantly changeable face aspects. Our probabilistic method of associating activations with connectivity patterns reveals an exemplary link that can be used to further study, assess and quantify structure-function relationships.
BACKGROUND: Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. RESULTS: FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), as well as normal controls - utilizing multi-channel, reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from epilepsy inpatients with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. CONCLUSION: The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream.
Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings.
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 5 years ago
Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of “bite,” “dog,” and “man,” we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes “who did what to whom” in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables (“Who did it?” and “To whom was it done?”) in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning (“Who did what to whom?”) and (ii) predict affect-related amygdala responses that depend on this information (e.g., “the baby kicked the grandfather” vs. “the grandfather kicked the baby”). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current “agent” (“Who did it?”) and the current “patient” (“To whom was it done?”). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts.
- Proceedings of the National Academy of Sciences of the United States of America
- Published about 5 years ago
For over a century neuroscientists have debated the dynamics by which human cortical language networks allow words to be spoken. Although it is widely accepted that Broca’s area in the left inferior frontal gyrus plays an important role in this process, it was not possible, until recently, to detail the timing of its recruitment relative to other language areas, nor how it interacts with these areas during word production. Using direct cortical surface recordings in neurosurgical patients, we studied the evolution of activity in cortical neuronal populations, as well as the Granger causal interactions between them. We found that, during the cued production of words, a temporal cascade of neural activity proceeds from sensory representations of words in temporal cortex to their corresponding articulatory gestures in motor cortex. Broca’s area mediates this cascade through reciprocal interactions with temporal and frontal motor regions. Contrary to classic notions of the role of Broca’s area in speech, while motor cortex is activated during spoken responses, Broca’s area is surprisingly silent. Moreover, when novel strings of articulatory gestures must be produced in response to nonword stimuli, neural activity is enhanced in Broca’s area, but not in motor cortex. These unique data provide evidence that Broca’s area coordinates the transformation of information across large-scale cortical networks involved in spoken word production. In this role, Broca’s area formulates an appropriate articulatory code to be implemented by motor cortex.
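The Granger-causal interactions mentioned above ask whether the past of one cortical signal improves prediction of another beyond that signal's own past. A minimal, bivariate toy version can be sketched as below; this lag-5 least-squares formulation is an illustrative simplification of the conditional analyses typically applied to electrocorticographic recordings, and `granger_stat` is a hypothetical helper.

```python
import numpy as np

def granger_stat(x, y, lag=5):
    """Log ratio of residual variances for predicting y from its own
    past alone vs. its own past plus the past of x. Values > 0 suggest
    that x Granger-causes y (toy bivariate version)."""
    n = len(y)
    target = y[lag:]
    # lagged copies of y (own history) and of x (candidate cause)
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    both = np.column_stack([own] + [x[lag - k:n - k] for k in range(1, lag + 1)])
    res_own = target - own @ np.linalg.lstsq(own, target, rcond=None)[0]
    res_both = target - both @ np.linalg.lstsq(both, target, rcond=None)[0]
    return np.log(res_own.var() / res_both.var())
```

Comparing the statistic in both directions for each pair of electrode signals gives the kind of directed, time-resolved interaction map the study uses to show Broca's area mediating the temporal-to-motor cascade.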
Grid cells in the entorhinal cortex (EC) of rodents and humans fire in a hexagonally distributed, spatially periodic manner. In concert with other spatial cells in the medial temporal lobe (MTL) [3-6], they provide a representation of our location within an environment [7, 8] and are specifically thought to allow the represented location to be updated by self-motion. Grid-like signals have been seen throughout the autobiographical memory system, suggesting a much more general role in memory [11, 12]. Grid cells may allow us to move our viewpoint in imagination, a useful function for goal-directed navigation and planning [12, 14-16], and episodic future thinking more generally [17, 18]. We used fMRI to provide evidence for similar grid-like signals in human entorhinal cortex during both virtual navigation and imagined navigation of the same paths. We show that this signal is present in periods of active navigation and imagination, with a similar orientation in both and with the specifically 6-fold rotational symmetry characteristic of grid cell firing. We therefore provide the first evidence suggesting that grid cells are utilized during movement of viewpoint within imagery, potentially underpinning our more general ability to mentally traverse possible routes in the service of planning and episodic future thinking.
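The 6-fold rotational symmetry test for grid-like fMRI signals amounts to regressing the signal on sinusoids of running (or imagined) direction and comparing the modulation amplitude at 6-fold symmetry against control symmetries. The sketch below illustrates the idea under those assumptions; `symmetry_amplitude` is a hypothetical helper, not the study's actual GLM.

```python
import numpy as np

def symmetry_amplitude(signal, theta, fold):
    """Amplitude of fold-fold rotational modulation of a signal by
    direction theta: regress the (mean-centered) signal on
    cos(fold*theta) and sin(fold*theta) and return the magnitude
    of the fitted modulation."""
    X = np.column_stack([np.cos(fold * theta), np.sin(fold * theta)])
    beta, *_ = np.linalg.lstsq(X, signal - signal.mean(), rcond=None)
    return np.hypot(beta[0], beta[1])
```

A genuinely grid-like signal shows a clear peak at fold = 6 but little modulation at 4-, 5-, or 7-fold symmetry, and the phase of the fitted sinusoid gives the grid orientation that the study compares between navigation and imagination.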
Rhythm is a central characteristic of music and speech, the most important domains of human communication using acoustic signals. Here, we investigated how rhythmical patterns in music are processed in the human brain, and, in addition, evaluated the impact of musical training on rhythm processing. Using fMRI, we found that deviations from a rule-based regular rhythmic structure activated the left planum temporale together with Broca’s area and its right-hemispheric homolog across subjects, that is, a network also crucially involved in the processing of harmonic structure in music and the syntactic analysis of language. Comparing the BOLD responses to rhythmic variations between professional jazz drummers and musical laypersons, we found that only highly trained rhythmic experts show additional activity in left-hemispheric supramarginal gyrus, a higher-order region involved in processing of linguistic syntax. This suggests an additional functional recruitment of brain areas usually dedicated to complex linguistic syntax processing for the analysis of rhythmical patterns only in professional jazz drummers, who are especially trained to use rhythmical cues for communication.
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 8 years ago
Unlike nonhuman primates, songbirds learn to vocalize much as human infants acquire spoken language. In humans, Broca's area in the frontal lobe and Wernicke's area in the temporal lobe are crucially involved in speech production and perception, respectively. Songbirds have analogous brain regions that show a similar neural dissociation between vocal production and auditory perception and memory. In both humans and songbirds, there is evidence for lateralization of neural responsiveness in these brain regions. Human infants already show left-sided dominance in their brain activation when exposed to speech. Moreover, a memory-specific left-sided dominance in Wernicke's area for speech perception has been demonstrated in 2.5-mo-old babies. It is possible that auditory-vocal learning is associated with hemispheric dominance and that this association arose in songbirds and humans through convergent evolution. Therefore, we investigated whether there is similar song memory-related lateralization in the songbird brain. We exposed male zebra finches to tutor or unfamiliar song. We found left-sided dominance of neuronal activation in a Broca-like brain region (HVC, a letter-based name) of juvenile and adult zebra finch males, independent of the song stimulus presented. In addition, juvenile males showed left-sided dominance for tutor song but not for unfamiliar song in a Wernicke-like brain region (the caudomedial nidopallium). Thus, left-sided dominance in the caudomedial nidopallium was specific for the song-learning phase and was memory-related. These findings demonstrate a remarkable neural parallel between birdsong and human spoken language, and they have important consequences for our understanding of the evolution of auditory-vocal learning and its neural mechanisms.
Prior research using functional magnetic resonance imaging (fMRI) [1-4] and behavioral studies of patients with acquired or congenital amusia [5-8] suggest that the right posterior superior temporal gyrus (STG) in the human brain is specialized for aspects of music processing (for review, see [9-12]). Intracranial electrical brain stimulation in awake neurosurgery patients is a powerful means to determine the computations supported by specific brain regions and networks [13-21] because it provides reversible causal evidence with high spatial resolution (for review, see [22, 23]). Prior intracranial stimulation or cortical cooling studies have investigated musical abilities related to reading music scores [13, 14] and singing familiar songs [24, 25]. However, individuals with amusia (congenitally, or from a brain injury) have difficulty humming melodies but may be spared when singing familiar songs with familiar lyrics. Here we report a detailed study of a musician with a low-grade tumor in the right temporal lobe. Functional MRI was used pre-operatively to localize music processing to the right STG, and the patient subsequently underwent awake intraoperative mapping using direct electrical stimulation during a melody repetition task. Stimulation of the right STG induced "music arrest" and errors in pitch but did not affect language processing. These findings provide causal evidence for the functional segregation of music and language processing in the human brain and confirm a specific role of the right STG in melody processing.