Concept: Superior temporal gyrus
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single-trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher-order human auditory cortex.
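The linear reconstruction approach described above can be illustrated as time-lagged ridge regression mapping population neural activity onto spectrogram bands. The following is a minimal NumPy sketch under assumed shapes and parameters (lag count, regularization strength, and the in-sample evaluation are illustrative choices, not the authors' actual pipeline):

```python
import numpy as np

def reconstruct_spectrogram(neural, spec, lags=10, alpha=1.0):
    """Linear stimulus reconstruction: regress time-lagged population
    activity (T x E electrodes) onto an auditory spectrogram (T x F
    frequency bands) with ridge regularization, then return the
    reconstruction. A real analysis would cross-validate."""
    T, E = neural.shape
    # Lagged design matrix: each row holds the most recent `lags`
    # samples of every electrode, zero-padded at the start.
    X = np.zeros((T, E * lags))
    for l in range(lags):
        X[l:, l * E:(l + 1) * E] = neural[:T - l]
    # Ridge solution W = (X'X + alpha*I)^-1 X'Y, one weight vector
    # per spectrogram band.
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ spec)
    return X @ W
```

Reconstruction quality is then typically scored as the correlation between reconstructed and actual spectrograms, band by band.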
BACKGROUND: Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized to be essential for accurate phoneme detection and, thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. RESULTS: FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), as well as from normal controls - utilizing multi-channel, reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatients with epilepsy and indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from the bilateral posterior superior temporal gyri and the immediately surrounding temporal lobe. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track the status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. CONCLUSION: The FMAER measures the processing, by the superior temporal gyri and adjacent cortex, of rapid frequency modulation within an auditory stream.
Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings.
Cognitive models claim that spoken words are recognized by an optimally efficient sequential analysis process. Evidence for this is the finding that nonwords are recognized as soon as they deviate from all real words (Marslen-Wilson 1984), reflecting continuous evaluation of speech inputs against lexical representations. Here, we investigate the brain mechanisms supporting this core aspect of word recognition and examine the processes of competition and selection among multiple word candidates. Based on new behavioral support for optimal efficiency in lexical access from speech, a functional magnetic resonance imaging study showed that words with later nonword points generated increased activation in the left superior and middle temporal gyrus (Brodmann area [BA] 21/22), implicating these regions in dynamic sound-meaning mapping. We investigated competition and selection by manipulating the number of initially activated word candidates (competition) and their later drop-out rate (selection). Increased lexical competition enhanced activity in bilateral ventral inferior frontal gyrus (BA 47/45), while increased lexical selection demands activated bilateral dorsal inferior frontal gyrus (BA 44/45). These findings indicate functional differentiation of the fronto-temporal systems for processing spoken language, with left middle temporal gyrus (MTG) and superior temporal gyrus (STG) involved in mapping sounds to meaning, bilateral ventral inferior frontal gyrus (IFG) engaged in less constrained early competition processing, and bilateral dorsal IFG engaged in later, more fine-grained selection processes.
- Proceedings of the National Academy of Sciences of the United States of America
Human brains flexibly combine the meanings of words to compose structured thoughts. For example, by combining the meanings of “bite,” “dog,” and “man,” we can think about a dog biting a man, or a man biting a dog. Here, in two functional magnetic resonance imaging (fMRI) experiments using multivoxel pattern analysis (MVPA), we identify a region of left mid-superior temporal cortex (lmSTC) that flexibly encodes “who did what to whom” in visually presented sentences. We find that lmSTC represents the current values of abstract semantic variables (“Who did it?” and “To whom was it done?”) in distinct subregions. Experiment 1 first identifies a broad region of lmSTC whose activity patterns (i) facilitate decoding of structure-dependent sentence meaning (“Who did what to whom?”) and (ii) predict affect-related amygdala responses that depend on this information (e.g., “the baby kicked the grandfather” vs. “the grandfather kicked the baby”). Experiment 2 then identifies distinct, but neighboring, subregions of lmSTC whose activity patterns carry information about the identity of the current “agent” (“Who did it?”) and the current “patient” (“To whom was it done?”). These neighboring subregions lie along the upper bank of the superior temporal sulcus and the lateral bank of the superior temporal gyrus, respectively. At a high level, these regions may function like topographically defined data registers, encoding the fluctuating values of abstract semantic variables. This functional architecture, which in key respects resembles that of a classical computer, may play a critical role in enabling humans to flexibly generate complex thoughts.
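The MVPA approach described above rests on decoding condition labels (e.g., the identity of the "agent") from multivoxel activity patterns with leave-one-out cross-validation. A minimal sketch using a nearest-centroid classifier follows; this is a deliberately simple stand-in, not the study's actual classifier, data, or preprocessing:

```python
import numpy as np

def loo_decode(patterns, labels):
    """Leave-one-trial-out MVPA decoding with a nearest-centroid
    classifier: for each held-out trial, predict the label whose mean
    training pattern is closest in Euclidean distance. Returns the
    decoding accuracy across trials."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False  # hold out trial i
        classes = np.unique(labels[train])
        centroids = np.stack([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(centroids - patterns[i], axis=1)
        correct += classes[np.argmin(dists)] == labels[i]
    return correct / len(labels)
```

Above-chance accuracy from such a procedure, assessed against a permutation or chance baseline, is what licenses the claim that a region's patterns "carry information" about a variable.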
In the present study, the brain's response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement, in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions: one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold), and one with a similar tone above the individual hearing threshold corresponding to a 'medium loud' hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in the right superior temporal gyrus (STG) adjacent to primary auditory cortex, in the anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold condition, compared to both the supra-threshold and the no-tone condition. Additional ICA revealed large-scale changes of functional connectivity, reflected in a stronger activation of the rAmyg in the opposite contrast (no-tone > near-threshold) as well as of the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control.
These findings thus allow speculation on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required to substantiate these findings.
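The ReHo measure used in the study above is conventionally computed as Kendall's coefficient of concordance (W) over the time series of a voxel and its neighbors (typically a 27-voxel cube). A minimal NumPy sketch of the core computation, ignoring ties, preprocessing, and neighborhood extraction:

```python
import numpy as np

def reho_kcc(timeseries):
    """Regional homogeneity (ReHo) as Kendall's coefficient of
    concordance W over a voxel cluster's time series.
    `timeseries` has shape (K voxels, T timepoints); W lies in [0, 1],
    with 1 meaning all voxels' time courses have identical rank order."""
    ts = np.asarray(timeseries, dtype=float)
    K, T = ts.shape
    # Rank each voxel's time series over time (ties ignored, which is
    # acceptable for continuous BOLD-like data).
    ranks = ts.argsort(axis=1).argsort(axis=1) + 1
    R = ranks.sum(axis=0)                      # rank sum per timepoint
    S = ((R - R.mean()) ** 2).sum()            # dispersion of rank sums
    return 12.0 * S / (K ** 2 * (T ** 3 - T))  # Kendall's W
```

A whole-brain ReHo map is then obtained by sliding this computation over every voxel's neighborhood and comparing the resulting maps across conditions.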
Neurocognitive models and previous neuroimaging work posit that auditory verbal hallucinations (AVH) arise due to increased activity in speech-sensitive regions of the left posterior superior temporal gyrus (STG). Here, we examined if patients with schizophrenia (SCZ) and AVH could be trained to down-regulate STG activity using real-time functional magnetic resonance imaging neurofeedback (rtfMRI-NF). We also examined the effects of rtfMRI-NF training on functional connectivity between the STG and other speech and language regions. Twelve patients with SCZ and treatment-refractory AVH were recruited to participate in the study and were trained to down-regulate STG activity using rtfMRI-NF, over four MRI scanner visits during a 2-week training period. STG activity and functional connectivity were compared pre- and post-training. Patients successfully learnt to down-regulate activity in their left STG over the rtfMRI-NF training. Post-training, patients showed increased functional connectivity between the left STG, the left inferior frontal gyrus (IFG), and the inferior parietal gyrus. The post-training increase in functional connectivity between the left STG and IFG was associated with a reduction in AVH symptoms over the training period. The speech-sensitive region of the left STG is a suitable target region for rtfMRI-NF in patients with SCZ and treatment-refractory AVH. Successful down-regulation of left STG activity can increase functional connectivity between speech motor and perception regions. These findings suggest that patients with AVH have the ability to alter activity and connectivity in speech and language regions, and raise the possibility that rtfMRI-NF training could present a novel therapeutic intervention in SCZ.
This study examined how the brain system adapts and reconfigures its information processing capabilities to maintain cognitive performance after a key cortical center [left posterior superior temporal gyrus (LSTGp)] is temporarily impaired during the performance of a language comprehension task. By applying repetitive transcranial magnetic stimulation (rTMS) to LSTGp and concurrently assessing the brain response with functional magnetic resonance imaging, we found that adaptation consisted of 1) increased synchronization between compensating regions coupled with a decrease in synchronization within the primary language network and 2) a decrease in activation at the rTMS site as well as in distal regions, followed by their recovery. The compensatory synchronization included 3 centers: The contralateral homolog (RSTGp) of the area receiving rTMS, areas adjacent to the rTMS site, and a region involved in discourse monitoring (medial frontal gyrus). This approach reveals some principles of network-level adaptation to trauma with potential application to traumatic brain injury, stroke, and seizure.
Prior research using functional magnetic resonance imaging (fMRI) [1-4] and behavioral studies of patients with acquired or congenital amusia [5-8] suggest that the right posterior superior temporal gyrus (STG) in the human brain is specialized for aspects of music processing (for review, see [9-12]). Intracranial electrical brain stimulation in awake neurosurgery patients is a powerful means to determine the computations supported by specific brain regions and networks [13-21] because it provides reversible causal evidence with high spatial resolution (for review, see [22, 23]). Prior intracranial stimulation or cortical cooling studies have investigated musical abilities related to reading music scores [13, 14] and singing familiar songs [24, 25]. However, individuals with amusia (congenital, or from a brain injury) have difficulty humming melodies but may be spared for singing familiar songs with familiar lyrics. Here we report a detailed study of a musician with a low-grade tumor in the right temporal lobe. Functional MRI was used pre-operatively to localize music processing to the right STG, and the patient subsequently underwent awake intraoperative mapping using direct electrical stimulation during a melody repetition task. Stimulation of the right STG induced "music arrest" and errors in pitch but did not affect language processing. These findings provide causal evidence for the functional segregation of music and language processing in the human brain and confirm a specific role of the right STG in melody processing.
Subjective tinnitus is considered a phantom auditory phenomenon. Recent studies show that electrical or magnetic stimulation of the cortex can alleviate some tinnitus. The usual target of the stimulation is the primary auditory cortex (PAC) on Heschl’s gyrus (HG). The objective of this study was to specify the anatomy of HG by magnetic resonance imaging (MRI).
People appear to derive intrinsic satisfaction from the perception that they are unique, special, and separable from the masses, which is referred to as a need for uniqueness (NFU). NFU is a universal human trait, along with a tendency to conform to the beliefs and attitudes of others and social norms. We used voxel-based morphometry and a questionnaire to determine individual NFU and its association with brain structures in healthy men (n = 94) and women (n = 91; age, 21.3 ± 1.9 years). Individual NFU was associated with smaller gray matter volume of a cluster that included areas in (a) the left middle temporal gyrus, left superior temporal gyrus, and left superior temporal sulcus (STS); (b) the dorsal part of the anterior cingulate gyrus and the anterior part of the middle cingulate gyrus; and (c) the right inferior frontal gyrus and the ventral part of the precentral gyrus. Individual NFU was also associated with larger white matter concentration of a cluster that mainly included the body of the corpus callosum. These findings demonstrated that variations in NFU reflect the gray and white matter structures of focal regions. These findings suggest a biological basis for individual NFU, distributed across different gray and white matter areas of the brain.