Concept: Temporal lobe
Atrophy of the medial temporal lobe (MTL) occurs with aging, resulting in impaired episodic memory. Aerobic fitness is positively correlated with total volume of the hippocampus, a heavily studied memory-critical region within the MTL. However, research on associations between sedentary behavior and the integrity of MTL subregions is limited. Here we explore associations between physical activity, sedentary behavior, and the thickness of the MTL and its subregions (namely CA1, CA23DG, fusiform gyrus, subiculum, and parahippocampal, perirhinal, and entorhinal cortices). We assessed 35 non-demented middle-aged and older adults (25 women, 10 men; 45-75 years) using the International Physical Activity Questionnaire for older adults, which quantifies physical activity levels in MET-equivalent units and asks about the average number of hours spent sitting per day. All participants underwent high-resolution MRI on a Siemens Allegra 3T scanner, which allows detailed investigation of the MTL. Controlling for age, total MTL thickness correlated inversely with hours of sitting/day (r = -0.37, p = 0.03). In MTL subregion analyses, parahippocampal (r = -0.45, p = 0.007) and entorhinal (r = -0.33, p = 0.05) cortical thickness and subiculum thickness (r = -0.36, p = 0.04) correlated inversely with hours of sitting/day. No significant correlations were observed between physical activity levels and MTL thickness. Though preliminary, our results suggest that more sedentary non-demented individuals have a thinner MTL. Future studies should include longitudinal analyses and explore mechanisms, as well as the efficacy of reducing sedentary behavior to reverse this association.
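The age-controlled correlations reported above can be illustrated with a partial-correlation sketch: regress the age covariate out of both variables and correlate the residuals. The data below are synthetic and every variable name and coefficient is a hypothetical stand-in, not the study's analysis code.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Correlation between x and y after regressing covar out of both.

    Residualizing each variable on the covariate via least squares and
    correlating the residuals is equivalent to the first-order partial r.
    """
    X = np.column_stack([np.ones_like(covar), covar])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical data: sitting hours, MTL thickness, and age for 35 people,
# with a built-in negative sitting effect so the sketch has signal to find
rng = np.random.default_rng(0)
age = rng.uniform(45, 75, 35)
sitting = rng.uniform(2, 12, 35)
thickness = 3.0 - 0.05 * sitting - 0.005 * age + rng.normal(0, 0.1, 35)

r = partial_corr(sitting, thickness, age)
```

Because both variables are residualized on age before correlating, any shared age-related variance cannot drive the resulting coefficient.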
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
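A linear reconstruction of the kind described, mapping population neural activity back to an auditory spectrogram, can be sketched with ridge regression on simulated data. The dimensions, noise level, and regularization below are arbitrary assumptions for illustration, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: T time bins, N electrodes, F spectrogram bands
T, N, F = 500, 32, 16
spec = rng.normal(size=(T, F))                  # "true" auditory spectrogram
W_true = rng.normal(size=(F, N)) / np.sqrt(F)   # assumed encoding weights
neural = spec @ W_true + 0.3 * rng.normal(size=(T, N))  # simulated responses

# Ridge regression: learn a decoder from population activity to spectrogram
lam = 1.0
G = neural.T @ neural + lam * np.eye(N)
B = np.linalg.solve(G, neural.T @ spec)         # (N, F) decoding weights
recon = neural @ B

# Reconstruction accuracy: correlation per frequency band
acc = np.array([np.corrcoef(spec[:, f], recon[:, f])[0, 1] for f in range(F)])
```

Per-band correlation between the original and reconstructed spectrogram is one common way such reconstruction accuracy is summarized.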
Memory failures are frustrating and often the result of ineffective encoding. One approach to improving memory outcomes is through direct modulation of brain activity with electrical stimulation. Previous efforts, however, have reported inconsistent effects when using open-loop stimulation and often target the hippocampus and medial temporal lobes. Here we use a closed-loop system to monitor and decode neural activity from direct brain recordings in humans. We apply targeted stimulation to lateral temporal cortex and report that this stimulation rescues periods of poor memory encoding. This system also improves later recall, revealing that the lateral temporal cortex is a reliable target for memory enhancement. Taken together, our results suggest that such systems may provide a therapeutic approach for treating memory dysfunction.
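The closed-loop logic can be caricatured as: decode the probability of successful encoding from each epoch's neural features, and deliver stimulation only when that probability falls below a threshold. Everything below (the decoder weights, features, and threshold) is a hypothetical stand-in for a trained classifier and recording pipeline, not the authors' system.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical decoder: a logistic readout over spectral features scores
# each epoch's likelihood of later recall
n_features = 8
w = rng.normal(size=n_features)  # pretend pretrained decoder weights

def decode_recall_probability(features, w):
    return 1.0 / (1.0 + np.exp(-features @ w))

def closed_loop_step(features, w, threshold=0.5):
    """Return True if stimulation should be delivered this epoch."""
    return decode_recall_probability(features, w) < threshold

# Simulate a run: stimulate only on epochs flagged as poor encoding
epochs = rng.normal(size=(100, n_features))
stim_mask = np.array([closed_loop_step(f, w) for f in epochs])
```

The key design point is that stimulation is conditioned on the decoded brain state rather than delivered on a fixed open-loop schedule.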
Inflammation impairs cognitive performance and is implicated in the progression of neurodegenerative disorders. Rodent studies demonstrated key roles for inflammatory mediators in many processes critical to memory, including long-term potentiation, synaptic plasticity, and neurogenesis. They also demonstrated functional impairment of medial temporal lobe (MTL) structures by systemic inflammation. However, human data to support this position are limited.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sounds' physical characteristics, and machine learning approaches have all suggested that timbre is a multifaceted attribute invoking both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical for providing a representation rich enough to account for perceptual judgments of timbre by human listeners, as well as for recognition of musical instruments.
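As a toy analogue of this kind of classification back end, a nearest-centroid classifier over synthetic spectro-temporal modulation features can separate "instruments" whose mean modulation profiles differ. The feature space and class structure below are invented for illustration and are far simpler than the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical features: each note is summarized by a spectro-temporal
# modulation energy vector; three synthetic "instruments" differ in their
# mean profile, with pitch/style variation modeled as added noise
n_feat, n_train, n_test = 24, 60, 30
centroids = rng.normal(size=(3, n_feat))

def sample(label, n):
    return centroids[label] + 0.5 * rng.normal(size=(n, n_feat))

X_train = np.vstack([sample(k, n_train) for k in range(3)])
y_train = np.repeat(np.arange(3), n_train)
X_test = np.vstack([sample(k, n_test) for k in range(3)])
y_test = np.repeat(np.arange(3), n_test)

# Nearest-centroid classifier standing in for the nonlinear back end
means = np.stack([X_train[y_train == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((X_test[:, None, :] - means) ** 2).sum(-1), axis=1)
accuracy = (pred == y_test).mean()
```

The point of the sketch is only that class-specific modulation profiles make instruments linearly separable; the paper's front end and classifier are substantially richer.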
Neural circuits underlying mother’s voice perception predict social communication abilities in children
- Proceedings of the National Academy of Sciences of the United States of America
The human voice is a critical social cue, and listeners are extremely sensitive to the voices in their environment. One of the most salient voices in a child’s life is mother’s voice: Infants discriminate their mother’s voice from the first days of life, and this stimulus is associated with guiding emotional and social function during development. Little is known regarding the functional circuits that are selectively engaged in children by biologically salient voices such as mother’s voice or whether this brain activity is related to children’s social communication abilities. We used functional MRI to measure brain activity in 24 healthy children (mean age, 10.2 y) while they attended to brief (<1 s) nonsense words produced by their biological mother and two female control voices and explored relationships between speech-evoked neural activity and social function. Compared to female control voices, mother's voice elicited greater activity in primary auditory regions in the midbrain and cortex; voice-selective superior temporal sulcus (STS); the amygdala, which is crucial for processing of affect; nucleus accumbens and orbitofrontal cortex of the reward circuit; anterior insula and cingulate of the salience network; and a subregion of fusiform gyrus associated with face perception. The strength of brain connectivity between voice-selective STS and reward, affective, salience, memory, and face-processing regions during mother's voice perception predicted social communication skills. Our findings provide a novel neurobiological template for investigation of typical social development as well as clinical disorders, such as autism, in which perception of biologically and socially salient voices may be impaired.
The association between human hippocampal structure and topographical memory was investigated in healthy adults (N = 30). Structural MR images were acquired, and voxel-based morphometry (VBM) was used to estimate local gray matter volume throughout the brain. A complementary automated mesh-based segmentation approach was used to independently isolate and measure specified structures including the hippocampus. Topographical memory was assessed using a version of the Four Mountains Task, a short test designed to target hippocampal spatial function. Each item requires subjects to briefly study a landscape scene before recognizing the depicted place from a novel viewpoint and under altered non-spatial conditions when presented amongst similar alternative scenes. Positive correlations between topographical memory performance and hippocampal volume were observed in both VBM and segmentation-based analyses. Score on the topographical memory task was also correlated with the volume of some subcortical structures, extra-hippocampal gray matter, and total brain volume, with the most robust and extensive covariation seen in circumscribed neocortical regions in the insula and anterior temporal lobes. Taken together with earlier findings, the results suggest that global variations in brain morphology affect the volume of the hippocampus and its specific contribution to topographical memory. We speculate that behavioral variation might arise directly through the impact of resource constraints on spatial representations in the hippocampal formation and its inputs, and perhaps indirectly through an increased reliance on non-allocentric strategies.
A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech does not only depend on acoustic characteristics, but is also affected by listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
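Cerebro-acoustic phase locking of this kind is often quantified with a phase-locking value: band-pass both signals to the envelope-dominant band, extract instantaneous phase with the Hilbert transform, and measure the consistency of the phase difference. The sketch below uses synthetic signals and assumed parameters, not the study's MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0
t = np.arange(0, 10, 1 / fs)

# Hypothetical signals: a 5 Hz speech-envelope rhythm and a neural trace
# that tracks it with a fixed phase lag plus noise (one "sensor")
rng = np.random.default_rng(3)
envelope = np.sin(2 * np.pi * 5 * t)
neural = np.sin(2 * np.pi * 5 * t + 0.4) + 0.8 * rng.normal(size=t.size)

# Band-pass both signals to the 4-7 Hz band where envelope power peaks
b, a = butter(2, [4 / (fs / 2), 7 / (fs / 2)], btype="band")
env_f = filtfilt(b, a, envelope)
neu_f = filtfilt(b, a, neural)

# Phase-locking value: consistency of the instantaneous phase difference
dphi = np.angle(hilbert(env_f)) - np.angle(hilbert(neu_f))
plv = np.abs(np.mean(np.exp(1j * dphi)))
```

A PLV near 1 indicates a stable phase relationship between envelope and neural signal; near 0, no consistent locking.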
The corpus callosum is hypothesized to play a fundamental role in integrating information and mediating complex behaviors. Here, we demonstrate that lack of normal callosal development can lead to deficits in functional connectivity that are related to impairments in specific cognitive domains. We examined resting-state functional connectivity in individuals with agenesis of the corpus callosum (AgCC) and matched controls using magnetoencephalographic imaging (MEG-I) of coherence in the alpha (8-12 Hz), beta (12-30 Hz) and gamma (30-55 Hz) bands. Global connectivity (GC) was defined as synchronization between a region and the rest of the brain. In AgCC individuals, alpha band GC was significantly reduced in the dorsolateral prefrontal (DLPFC), posterior parietal (PPC), and parieto-occipital (PO) cortices. No significant differences in GC were seen in either the beta or gamma bands. We also explored the hypothesis that, in AgCC, this regional reduction in functional connectivity is explained primarily by a specific reduction in interhemispheric connectivity. However, our data suggest that reduced connectivity in these regions is driven by faulty coupling in both inter- and intrahemispheric connectivity. We also assessed whether the degree of connectivity correlated with behavioral performance, focusing on cognitive measures known to be impaired in AgCC individuals. Neuropsychological measures of verbal processing speed were significantly correlated with resting-state functional connectivity of the left medial and superior temporal lobe in AgCC participants. Connectivity of DLPFC correlated strongly with performance on the Tower of London in the AgCC cohort. These findings indicate that abnormal callosal development produces salient but selective (alpha band only) disruptions of resting-state functional connectivity that correlate with cognitive impairment.
Understanding the relationship between impoverished functional connectivity and cognition is a key step in identifying the neural mechanisms of language and executive dysfunction in common neurodevelopmental and psychiatric disorders where disruptions of callosal development are consistently identified.
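The global connectivity measure described above (synchronization between a region and the rest of the brain, averaged within a frequency band) can be sketched as mean alpha-band coherence over synthetic source time series. The signals, sampling rate, and all parameters below are hypothetical, not the study's source-reconstruction pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, coherence

fs = 250.0
n_regions, n_samp = 6, 5000
rng = np.random.default_rng(4)

# Hypothetical sources: a shared alpha-band (8-12 Hz) driver plus
# independent noise, so every region is partially synchronized
b, a = butter(2, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
driver = filtfilt(b, a, rng.normal(size=n_samp))
driver /= driver.std()
data = 0.8 * driver + rng.normal(size=(n_regions, n_samp))

def global_connectivity(data, seed, fs, band=(8.0, 12.0)):
    """Mean band-limited coherence between one region and all others."""
    vals = []
    for j in range(data.shape[0]):
        if j == seed:
            continue
        f, c = coherence(data[seed], data[j], fs=fs, nperseg=512)
        mask = (f >= band[0]) & (f <= band[1])
        vals.append(c[mask].mean())
    return float(np.mean(vals))

gc_alpha = global_connectivity(data, seed=0, fs=fs)
```

Repeating the same computation with beta or gamma band limits would, by construction here, yield near-baseline coherence, mirroring the band-selective effect reported.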
BACKGROUND: Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report’s purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. RESULTS: FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD), as well as from normal controls - utilizing multi-channel, reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from epilepsy inpatients with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and the immediately surrounding temporal lobe. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track the status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. CONCLUSION: The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream.
Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and its surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings used instead. Source analysis may assist in better understanding of complex FMAER findings.