Concept: Primary auditory cortex
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single-trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher-order human auditory cortex.
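The linear reconstruction approach described here can be sketched as ridge regression from time-lagged neural population activity to the spectrogram channels. The array shapes, lag window, and regularization strength below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

def build_lagged_design(neural, n_lags):
    """Stack time-lagged copies of each electrode's activity
    (T x E  ->  T x E*n_lags) so the decoder can use recent history."""
    T, E = neural.shape
    X = np.zeros((T, E * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * E:(lag + 1) * E] = neural[:T - lag]
    return X

def fit_linear_decoder(neural, spectrogram, n_lags=10, alpha=1.0):
    """Ridge regression mapping lagged neural activity to spectrogram
    channels; alpha is an assumed regularization strength."""
    X = build_lagged_design(neural, n_lags)
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                        X.T @ spectrogram)
    return W

def reconstruct(neural, W, n_lags=10):
    """Apply the fitted decoder to held-out neural activity."""
    return build_lagged_design(neural, n_lags) @ W
```

On synthetic data generated by a known linear mapping, the decoder recovers the spectrogram almost exactly; real neural data would require cross-validated choice of `alpha` and the lag window.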
Memory failures are frustrating and often the result of ineffective encoding. One approach to improving memory outcomes is direct modulation of brain activity with electrical stimulation. Previous efforts, however, have reported inconsistent effects when using open-loop stimulation and have typically targeted the hippocampus and medial temporal lobes. Here we use a closed-loop system to monitor and decode neural activity from direct brain recordings in humans. We apply targeted stimulation to lateral temporal cortex and report that this stimulation rescues periods of poor memory encoding. This system also improves later recall, revealing that the lateral temporal cortex is a reliable target for memory enhancement. Taken together, our results suggest that such systems may provide a therapeutic approach for treating memory dysfunction.
A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech depends not only on acoustic characteristics but also on listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
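Cerebro-acoustic phase locking of the kind measured here is commonly quantified as a phase-locking value (PLV) between the band-limited neural signal and the speech envelope. The filter design and the 4-7 Hz band below are a minimal sketch of that idea, not the study's analysis code:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter (e.g. the 4-7 Hz theta band)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def phase_locking_value(neural, envelope, fs, lo=4.0, hi=7.0):
    """Consistency of the instantaneous phase difference between the
    band-limited neural signal and the acoustic envelope.
    PLV = 1 for perfect locking, near 0 for unrelated signals."""
    ph_n = np.angle(hilbert(bandpass(neural, lo, hi, fs)))
    ph_e = np.angle(hilbert(bandpass(envelope, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (ph_n - ph_e))))
```

A neural signal oscillating at the envelope frequency with a fixed phase lag yields a PLV near 1, whereas broadband noise filtered into the same band drifts in phase and yields a much lower value.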
Tinnitus can occur when damage to the peripheral auditory system leads to spontaneous brain activity that is interpreted as sound [1, 2]. Many abnormalities of brain activity are associated with tinnitus, but it is unclear how these relate to the phantom sound itself, as opposed to predisposing factors or secondary consequences. Demonstrating “core” tinnitus correlates (processes that are both necessary and sufficient for tinnitus perception) requires high-precision recordings of neural activity combined with a behavioral paradigm in which the perception of tinnitus is manipulated and accurately reported by the subject. This has previously been impossible in animal and human research. Here we present extensive intracranial recordings from an awake, behaving tinnitus patient during short-term modifications in perceived tinnitus loudness after acoustic stimulation (residual inhibition), permitting robust characterization of core tinnitus processes. As anticipated, we observed tinnitus-linked low-frequency (delta) oscillations [5-9], thought to be triggered by low-frequency bursting in the thalamus [10, 11]. Contrary to expectation, these delta changes extended far beyond circumscribed auditory cortical regions to encompass almost all of auditory cortex, plus large parts of temporal, parietal, sensorimotor, and limbic cortex. In discrete auditory, parahippocampal, and inferior parietal “hub” regions, these delta oscillations interacted with middle-frequency (alpha) and high-frequency (beta and gamma) activity, resulting in a coherent system of tightly coupled oscillations associated with high-level functions including memory and perception.
Memory skills differ strongly across the general population; however, little is known about the brain characteristics supporting superior memory performance. Here we assess functional brain network organization of 23 of the world’s most successful memory athletes and matched controls with fMRI during both task-free resting state baseline and active memory encoding. We demonstrate that, in a group of naive controls, functional connectivity changes induced by 6 weeks of mnemonic training were correlated with the network organization that distinguishes athletes from controls. During rest, this effect was mainly driven by connections between, rather than within, the visual, medial temporal lobe, and default mode networks, whereas during task it was driven by connectivity within these networks. Similarity with memory athlete connectivity patterns predicted memory improvements up to 4 months after training. In conclusion, mnemonic training drives distributed rather than regional changes, reorganizing the brain’s functional network organization to enable superior memory performance.
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudo-musical stimuli in which the temporal and spectral structure of the natural music condition was disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music than for the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences.
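Inter-subject synchronization of the kind reported here is typically computed as the mean pairwise Pearson correlation of a region's time course across listeners. The sketch below illustrates that computation; the array layout is an assumption, not the study's pipeline:

```python
import numpy as np

def intersubject_correlation(timecourses):
    """Mean pairwise Pearson correlation of one region's time course
    across listeners. `timecourses` has shape (n_subjects, n_timepoints)."""
    z = timecourses - timecourses.mean(axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    R = z @ z.T                          # subject-by-subject correlation matrix
    iu = np.triu_indices_from(R, k=1)    # unique subject pairs
    return R[iu].mean()
```

Regions driven by a stimulus-locked signal shared across listeners yield high values, while regions whose activity is unrelated to the stimulus yield values near zero.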
The structure of the post-mortem human brain can be preserved by immersing the organ within a fixative solution. Once the brain is perfused, cellular and histological features are maintained over extended periods of time. However, functions of the human brain are not assumed to be preserved beyond death and subsequent chemical fixation. Here we present a series of experiments which, together, refute this assumption. Instead, we suggest that chemical preservation of brain structure results in some retained functional capacity. Patterns similar to the living condition were elicited by chemical and electrical probes within coronal and sagittal sections of human temporal lobe structures that had been maintained in ethanol-formalin-acetic acid. This was inferred by a reliable modulation of frequency-dependent microvolt fluctuations. These weak microvolt fluctuations were enhanced by receptor-specific agonists and their precursors (i.e., nicotine, 5-HTP, and L-glutamic acid) as well as attenuated by receptor-antagonists (i.e., ketamine). Surface injections of 10 nM nicotine enhanced theta power within the right parahippocampal gyrus without any effect upon the ipsilateral hippocampus. Glutamate-induced high-frequency power densities within the left parahippocampal gyrus were correlated with increased photon counts over the surface of the tissue. Heschl’s gyrus, a transverse convexity on which the primary auditory cortex is tonotopically represented, retained frequency-discrimination capacities in response to sweeps of weak (2 μV) square-wave electrical pulses between 20 Hz and 20 kHz. Together, these results suggest that portions of the post-mortem human brain may retain latent capacities to respond with potential life-like and virtual properties.
In the present study, the brain’s response towards near- and supra-threshold infrasound (IS) stimulation (sound frequency < 20 Hz) was investigated under resting-state fMRI conditions. The study involved two consecutive sessions. In the first session, 14 healthy participants underwent a hearing threshold measurement as well as a categorical loudness scaling measurement in which the individual loudness perception for IS was assessed across different sound pressure levels (SPL). In the second session, these participants underwent three resting-state acquisitions, one without auditory stimulation (no-tone), one with a monaurally presented 12-Hz IS tone (near-threshold) and one with a similar tone above the individual hearing threshold corresponding to a 'medium loud' hearing sensation (supra-threshold). Data analysis mainly focused on local connectivity measures by means of regional homogeneity (ReHo), but also involved independent component analysis (ICA) to investigate inter-regional connectivity. ReHo analysis revealed significantly higher local connectivity in right superior temporal gyrus (STG) adjacent to primary auditory cortex, in anterior cingulate cortex (ACC) and, when allowing smaller cluster sizes, also in the right amygdala (rAmyg) during the near-threshold, compared to both the supra-threshold and the no-tone condition. Additional ICA revealed large-scale changes of functional connectivity, reflected in a stronger activation of the rAmyg in the opposite contrast (no-tone > near-threshold) as well as the right superior frontal gyrus (rSFG) during the near-threshold condition. In summary, this study is the first to demonstrate that infrasound near the hearing threshold may induce changes of neural activity across several brain regions, some of which are known to be involved in auditory processing, while others are regarded as key players in emotional and autonomic control.
These findings thus allow us to speculate on how continuous exposure to (sub-)liminal IS could exert a pathogenic influence on the organism, yet further (especially longitudinal) studies are required to substantiate these findings.
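ReHo, the local connectivity measure used above, is conventionally defined as Kendall's coefficient of concordance (W) over the time series of a voxel and its immediate neighbors. The following is a minimal sketch of that statistic, assuming tie-free time series and leaving neighborhood extraction aside:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(timeseries):
    """Kendall's coefficient of concordance (the statistic behind ReHo)
    for a voxel neighborhood. `timeseries` has shape
    (n_voxels, n_timepoints); W = 1 when all voxels rank the
    timepoints identically, and approaches 1/n_voxels under the null."""
    k, n = timeseries.shape
    ranks = np.apply_along_axis(rankdata, 1, timeseries)
    Ri = ranks.sum(axis=0)                     # rank sums per timepoint
    S = ((Ri - Ri.mean()) ** 2).sum()          # variance of rank sums
    return 12 * S / (k ** 2 * (n ** 3 - n))
```

In a full ReHo analysis this statistic would be evaluated for every voxel over its 27-voxel neighborhood and the resulting map standardized before group comparison.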
A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
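Modulation depth of spatial tuning, compared here across conditions, is often defined as the peak-to-trough contrast of a unit's tuning curve; the definition below is one common convention and is assumed here for illustration, not taken from the study:

```python
import numpy as np

def modulation_depth(tuning_curve):
    """Peak-to-trough contrast of a spatial tuning curve:
    (max - min) / (max + min) of the mean firing rate across
    sound-source angles. 0 = flat (untuned), 1 = rate drops to
    zero at the least-preferred angle."""
    tc = np.asarray(tuning_curve, dtype=float)
    return (tc.max() - tc.min()) / (tc.max() + tc.min())
```

A flat tuning curve gives 0, while a unit that is silent at its least-preferred location gives 1, so the index is directly comparable across distances and movement speeds.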
Subjective tinnitus is considered a phantom auditory phenomenon. Recent studies show that electrical or magnetic stimulation of the cortex can alleviate some tinnitus. The usual target of the stimulation is the primary auditory cortex (PAC) on Heschl’s gyrus (HG). The objective of this study was to specify the anatomy of HG by magnetic resonance imaging (MRI).