SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: Hearing

240

Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that is not expressed in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents, 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high-level environmental sounds.

Concepts: Auditory system, Acoustics, Ultrasound, Ear, Sound, Audiogram, Hearing

171

Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound.

Concepts: Understanding, Middle age, Sense, Acoustics, Aging, Sound, Hearing, Music

171

Extensive research shows that inter-talker variability (i.e., changing the talker) affects recognition memory for speech signals. However, relatively little is known about the consequences of intra-talker variability (i.e., changes in speaking style within a talker) on the encoding of speech signals in memory. It is well established that speakers can modulate the characteristics of their own speech and produce a listener-oriented, intelligibility-enhancing speaking style in response to communication demands (e.g., when speaking to listeners with hearing impairment or non-native speakers of the language). Here we conducted two experiments to examine the role of speaking style variation in spoken language processing. First, we examined the extent to which clear speech provided benefits in challenging listening environments (i.e., speech-in-noise). Second, we compared recognition memory for sentences produced in conversational and clear speaking styles. In both experiments, semantically normal and anomalous sentences were included to investigate the role of higher-level linguistic information in the processing of speaking style variability. The results show that acoustic-phonetic modifications implemented in listener-oriented speech lead to improved speech recognition in challenging listening conditions and, crucially, to a substantial enhancement in recognition memory for sentences.

Concepts: Linguistics, Language, Grammar, Audiogram, Hearing, Speech recognition, Speech, Speech processing

168

Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss.

Concepts: Randomized controlled trial, Auditory system, Hearing impairment, Audiogram, Hearing, Active listening

49

Dietary supplements consisting of beta-carotene (precursor to vitamin A), vitamins C and E and the mineral magnesium (ACEMg) can be beneficial for reducing hearing loss due to aminoglycosides and overstimulation. This regimen also slowed progression of deafness for a boy with GJB2 (CONNEXIN 26) mutations. To assess the potential for treating GJB2 and other forms of hereditary hearing loss with ACEMg, we tested the influence of ACEMg on the cochlea and hearing of mouse models for two human mutations: GJB2, the leading cause of childhood deafness, and DIAPH3, a cause of auditory neuropathy. One group of mice modeling GJB2 (Gjb2-CKO) received ACEMg diet starting shortly after they were weaned (4 weeks) until 16 weeks of age. Another group of Gjb2-CKO mice received ACEMg in utero and after weaning. The ACEMg diet was given to mice modeling DIAPH3 (Diap3-Tg) after weaning (4 weeks) until 12 weeks of age. Control groups received food pellets without the ACEMg supplement. Hearing thresholds measured by auditory brainstem response were significantly better for Gjb2-CKO mice fed ACEMg than for the control diet group. In contrast, Diap3-Tg mice displayed worse thresholds than controls. These results indicate that ACEMg supplementation can influence the progression of genetic hearing loss.

Concepts: Vitamin, Dietary supplement, Cochlea, Hearing impairment, Cochlear implant, Audiogram, Hearing

34

Recent work suggests that hair cells are not the most vulnerable elements in the inner ear; rather, it is the synapses between hair cells and cochlear nerve terminals that degenerate first in the aging or noise-exposed ear. This primary neural degeneration does not affect hearing thresholds, but likely contributes to problems understanding speech in difficult listening environments, and may be important in the generation of tinnitus and/or hyperacusis. To look for signs of cochlear synaptopathy in humans, we recruited college students and divided them into low-risk and high-risk groups based on self-report of noise exposure and use of hearing protection. Cochlear function was assessed by otoacoustic emissions and click-evoked electrocochleography; hearing was assessed by behavioral audiometry and word recognition with or without noise or time compression and reverberation. Both groups had normal thresholds at standard audiometric frequencies, however, the high-risk group showed significant threshold elevation at high frequencies (10-16 kHz), consistent with early stages of noise damage. Electrocochleography showed a significant difference in the ratio between the waveform peaks generated by hair cells (Summating Potential; SP) vs. cochlear neurons (Action Potential; AP), i.e. the SP/AP ratio, consistent with selective neural loss. The high-risk group also showed significantly poorer performance on word recognition in noise or with time compression and reverberation, and reported heightened reactions to sound consistent with hyperacusis. These results suggest that the SP/AP ratio may be useful in the diagnosis of “hidden hearing loss” and that, as suggested by animal models, the noise-induced loss of cochlear nerve synapses leads to deficits in hearing abilities in difficult listening situations, despite the presence of normal thresholds at standard audiometric frequencies.
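The diagnostic measure described above is a simple ratio of two electrocochleography peak amplitudes. A minimal sketch, with hypothetical amplitude values (real SP and AP amplitudes would be read from the click-evoked ECochG waveform):

```python
# Sketch: computing the SP/AP ratio from electrocochleography peak
# amplitudes. The microvolt values below are hypothetical examples,
# not data from the study.

def sp_ap_ratio(sp_amplitude_uv, ap_amplitude_uv):
    """Ratio of the summating potential (hair-cell generated) to the
    action potential (cochlear-neuron generated), both in microvolts."""
    return sp_amplitude_uv / ap_amplitude_uv

# Selective neural loss reduces the AP more than the SP,
# which raises the ratio.
low_risk = sp_ap_ratio(sp_amplitude_uv=0.10, ap_amplitude_uv=0.50)   # 0.2
high_risk = sp_ap_ratio(sp_amplitude_uv=0.10, ap_amplitude_uv=0.25)  # 0.4
print(low_risk, high_risk)
```

Because the SP numerator is unchanged while the AP denominator shrinks with synaptic loss, an elevated SP/AP ratio is consistent with neural rather than hair-cell damage.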

Concepts: Neuron, Action potential, Auditory system, Cochlea, Ear, Tinnitus, Sound, Hearing

28

To understand why human sensitivity for complex objects is so low, we study how word identification combines eye and ear or parts of a word (features, letters, syllables). Our observers identify printed and spoken words presented concurrently or separately. When researchers measure threshold (energy of the faintest visible or audible signal) they may report either sensitivity (one over the human threshold) or efficiency (ratio of the best possible threshold to the human threshold). When the best possible algorithm identifies an object (like a word) in noise, its threshold is independent of how many parts the object has. But, with human observers, efficiency depends on the task. In some tasks, human observers combine parts efficiently, needing hardly more energy to identify an object with more parts. In other tasks, they combine inefficiently, needing energy nearly proportional to the number of parts, over a 60:1 range. Whether presented to eye or ear, efficiency for detecting a short sinusoid (tone or grating) with few features is a substantial 20%, while efficiency for identifying a word with many features is merely 1%. Why? We show that the low human sensitivity for words is a cost of combining their many parts. We report a dichotomy between inefficient combining of adjacent features and efficient combining across senses. Joining our results with a survey of the cue-combination literature reveals that cues combine efficiently only if they are perceived as aspects of the same object. Observers give different names to adjacent letters in a word, and combine them inefficiently. Observers give the same name to a word’s image and sound, and combine them efficiently. The brain’s machinery optimally combines only cues that are perceived as originating from the same object. Presumably such cues each find their own way through the brain to arrive at the same object representation.
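The two measures defined in the abstract are simple ratios. A minimal sketch, with hypothetical threshold energies (only the definitions come from the text):

```python
# Sketch of the two psychophysical measures defined above.
# Threshold values here are hypothetical energies.

def sensitivity(human_threshold):
    """Sensitivity is one over the human threshold energy."""
    return 1.0 / human_threshold

def efficiency(ideal_threshold, human_threshold):
    """Efficiency is the ratio of the best possible (ideal-observer)
    threshold to the human threshold."""
    return ideal_threshold / human_threshold

# 20% efficiency for a tone-like task vs 1% for a word-like task means
# the human needs 5x vs 100x the ideal observer's signal energy.
print(efficiency(1.0, 5.0))    # 0.2
print(efficiency(1.0, 100.0))  # 0.01
```

The ideal observer's threshold is task-independent, so the drop from 20% to 1% efficiency isolates the human cost of combining many parts.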

Concepts: Psychology, Brain, Cognition, Sense, Sound, Identification, Hearing

28

BACKGROUND: The increasing fragmentation of healthcare has resulted in more patient handoffs. Many professional groups, including the Accreditation Council on Graduate Medical Education and the Society of Hospital Medicine, have made recommendations for safe and effective handoffs. Despite the two-way nature of handoff communication, the focus of these efforts has largely been on the person giving information. OBJECTIVE: To observe and characterise the listening behaviours of handoff receivers during hospitalist handoffs. DESIGN: Prospective observational study of shift change and service change handoffs on a non-teaching hospitalist service at a single academic tertiary care institution. MEASUREMENTS: The ‘HEAR Checklist’, a novel tool created based on review of effective listening behaviours, was used by third party observers to characterise active and passive listening behaviours and interruptions during handoffs. RESULTS: In 48 handoffs (25 shift change, 23 service change), active listening behaviours (eg, read-back (17%), note-taking (23%) and reading own copy of the written signout (27%)) occurred less frequently than passive listening behaviours (eg, affirmatory statements (56%), nodding (50%) and eye contact (58%)) (p<0.01). Read-back occurred only eight times (17%). In 11 handoffs (23%) receivers took notes. Almost all (98%) handoffs were interrupted at least once, most often by side conversations, pagers going off, or clinicians arriving. Handoffs with more patients, such as service change, were associated with more interruptions (r=0.46, p<0.01). CONCLUSIONS: Using the ‘HEAR Checklist’, we can characterise hospitalist handoff listening behaviours. While passive listening behaviours are common, active listening behaviours that promote memory retention are rare. Handoffs are often interrupted, most commonly by side conversations. Future handoff improvement efforts should focus on augmenting listening and minimising interruptions.

Concepts: Psychology, Medicine, Observational study, Observation, Hearing, Focus, Active listening

27

Surround sound systems are produced with the intention of reproducing the spatial aspects of sound, such as localization and envelopment. As part of his work on Ambisonics, Gerzon developed two metrics, the velocity and energy localization vectors, which are intended to predict the localization performance of a system. These are used during the design process to optimize the decoder that supplies signals to the loudspeaker array. At best, subjective listening tests are conducted on the finished system, but no objective assessments of the spatial qualities are made to verify that the realized performance correlates with the predictions. In the present work, binaural recordings were made of a 3-D 24-loudspeaker installation at Stanford’s Bing Studio. Test signals were used to acquire the binaural impulse response of each loudspeaker in the array and of Ambisonic reproduction using the loudspeaker array. The measurements were repeated at several locations within the hall. Subsequent analysis calculated the ITDs and ILDs for all cases. Initial results from the analysis of the ITDs and ILDs for the center listening position show ITDs that correspond very closely to what is expected in natural hearing, and ILDs that are similar to natural hearing.
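The two binaural cues analyzed above can be estimated directly from left- and right-ear signals: ITD as the cross-correlation lag between the ears, ILD as their level difference in dB. A minimal sketch, with hypothetical signals and sample rate (a real analysis would use the measured binaural impulse responses):

```python
# Sketch: estimating interaural time and level differences from
# left/right ear signals. The signals below are hypothetical toy data,
# not measurements from the Bing Studio installation.
import math

def itd_seconds(left, right, fs):
    """ITD: the lag (in samples, converted to seconds) at which the
    right signal best aligns with the left, via cross-correlation."""
    n = len(left)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-(n - 1), n):
        corr = sum(left[i] * right[i + lag]
                   for i in range(n) if 0 <= i + lag < n)
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag / fs

def ild_db(left, right):
    """ILD: ratio of RMS levels between the ears, in dB."""
    rms = lambda x: math.sqrt(sum(v * v for v in x) / len(x))
    return 20.0 * math.log10(rms(left) / rms(right))

fs = 48000
left = [0.0, 0.0, 1.0, 0.5, 0.0, 0.0]
right = [0.0, 0.0, 0.0, 0.0, 1.0, 0.5]  # same pulse, 2 samples later
print(itd_seconds(left, right, fs))  # positive: sound reaches right ear later
print(ild_db(left, right))           # 0 dB: equal level at both ears
```

Comparing ITD/ILD values measured through the loudspeaker array against the values expected for a real source at the same direction is what lets the authors verify the Gerzon-vector predictions objectively.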

Concepts: Assessment, Sound, Hearing, Stereophonic sound, Surround sound, Ambisonics, Soundfield microphone, Michael Gerzon

27

OBJECTIVES: To investigate listening habits and hearing risks associated with the use of personal listening devices among urban high school students in Malaysia. STUDY DESIGN: Cross-sectional, descriptive study. METHODS: In total, 177 personal listening device users (13-16 years old) were interviewed to elicit their listening habits (e.g. listening duration, volume setting) and symptoms of hearing loss. Their listening levels were also determined by asking them to set their usual listening volume on an Apple iPod™ playing a pre-selected song. The iPod’s sound output was measured with an artificial ear connected to a sound level meter. Subjects also underwent pure tone audiometry to ascertain their hearing thresholds at standard frequencies (0.5-8 kHz) and extended high frequencies (9-16 kHz). RESULTS: The mean measured listening level and listening duration for all subjects were 72.2 dBA and 1.2 h/day, respectively. Their self-reported listening levels were highly correlated with the measured levels (P < 0.001). Subjects who listened at higher volumes also tended to listen for longer durations (P = 0.012). Male subjects listened at a significantly higher volume than female subjects (P = 0.008). When sound exposure levels were compared with the recommended occupational noise exposure limit, 4.5% of subjects were found to be listening at levels which require mandatory hearing protection in the occupational setting. Hearing loss (≥25 dB hearing level at one or more standard test frequencies) was detected in 7.3% of subjects. Subjects' sound exposure levels from the devices were positively correlated with their hearing thresholds at two of the extended high frequencies (11.2 and 14 kHz), which could indicate an early stage of noise-induced hearing loss. CONCLUSIONS: Although the average high school student listened at safe levels, a small percentage of listeners were exposed to harmful sound levels. Preventive measures are needed to avoid permanent hearing damage in high-risk listeners.
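The comparison against an occupational limit combines level and duration into a daily noise dose. A minimal sketch, assuming the common 85 dBA / 8 h criterion with a 3-dB exchange rate (the abstract does not specify which limit the authors applied):

```python
# Sketch: converting a measured listening level and daily duration into
# a noise dose relative to an occupational exposure limit. The
# 85 dBA / 8 h criterion and 3-dB exchange rate are assumed here, not
# taken from the study.

def noise_dose(level_dba, hours_per_day,
               limit_dba=85.0, limit_hours=8.0, exchange_db=3.0):
    """Fraction of the allowed daily exposure (1.0 = 100% dose).
    Every +3 dB halves the permissible listening time."""
    return (hours_per_day / limit_hours) * 2 ** ((level_dba - limit_dba) / exchange_db)

# Mean subject from the study: 72.2 dBA for 1.2 h/day -> far below 100%.
print(round(noise_dose(72.2, 1.2), 4))
# A riskier listener: 97 dBA for 2 h/day is 4x the daily limit.
print(noise_dose(97.0, 2.0))  # 4.0
```

Under this criterion the study's mean exposure is harmless, which matches the authors' conclusion that only a small percentage of listeners reached levels requiring hearing protection.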

Concepts: Hearing impairment, High school, Tinnitus, Sound, Sound pressure, Audiogram, Hearing, Noise pollution