SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: Sound

188

BACKGROUND: Like human infants, songbirds learn their species-specific vocalizations through imitation learning. The birdsong system has emerged as a widely used experimental animal model for understanding the underlying neural mechanisms responsible for vocal production learning. However, how neural impulses are translated into precise motor behavior of the complex vocal organ (syrinx) to create song is poorly understood. First and foremost, we lack a detailed understanding of syringeal morphology. RESULTS: To fill this gap we combined non-invasive (high-field magnetic resonance imaging and micro-computed tomography) and invasive techniques (histology and micro-dissection) to construct the annotated high-resolution three-dimensional (3D) dataset, or morphome, of the zebra finch (Taeniopygia guttata) syrinx. We identified and annotated syringeal cartilage, bone, and musculature in situ in unprecedented detail. We provide interactive 3D models that greatly improve the communication of complex morphological data and of our understanding of syringeal function in general. CONCLUSIONS: Our results show that the syringeal skeleton is optimized for low weight driven by physiological constraints on song production. The present refinement of muscle organization and identity elucidates how apposed muscles actuate different syringeal elements. Our dataset allows for more precise predictions about muscle co-activation and synergies and has important implications for muscle activity and stimulation experiments. We also demonstrate how the syrinx can be stabilized during song to reduce mechanical noise and, as such, enhance repetitive execution of stereotypic motor patterns. In addition, we identify a cartilaginous structure suited to play a crucial role in the uncoupling of sound frequency and amplitude control, which suggests a novel explanation for the evolutionary success of songbirds.

Concepts: Heart, Muscle, Magnetic resonance imaging, Taeniopygia, Zebra Finch, Sound, Connective tissue, Songbird

188

Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics, and machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the sufficiently rich representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
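
For orientation, the general pipeline described above (a spectro-temporal front end feeding a nonlinear classifier) can be caricatured with off-the-shelf tools. The sketch below substitutes mel-spectrogram statistics for the cortical spectro-temporal receptive fields and an RBF support-vector machine for the paper's classifier; the file names and labels are hypothetical, and none of it reproduces the authors' model.

```python
# Minimal sketch of "spectro-temporal features + nonlinear classifier" for
# instrument recognition. Features and classifier are stand-ins, not the
# paper's STRF model; file names and labels are hypothetical.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def spectro_temporal_features(path, sr=22050):
    # Load audio and compute a log mel spectrogram (64 bands).
    y, sr = librosa.load(path, sr=sr, mono=True)
    log_s = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64))
    # Crude spectro-temporal summary: per-band level statistics plus
    # per-band temporal-change statistics (a stand-in for modulation tuning).
    delta = np.abs(np.diff(log_s, axis=1))
    return np.concatenate([log_s.mean(axis=1), log_s.std(axis=1), delta.mean(axis=1)])

# Hypothetical dataset: (wav file, instrument label) pairs; extend as needed.
dataset = [("clarinet_01.wav", "clarinet"), ("violin_01.wav", "violin")]
X = np.array([spectro_temporal_features(path) for path, _ in dataset])
labels = np.array([label for _, label in dataset])

clf = SVC(kernel="rbf", C=10.0, gamma="scale")    # the nonlinear classifier
scores = cross_val_score(clf, X, labels, cv=2)    # rough accuracy estimate
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```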

Concepts: Brain, Temporal lobe, Cerebrum, Auditory system, Sound, Music, Musical instrument, Timbre

177

A common approach for determining musical competence is to rely on information about individuals' extent of musical training, but relying on musicianship status fails to identify musically untrained individuals with musical skill, as well as those who, despite extensive musical training, may not be as skilled. To counteract this limitation, we developed a new test battery (Profile of Music Perception Skills; PROMS) that measures perceptual musical skills across multiple domains: tonal (melody, pitch), qualitative (timbre, tuning), temporal (rhythm, rhythm-to-melody, accent, tempo), and dynamic (loudness). The PROMS has satisfactory psychometric properties for the composite score (internal consistency and test-retest r > .85) and fair to good coefficients for the individual subtests (.56 to .85). Convergent validity was established with the relevant dimensions of Gordon’s Advanced Measures of Music Audiation and Musical Aptitude Profile (melody, rhythm, tempo), the Musical Ear Test (rhythm), and sample instrumental sounds (timbre). Criterion validity was evidenced by consistently sizeable and significant relationships between test performance and external musical proficiency indicators in all three studies (.38 to .62, p < .05 to p < .01). An absence of correlations between test scores and a nonmusical auditory discrimination task supports the battery's discriminant validity (-.05, ns). The interrelationships among the various subtests could be accounted for by two higher-order factors, sequential and sensory music processing. A brief version of the full PROMS is introduced as a time-efficient approximation of the full version of the battery.
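
The reliability figures quoted above (internal consistency and test-retest r) are standard psychometric coefficients. A minimal sketch, assuming a subjects-by-subtests score matrix from two test sessions (random placeholder data, not PROMS scores):

```python
# Cronbach's alpha (internal consistency) and a test-retest correlation,
# computed from placeholder score matrices, not actual PROMS data.
import numpy as np

def cronbach_alpha(scores):
    """scores: subjects x items (subtests) matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
session1 = rng.normal(size=(50, 8))                       # 50 subjects x 8 subtests (assumed)
session2 = session1 + rng.normal(scale=0.3, size=session1.shape)

alpha = cronbach_alpha(session1)
retest_r = np.corrcoef(session1.sum(axis=1), session2.sum(axis=1))[0, 1]
print(f"alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")
```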

Concepts: Psychometrics, Skill, Validity, Reliability, Sound, Music, Test

177

Spin-transfer torques offer great promise for the development of spin-based devices. The effects of spin-transfer torques are typically analysed in terms of adiabatic and non-adiabatic contributions. Currently, a comprehensive interpretation of the non-adiabatic term remains elusive, with suggestions that it may arise from universal effects related to dissipation processes in spin dynamics, while other studies indicate a strong influence from the symmetry of magnetization gradients. Here we show that enhanced magnetic imaging under dynamic excitation can be used to differentiate between non-adiabatic spin-torque and extraneous influences. We combine Lorentz microscopy with gigahertz excitations to map the orbit of a magnetic vortex core with <5 nm resolution. Imaging of the gyrotropic motion reveals subtle changes in the ellipticity, amplitude and tilt of the orbit as the vortex is driven through resonance, providing a robust method to determine the non-adiabatic spin torque parameter β=0.15±0.02 with unprecedented precision, independent of external effects.
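
For readers unfamiliar with the terminology, the adiabatic and non-adiabatic torques discussed above are conventionally written using the Zhang-Li extension of the Landau-Lifshitz-Gilbert equation; the expression below is the standard textbook form, quoted for orientation rather than taken from the paper.

```latex
% Standard Zhang-Li form of the LLG equation with spin-transfer torques:
% the (u . grad)m term is the adiabatic torque, the beta term the non-adiabatic one.
\begin{equation}
\frac{\partial \mathbf{m}}{\partial t}
  = -\gamma\, \mathbf{m} \times \mathbf{H}_{\mathrm{eff}}
  + \alpha\, \mathbf{m} \times \frac{\partial \mathbf{m}}{\partial t}
  - (\mathbf{u} \cdot \nabla)\,\mathbf{m}
  + \beta\, \mathbf{m} \times \big[(\mathbf{u} \cdot \nabla)\,\mathbf{m}\big]
\end{equation}
```

Here m is the unit magnetization, H_eff the effective field, α the Gilbert damping, and u an effective velocity proportional to the spin-polarized current density; β is the non-adiabaticity parameter that the experiment determines as 0.15±0.02.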

Concepts: Magnetic field, Angular momentum, Fundamental physics concepts, Torque, Magnetism, Magnetic moment, Force, Sound

175

In anurans, reproductive behavior is strongly seasonal. During the spring, frogs emerge from hibernation and males vocalize to attract mates or advertise territories. Female frogs can evaluate the quality of a male's resources on the basis of these vocalizations. Although studies have revealed that single neurons in the frog's central auditory midbrain (torus semicircularis) exhibit seasonal plasticity, the plasticity of peripheral auditory sensitivity in frogs is unknown. In this study, the seasonal plasticity of peripheral auditory sensitivity was tested in the Emei music frog (Babina daunchina) by comparing thresholds and latencies of auditory brainstem responses (ABRs) evoked by tone pips and clicks in the reproductive and non-reproductive seasons. The results show that both ABR thresholds and latencies differ significantly between the reproductive and non-reproductive seasons. Thresholds of tone-pip-evoked ABRs in the non-reproductive season were approximately 10 dB higher than those in the reproductive season for frequencies from 1 kHz to 6 kHz. At threshold-appropriate stimulus levels, ABR latencies to waveform valleys for tone pips were longer in the non-reproductive season for frequencies in the 1.5 to 6 kHz range, but shorter in the 0.2 to 1.5 kHz range. These results demonstrate that peripheral auditory frequency sensitivity exhibits seasonal plasticity, which may be adaptive to seasonal reproductive behavior in frogs.

Concepts: Frequency, Hertz, Season, Sound, Spring, Frog, Winter, Seasons

172

For the perception of the timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism that generates the sound. Since auditory processing, and music perception in particular, is known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing (NH) controls, as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1, an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the mismatch negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users' AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.
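
The MMN referred to above is conventionally obtained as the difference between averaged deviant and standard responses in the oddball sequence. A minimal sketch of that computation on placeholder epoch data (the sampling rate, epoch counts and analysis window are assumptions, not the study's recording parameters):

```python
# Generic oddball-paradigm ERP computation: average standard and deviant
# epochs and take their difference wave; the MMN is typically read from this
# difference around 100-250 ms post-stimulus. All values below are placeholders.
import numpy as np

fs = 500                                   # sampling rate in Hz (assumed)
n_std, n_dev, n_samples = 400, 100, 300    # epoch counts and epoch length (assumed)
rng = np.random.default_rng(1)
standard_epochs = rng.normal(size=(n_std, n_samples))
deviant_epochs = rng.normal(size=(n_dev, n_samples))

standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)
difference_wave = deviant_erp - standard_erp

# MMN amplitude: mean of the difference wave in a 100-250 ms window.
t = np.arange(n_samples) / fs
window = (t >= 0.100) & (t <= 0.250)
print("MMN amplitude (a.u.):", difference_wave[window].mean())
```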

Concepts: Electroencephalography, Evoked potential, Sound, Music, Event-related potential, Musical instrument, Orchestra, Trumpet

171

Older adults frequently complain that while they can hear a person talking, they cannot understand what is being said; this difficulty is exacerbated by background noise. Peripheral hearing loss cannot fully account for this age-related decline in speech-in-noise ability, as declines in central processing also contribute to this problem. Given that musicians have enhanced speech-in-noise perception, we aimed to define the effects of musical experience on subcortical responses to speech and speech-in-noise perception in middle-aged adults. Results reveal that musicians have enhanced neural encoding of speech in quiet and noisy settings. Enhancements include faster neural response timing, higher neural response consistency, more robust encoding of speech harmonics, and greater neural precision. Taken together, we suggest that musical experience provides perceptual benefits in an aging population by strengthening the underlying neural pathways necessary for the accurate representation of important temporal and spectral features of sound.
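
One of the listed enhancements, neural response consistency, is commonly operationalized as a split-half correlation: trials are divided into two random halves, each half is averaged, and the two averaged waveforms are correlated. A minimal sketch on placeholder data (not the study's recordings or its exact metric):

```python
# Split-half response consistency: correlate the averaged responses from two
# random halves of the trials; higher r indicates more consistent encoding.
# The trial matrix is a random placeholder, not actual recordings.
import numpy as np

rng = np.random.default_rng(2)
trials = rng.normal(size=(2000, 1024))     # trials x time samples (assumed)

idx = rng.permutation(trials.shape[0])
half1 = trials[idx[: len(idx) // 2]].mean(axis=0)
half2 = trials[idx[len(idx) // 2:]].mean(axis=0)

consistency = np.corrcoef(half1, half2)[0, 1]
print(f"split-half response consistency r = {consistency:.2f}")
```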

Concepts: Understanding, Middle age, Sense, Acoustics, Aging, Sound, Hearing, Music

170

Amplitude modulation can serve as a cue for segregating streams of sounds from different sources. Here we evaluate stream segregation in humans using ABA- sequences of sinusoidally amplitude modulated (SAM) tones. A and B represent SAM tones with the same carrier frequency (1000 or 4000 Hz) and modulation depth (30 or 100%). The modulation frequency of the A signals (f(modA)) was 30, 100 or 300 Hz. The modulation frequency of the B signals was up to four octaves higher (Δf(mod)). Three different ABA- tone patterns varying in tone duration and stimulus onset asynchrony were presented to evaluate the effect of forward suppression. Subjects indicated their 1- or 2-stream percept on a touch screen at the end of each ABA- sequence (presentation time 5 or 15 s). Tone pattern, f(modA), Δf(mod), carrier frequency, modulation depth and presentation time significantly affected the percentage of 2-stream percepts. The human psychophysical results are compared to responses of avian forebrain neurons evoked by ABA- SAM tone conditions [1] that broadly overlapped those of the present study. The neurons also showed significant effects of tone pattern and Δf(mod) that were comparable to the effects observed in the present psychophysical study. Depending on the carrier frequency, modulation frequency, modulation depth and the width of the auditory filters, SAM tones may provide mainly temporal cues (sidebands fall within the range of the filter), spectral cues (sidebands fall outside the range of the filter) or possibly both. A computational model based on excitation pattern differences was used to predict the 50% threshold of 2-stream responses. In conditions for which the model predicts a considerably larger 50% threshold of 2-stream responses (i.e., a larger Δf(mod) at threshold) than was observed, it is unlikely that spectral cues can explain stream segregation by SAM.
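
A SAM tone is a carrier sinusoid multiplied by a raised sinusoidal envelope, and an ABA- triplet concatenates two such tones with a silent gap. The sketch below generates one such sequence; the tone durations, gap length and repetition count are illustrative choices, not the stimulus parameters used in the study.

```python
# Sinusoidally amplitude-modulated (SAM) tone: carrier sin(2*pi*fc*t)
# multiplied by the envelope (1 + m*sin(2*pi*fmod*t)). An ABA- triplet is
# A, B, A followed by a silent gap ("-"). Durations and gap are illustrative.
import numpy as np

fs = 44100  # audio sample rate in Hz

def sam_tone(fc, fmod, depth, dur, fs=fs):
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fmod * t)
    return envelope * np.sin(2 * np.pi * fc * t)

# A: 1000 Hz carrier, 100 Hz modulation; B: same carrier, fmod two octaves higher.
a = sam_tone(fc=1000, fmod=100, depth=1.0, dur=0.125)
b = sam_tone(fc=1000, fmod=400, depth=1.0, dur=0.125)
gap = np.zeros(int(0.125 * fs))

aba_triplet = np.concatenate([a, b, a, gap])
sequence = np.tile(aba_triplet, 10)   # repeat triplets to form an ABA- sequence
```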

Concepts: Modulation, Sound, Frequency modulation, Amplitude modulation, Carrier wave, Baseband, Single-sideband modulation, Quadrature amplitude modulation

168

The auditory pathways coursing through the brainstem are organized bilaterally in mirror image about the midline, and at several levels the two sides are interconnected. One of the most prominent points of interconnection is the commissure of the inferior colliculus (CoIC). Anatomical studies have revealed that these fibers make reciprocal connections which follow the tonotopic organization of the inferior colliculus (IC), and that the commissure contains both excitatory and, albeit fewer, inhibitory fibers. The role of these connections in sound processing is largely unknown. Here we describe a method to address this question in the anaesthetized guinea pig. We used a cryoloop placed on one IC to produce reversible deactivation while recording electrophysiological responses to sounds in both ICs. We recorded single units, multi-unit clusters and local field potentials (LFPs) before, during and after cooling. The degree and spread of cooling were measured with a thermocouple placed in the IC and other auditory structures. Cooling sufficient to eliminate firing was restricted to the IC contacted by the cryoloop. The temperatures of other auditory brainstem structures, including the contralateral IC and the cochlea, were minimally affected. Cooling below 20°C reduced or eliminated the firing of action potentials in frequency laminae at depths corresponding to characteristic frequencies up to ~8 kHz. Modulation of neural activity also occurred in the un-cooled IC, with changes in single unit firing and LFPs. Components of LFPs signaling lemniscal afferent input to the IC showed little change in amplitude or latency with cooling, whereas the later components, which likely reflect inter- and intra-collicular processing, showed marked changes in form and amplitude. We conclude that the cryoloop is an effective method of selectively deactivating one IC in the guinea pig, and demonstrate that auditory processing in each IC is strongly influenced by the other.

Concepts: Action potential, Electrophysiology, Auditory system, Sound, Superior colliculus, Excitatory postsynaptic potential, Inhibitory postsynaptic potential, Inferior colliculus

164

Toothed whales and bats have independently evolved biosonar systems to navigate and to locate and catch prey. Such active sensing allows them to operate in darkness, but with the potential cost of warning prey through the emission of intense ultrasonic signals. At least six orders of nocturnal insects have independently evolved ears sensitive to ultrasound and exhibit evasive maneuvers when exposed to bat calls. Among aquatic prey, on the other hand, the ability to detect and avoid ultrasound-emitting predators seems to be limited to only one subfamily of Clupeidae: the Alosinae (shad and menhaden). These differences are likely rooted in the different physical properties of air and water: in air, cuticular mechanoreceptors have been adapted to serve as ultrasound-sensitive ears, whereas ultrasound detection in water has called for sensory cells mechanically connected to highly specialized gas volumes that can oscillate at high frequencies. In addition, there are most likely differences between insects and fish in the risk of predation from echolocating predators. The selection pressure among insects for evolving ultrasound-sensitive ears is high, because essentially all nocturnal predation on flying insects stems from echolocating bats. In the interaction between toothed whales and their prey, the selection pressure seems weaker, because toothed whales are by no means the only marine predators placing a selection pressure on their prey to evolve specific means to detect and avoid them. Toothed whales can generate extremely intense sound pressure levels, and it has been suggested that they may use these to debilitate prey. Recent experiments, however, show that neither fish with swim bladders nor squid are debilitated by such signals. This strongly suggests that the production of high-amplitude ultrasonic clicks serves to improve the detection range of the toothed whale biosonar system rather than to debilitate prey.

Concepts: Evolution, Insect, Predation, Lotka–Volterra equation, Ultrasound, Animal echolocation, Sound, Bat