SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: Sound

354

Transcranial focused ultrasound (FUS) is capable of modulating the neural activity of specific brain regions, with a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with brain-to-computer interface (BCI) techniques that translate brain function into computer commands, we investigated the feasibility of using the FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e. human and Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation aimed to non-invasively translate a human volunteer’s intention into stimulation of the rat brain motor area responsible for tail movement. The volunteer initiated the intention by looking at a strobe light flickering on a computer display, and the degree of synchronization of the electroencephalographic steady-state visual evoked potentials (SSVEP) with the strobe frequency was analyzed by computer. An increased SSVEP signal amplitude, indicating the volunteer’s intention, triggered the delivery of burst-mode FUS (350 kHz ultrasound frequency, 0.5 ms tone burst duration, 1 kHz pulse repetition frequency, delivered for 300 ms) to transcranially excite the motor area of an anesthetized rat. Successful excitation elicited the tail movement, which was detected by a motion sensor. The interface achieved 94.0±3.0% accuracy, with a time delay of 1.59±1.07 s from thought initiation to tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may open unexplored opportunities in the study of neuroscience with potential implications for therapeutic applications.
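
As a rough illustration of the detection step, the pipeline can be sketched as follows: estimate the SSVEP amplitude at the strobe frequency from an EEG epoch and gate the FUS burst on a threshold crossing. This is a minimal sketch, not the authors’ implementation; the sampling rate, strobe frequency, and threshold are hypothetical values.

```python
# Hypothetical sketch of the SSVEP-detection step: estimate the EEG amplitude
# at the strobe frequency and trigger FUS when it stands out from the noise.
# Sampling rate, strobe frequency, and threshold are illustrative, not the
# study's values.
import numpy as np

FS = 512.0          # EEG sampling rate (Hz), assumed
STROBE_HZ = 15.0    # strobe flicker frequency (Hz), assumed
THRESHOLD = 2.5     # amplitude ratio over the noise floor, assumed

def ssvep_intent(eeg_epoch: np.ndarray) -> bool:
    """Return True when the SSVEP amplitude at the strobe frequency is high."""
    spectrum = np.abs(np.fft.rfft(eeg_epoch * np.hanning(eeg_epoch.size)))
    freqs = np.fft.rfftfreq(eeg_epoch.size, d=1.0 / FS)
    target = spectrum[np.argmin(np.abs(freqs - STROBE_HZ))]
    baseline = np.median(spectrum)          # crude noise-floor estimate
    return target / baseline > THRESHOLD

# if ssvep_intent(epoch): deliver one 300 ms burst-mode FUS train
# (350 kHz carrier, 0.5 ms tone bursts at 1 kHz PRF) to the rat motor area.
```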

Concepts: Brain, Neuroscience, Human brain, Frequency, Hertz, Sound, Amplitude, Strobe light

342

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
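
Linear stimulus reconstruction of this kind is conventionally a regularized regression from time-lagged neural activity onto each spectrogram channel. The sketch below assumes that setup; the data shapes, lag count, and ridge penalty are illustrative, not the study’s.

```python
# Hypothetical sketch of linear stimulus reconstruction: ridge regression from
# time-lagged neural activity onto auditory-spectrogram channels.
import numpy as np

def lagged(X: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of neural data (T x E) into (T x E*n_lags)."""
    T, E = X.shape
    out = np.zeros((T, E * n_lags))
    for k in range(n_lags):
        out[k:, k * E:(k + 1) * E] = X[:T - k]
    return out

def fit_reconstruction(neural, spectrogram, n_lags=10, ridge=1.0):
    """neural: (T x electrodes); spectrogram: (T x frequency bands)."""
    X = lagged(neural, n_lags)
    # closed-form ridge solution: W = (X'X + aI)^-1 X'Y
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]),
                        X.T @ spectrogram)
    return W

def reconstruct(neural, W, n_lags=10):
    """Predict the spectrogram from held-out neural activity."""
    return lagged(neural, n_lags) @ W
```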

Concepts: Brain, Temporal lobe, Cerebrum, Primary auditory cortex, Superior temporal gyrus, Auditory system, Acoustics, Sound

340

The articular release of the metacarpophalangeal joint produces a typical cracking sound, resulting in what is commonly referred to as the cracking of knuckles. Despite over sixty years of research, the source of the knuckle cracking sound continues to be debated due to inconclusive experimental evidence resulting from limitations in the temporal resolution of non-invasive physiological imaging techniques. To support the available experimental data and shed light on the source of the cracking sound, we have developed a mathematical model of the events leading to the generation of the sound. The model resolves the dynamics of a collapsing cavitation bubble in the synovial fluid inside a metacarpophalangeal joint during an articular release. The acoustic signature from the resulting bubble dynamics is shown to be consistent in both magnitude and dominant frequency with experimental measurements in the literature and with our own experiments, thus lending support for cavitation bubble collapse as the source of the cracking sound. Finally, the model also shows that only a partial collapse of the bubble is needed to replicate the experimentally observed acoustic spectra, thus allowing for bubbles to persist following the generation of sound, as has been reported in recent experiments.
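
Models of collapsing cavitation bubbles are typically built on the Rayleigh-Plesset equation; the paper’s model adds joint-specific physics, but the underlying ODE can be sketched as follows. The fluid and gas parameters here are generic placeholders, not synovial-fluid values.

```python
# Hypothetical sketch: integrate the Rayleigh-Plesset equation for the radius
# R(t) of a collapsing bubble. All parameters are generic placeholders.
import numpy as np
from scipy.integrate import solve_ivp

RHO, MU, SIGMA = 1000.0, 1e-3, 0.072   # density, viscosity, surface tension
P_INF, P_V = 101325.0, 2300.0          # far-field and vapor pressure (Pa)
R0, P_G0, KAPPA = 0.5e-3, 5000.0, 1.4  # initial radius, gas pressure, polytropic index

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = P_G0 * (R0 / R) ** (3 * KAPPA)          # polytropic gas content
    p_wall = p_gas + P_V - 2 * SIGMA / R - 4 * MU * Rdot / R
    Rddot = ((p_wall - P_INF) / RHO - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 1e-3), [R0, 0.0],
                max_step=1e-7, rtol=1e-8)
# The radiated far-field acoustic pressure scales with d^2(R^3)/dt^2, which is
# the quantity one would compare against the recorded crack.
```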

Concepts: Causality, Science, Experiment, Joints, Sound, Sonar, Replicate, Cavitation

266

Although brain imaging studies have demonstrated that listening to music alters human brain structure and function, the molecular mechanisms mediating those effects remain unknown. With the advent of genomics and bioinformatics approaches, these effects of music can now be studied in a more detailed fashion. To verify whether listening to classical music has any effect on the human transcriptome, we performed genome-wide transcriptional profiling from the peripheral blood of participants after listening to classical music (n = 48), and after a control study without music exposure (n = 15). As musical experience is known to influence the responses to music, we compared the transcriptional responses of musically experienced and inexperienced participants separately with those of the controls. Comparisons were made based on two subphenotypes of musical experience: musical aptitude and music education. In musically experienced participants, we observed the differential expression of 45 genes (27 up- and 18 down-regulated) and 97 genes (75 up- and 22 down-regulated) in the two subphenotype comparisons, respectively (rank product non-parametric statistics, pfp < 0.05, >1.2-fold change over time across conditions). Gene ontological overrepresentation analysis (hypergeometric test, FDR < 0.05) revealed that the up-regulated genes are primarily known to be involved in the secretion and transport of dopamine, neuron projection, protein sumoylation, long-term potentiation and dephosphorylation. Down-regulated genes are known to be involved in ATP synthase-coupled proton transport, cytolysis, and positive regulation of caspase, peptidase and endopeptidase activities. One of the most up-regulated genes, alpha-synuclein (SNCA), is located in the best linkage region of musical aptitude on chromosome 4q22.1 and is regulated by GATA2, which is known to be associated with musical aptitude. Several genes reported to regulate song perception and production in songbirds displayed altered activities, suggesting a possible evolutionary conservation of sound perception between species. We observed no significant findings in musically inexperienced participants.
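
The overrepresentation step is a standard hypergeometric test applied per GO term, asking whether the term’s genes are enriched among the differentially expressed set. A generic sketch, with all counts invented for illustration:

```python
# Hypothetical sketch of a hypergeometric overrepresentation test for a single
# GO term; all counts below are invented, not the study's numbers.
from scipy.stats import hypergeom

N = 20000   # genes in the genome-wide background
K = 150     # background genes annotated with this GO term
n = 45      # differentially expressed genes (e.g. one up-regulated set)
k = 8       # DE genes annotated with this GO term

# P(X >= k): probability of drawing at least k annotated genes by chance
p_value = hypergeom.sf(k - 1, N, K, n)
# In practice this is repeated for every GO term and corrected for multiple
# testing (the study used FDR < 0.05).
```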

Concepts: DNA, Gene, Gene expression, Brain, Human brain, Experience, Philosophy of science, Sound

245

Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents, 54.7% reported on the questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of the tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did, averaging 11.3 dB lower in adolescents experiencing tinnitus in the acoustic chamber. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high-level environmental sounds.

Concepts: Auditory system, Acoustics, Ultrasound, Ear, Sound, Audiogram, Hearing

218

The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, for a standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
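
Experiments of this kind use harmonic complex tones whose lowest harmonic rank is varied: low ranks are resolved by cochlear filtering, while high ranks are unresolved and convey pitch mainly through the temporal envelope. A minimal synthesis sketch at the F0 = 440 Hz used here; the sampling rate, duration, and harmonic ranks are illustrative choices.

```python
# Hypothetical sketch: synthesize a harmonic complex at F0 = 440 Hz containing
# only harmonics from rank lo to rank hi (resolved if lo is low, unresolved if
# lo is high). Sampling rate, duration, and ranks are illustrative.
import numpy as np

FS, F0, DUR = 44100, 440.0, 0.5

def harmonic_complex(lo: int, hi: int) -> np.ndarray:
    t = np.arange(int(FS * DUR)) / FS
    tone = sum(np.sin(2 * np.pi * F0 * h * t) for h in range(lo, hi + 1))
    return tone / (hi - lo + 1)            # normalize the amplitude

resolved = harmonic_complex(1, 5)      # low ranks: resolved harmonics
unresolved = harmonic_complex(12, 18)  # high ranks: envelope-based pitch
```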

Concepts: Callithrix, Primate, Acoustics, Sound, Pitch, Music, Timbre, Musical tuning

212

In the past years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS ONE 4: e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG was translated into musical pitch according to the power law that both follow, the period of an EEG waveform was translated directly into the duration of a note, and the logarithm of the average power change of the EEG was translated into musical intensity according to Fechner’s law. In this work, we propose to use the simultaneously recorded fMRI signal to control the intensity of the EEG music, generating EEG-fMRI music that combines two different, simultaneously acquired brain signals. Most importantly, this approach also realizes a power law for the music intensity, since the fMRI signal follows one. The EEG-fMRI music thus takes a step forward in reflecting the scale-free physiological processes of the brain.
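
The three mappings can be sketched concretely as below. The exponent, scaling constants, and MIDI-style ranges are hypothetical choices, not the authors’ fitted values; in the EEG-fMRI variant the fMRI signal drives the intensity term instead of the EEG power change.

```python
# Hypothetical sketch of the three EEG-to-music mappings described above;
# exponents and scaling constants are illustrative, not the fitted values.
import numpy as np

def eeg_to_note(amplitude, period_s, avg_power_change, alpha=0.5):
    # amplitude -> pitch via a power law (amplitude assumed normalized to [0, 1];
    # output is a MIDI-like pitch number)
    pitch = int(round(40 + 40 * amplitude ** alpha))
    # EEG waveform period -> note duration, taken directly
    duration = period_s
    # log of the average power change -> intensity (Fechner's law); in the
    # EEG-fMRI variant the fMRI signal controls this term instead
    intensity = float(np.clip(30 + 20 * np.log10(avg_power_change), 0, 127))
    return pitch, duration, intensity
```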

Concepts: Cognitive science, Electroencephalography, Medical tests, Sound, Functional magnetic resonance imaging, Pitch, Weber–Fechner law, Scientific pitch notation

207

There is increasing concern that anthropogenic noise could have a significant impact on the marine environment, but data are still insufficient for most invertebrates. What do they perceive? We investigated this question in the oyster Magallana gigas (Crassostrea gigas) using pure tone exposures, an accelerometer fixed on the oyster shell, and a hydrophone in the water column. Groups of 16 oysters were exposed to quantifiable waterborne sinusoidal sounds in the range of 10 Hz to 20 kHz at various acoustic energies. The experiment was conducted in running seawater using an experimental flume equipped with suspended loudspeakers. The sensitivity of the oysters was measured by recording their valve movements with high-frequency noninvasive valvometry. The tests were 3 min tone exposures including a 70 s fade-in period. Three endpoints were analysed: the proportion of responding individuals in the group, the resulting change in valve-opening amplitude, and the response latency. At sufficiently high acoustic energy, oysters transiently closed their valves in response to frequencies in the range of 10 to <1000 Hz, with maximum sensitivity from 10 to 200 Hz. The minimum acoustic energy required to elicit a response was 0.02 m·s⁻² at 122 dB rms re 1 μPa for frequencies ranging from 10 to 80 Hz. As a partial valve closure cannot be differentiated from a nociceptive response, it is very likely that oysters detect sounds at lower acoustic energies. The mechanism involved in sound detection and the ecological consequences are discussed.
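
The stimuli themselves, 3 min pure tones with a 70 s fade-in, are simple to generate. A sketch, assuming a linear ramp and an arbitrary sampling rate:

```python
# Hypothetical sketch of one test stimulus: a 3 min pure tone with a 70 s
# fade-in. The sampling rate and linear ramp shape are assumptions.
import numpy as np

FS = 48000

def test_tone(freq_hz: float, dur_s: float = 180.0, fade_s: float = 70.0):
    t = np.arange(int(FS * dur_s)) / FS
    tone = np.sin(2 * np.pi * freq_hz * t)
    ramp = np.minimum(t / fade_s, 1.0)   # linear fade-in over fade_s seconds
    return tone * ramp

stimulus = test_tone(60.0)   # e.g. a 60 Hz tone, inside the sensitive band
```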

Concepts: Sense, Frequency, Hertz, Pacific oyster, Ostreidae, Oyster, Sound, Pitch

194

We demonstrate a new optical approach to generating high-frequency (>15 MHz), high-amplitude focused ultrasound, which can be used for non-invasive ultrasound therapy. A nano-composite film of carbon nanotubes (CNTs) and elastomeric polymer is formed on concave lenses and used as an efficient optoacoustic source, owing to the high optical absorption of the CNTs and rapid heat transfer to the polymer upon excitation by pulsed laser irradiation. The CNT-coated lenses can generate unprecedented peak positive optoacoustic pressures of >50 MPa at a tight focal spot measuring 75 μm laterally and 400 μm axially. This pressure amplitude is remarkably high in this frequency regime, producing pronounced shock effects and non-thermal pulsed cavitation at the focal zone. We demonstrate that the optoacoustic lens can be used for micro-scale ultrasonic fragmentation of solid materials and for single-cell surgery, removing individual cells from substrates and from neighboring cells.

Concepts: Optics, Electromagnetic radiation, Ultrasound, Pressure, Sound, Elastomer, Sonar, Photographic lens

188

Sexual selection has resulted in sex-based size dimorphism in many mammals, including humans. In Western societies, men of average to tall stature and comparatively shorter, slimmer women have higher reproductive success and are typically considered more attractive. This size dimorphism also extends to vocalisations in many species, again including humans, with larger individuals exhibiting lower formant frequencies than smaller individuals. Further, across many languages there are associations between phonemes and the expression of size (e.g. large /a, o/, small /i, e/), consistent with the frequency-size relationship in vocalisations. We suggest that naming preferences are a product of this frequency-size relationship, driving male names to sound larger and female names smaller through sound symbolism. In a 10-year dataset of the most popular British, Australian and American names, we show that male names are significantly more likely to contain larger-sounding phonemes (e.g. “Thomas”), while female names are significantly more likely to contain smaller-sounding phonemes (e.g. “Emily”). The desire of parents to have comparatively larger, more masculine sons and smaller, more feminine daughters, together with the increased social success that accompanies more sex-stereotyped names, is likely driving English-language first names to exploit the sound symbolism of size in line with sexual body size dimorphism.
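
The phoneme analysis can be illustrated with a toy version of the count the abstract describes, using its own example vowel classes. Matching on spelling rather than phonetic transcription, and the name lists themselves, are simplifications for illustration.

```python
# Toy sketch of the size-symbolism count: score names by "large" (/a, o/)
# minus "small" (/i, e/) vowels. Real analyses work on phonetic
# transcriptions, not spelling; this is a simplification.
LARGE, SMALL = set("ao"), set("ie")

def size_score(name: str) -> int:
    letters = name.lower()
    return sum(c in LARGE for c in letters) - sum(c in SMALL for c in letters)

male = ["thomas", "jack", "john"]        # illustrative, not the study's data
female = ["emily", "lily", "isabelle"]
print([size_score(n) for n in male])     # [2, 1, 1]: larger-sounding
print([size_score(n) for n in female])   # [-2, -2, -2]: smaller-sounding
```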

Concepts: Human, Male, Reproduction, Female, Gender, Sound, Personal name, Norwegian language