Transcranial focused ultrasound (FUS) can modulate the neural activity of specific brain regions, giving it a potential role as a non-invasive computer-to-brain interface (CBI). In conjunction with brain-to-computer interface (BCI) techniques that translate brain function into computer commands, we investigated the feasibility of using FUS-based CBI to non-invasively establish a functional link between the brains of different species (i.e. human and Sprague-Dawley rat), thus creating a brain-to-brain interface (BBI). The implementation aimed to non-invasively translate a human volunteer's intention into stimulation of the rat brain motor area responsible for tail movement. The volunteer initiated the intention by looking at a strobe light flickering on a computer display, and the degree of synchronization of the electroencephalographic steady-state visual evoked potentials (SSVEP) with the strobe frequency was analyzed by a computer. Increased SSVEP signal amplitude, indicating the volunteer's intention, triggered the delivery of burst-mode FUS (350 kHz ultrasound frequency, 0.5 ms tone burst duration, 1 kHz pulse repetition frequency, delivered for 300 ms) to transcranially excite the motor area of an anesthetized rat. Successful excitation elicited a tail movement, which was detected by a motion sensor. The interface achieved 94.0±3.0% accuracy, with a time delay of 1.59±1.07 s from thought initiation to tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities, which may open unexplored opportunities in the study of neuroscience with potential implications for therapeutic applications.
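The SSVEP detection step can be sketched as a simple frequency-domain threshold test: compare the EEG spectral amplitude at the strobe frequency against a trigger threshold. The sampling rate, threshold, and synthetic signals below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, strobe_hz):
    """Amplitude of the EEG spectrum at the DFT bin nearest the strobe frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - strobe_hz))]

# Synthetic 2 s EEG segments: noise alone, and noise plus an entrained 8 Hz component.
fs, strobe_hz = 256, 8.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
attend = rng.normal(0, 1, t.size) + 3.0 * np.sin(2 * np.pi * strobe_hz * t)
rest = rng.normal(0, 1, t.size)

threshold = 0.5  # hypothetical amplitude threshold that would trigger the FUS burst
print(ssvep_amplitude(attend, fs, strobe_hz) > threshold)  # True
print(ssvep_amplitude(rest, fs, strobe_hz) > threshold)    # False
```

In practice the decision would be made on sliding windows of ongoing EEG, but the core test (band amplitude versus threshold) is the same.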
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
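A linear stimulus-reconstruction model of the kind described above can be sketched with ridge regression from population activity back to the spectrogram; the dimensions, noise level, and regularization below are illustrative assumptions on synthetic data, not the recorded cortical responses:

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_neurons, n_freq = 500, 40, 16

# Hypothetical ground truth: each unit responds linearly to the spectrogram.
spec = rng.normal(0, 1, (n_t, n_freq))          # auditory spectrogram (time x freq)
W_true = rng.normal(0, 1, (n_freq, n_neurons))
neural = spec @ W_true + 0.5 * rng.normal(0, 1, (n_t, n_neurons))

# Linear reconstruction: ridge regression mapping neural activity -> spectrogram.
lam = 1.0
G = np.linalg.solve(neural.T @ neural + lam * np.eye(n_neurons), neural.T @ spec)
recon = neural @ G

# Reconstruction accuracy as the correlation between decoded and true spectrograms.
r = np.corrcoef(recon.ravel(), spec.ravel())[0, 1]
print(round(r, 2))
```

This captures why slow, linearly encoded structure is recoverable with such a decoder; the paper's finding is that fast temporal fluctuations additionally require a nonlinear modulation-energy representation, which this sketch does not implement.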
The articular release of the metacarpophalangeal joint produces a typical cracking sound, resulting in what is commonly referred to as the cracking of knuckles. Despite over sixty years of research, the source of the knuckle cracking sound continues to be debated due to inconclusive experimental evidence as a result of limitations in the temporal resolution of non-invasive physiological imaging techniques. To support the available experimental data and shed light onto the source of the cracking sound, we have developed a mathematical model of the events leading to the generation of the sound. The model resolves the dynamics of a collapsing cavitation bubble in the synovial fluid inside a metacarpophalangeal joint during an articular release. The acoustic signature from the resulting bubble dynamics is shown to be consistent in both magnitude and dominant frequency with experimental measurements in the literature and with our own experiments, thus lending support for cavitation bubble collapse as the source of the cracking sound. Finally, the model also shows that only a partial collapse of the bubble is needed to replicate the experimentally observed acoustic spectra, thus allowing for bubbles to persist following the generation of sound as has been reported in recent experiments.
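Collapsing cavitation bubble dynamics of the kind the model resolves are commonly described by the Rayleigh–Plesset equation; the sketch below integrates a simplified form (no viscosity or surface tension) with illustrative parameters, not the authors' fitted synovial-fluid model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters: water-like fluid, 1 mm bubble with under-pressurized gas.
rho, p_inf, gamma = 1000.0, 101325.0, 1.4
R0, p_g0 = 1e-3, 0.1 * 101325.0

def rayleigh_plesset(t, y):
    """Simplified Rayleigh-Plesset: R*R'' + 1.5*R'^2 = (p_gas - p_inf)/rho."""
    R, Rdot = y
    p_gas = p_g0 * (R0 / R) ** (3 * gamma)   # adiabatic gas compression
    Rddot = ((p_gas - p_inf) / rho - 1.5 * Rdot**2) / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0, 2e-4), [R0, 0.0],
                max_step=1e-7, rtol=1e-8)

R_min = sol.y[0].min()
print(R_min < R0)   # the bubble contracts below its initial radius, i.e. partially collapses
```

Because the gas stiffens as it is compressed, the radius rebounds before reaching zero, consistent with the partial collapse (and bubble persistence) the abstract describes; the radiated pressure signature would then be computed from the resulting wall acceleration.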
Although brain imaging studies have demonstrated that listening to music alters human brain structure and function, the molecular mechanisms mediating those effects remain unknown. With the advent of genomics and bioinformatics approaches, these effects of music can now be studied in more detail. To determine whether listening to classical music has any effect on the human transcriptome, we performed genome-wide transcriptional profiling from the peripheral blood of participants after listening to classical music (n = 48), and after a control study without music exposure (n = 15). As musical experience is known to influence responses to music, we compared the transcriptional responses of musically experienced and inexperienced participants separately with those of the controls. Comparisons were made based on two subphenotypes of musical experience: musical aptitude and music education. In musically experienced participants, we observed the differential expression of 45 genes (27 up- and 18 down-regulated) and 97 genes (75 up- and 22 down-regulated), respectively, based on the subphenotype comparisons (rank product non-parametric statistics, pfp < 0.05, >1.2-fold change over time across conditions). Gene ontological overrepresentation analysis (hypergeometric test, FDR < 0.05) revealed that the up-regulated genes are primarily known to be involved in the secretion and transport of dopamine, neuron projection, protein sumoylation, long-term potentiation and dephosphorylation. Down-regulated genes are known to be involved in ATP synthase-coupled proton transport, cytolysis, and positive regulation of caspase, peptidase and endopeptidase activities. One of the most up-regulated genes, alpha-synuclein (SNCA), is located in the best linkage region of musical aptitude on chromosome 4q22.1 and is regulated by GATA2, which is known to be associated with musical aptitude.
Several genes reported to regulate song perception and production in songbirds displayed altered activities, suggesting a possible evolutionary conservation of sound perception between species. We observed no significant findings in musically inexperienced participants.
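The gene-ontology overrepresentation analysis mentioned above rests on the hypergeometric test: given how many measured genes carry a GO annotation, how surprising is the count of annotated genes among the differentially expressed set? A minimal sketch with toy counts (not the study's data):

```python
from scipy.stats import hypergeom

# Toy numbers: M genes measured, n annotated to some GO term,
# N differentially expressed, k of those N carrying the annotation.
M, n, N, k = 20000, 150, 45, 6

# One-sided enrichment p-value: P(X >= k) under sampling without replacement.
p = hypergeom.sf(k - 1, M, n, N)
print(p < 0.05)  # True: 6 of 45 is far above the ~0.34 expected by chance
```

In the study this test is run per GO term, with FDR correction across terms to give the reported FDR < 0.05 threshold.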
Recent neuroscience research suggests that tinnitus may reflect synaptic loss in the cochlea that does not express in the audiogram but leads to neural changes in auditory pathways that reduce sound level tolerance (SLT). Adolescents (N = 170) completed a questionnaire addressing their prior experience with tinnitus, potentially risky listening habits, and sensitivity to ordinary sounds, followed by psychoacoustic measurements in a sound booth. Among all adolescents, 54.7% reported by questionnaire that they had previously experienced tinnitus, while 28.8% heard tinnitus in the booth. Psychoacoustic properties of tinnitus measured in the sound booth corresponded with those of chronic adult tinnitus sufferers. Neither hearing thresholds (≤15 dB HL to 16 kHz) nor otoacoustic emissions discriminated between adolescents reporting or not reporting tinnitus in the sound booth, but loudness discomfort levels (a psychoacoustic measure of SLT) did so, averaging 11.3 dB lower in adolescents experiencing tinnitus in the booth. Although risky listening habits were near universal, the teenagers experiencing tinnitus and reduced SLT tended to be more protective of their hearing. Tinnitus and reduced SLT could be early indications of a vulnerability to hidden synaptic injury that is prevalent among adolescents and expressed following exposure to high-level environmental sounds.
- Proceedings of the National Academy of Sciences of the United States of America
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, for a standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
In recent years, several methods have been developed to translate human EEG into music. In earlier work (PLoS ONE 4: e5915, 2009), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into musical pitch according to the power law that both follow, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into musical intensity according to Fechner's law. In this work, we propose using a simultaneously recorded fMRI signal to control the intensity of the EEG music, generating an EEG-fMRI music that combines two different, simultaneously acquired brain signals. Most importantly, because the fMRI signal also follows a power law, this approach extends the power-law property to music intensity as well. The EEG-fMRI music thus takes a step forward in reflecting the scale-free physiological processes of the brain.
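The two mappings described above (EEG amplitude to pitch on a logarithmic scale, log power change to intensity via Fechner's law) can be sketched as follows; the parameter ranges, MIDI conventions, and scaling constants are illustrative assumptions, not the published mapping:

```python
import numpy as np

def amplitude_to_pitch(amp, amp_range=(5.0, 100.0), midi_range=(36, 96)):
    """Map an EEG amplitude (uV) to a MIDI pitch on a logarithmic scale,
    with larger (rarer) amplitudes assigned lower pitches. Illustrative only."""
    lo, hi = np.log(amp_range[0]), np.log(amp_range[1])
    frac = (np.log(np.clip(amp, *amp_range)) - lo) / (hi - lo)
    return int(round(midi_range[0] + (1 - frac) * (midi_range[1] - midi_range[0])))

def power_to_velocity(power_change, k=20.0, base=64):
    """Fechner's law: perceived intensity grows with the log of stimulus power.
    Returns a MIDI velocity in 0..127; k and base are hypothetical constants."""
    return int(np.clip(base + k * np.log(power_change), 0, 127))

print(amplitude_to_pitch(10.0), power_to_velocity(2.0))
```

In the EEG-fMRI variant, the `power_change` argument would be driven by the fMRI signal rather than by EEG power, so that the intensity channel inherits the fMRI signal's power-law statistics.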
We demonstrate a new optical approach to generating high-frequency (>15 MHz), high-amplitude focused ultrasound, which can be used for non-invasive ultrasound therapy. A nano-composite film of carbon nanotubes (CNTs) and elastomeric polymer is formed on concave lenses and used as an efficient optoacoustic source, owing to the high optical absorption of the CNTs and rapid heat transfer to the polymer upon excitation by pulsed laser irradiation. The CNT-coated lenses can generate unprecedented optoacoustic pressures of >50 MPa peak positive over a tight focal spot of 75 μm lateral and 400 μm axial width. This pressure amplitude is remarkably high for this frequency regime, producing pronounced shock effects and non-thermal pulsed cavitation at the focal zone. We demonstrate that the optoacoustic lens can be used for micro-scale ultrasonic fragmentation of solid materials and for single-cell surgery, removing individual cells from substrates and from neighboring cells.
BACKGROUND: Like human infants, songbirds learn their species-specific vocalizations through imitation learning. The birdsong system has emerged as a widely used experimental animal model for understanding the underlying neural mechanisms responsible for vocal production learning. However, how neural impulses are translated into the precise motor behavior of the complex vocal organ (syrinx) to create song is poorly understood. First and foremost, we lack a detailed understanding of syringeal morphology. RESULTS: To fill this gap we combined non-invasive (high-field magnetic resonance imaging and micro-computed tomography) and invasive techniques (histology and micro-dissection) to construct the annotated high-resolution three-dimensional (3D) dataset, or morphome, of the zebra finch (Taeniopygia guttata) syrinx. We identified and annotated syringeal cartilage, bone, and musculature in situ in unprecedented detail. We provide interactive 3D models that greatly improve the communication of complex morphological data and of our understanding of syringeal function in general. CONCLUSIONS: Our results show that the syringeal skeleton is optimized for low weight driven by physiological constraints on song production. The present refinement of muscle organization and identity elucidates how apposed muscles actuate different syringeal elements. Our dataset allows for more precise predictions about muscle co-activation and synergies and has important implications for muscle activity and stimulation experiments. We also demonstrate how the syrinx can be stabilized during song to reduce mechanical noise and, as such, enhance repetitive execution of stereotypic motor patterns. In addition, we identify a cartilaginous structure suited to play a crucial role in the uncoupling of sound frequency and amplitude control, which permits a novel explanation for the evolutionary success of songbirds.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sounds' physical characteristics, and machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to providing a representation rich enough to account for perceptual judgments of timbre by human listeners, as well as for the recognition of musical instruments.
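A joint spectro-temporal receptive field (STRF) of the kind used in the front end above can be sketched as a 2D template correlated against a spectrogram, so that a unit responds selectively to a particular combination of spectral and temporal structure. The toy STRF and sweep stimuli below are illustrative, not the recorded neurons:

```python
import numpy as np

def strf_response(spectrogram, strf):
    """Response time course of a model cortical unit: the 2D correlation of its
    spectro-temporal receptive field with successive spectrogram windows."""
    T, _ = spectrogram.shape
    t, f = strf.shape
    out = np.empty(T - t + 1)
    for i in range(T - t + 1):
        out[i] = np.sum(spectrogram[i:i + t, :f] * strf)
    return out

# Toy STRF tuned to an upward frequency sweep: excitatory diagonal, inhibitory surround.
strf = -np.ones((4, 4))
np.fill_diagonal(strf, 3.0)

up = np.zeros((10, 4))      # spectrogram (time x freq) containing an upward sweep
for i in range(4):
    up[i + 3, i] = 1.0
down = up[:, ::-1]          # the mirrored, downward sweep

# The unit responds more strongly to the sweep direction matching its STRF.
print(strf_response(up, strf).max() > strf_response(down, strf).max())  # True
```

A purely spectral template could not make this direction distinction, which is the sense in which joint spectro-temporal features carry information that separate spectral and temporal analyses miss.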