Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS). Motivated by a motor learning theoretical framework and by the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we report here BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure. Four patients with advanced amyotrophic lateral sclerosis (ALS), two of them in permanent CLIS and two entering CLIS without reliable means of communication, learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS. Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions. Online fNIRS classification of personal questions with known answers and open questions using a linear support vector machine (SVM) resulted in an above-chance-level correct response rate of over 70%. Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication despite occasional differences between the physiological signals representing a “yes” or “no” response. However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could indicate the first step towards abolition of complete locked-in states, at least for ALS.
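The online classification step described above can be illustrated with a minimal sketch: a linear SVM trained on per-trial oxygenation features and applied to a new trial. The data here are synthetic, and the channel count, feature layout, and class separation are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of "yes"/"no" classification from frontocentral oxygenation
# features with a linear SVM. Synthetic data; 8 channels and 200 trials are
# assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8

# Per-trial mean oxygenation change per channel; "yes" trials shifted upward.
labels = rng.integers(0, 2, n_trials)               # 0 = "no", 1 = "yes"
features = rng.normal(0.0, 1.0, (n_trials, n_channels))
features[labels == 1] += 0.8                        # synthetic class separation

clf = SVC(kernel="linear").fit(features, labels)

# A new trial is classified online from its oxygenation feature vector.
new_trial = rng.normal(0.8, 1.0, (1, n_channels))   # resembles a "yes" trial
answer = "yes" if clf.predict(new_trial)[0] == 1 else "no"
accuracy = clf.score(features, labels)
```

In the actual paradigm, a session would pair each classified trial with a question of known answer to estimate the online correct response rate.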
Stories of g-tummo meditators mysteriously able to dry wet sheets wrapped around their naked bodies during a frigid Himalayan ceremony have intrigued scholars and laypersons alike for a century. Study 1 was conducted in remote monasteries of eastern Tibet with expert meditators performing g-tummo practices while their axillary temperature and electroencephalographic (EEG) activity were measured. Study 2 was conducted with Western participants (a non-meditator control group) instructed to use the somatic component of the g-tummo practice (vase breathing) without utilization of meditative visualization. Reliable increases in axillary temperature from normal to slight or moderate fever zone (up to 38.3°C) were observed among meditators only during the Forceful Breath type of g-tummo meditation accompanied by increases in alpha, beta, and gamma power. The magnitude of the temperature increases significantly correlated with the increases in alpha power during Forceful Breath meditation. The findings indicate that there are two factors affecting temperature increase. The first is the somatic component which causes thermogenesis, while the second is the neurocognitive component (meditative visualization) that aids in sustaining temperature increases for longer periods. Without meditative visualization, both meditators and non-meditators were capable of using the Forceful Breath vase breathing only for a limited time, resulting in limited temperature increases in the range of normal body temperature. Overall, the results suggest that specific aspects of the g-tummo technique might help non-meditators learn how to regulate their body temperature, which has implications for improving health and regulating cognitive performance.
Flexible, wearable sensing devices can yield important information about the underlying physiology of a human subject for applications in real-time health and fitness monitoring. Despite significant progress in the fabrication of flexible biosensors that naturally comply with the epidermis, most designs measure only a small number of physical or electrophysiological parameters, and neglect the rich chemical information available from biomarkers. Here, we introduce a skin-worn wearable hybrid sensing system that offers simultaneous real-time monitoring of a biochemical (lactate) and an electrophysiological signal (electrocardiogram), for more comprehensive fitness monitoring than from physical or electrophysiological sensors alone. The two sensing modalities, comprising a three-electrode amperometric lactate biosensor and a bipolar electrocardiogram sensor, are co-fabricated on a flexible substrate and mounted on the skin. Human experiments reveal that physiochemistry and electrophysiology can be measured simultaneously with negligible cross-talk, enabling a new class of hybrid sensing devices.
- Proceedings of the National Academy of Sciences of the United States of America
Despite the fact that midday naps are characteristic of early childhood, very little is understood about the structure and function of these sleep bouts. Given that sleep benefits memory in young adults, it is possible that naps serve a similar function for young children. However, children transition from biphasic to monophasic sleep patterns in early childhood, eliminating the nap from their daily sleep schedule. As such, naps may contain mostly light sleep stages and serve little function for learning and memory during this transitional age. Lacking scientific understanding of the function of naps in early childhood, policy makers may eliminate preschool classroom nap opportunities due to increasing curriculum demands. Here we show evidence that classroom naps support learning in preschool children by enhancing memories acquired earlier in the day compared with equivalent intervals spent awake. This nap benefit is greatest for children who nap habitually, regardless of age. Performance losses when nap-deprived are not recovered during subsequent overnight sleep. Physiological recordings of naps support a role of sleep spindles in memory performance. These results suggest that distributed sleep is critical in early learning; when short-term memory stores are limited, memory consolidation must take place frequently.
We present, to our knowledge, the first demonstration that a non-invasive brain-to-brain interface (BBI) can be used to allow one human to guess what is on the mind of another human through an interactive question-and-answering paradigm similar to the “20 Questions” game. As in previous non-invasive BBI studies in humans, our interface uses electroencephalography (EEG) to detect specific patterns of brain activity from one participant (the “respondent”), and transcranial magnetic stimulation (TMS) to deliver functionally-relevant information to the brain of a second participant (the “inquirer”). Our results extend previous BBI research by (1) using stimulation of the visual cortex to convey visual stimuli that are privately experienced and consciously perceived by the inquirer; (2) exploiting real-time rather than off-line communication of information from one brain to another; and (3) employing an interactive task, in which the inquirer and respondent must exchange information bi-directionally to collaboratively solve the task. The results demonstrate that using the BBI, ten participants (five inquirer-respondent pairs) can successfully identify a “mystery item” using a true/false question-answering protocol similar to the “20 Questions” game, with high levels of accuracy that are significantly greater than a control condition in which participants were connected through a sham BBI.
In recent years, a few methods have been developed to translate human EEG into music. In a previous study (2009, PLoS ONE 4:e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into musical pitch according to the power law that both obey, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into musical intensity according to Fechner’s law. In this work, we propose to use a simultaneously recorded fMRI signal to control the intensity of the EEG music, so that an EEG-fMRI music is generated by combining two different, simultaneously acquired brain signals. Most importantly, this approach also realizes the power law for musical intensity, since the fMRI signal follows it. The EEG-fMRI music thus takes a step forward in reflecting the physiological processes of the scale-free brain.
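The three mappings named above (amplitude to pitch via a power law, period to duration, log power to intensity via Fechner's law) can be sketched as a single function. The exponents, scaling constants, and MIDI ranges below are illustrative assumptions, not the parameters used in the cited work.

```python
# Sketch of the EEG-to-music mapping described above. All constants
# (MIDI ranges, log scaling factors) are illustrative assumptions.
import numpy as np

def eeg_wave_to_note(amplitude_uv, period_s, avg_power_uv2):
    """Map one EEG waveform to a (pitch, duration, intensity) triple."""
    # Power-law mapping: larger amplitudes map to lower pitches on a log scale.
    pitch = int(np.clip(96 - 24 * np.log10(amplitude_uv + 1), 24, 96))   # MIDI note
    # Waveform period maps directly to note duration (seconds).
    duration = period_s
    # Fechner's law: perceived loudness grows with the logarithm of power.
    intensity = int(np.clip(40 + 20 * np.log10(avg_power_uv2 + 1), 0, 127))  # MIDI velocity
    return pitch, duration, intensity

note = eeg_wave_to_note(amplitude_uv=50.0, period_s=0.1, avg_power_uv2=100.0)
# note -> (55, 0.1, 80)
```

In the EEG-fMRI variant, the intensity term would be driven by the simultaneously recorded fMRI signal rather than by EEG power alone.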
To further test and explore the hypothesis that synchronous oscillatory brain activity supports interpersonally coordinated behavior during dyadic music performance, we simultaneously recorded the electroencephalogram (EEG) from the brains of each of 12 guitar duets repeatedly playing a modified Rondo in two voices by C.G. Scheidler. Indicators of phase locking and of within-brain and between-brain phase coherence were obtained from complex time-frequency signals based on the Gabor transform. Analyses were restricted to the delta (1-4 Hz) and theta (4-8 Hz) frequency bands. We found that phase locking as well as within-brain and between-brain phase-coherence connection strengths were enhanced at frontal and central electrodes during periods that put particularly high demands on musical coordination. Phase locking was modulated in relation to the experimentally assigned musical roles of leader and follower, corroborating the functional significance of synchronous oscillations in dyadic music performance. Graph theory analyses revealed within-brain and hyperbrain networks with small-worldness properties that were enhanced during musical coordination periods, and community structures encompassing electrodes from both brains (hyperbrain modules). We conclude that brain mechanisms indexed by phase locking, phase coherence, and structural properties of within-brain and hyperbrain networks support interpersonal action coordination (IAC).
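A core quantity in this kind of hyperbrain analysis is the phase-locking value (PLV) between two channels, whether recorded from the same brain or from two players. The study derived phases from Gabor time-frequency signals; the sketch below substitutes the Hilbert transform on synthetic narrow-band (theta-range) data, and the sampling rate, frequency, and noise level are assumptions.

```python
# Minimal sketch of a phase-locking value (PLV) between two channels.
# Hilbert transform stands in for the Gabor-based phase estimates used
# in the study; signal parameters are illustrative assumptions.
import numpy as np
from scipy.signal import hilbert

fs, seconds = 250, 4                       # assumed sampling rate and duration
t = np.arange(fs * seconds) / fs
rng = np.random.default_rng(1)

# Two 6 Hz (theta-band) signals with a fixed phase lag plus noise.
x = np.sin(2 * np.pi * 6 * t) + 0.3 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 6 * t + 0.5) + 0.3 * rng.standard_normal(t.size)

phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

# PLV: magnitude of the mean phase-difference vector, in [0, 1].
# Near 1 for a stable phase relation, near 0 for unrelated phases.
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

Between-brain coherence analyses apply the same idea with one channel taken from each player's EEG, computed per frequency band and electrode pair.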
BACKGROUND: Graph theory has been recently introduced to characterize complex brain networks, making it highly suitable to investigate altered connectivity in neurologic disorders. A current model proposes autism spectrum disorder (ASD) as a developmental disconnection syndrome, supported by converging evidence in both non-syndromic and syndromic ASD. However, the effects of abnormal connectivity on network properties have not been well studied, particularly in syndromic ASD. To close this gap, brain functional networks of electroencephalographic (EEG) connectivity were studied through graph measures in patients with Tuberous Sclerosis Complex (TSC), a disorder with a high prevalence of ASD, as well as in patients with non-syndromic ASD. METHODS: EEG data were collected from TSC patients with ASD (n = 14) and without ASD (n = 29), from patients with non-syndromic ASD (n = 16), and from controls (n = 46). First, EEG connectivity was characterized by the mean coherence, the ratio of inter- over intra-hemispheric coherence and the ratio of long- over short-range coherence. Next, graph measures of the functional networks were computed and a resilience analysis was conducted. To distinguish effects related to ASD from those related to TSC, a two-way analysis of covariance (ANCOVA) was applied, using age as a covariate. RESULTS: Analysis of network properties revealed differences specific to TSC and ASD, and these differences were very consistent across subgroups. In TSC, both with and without a concurrent diagnosis of ASD, mean coherence, global efficiency, and clustering coefficient were decreased and the average path length was increased. These findings indicate an altered network topology. In ASD, both with and without a concurrent diagnosis of TSC, decreased long- over short-range coherence and markedly increased network resilience were found. 
CONCLUSIONS: The altered network topology in TSC represents a functional correlate of structural abnormalities and may play a role in the pathogenesis of neurological deficits. The increased resilience in ASD may reflect an excessively degenerate network with local overconnection and decreased functional specialization. This joint study of TSC and ASD networks provides a unique window to common neurobiological mechanisms in autism. Please see related commentary article here http://www.biomedcentral.com/1741-7015/11/55.
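The graph measures reported above (clustering coefficient, global efficiency, characteristic path length) can be computed from a coherence matrix once it is thresholded into a network. The sketch below uses a random toy matrix; the electrode count and edge threshold are assumptions, not the study's parameters.

```python
# Sketch of graph measures on a thresholded EEG coherence matrix.
# Toy data; 8 electrodes and a 0.5 threshold are illustrative assumptions.
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
n = 8                                       # assumed number of electrodes
coherence = rng.uniform(0.0, 1.0, (n, n))
coherence = (coherence + coherence.T) / 2   # coherence matrices are symmetric
np.fill_diagonal(coherence, 1.0)

threshold = 0.5                             # assumed edge threshold
adjacency = (coherence > threshold) & ~np.eye(n, dtype=bool)
G = nx.from_numpy_array(adjacency.astype(int))

clustering = nx.average_clustering(G)       # local interconnectedness
efficiency = nx.global_efficiency(G)        # ease of parallel information flow
# Characteristic path length is only defined on a connected graph.
path_length = nx.average_shortest_path_length(G) if nx.is_connected(G) else None
```

In the study's terms, the TSC groups showed lower clustering and global efficiency with longer path lengths, i.e., a less integrated topology under these measures.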
Little is known about the spread of emotions beyond dyads, yet it is important for explaining the emergence of crowd behaviors. Here, we experimentally addressed whether emotional homogeneity within a crowd might result from a cascade of local emotional transmissions, in which the perception of another’s emotional expression produces, in the observer’s face and body, sufficient information to allow transmission of the emotion to a third party. We reproduced a minimal element of a crowd situation and recorded the facial electromyographic activity and the skin conductance response of an individual C observing the face of an individual B watching an individual A displaying either joy or fear full-body expressions. Critically, individual B did not know that she was being watched. We show that emotions of joy and fear displayed by A were spontaneously transmitted to C through B, even when the emotional information available in B’s face could not be explicitly recognized. These findings demonstrate that one is tuned to react to others' emotional signals and to unintentionally produce subtle but sufficient emotional cues to induce emotional states in others. This phenomenon could be the mark of a spontaneous cooperative behavior whose function is to communicate survival-value information to conspecifics.
For the perception of the timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism that generates the sound. Since auditory processing, and music perception in particular, is known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing controls (NH controls), as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the Mismatch Negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users' AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.
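In an oddball paradigm, the MMN is typically quantified as the deviant-minus-standard ERP difference wave averaged in a post-stimulus window. The sketch below simulates epochs with synthetic deflections; the sampling rate, component timings, amplitudes, and trial counts are all illustrative assumptions.

```python
# Sketch of MMN quantification in an oddball paradigm: average the
# deviant-minus-standard difference wave in a 150-250 ms window.
# Synthetic epochs; all timings and amplitudes are assumptions.
import numpy as np

fs = 500                                    # assumed sampling rate (Hz)
t = np.arange(0, 0.4, 1 / fs)               # 0-400 ms post-stimulus
rng = np.random.default_rng(3)

def make_epochs(n, mmn_amp):
    """Simulate ERP epochs; deviants add a negativity around 200 ms."""
    base = -2.0 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))   # N1-like deflection
    mmn = -mmn_amp * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    return base + mmn + rng.normal(0.0, 1.0, (n, t.size))

standards = make_epochs(400, mmn_amp=0.0)   # frequent standard tone
deviants = make_epochs(80, mmn_amp=3.0)     # rare onset-manipulated deviant

# Difference wave: deviant average minus standard average.
difference = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.15) & (t <= 0.25)
mmn_mean = difference[window].mean()        # negative when an MMN is present
```

The group finding above corresponds to this quantity being reliably negative for NH controls but not for CI users.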