Concept: Evoked potential
Despite partial success, communication has remained impossible for persons suffering from complete motor paralysis but intact cognitive and emotional processing, a state called complete locked-in state (CLIS). Based on a motor learning theoretical context and on the failure of neuroelectric brain-computer interface (BCI) communication attempts in CLIS, we report here BCI communication using functional near-infrared spectroscopy (fNIRS) and an implicit attentional processing procedure. Four patients suffering from advanced amyotrophic lateral sclerosis (ALS), two of them in permanent CLIS and two entering CLIS without reliable means of communication, learned to answer personal questions with known answers and open questions, all requiring a “yes” or “no” thought, using frontocentral oxygenation changes measured with fNIRS. Three patients completed more than 46 sessions spread over several weeks, and one patient (patient W) completed 20 sessions. Online fNIRS classification of personal questions with known answers and open questions using a linear support vector machine (SVM) resulted in an above-chance correct response rate of over 70%. Electroencephalographic oscillations and electrooculographic signals did not exceed the chance-level threshold for correct communication, despite occasional differences between the physiological signals representing a “yes” or “no” response. However, electroencephalogram (EEG) changes in the theta-frequency band correlated with inferior communication performance, probably because of decreased vigilance and attention. If replicated with ALS patients in CLIS, these positive results could be a first step towards the abolition of complete locked-in states, at least for ALS.
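As an illustrative sketch only (not the authors' actual pipeline), the core idea of online linear-SVM classification of "yes"/"no" oxygenation responses can be shown with synthetic features; the channel count, feature values, and class separations below are all hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: mean frontocentral oxygenation change per
# channel for each answered question (all values are synthetic).
n_trials, n_channels = 40, 8
X_yes = rng.normal(0.5, 1.0, (n_trials, n_channels))   # "yes" trials
X_no = rng.normal(-0.5, 1.0, (n_trials, n_channels))   # "no" trials
X = np.vstack([X_yes, X_no])
y = np.array([1] * n_trials + [0] * n_trials)          # 1 = "yes", 0 = "no"

# A linear support vector machine, as named in the abstract.
clf = SVC(kernel="linear").fit(X, y)

# Classify a new trial's oxygenation pattern as "yes" or "no".
new_trial = rng.normal(0.5, 1.0, (1, n_channels))
print("yes" if clf.predict(new_trial)[0] == 1 else "no")
```

In an online setting, the classifier would be trained on questions with known answers and then applied trial by trial to open questions.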
To further test and explore the hypothesis that synchronous oscillatory brain activity supports interpersonally coordinated behavior during dyadic music performance, we simultaneously recorded the electroencephalogram (EEG) from the brains of each of 12 guitar duets repeatedly playing a modified Rondo in two voices by C.G. Scheidler. Indicators of phase locking and of within-brain and between-brain phase coherence were obtained from complex time-frequency signals based on the Gabor transform. Analyses were restricted to the delta (1-4 Hz) and theta (4-8 Hz) frequency bands. We found that phase locking as well as within-brain and between-brain phase-coherence connection strengths were enhanced at frontal and central electrodes during periods that put particularly high demands on musical coordination. Phase locking was modulated in relation to the experimentally assigned musical roles of leader and follower, corroborating the functional significance of synchronous oscillations in dyadic music performance. Graph theory analyses revealed within-brain and hyperbrain networks with small-worldness properties that were enhanced during musical coordination periods, and community structures encompassing electrodes from both brains (hyperbrain modules). We conclude that brain mechanisms indexed by phase locking, phase coherence, and structural properties of within-brain and hyperbrain networks support interpersonal action coordination (IAC).
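A minimal sketch of the between-brain coupling measure described above, using synthetic data and a Hilbert-transform analytic signal in place of the paper's Gabor transform (the sampling rate, noise level, and shared 6 Hz theta component are assumptions for illustration):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                       # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 s of data
rng = np.random.default_rng(1)

# Two synthetic "channels", standing in for one electrode from each
# player's EEG: a shared 6 Hz theta oscillation plus independent noise.
theta = np.sin(2 * np.pi * 6 * t)
ch_a = theta + 0.5 * rng.standard_normal(t.size)
ch_b = theta + 0.5 * rng.standard_normal(t.size)

def plv(x, y, band, fs):
    """Phase-locking value between two signals within a frequency band."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    # Magnitude of the mean phase-difference vector: 1 = perfect locking.
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

print(plv(ch_a, ch_b, (4, 8), fs))  # near 1: strong theta-band coupling
print(plv(ch_a, 0.5 * rng.standard_normal(t.size), (4, 8), fs))  # lower
```

Between-brain phase coherence is computed the same way, just with the two signals taken from different participants' recordings.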
Integration of local elements into a coherent global form is a fundamental aspect of visual object recognition. How the different hierarchically organized stages of visual analysis develop to support object representation in infants remains unknown. The aim of this study was to investigate structural encoding of natural images in 4- to 6-month-old infants and adults. We used the steady-state visual evoked potential (ssVEP) technique to measure cortical responses specific to the global structure present in object and face images, and assessed whether differential responses were present for these image categories. This study is the first to apply the ssVEP method to high-level vision in infants. Infants and adults responded to the structural relations present in both image categories, and topographies of the responses differed based on image category. However, while adult responses to face and object structure were localized over occipitotemporal scalp areas, only infant face responses were distributed over temporal regions. Therefore, both infants and adults show object category specificity in their neural responses. The topography of the infant response distributions indicates that between 4 and 6 months of age, structural encoding of faces occurs at a higher level of processing than that of objects.
For the perception of timbre of a musical instrument, the attack time is known to hold crucial information. The first 50 to 150 ms of sound onset reflect the excitation mechanism, which generates the sound. Since auditory processing and music perception in particular are known to be hampered in cochlear implant (CI) users, we conducted an electroencephalography (EEG) study with an oddball paradigm to evaluate the processing of small differences in musical sound onset. The first 60 ms of a cornet sound were manipulated in order to examine whether these differences are detected by CI users and normal-hearing controls (NH controls), as revealed by auditory evoked potentials (AEPs). Our analysis focused on the N1 as an exogenous component known to reflect physical stimulus properties, as well as on the P2 and the mismatch negativity (MMN). Our results revealed different N1 latencies as well as P2 amplitudes and latencies for the onset manipulations in both groups. An MMN could be elicited only in the NH control group. Together with additional findings that suggest an impact of musical training on CI users' AEPs, our findings support the view that impaired timbre perception in CI users is at least partly due to altered sound onset feature detection.
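In an oddball paradigm like the one above, the MMN is extracted as the deviant-minus-standard difference of the averaged epochs. A minimal sketch with synthetic epochs (the component latencies, amplitudes, trial counts, and sampling rate are all assumed for illustration):

```python
import numpy as np

fs = 500                              # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)      # epoch from -100 to 400 ms
rng = np.random.default_rng(2)

def simulate_epochs(n, mmn_amp):
    """Synthetic AEP epochs: an N1 at ~100 ms plus an optional MMN at ~200 ms."""
    n1 = -2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
    mmn = mmn_amp * np.exp(-((t - 0.20) ** 2) / (2 * 0.04 ** 2))
    return n1 + mmn + rng.standard_normal((n, t.size))

standards = simulate_epochs(400, 0.0)   # frequent standard sound
deviants = simulate_epochs(80, -1.5)    # rare onset-manipulated deviant

# MMN: difference of the averaged waveforms, measured in a latency window.
difference = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.15) & (t <= 0.25)
print(f"MMN mean amplitude: {difference[window].mean():.2f} µV")
```

An absent MMN, as reported for the CI group, would correspond to a difference wave near zero in that window.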
Peripheral electrical stimulation (PES) is a common clinical technique known to induce changes in corticomotor excitability; PES applied to induce a tetanic motor contraction increases, and PES at sub-motor-threshold (sensory) intensities decreases, corticomotor excitability. Understanding of the mechanisms underlying these opposite changes in corticomotor excitability remains elusive. Modulation of primary sensory cortex (S1) excitability could underlie altered corticomotor excitability with PES. Here we examined whether changes in primary sensory (S1) and motor (M1) cortex excitability follow the same time course when PES is applied using identical stimulus parameters. Corticomotor excitability was measured using transcranial magnetic stimulation (TMS) and sensory cortex excitability using somatosensory evoked potentials (SEPs) before and after 30 min of PES to the right abductor pollicis brevis (APB). Two PES paradigms were tested in separate sessions: PES sufficient to induce a tetanic motor contraction (30-50 Hz; strong motor intensity) and PES at sub-motor-threshold intensity (100 Hz). PES applied to induce strong activation of APB increased the size of the N20-P25 component, thought to reflect sensory processing at the cortical level, and increased corticomotor excitability. PES at sensory intensity decreased the size of the P25-N33 component and reduced corticomotor excitability. A positive correlation was observed between the changes in amplitude of the cortical SEP components and corticomotor excitability following sensory and motor PES. Sensory PES also increased the subcortical P14-N20 SEP component. These findings provide evidence that PES results in co-modulation of S1 and M1 excitability, possibly due to cortico-cortical projections between S1 and M1. This mechanism may underpin changes in corticomotor excitability in response to afferent input generated by PES.
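SEP components such as N20-P25 are conventionally quantified as peak-to-peak amplitudes within fixed latency windows. A sketch of that measurement on a synthetic SEP (waveform shape, windows, and noise level are assumptions for illustration, not the study's parameters):

```python
import numpy as np

fs = 5000                              # assumed SEP sampling rate (Hz)
t = np.arange(0, 0.05, 1 / fs)         # 0-50 ms post-stimulus
rng = np.random.default_rng(6)

# Synthetic median-nerve SEP: N20 (negative) and P25 (positive) deflections.
sep = (-1.5 * np.exp(-((t - 0.020) ** 2) / (2 * 0.0015 ** 2))
       + 1.0 * np.exp(-((t - 0.025) ** 2) / (2 * 0.0015 ** 2))
       + 0.05 * rng.standard_normal(t.size))

def peak_to_peak(wave, t, neg_win, pos_win):
    """Peak-to-peak amplitude: positive peak minus negative peak (µV)."""
    neg = wave[(t >= neg_win[0]) & (t <= neg_win[1])].min()
    pos = wave[(t >= pos_win[0]) & (t <= pos_win[1])].max()
    return pos - neg

n20_p25 = peak_to_peak(sep, t, (0.017, 0.023), (0.022, 0.028))
print(f"N20-P25 amplitude: {n20_p25:.2f} µV")
```

Comparing such amplitudes before and after PES gives the modulation measures reported above.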
The idea of motor resonance was born when it was demonstrated that cortical and spinal pathways of the motor system are specifically activated during both action observation and execution. It is not known whether the human action observation-execution matching system simulates actions through motor representations specifically attuned to the laterality of the observed effector (i.e., effector-dependent representations) or through abstract motor representations unconnected to the observed effector (i.e., effector-independent representations). To answer that question, we need to know how the information necessary for motor resonance is represented or integrated within the representation of an effector. Transcranial magnetic stimulation (TMS)-induced motor evoked potentials (MEPs) were thus recorded from the dominant and non-dominant hands of left- and right-handed participants while they observed a left- or a right-handed model grasping an object. The anatomical correspondence between the observed effector and the observer’s effector classically reported in the literature was confirmed by the MEP response in the dominant hand of participants observing models with their same hand preference. This effect was found in left- as well as right-handers. When a broader spectrum of options, such as actions performed by a model with a different hand preference, was considered, that correspondence disappeared. Motor resonance was instead noted in the observer’s dominant effector regardless of the laterality of the hand being observed. This indicates that a more sophisticated mechanism works to convert someone else’s pattern of movement into the observer’s optimal motor commands, and that effector-independent representations specifically modulate motor resonance.
Schizophrenia patients exhibit deficits on visual processing tasks, including visual backward masking, and these impairments are related to deficits in higher-level processes. In the current study we used electroencephalography techniques to examine successive stages and pathways of visual processing in a specialized masking paradigm, four-dot masking, which involves masking by object substitution. Seventy-six schizophrenia patients and 66 healthy controls had event-related potentials (ERPs) recorded during four-dot masking. Target visibility was manipulated by changing stimulus onset asynchrony (SOA) between the target and mask, such that performance decreased with increasing SOA. Three SOAs were used: 0, 50, and 100 ms. The P100 and N100 perceptual ERPs were examined. Additionally, the visual awareness negativity (VAN) to correct vs. incorrect responses, an index of reentrant processing, was examined for SOAs 50 and 100 ms. Results showed that patients performed worse than controls on the behavioral task across all SOAs. The ERP results revealed that patients had significantly smaller P100 and N100 amplitudes, though there was no effect of SOA on either component in either group. In healthy controls, but not patients, N100 amplitude correlated significantly with behavioral performance at SOAs where masking occurred, such that higher accuracy correlated with a larger N100. Healthy controls, but not patients, exhibited a larger VAN to correct vs. incorrect responses. The results indicate that the N100 appears to be related to attentional effort in the task in controls, but not patients. Considering that the VAN is thought to reflect reentrant processing, one interpretation of the findings is that patients' lack of VAN response and poorer performance may be related to dysfunctional reentrant processing.
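The brain-behavior link reported for controls (larger N100, higher accuracy) is a standard across-participant correlation. A sketch with hypothetical per-participant values, where the group size, amplitudes, and effect size are all invented for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)

# Hypothetical per-participant values for a control-like group: N100
# amplitude (µV, negative-going) and masking-task accuracy, constructed
# so that a larger (more negative) N100 goes with higher accuracy.
n100 = rng.normal(-4.0, 1.0, 30)
accuracy = 0.6 - 0.05 * n100 + rng.normal(0, 0.03, 30)

r, p = pearsonr(n100, accuracy)
print(f"r = {r:.2f}, p = {p:.3g}")  # negative r: larger N100, higher accuracy
```

The patient result above corresponds to this correlation being absent (r near zero) despite measurable N100 amplitudes.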
- The Journal of neuroscience : the official journal of the Society for Neuroscience
Due to capacity limits on perception, conditions of high perceptual load lead to reduced processing of unattended stimuli (Lavie et al., 2014). Accumulating work demonstrates the effects of visual perceptual load on visual cortex responses, but the effects on auditory processing remain poorly understood. Here we establish the neural mechanisms underlying “inattentional deafness”-the failure to perceive auditory stimuli under high visual perceptual load. Participants performed a visual search task of low (target dissimilar to nontarget items) or high (target similar to nontarget items) load. On a random subset (50%) of trials, irrelevant tones were presented concurrently with the visual stimuli. Brain activity was recorded with magnetoencephalography, and time-locked responses to the visual search array and to the incidental presence of unattended tones were assessed. High, compared to low, perceptual load led to increased early visual evoked responses (within 100 ms from onset). This was accompanied by reduced early (∼100 ms from tone onset) auditory evoked activity in superior temporal sulcus and posterior middle temporal gyrus. A later suppression of the P3 “awareness” response to the tones was also observed under high load. A behavioral experiment revealed reduced tone detection sensitivity under high visual load, indicating that the reduction in neural responses was indeed associated with reduced awareness of the sounds. These findings support a neural account of shared audiovisual resources, which, when depleted under load, leads to failures of sensory perception and awareness.
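The behavioral measure of "reduced tone detection sensitivity" is typically the signal-detection statistic d′, computed from hit and false-alarm rates. A small worked sketch with invented trial counts (not the study's data):

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for tone detection under low vs. high visual load.
print(d_prime(45, 5, 5, 45))   # low load: high sensitivity
print(d_prime(30, 20, 8, 42))  # high load: lower sensitivity
```

A drop in d′ under high load, rather than a mere shift in response bias, is what licenses the "inattentional deafness" interpretation.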
The P300 event-related potential is a well-known pattern in the electroencephalogram (EEG). This kind of brain signal is used for many different brain-computer interface (BCI) applications, e.g., spellers, environmental controllers, web browsers, or painting. BCI systems are now mature enough to leave the laboratory and be used by end users, namely severely disabled people. Therefore, new challenges arise, and systems should be implemented and evaluated according to user-centered design (UCD) guidelines. We developed and implemented a new system that utilizes the P300 pattern to compose music. Our Brain Composing system consists of three parts: the EEG acquisition device, the P300-based BCI, and the music composing software. Seventeen musical participants and one professional composer performed a copy-spelling, a copy-composing, and a free-composing task with the system. Following the UCD guidelines, we investigated efficiency, effectiveness, and subjective criteria in terms of satisfaction, enjoyment, frustration, and attractiveness. The musical participants achieved high average accuracies: 88.24% (copy-spelling), 88.58% (copy-composing), and 76.51% (free-composing). The professional composer also achieved high accuracies: 100% (copy-spelling), 93.62% (copy-composing), and 98.20% (free-composing). Overall, the participants enjoyed using the Brain Composing system and were highly satisfied with it. These very positive results with healthy people represent a first step towards a music composing system for severely disabled people.
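P300-based selection rests on discriminating epochs that follow attended (target) flashes from all others; linear discriminant analysis is a common choice for this step, though the abstract does not specify the classifier used here. A sketch on synthetic epochs (epoch length, trial counts, and P300 shape are assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)

# Synthetic single-flash epochs, e.g. 0-400 ms at 250 Hz. Target epochs
# carry a P300-like positivity around sample 75 (~300 ms); nontargets
# contain only noise. All amplitudes are invented.
n_samples = 100
p300 = 3.0 * np.exp(-((np.arange(n_samples) - 75) ** 2) / (2 * 10 ** 2))

targets = p300 + rng.standard_normal((120, n_samples))
nontargets = rng.standard_normal((600, n_samples))
X = np.vstack([targets, nontargets])
y = np.array([1] * 120 + [0] * 600)   # 1 = attended flash

clf = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In a speller or composing grid, the symbol whose flashes yield the strongest target evidence is selected.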
We have developed an asynchronous brain-machine interface (BMI)-based lower limb exoskeleton control system based on steady-state visual evoked potentials (SSVEPs).
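SSVEP control of this kind hinges on identifying which flicker frequency dominates the occipital EEG. A minimal frequency-detection sketch using spectral power (the candidate frequencies, segment length, and command mapping are hypothetical; practical systems often use more robust methods such as canonical correlation analysis):

```python
import numpy as np

fs = 250                        # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)     # 4 s EEG segment
rng = np.random.default_rng(5)

# Synthetic occipital EEG while attending a 15 Hz flicker (e.g. a
# hypothetical "walk" command) among three candidate targets.
eeg = np.sin(2 * np.pi * 15 * t) + 2.0 * rng.standard_normal(t.size)

def ssvep_detect(signal, fs, candidates):
    """Pick the candidate stimulation frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in candidates]
    return candidates[int(np.argmax(powers))]

print(ssvep_detect(eeg, fs, [10, 12, 15]))  # -> 15
```

An asynchronous system additionally needs an idle-state check, issuing no command when no candidate frequency clearly dominates.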