Concept: Brain-computer interfacing
Direct brain control of overground walking in those with paraplegia due to spinal cord injury (SCI) has not been achieved. Invasive brain-computer interfaces (BCIs) may provide a permanent solution to this problem by directly linking the brain to lower extremity prostheses. To justify the pursuit of such invasive systems, the feasibility of BCI controlled overground walking should first be established in a noninvasive manner. To accomplish this goal, we developed an electroencephalogram (EEG)-based BCI to control a functional electrical stimulation (FES) system for overground walking and assessed its performance in an individual with paraplegia due to SCI.
Objective. An optimally functioning brain-computer interface (BCI) is found at the balanced intersection of human and machine adaptation. In this study, we report a novel experiment in which a BCI controls a robotic quadcopter in three-dimensional (3D) physical space using noninvasive scalp electroencephalogram (EEG) in human subjects. We then quantify the performance of this system using metrics suitable for asynchronous BCI. Lastly, we examine the impact that operating a real-world device has on subjects' control, in comparison to a 2D virtual cursor task. Approach. Five human subjects were trained to modulate their sensorimotor rhythms to control an AR Drone navigating a 3D physical space. Visual feedback was provided via a forward-facing camera on the hull of the drone. Main results. Individual subjects were able to accurately acquire up to 90.5% of all valid targets presented while travelling at an average straight-line speed of 0.69 m s⁻¹. Significance. Freely exploring and interacting with the world around us is a crucial element of autonomy that is lost in the context of neurodegenerative disease. Brain-computer interfaces are systems that aim to restore or enhance a user’s ability to interact with the environment via a computer, using thought alone. We demonstrate for the first time the ability to control a flying robot in 3D physical space using noninvasive scalp-recorded EEG in humans. Our work indicates the potential of noninvasive EEG-based BCI systems for accomplishing complex control in 3D physical space. The present study may serve as a framework for the investigation of multidimensional noninvasive BCI control in a physical environment using telepresence robotics.
Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain-computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g. visual or tactile), ERP component (e.g. P300 oddball response, or the error potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs, we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which makes detection performance independent of the choice of spectral low-pass filter frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a “best-practice” method for ERP detection problems.
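The recommended pipeline (spectral filtering, spatial whitening, regularised classification) can be sketched end to end. The following is a minimal illustration on synthetic epochs; all sizes, the filter cut-off and the ridge penalty are arbitrary choices for the sketch, not values taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import RidgeClassifier

rng = np.random.default_rng(0)

# Synthetic epochs: (n_trials, n_channels, n_samples) at 250 Hz.
fs, n_trials, n_ch, n_samp = 250, 200, 8, 150
X = rng.standard_normal((n_trials, n_ch, n_samp))
y = rng.integers(0, 2, n_trials)          # 1 = target ERP present
X[y == 1, :, 50:100] += 0.5               # crude "ERP" bump on target trials

# 1) Spectral filtering: zero-phase low-pass.
b, a = butter(4, 12 / (fs / 2), btype="low")
X = filtfilt(b, a, X, axis=-1)

# 2) Spatial whitening from the average per-trial channel covariance.
cov = np.mean([x @ x.T / n_samp for x in X], axis=0)
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-10)) @ evecs.T
X = np.einsum("ij,njt->nit", W, X)

# 3) Regularised linear classification on the flattened epochs.
clf = RidgeClassifier(alpha=10.0).fit(X.reshape(n_trials, -1), y)
print("training accuracy:", clf.score(X.reshape(n_trials, -1), y))
```

Because whitening equalises variance across spatial components before the regularised classifier sees the data, performance becomes insensitive to the exact low-pass cut-off, which is the interaction the abstract highlights.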
Objective. Sensorimotor rhythms (SMRs) are 8-30 Hz oscillations in the electroencephalogram (EEG) recorded from the scalp over sensorimotor cortex that change with movement and/or movement imagery. Many brain-computer interface (BCI) studies have shown that people can learn to control SMR amplitudes and can use that control to move cursors and other objects in one, two or three dimensions. At the same time, if SMR-based BCIs are to be useful for people with neuromuscular disabilities, their accuracy and reliability must be improved substantially. These BCIs often use spatial filtering methods such as the common average reference (CAR), the Laplacian (LAP) filter or the common spatial pattern (CSP) filter to enhance the signal-to-noise ratio of the EEG. Here, we test the hypothesis that a new filter design, called an ‘adaptive Laplacian (ALAP) filter’, can provide better performance for SMR-based BCIs. Approach. An ALAP filter employs a Gaussian kernel to construct a smooth spatial gradient of channel weights and then simultaneously seeks the optimal kernel radius of this spatial filter and the regularization parameter of linear ridge regression. This optimization is based on minimizing the leave-one-out cross-validation error through a gradient descent method and is computationally feasible. Main results. Using a variety of BCI data from a total of 22 individuals, we compare the performance of the ALAP filter to the CAR, small LAP, large LAP and CSP filters. With a large number of channels and limited data, ALAP performs significantly better than CSP, CAR, small LAP and large LAP both in classification accuracy and in mean-squared error. Using fewer channels restricted to motor areas, ALAP is still superior to CAR, small LAP and large LAP, but equally matched to CSP. Significance. Thus, ALAP may help to improve the accuracy and robustness of SMR-based BCIs.
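The core of a Gaussian-kernel Laplacian filter is easy to sketch: the centre electrode gets weight +1 and the surrounding electrodes get negative weights that follow a Gaussian of their distance, so that all weights sum to zero. In the actual ALAP method the kernel radius is optimised jointly with the ridge-regression penalty by minimising leave-one-out error; in this sketch the radius is simply fixed, and the electrode layout is hypothetical:

```python
import numpy as np

def gaussian_laplacian_weights(positions, center_idx, radius):
    """Laplacian-style spatial filter: +1 at the centre electrode,
    surround weights proportional to a Gaussian of distance, summing to -1,
    so the full weight vector sums to zero."""
    d = np.linalg.norm(positions - positions[center_idx], axis=1)
    g = np.exp(-d**2 / (2 * radius**2))
    g[center_idx] = 0.0                    # exclude the centre itself
    w = -g / g.sum()                       # surround weights sum to -1
    w[center_idx] = 1.0
    return w

# Hypothetical 2-D electrode layout around a centre channel (arbitrary units).
pos = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1], [2, 0]], float)

w_small = gaussian_laplacian_weights(pos, 0, radius=0.5)  # "small LAP"-like
w_large = gaussian_laplacian_weights(pos, 0, radius=2.0)  # "large LAP"-like
print(w_small.round(3))
print(w_large.round(3))
```

A small radius concentrates the negative surround on the nearest neighbours, while a large radius spreads it broadly; ALAP's contribution is letting the data pick this radius automatically.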
Embedded Grey Relation Theory in Hopfield Neural Network: Application to Motor Imagery EEG Recognition
- Clinical EEG and Neuroscience: Official Journal of the EEG and Clinical Neuroscience Society (ECNS)
- Published about 8 years ago
In this study, a grey-based Hopfield neural network (GHNN) is proposed for the unsupervised analysis of motor imagery (MI) electroencephalogram (EEG) data. Combined with segment selection and feature extraction, the GHNN is used for the recognition of left and right MI data. A Gaussian-like filter is proposed to reduce noise and further enhance the performance of active segment selection. Features are extracted by coherence from wavelet data and then discriminated by the GHNN, an unsupervised approach suitable for the online classification of nonstationary biomedical signals. Compared with EEG data without segment selection and with several common features and classifiers, the proposed system is a potentially useful analytic approach for brain-computer interface (BCI) applications.
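The "grey" ingredient is standard grey relational analysis, which scores how closely comparison sequences track a reference sequence. A minimal sketch of the grey relational coefficient and grade follows; how these grades are embedded into the Hopfield network's dynamics is specific to the paper and not reproduced here:

```python
import numpy as np

def grey_relational_grade(reference, comparisons, zeta=0.5):
    """Standard grey relational analysis: coefficient
    gamma_i(k) = (d_min + zeta*d_max) / (delta_i(k) + zeta*d_max),
    averaged over k to give one grade per comparison sequence."""
    ref = np.asarray(reference, float)
    cmp_ = np.asarray(comparisons, float)
    delta = np.abs(cmp_ - ref)             # pointwise deviations
    d_min, d_max = delta.min(), delta.max()
    gamma = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return gamma.mean(axis=1)              # grade in (0, 1]; 1 = identical

ref = [0.2, 0.5, 0.8, 0.4]
grades = grey_relational_grade(ref, [[0.2, 0.5, 0.8, 0.4],   # identical
                                     [0.9, 0.1, 0.3, 0.7]]) # dissimilar
print(grades)
```

A sequence identical to the reference receives grade 1.0 by construction; less similar sequences receive lower grades, which can then serve as similarity scores for unsupervised class assignment.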
Recently, brain-computer interfaces (BCIs) based on visual evoked potentials (VEPs) have been shown to achieve remarkable communication speeds. They use electroencephalography (EEG) as a non-invasive method for recording neural signals, but the application of gel-based EEG electrodes is time-consuming and cumbersome. In order to achieve a more user-friendly system, this work explores the usability of dry EEG electrodes with a VEP-based BCI. While the results show high variability between subjects, they also show that communication speeds of more than 100 bit/min are possible using dry EEG electrodes. To reduce performance variability and deal with the lower signal-to-noise ratio of the dry EEG electrodes, an averaging method and a dynamic stopping method were introduced to the BCI system. These changes significantly improved performance, leading to an average classification accuracy of 76% with an average communication speed of 46 bit/min, which is equivalent to a writing speed of 8.8 error-free letters per minute. Although the BCI system works substantially better with gel-based EEG, dry EEG electrodes are more user-friendly and still allow high-speed BCI communication.
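Communication speeds in bit/min are conventionally computed with Wolpaw's information-transfer-rate formula from the number of selectable targets, the classification accuracy and the time per selection. A sketch, using an illustrative 32-target speller rather than the exact configuration of this work:

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Bits per minute via Wolpaw's formula:
    B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    then scaled from bits per selection to bits per minute."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_seconds

# Illustrative numbers only: a 32-target speller at 76% accuracy,
# 2 s per selection.
print(round(wolpaw_itr(32, 0.76, 2.0), 1), "bit/min")
```

The formula shows why both accuracy and selection time matter: halving the trial length doubles the rate only if accuracy holds up, which is exactly the trade-off the dynamic stopping method exploits.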
Electroencephalography (EEG) allows the study of the brain-behavior relationship in humans. Most of what we have learned with EEG was through observing the brain-behavior relationship under well-controlled laboratory conditions. However, reducing “normal” behavior to a minimum can limit the ecological validity of the results. Recent developments toward mobile EEG solutions make it possible to study the brain-behavior relationship outside the laboratory in more natural situations. Besides mobility and robustness with respect to motion, mobile EEG systems should also interfere as little as possible with the participant’s behavior. For example, natural interaction with other people could be hindered when it is obvious that a participant wears an EEG cap. This study evaluates the signal quality obtained with an unobtrusive solution for EEG monitoring through the integration of miniaturized EEG electrodes into both a discreet baseball cap and an individualized earpiece. We show that such miniature electrodes located at scalp and ear locations can reliably record event-related potentials in a P300 brain-computer interface application.
Despite technical advances in brain-machine interfaces (BMIs), for as-yet unknown reasons the ability to control a BMI remains limited to a subset of users. We investigate whether individual differences in BMI control based on motor imagery (MI) are related to differences in MI ability. We assessed whether differences in kinesthetic and visual MI, in the behavioral accuracy of MI, and in electroencephalographic variables were able to differentiate high- from low-aptitude BMI users. High-aptitude BMI users showed higher MI accuracy as captured by subjective and behavioral measurements, pointing to a prominent role of kinesthetic rather than visual imagery. Additionally, for the first time, we applied mental chronometry, a measure quantifying the degree to which imagined and executed movements share a similar temporal profile. We also identified enhanced lateralized μ-band oscillations over sensorimotor cortices during MI in high- versus low-aptitude BMI users. These findings reveal that subjective, behavioral, and EEG measurements of MI are intimately linked to BMI control. We propose that poor BMI control cannot be ascribed only to intrinsic limitations of EEG recordings and that specific questionnaires and mental chronometry can be used as predictors of BMI performance (without the need to record EEG activity).
Background. Previous work has demonstrated that a commercial gaming electroencephalography (EEG) system, Emotiv EPOC, can be adjusted to provide valid auditory event-related potentials (ERPs) in adults that are comparable to ERPs recorded by a research-grade EEG system, Neuroscan. The aim of the current study was to determine if the same was true for children. Method. An adapted Emotiv EPOC system and Neuroscan system were used to make simultaneous EEG recordings in nineteen 6- to 12-year-old children under “passive” and “active” listening conditions. In the passive condition, children were instructed to watch a silent DVD and ignore 566 standard (1,000 Hz) and 100 deviant (1,200 Hz) tones. In the active condition, they listened to the same stimuli, and were asked to count the number of ‘high’ (i.e., deviant) tones. Results. Intraclass correlations (ICCs) indicated that the ERP morphology recorded with the two systems was very similar for the P1, N1, P2, N2, and P3 ERP peaks (r = .82 to .95) in both passive and active conditions, and less so, though still strong, for mismatch negativity ERP component (MMN; r = .67 to .74). There were few differences between peak amplitude and latency estimates for the two systems. Conclusions. An adapted EPOC EEG system can be used to index children’s late auditory ERP peaks (i.e., P1, N1, P2, N2, P3) and their MMN ERP component.
Brain-Computer Interfaces (BCIs) allow users to control devices and communicate using brain activity alone. BCIs based on broad-band visual stimulation can outperform BCIs using other stimulation paradigms. Visual stimulation with pseudo-random bit sequences evokes specific Broad-Band Visually Evoked Potentials (BBVEPs) that can be reliably used in BCI for high-speed communication in speller applications. Here, we report a novel paradigm for a BBVEP-based BCI that utilizes a generative framework to predict responses to broad-band stimulation sequences. We designed a BBVEP-based BCI using modulated Gold codes to mark cells in a visual speller BCI. We defined a linear generative model that decomposes full responses into overlapping single-flash responses. These single-flash responses are used to predict responses to novel stimulation sequences, which in turn serve as templates for classification. The linear generative model explains on average 50% and up to 66% of the variance of responses to both seen and unseen sequences. In an online experiment, 12 participants tested a 6 × 6 matrix speller BCI. On average, an online accuracy of 86% was reached with trial lengths of 3.21 seconds. This corresponds to an Information Transfer Rate of 48 bits per minute (approximately 9 symbols per minute). This study indicates the potential to model and predict responses to broad-band stimulation. The predicted responses are shown to be well-suited as templates for a BBVEP-based BCI, thereby enabling communication and control by brain activity alone.
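The generative model treats a full response as the linear superposition of overlapping single-flash responses, which amounts to convolving the binary stimulation sequence with a single-flash template. A toy sketch with a hypothetical template and random codes (not actual Gold codes) shows how predicted responses can then serve as classification templates:

```python
import numpy as np

rng = np.random.default_rng(1)

flash_resp = np.hanning(18)                # hypothetical single-flash template

def predict_response(code, template):
    """Linear superposition of overlapping single-flash responses,
    i.e. convolution of the binary stimulation sequence with the template."""
    return np.convolve(code, template)[: len(code)]

# Two pseudo-random stimulation sequences standing in for Gold codes.
code_a = rng.integers(0, 2, 126).astype(float)
code_b = rng.integers(0, 2, 126).astype(float)

# Simulate a noisy "measured" response to code_a, then classify it by
# correlating against the predicted templates for each candidate code.
measured = predict_response(code_a, flash_resp) \
    + 0.3 * rng.standard_normal(126)
scores = [np.corrcoef(measured, predict_response(c, flash_resp))[0, 1]
          for c in (code_a, code_b)]
print("chosen code index:", int(np.argmax(scores)))  # should pick 0 (code_a)
```

In the actual system the single-flash template is learned from training data rather than assumed, which is what lets the model predict responses to sequences the user has never seen.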