Journal: Journal of Neural Engineering
Research in the area of transcranial electrical stimulation (TES) often relies on computational models of current flow in the brain. Models are built based on magnetic resonance images (MRI) of the human head to capture detailed individual anatomy. To simulate current flow on an individual, the subject’s MRI is segmented, virtual electrodes are placed on this anatomical model, the volume is tessellated into a mesh, and a finite element model (FEM) is solved numerically to estimate the current flow. Various software tools are available for each of these steps, as well as processing pipelines that connect these tools for automated or semi-automated processing. The goal of the present tool – ROAST – is to provide an end-to-end pipeline that can automatically process individual heads with realistic volumetric anatomy, leveraging open-source software and custom scripts to improve segmentation and execute electrode placement. Approach: ROAST combines the segmentation algorithm of SPM8, a Matlab script for touch-up and automatic electrode placement, the finite element mesher iso2mesh, and the solver getDP. We compared its performance with commercial FEM software and with SimNIBS, a well-established open-source modeling pipeline. Main Results: The electric fields estimated with ROAST differ little from the results obtained with commercial meshing and solving software. We also do not find large differences between the various automated segmentation methods used by ROAST and SimNIBS. We do find larger differences when volumetric segmentations are converted into surfaces in SimNIBS. However, evaluation on intracranial recordings from human subjects suggests that ROAST and SimNIBS are not significantly different in predicting field distribution, provided that users have detailed knowledge of SimNIBS. Significance: We hope that the detailed comparisons presented here of various choices in this modeling pipeline can provide guidance for future tool development.
We released ROAST as an open-source, easy-to-install and fully-automated pipeline for individualized TES modeling.
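The cross-pipeline comparison described above ultimately reduces to comparing electric-field estimates evaluated at the same brain locations. A minimal sketch of two plausible comparison metrics, using entirely synthetic field data (the metric names and numbers are illustrative assumptions, not the paper's exact methodology):

```python
import numpy as np

# Hypothetical comparison of electric-field estimates from two modeling
# pipelines (e.g. two meshers/solvers), sampled at the same brain voxels.
# All data here are synthetic and for illustration only.
rng = np.random.default_rng(0)
e_pipeline_a = rng.normal(0.2, 0.05, size=(1000, 3))                 # V/m field vectors
e_pipeline_b = e_pipeline_a + rng.normal(0, 0.005, size=(1000, 3))   # slightly perturbed

def relative_difference(a, b):
    """Relative L2 difference between two vector fields."""
    return np.linalg.norm(a - b) / np.linalg.norm(a)

def magnitude_correlation(a, b):
    """Pearson correlation between per-voxel field magnitudes."""
    ma, mb = np.linalg.norm(a, axis=1), np.linalg.norm(b, axis=1)
    return np.corrcoef(ma, mb)[0, 1]

print(f"relative difference: {relative_difference(e_pipeline_a, e_pipeline_b):.3f}")
print(f"magnitude correlation: {magnitude_correlation(e_pipeline_a, e_pipeline_b):.3f}")
```

Small relative differences and high magnitude correlations would correspond to the "differ little" finding reported above.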
Objective. In a previous study we demonstrated continuous translation, orientation and one-dimensional grasping control of a prosthetic limb (seven degrees of freedom) by a human subject with tetraplegia using a brain-machine interface (BMI). The current study, in the same subject, immediately followed the previous work and expanded the scope of the control signal by also extracting hand-shape commands from the two 96-channel intracortical electrode arrays implanted in the subject’s left motor cortex. Approach. Four new control signals, dictating prosthetic hand shape, replaced the one-dimensional grasping in the previous study, allowing the subject to control the prosthetic limb with ten degrees of freedom (three-dimensional (3D) translation, 3D orientation, four-dimensional hand shaping) simultaneously. Main results. Robust neural tuning to hand shaping was found, leading to ten-dimensional (10D) performance well above chance levels in all tests. Neural unit preferred directions were broadly distributed through the 10D space, with the majority of units significantly tuned to all ten dimensions, instead of being restricted to isolated domains (e.g. translation, orientation or hand shape). The addition of hand shaping emphasized object-interaction behavior. A fundamental component of BMIs is the calibration used to associate neural activity with intended movement. We found that the presence of an object during calibration enhanced successful shaping of the prosthetic hand as it closed around the object during grasping. Significance. Our results show that individual motor cortical neurons encode many parameters of movement, that object interaction is an important factor when extracting these signals, and that high-dimensional operation of prosthetic devices can be achieved with simple decoding algorithms. ClinicalTrials.gov Identifier: NCT01364480.
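The "simple decoding algorithms" referred to above are typically linear mappings from unit firing rates to a velocity command. A minimal sketch of that idea for a 10D control space, using a simulated linear tuning model (the unit count, tuning model and noiseless setting are assumptions for illustration, not the study's actual decoder):

```python
import numpy as np

# Sketch of a linear decoder: firing rate f = b0 + B @ v, decoded by
# least squares. All tuning parameters are simulated.
rng = np.random.default_rng(1)
n_units, n_dims = 96, 10                  # e.g. one array, 10 control dimensions
B = rng.normal(size=(n_units, n_dims))    # per-unit preferred directions in 10D
b0 = rng.uniform(5, 20, size=n_units)     # baseline firing rates (Hz)

def encode(v):
    """Simulated firing rates for an intended 10D velocity v."""
    return b0 + B @ v

def decode(f):
    """Least-squares estimate of intended velocity from firing rates."""
    return np.linalg.pinv(B) @ (f - b0)

v_true = rng.normal(size=n_dims)
v_hat = decode(encode(v_true))
print(np.allclose(v_true, v_hat))         # exact recovery in the noiseless case
```

With more units than control dimensions and broadly distributed preferred directions, as reported above, the pseudoinverse recovers the intended command; real decoders additionally contend with noise and nonstationarity.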
We used native sensorimotor representations of fingers in a brain-machine interface (BMI) to achieve immediate online control of individual prosthetic fingers.
Objective. Neural interface technology suitable for clinical translation has the potential to significantly impact the lives of amputees, spinal cord injury victims and those living with severe neuromotor disease. Such systems must be chronically safe, durable and effective. Approach. We have designed and implemented a neural interface microsystem, housed in a compact, subcutaneous and hermetically sealed titanium enclosure. The implanted device interfaces the brain with a 510k-approved, 100-element silicon-based microelectrode array via a custom hermetic feedthrough design. Full spectrum neural signals were amplified (0.1 Hz to 7.8 kHz, 200× gain) and multiplexed by a custom application specific integrated circuit, digitized and then packaged for transmission. The neural data (24 Mbps) were transmitted by a wireless data link carried on a frequency-shift-key-modulated signal at 3.2 and 3.8 GHz to a receiver 1 m away by design as a point-to-point communication link for human clinical use. The system was powered by an embedded medical grade rechargeable Li-ion battery for 7 h of continuous operation between recharges via an inductive transcutaneous wireless power link at 2 MHz. Main results. Device verification and early validation were performed in both swine and non-human primate freely-moving animal models and showed that the wireless implant was electrically stable, effective in capturing and delivering broadband neural data, and safe for over one year of testing. In addition, we have used the multichannel data from these mobile animal models to demonstrate the ability to decode neural population dynamics associated with motor activity. Significance. We have developed an implanted wireless broadband neural recording device evaluated in non-human primates and swine. The use of this new implantable neural interface technology can provide insight into how to advance human neuroprostheses beyond the present early clinical trials.
Further, such tools enable mobile patient use, have the potential for wider diagnosis of neurological conditions and will advance brain research.
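The 24 Mbps figure quoted above is consistent with a simple back-of-the-envelope accounting. The sampling rate and bit depth below are assumptions (the abstract states only the aggregate rate and the 7.8 kHz analog bandwidth), chosen so the arithmetic is easy to check:

```python
# Back-of-the-envelope check of a 24 Mbps aggregate data rate, assuming
# (not stated in the abstract) 20 kS/s sampling and 12-bit samples.
n_channels = 100
sample_rate_hz = 20_000        # comfortably above 2 x 7.8 kHz analog bandwidth
bits_per_sample = 12
raw_rate_bps = n_channels * sample_rate_hz * bits_per_sample
print(f"{raw_rate_bps / 1e6:.0f} Mbps")   # -> 24 Mbps
```

Any consistent combination of sampling rate and word length multiplying to 240 kbit/s per channel would give the same total.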
People who suffer from hearing impairments can find it difficult to follow a conversation in a multi-speaker environment. Current hearing aids can suppress background noise; however, there is little that can be done to help a user attend to a single conversation amongst many without knowing which speaker the user is attending to. Cognitively controlled hearing aids that use auditory attention decoding (AAD) methods are the next step in offering help. Translating the successes in AAD research to real-world applications poses a number of challenges, including the lack of access to the clean sound sources in the environment with which to compare with the neural signals. We propose a novel framework that combines single-channel speech separation algorithms with AAD.
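The decision stage of auditory attention decoding typically compares an envelope reconstructed from the neural signals against the envelopes of the candidate (here, algorithmically separated) sources and selects the best match. A toy sketch of that final selection step, with synthetic signals standing in for the real separation and reconstruction algorithms:

```python
import numpy as np

# Toy AAD decision stage: correlate a neurally reconstructed envelope
# against separated-source envelopes and pick the best match.
# All signals are synthetic stand-ins.
rng = np.random.default_rng(2)
n_samples = 4000
env_speaker_a = np.abs(rng.normal(size=n_samples))    # separated source 1
env_speaker_b = np.abs(rng.normal(size=n_samples))    # separated source 2
# pretend the neural reconstruction is a noisy copy of the attended source
reconstructed = env_speaker_a + 0.8 * rng.normal(size=n_samples)

def select_attended(reconstruction, candidates):
    """Return the index of the candidate envelope most correlated with
    the reconstructed envelope, plus all correlation scores."""
    scores = [np.corrcoef(reconstruction, c)[0, 1] for c in candidates]
    return int(np.argmax(scores)), scores

idx, scores = select_attended(reconstructed, [env_speaker_a, env_speaker_b])
print(f"attended speaker index: {idx}")
```

The practical challenge the abstract raises is that the clean candidate envelopes are not available in the real world, which is why the framework pairs this step with single-channel speech separation.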
To study electrical stimulation of the lacrimal gland and afferent nerves for enhanced tear secretion, as a potential treatment for dry eye disease. We investigate the response pathways and electrical parameters to safely maximize tear secretion.
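Establishing safe electrical parameters, as the abstract sets out to do, usually involves tracking charge per phase and charge density at the electrode. A minimal bookkeeping sketch; the amplitude, pulse width and electrode area below are illustrative assumptions, not the study's values:

```python
# Illustrative stimulation-safety bookkeeping: charge per phase and
# charge density for a biphasic pulse. All parameter values are assumed.
amplitude_ma = 1.0            # current amplitude (mA)
pulse_width_us = 200.0        # per-phase pulse width (us)
electrode_area_cm2 = 0.01     # geometric electrode area (cm^2)

charge_per_phase_uc = amplitude_ma * 1e-3 * pulse_width_us * 1e-6 * 1e6   # uC
charge_density_uc_cm2 = charge_per_phase_uc / electrode_area_cm2
print(f"charge/phase: {charge_per_phase_uc:.2f} uC, "
      f"density: {charge_density_uc_cm2:.0f} uC/cm^2")
```

Parameter sweeps of this kind let the computed charge density be compared against established electrochemical safety limits for the electrode material.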
Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real-time, which is necessary for neuroprosthetic brain-computer interfaces.
Objective. Memory loss is a major problem in human disease and is the primary factor that defines Alzheimer’s, ageing and dementia resulting from impaired hippocampal function in the medial temporal lobe. Development of a hippocampal memory neuroprosthesis that facilitates normal memory encoding in nonhuman primates (NHPs) could provide the basis for improving memory in human disease states. Approach. NHPs trained to perform a short-term delayed match-to-sample (DMS) memory task were examined with multi-neuron recordings from synaptically connected hippocampal cell fields, CA1 and CA3. Recordings were analyzed utilizing a previously developed nonlinear multi-input multi-output (MIMO) neuroprosthetic model, capable of extracting CA3-to-CA1 spatiotemporal firing patterns during DMS performance. Main results. The MIMO model verified that specific CA3-to-CA1 firing patterns were critical for the successful encoding of sample phase information on more difficult DMS trials. This was validated by the delivery of successful MIMO-derived encoding patterns via electrical stimulation to the same CA1 recording locations during the sample phase, which facilitated task performance in the subsequent, delayed match phase on difficult trials that required more precise encoding of sample information. Significance. These findings provide the first successful application of a neuroprosthesis designed to enhance and/or repair memory encoding in primate brain.
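The multi-input multi-output structure referred to above maps lagged activity of one neural population (CA3) onto another (CA1). The actual model in the study is nonlinear; the sketch below only illustrates the MIMO idea with a linear least-squares fit on simulated spike-count data (all dimensions and signals are assumptions):

```python
import numpy as np

# Toy linear MIMO sketch: predict CA1 activity from lagged CA3 activity.
# The study's model is nonlinear; this shows only the MIMO structure.
rng = np.random.default_rng(3)
n_bins, n_ca3, n_ca1, n_lags = 500, 8, 4, 5
ca3 = rng.poisson(2.0, size=(n_bins, n_ca3)).astype(float)   # simulated counts

def lagged_design(x, n_lags):
    """Stack n_lags delayed copies of each input column."""
    cols = [np.roll(x, lag, axis=0) for lag in range(n_lags)]
    X = np.hstack(cols)
    X[:n_lags] = 0.0   # zero out rows contaminated by wrap-around
    return X

X = lagged_design(ca3, n_lags)
W_true = rng.normal(0, 0.1, size=(X.shape[1], n_ca1))        # ground-truth weights
ca1 = X @ W_true + 0.01 * rng.normal(size=(n_bins, n_ca1))   # simulated CA1 output

W_hat, *_ = np.linalg.lstsq(X, ca1, rcond=None)
print(f"weight recovery error: {np.linalg.norm(W_hat - W_true):.3f}")
```

The fitted weights play the role of the extracted CA3-to-CA1 spatiotemporal patterns; in the study, successful patterns were then re-delivered by stimulation.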
Here, our objective was to develop a binary decoder to detect task engagement in humans during two distinct, conflict-based behavioral tasks. Effortful, goal-directed decision-making requires the coordinated action of multiple cognitive processes, including attention, working memory and action selection. That type of mental effort is often dysfunctional in mental disorders, e.g. when a patient attempts to overcome a depression- or anxiety-driven habit but feels unable. If the onset of engagement in this type of focused mental activity could be reliably detected, decisional function might be augmented, e.g. through neurostimulation. However, there are no known algorithms for detecting task engagement with rapid time resolution.
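A binary engagement decoder of the kind described above can be as simple as logistic regression on neural features (e.g. band power). A self-contained sketch on simulated features, trained with plain gradient descent (the feature model and all numbers are illustrative assumptions):

```python
import numpy as np

# Minimal binary task-engagement detector: logistic regression on
# simulated neural power features, trained by gradient descent.
rng = np.random.default_rng(4)
n_trials, n_features = 400, 6
X = rng.normal(size=(n_trials, n_features))            # simulated features
w_true = rng.normal(size=n_features)
y = (X @ w_true + 0.3 * rng.normal(size=n_trials) > 0).astype(float)  # engaged?

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(n_features)
for _ in range(2000):                                  # plain gradient descent
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n_trials              # NLL gradient step

accuracy = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
print(f"training accuracy: {accuracy:.2f}")
```

For the rapid time resolution the abstract calls for, such a classifier would be applied to features computed over short sliding windows.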
Direct synthesis of speech from neural signals could provide a fast and natural way of communication to people with neurological diseases. Invasively-measured brain activity (electrocorticography; ECoG) supplies the necessary temporal and spatial resolution to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the complex dynamics are still not fully understood; in particular, it is unlikely that simple linear models can capture the relation between neural activity and continuous spoken speech. Approach. Here we show that deep neural networks can be used to map ECoG from speech production areas onto an intermediate representation of speech (logMel spectrogram). The proposed method uses a densely connected convolutional neural network topology which is well-suited to work with the small amount of data available from each participant. Main results. In a study with six participants, we achieved correlations up to r=0.69 between the reconstructed and original logMel spectrograms. We transferred our prediction back into an audible waveform by applying a Wavenet vocoder. The vocoder was conditioned on logMel features that harnessed a much larger, pre-existing data corpus to provide the most natural acoustic output. Significance. To the best of our knowledge, this is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks.
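The logMel spectrogram used above as the intermediate representation is a standard transform: a short-time magnitude spectrogram passed through a triangular mel filterbank, then log-compressed. A self-contained NumPy sketch; the frame size, hop and filter count are common defaults assumed here, not necessarily the study's settings:

```python
import numpy as np

# Sketch of logMel spectrogram extraction from a waveform.
# Frame/hop sizes and mel count are assumed defaults.
def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def logmel(signal, sr=16000, n_fft=512, hop=160, n_mels=40):
    # short-time magnitude spectrogram (Hann-windowed frames)
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    # triangular mel filterbank
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising slope
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling slope
    return np.log(spec @ fb.T + 1e-10)                       # log compression

sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)    # 1 s, 440 Hz test tone
M = logmel(wave)
print(M.shape)                        # (n_frames, n_mels)
```

A decoder predicts frames of this representation from ECoG, and a vocoder such as Wavenet then inverts the (lossy) transform back to a waveform.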