Concept: Virtual reality
Brain-machine interfaces (BMIs) provide a new assistive strategy aimed at restoring mobility in severely paralyzed patients. Yet, no study in animals or in human subjects has indicated that long-term BMI training could induce any type of clinical recovery. Eight chronic (3-13 years) spinal cord injury (SCI) paraplegics were subjected to long-term training (12 months) with a multi-stage BMI-based gait neurorehabilitation paradigm aimed at restoring locomotion. This paradigm combined intense immersive virtual reality training, enriched visual-tactile feedback, and walking with two EEG-controlled robotic actuators, including a custom-designed lower limb exoskeleton capable of delivering tactile feedback to subjects. Following 12 months of training with this paradigm, all eight patients experienced neurological improvements in somatic sensation (pain localization, fine/crude touch, and proprioceptive sensing) in multiple dermatomes. Patients also regained voluntary motor control in key muscles below the SCI level, as measured by EMGs, resulting in marked improvement in their walking index. As a result, 50% of these patients were upgraded to an incomplete paraplegia classification. Neurological recovery was paralleled by the reemergence of lower limb motor imagery at cortical level. We hypothesize that this unprecedented neurological recovery results from both cortical and spinal cord plasticity triggered by long-term BMI usage.
Past research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.
Improvements in software and design and reduction in cost have made virtual reality (VR) a practical tool for immersive, three-dimensional (3D), multisensory experiences that distract patients from painful stimuli.
Prior studies have shown that spatial cognition is influenced by stress experienced before a task. The current study investigated the effects of real-time acute stress on allocentric and egocentric spatial processing. A virtual reality-based spatial reference rule learning (SRRL) task was designed in which participants were instructed to make a location selection by walking to one of three poles situated around a tower. A selection was reinforced by either an egocentric spatial reference rule (leftmost or rightmost pole relative to the participant) or an allocentric spatial reference rule (nearest or farthest pole relative to the tower). In Experiment 1, 32 participants (16 males, 16 females; aged 18 to 27) performed the SRRL task in a normal virtual reality environment (VRE). The hit rates and rule acquisition revealed no difference between allocentric and egocentric spatial reference rule learning. In Experiment 2, 64 participants (32 males, 34 females; aged 19 to 30) performed the SRRL task in both a low-stress VRE (a mini virtual arena) and a high-stress VRE (the mini virtual arena with a fire disaster). Allocentric references facilitated learning in the high-stress VRE. The results suggest that acute stress facilitates allocentric spatial processing.
Recent studies have shown that playing prosocial video games leads to greater subsequent prosocial behavior in the real world. However, immersive virtual reality allows people to occupy avatars that are different from them in a perceptually realistic manner. We examine how occupying an avatar with the superhero ability to fly increases helping behavior.
Electronic skins equipped with artificial receptors are able to extend our perception beyond the modalities that have naturally evolved. These synthetic receptors offer complementary information about our surroundings and endow us with novel means of manipulating physical or even virtual objects. We realize highly compliant magnetosensitive skins with directional perception that enable magnetic cognition, body position tracking, and touchless object manipulation. Transfer printing of eight high-performance spin valve sensors arranged into two Wheatstone bridges onto 1.7-μm-thick polyimide foils ensures mechanical imperceptibility. This represents a new class of interactive devices that extract information from the surroundings through magnetic tags. We demonstrate this concept in augmented reality systems with virtual knob-turning functions and the operation of virtual dialing pads, based on the interaction with magnetic fields. This technology will enable a cornucopia of applications, from navigation, motion tracking in robotics, regenerative medicine, and sports and gaming to interaction in supplemented reality.
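The directional readout of such a sensor arrangement can be illustrated with a generic Wheatstone-bridge model: the differential output voltage is proportional to the field-induced resistance imbalance between the magnetoresistive arms, and its sign encodes the field direction. The sketch below is an illustration only; the supply voltage, resistance values, and 1% magnetoresistance change are assumptions, not parameters from the paper.

```python
def bridge_output(v_in, r1, r2, r3, r4):
    """Differential output of a Wheatstone bridge:
    V_out = V_in * (R2 / (R1 + R2) - R4 / (R3 + R4))."""
    return v_in * (r2 / (r1 + r2) - r4 / (r3 + r4))

# Balanced bridge: equal resistances give zero output.
zero = bridge_output(1.0, 1000, 1000, 1000, 1000)

# A field that raises one arm and lowers the opposite arm by 1%
# (spin-valve magnetoresistance) unbalances the bridge; the sign of
# the output voltage flips with the field direction.
positive = bridge_output(1.0, 1000, 1010, 1000, 990)
negative = bridge_output(1.0, 1000, 990, 1000, 1010)
```

Arranging the sensors in opposing bridge arms in this way also cancels common-mode drifts (e.g. temperature), which is why bridge configurations are the standard readout for resistive sensors.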
Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes
- Proceedings of the National Academy of Sciences of the United States of America
An illusory sensation of ownership over a surrogate limb or whole body can be induced through specific forms of multisensory stimulation, such as synchronous visuotactile tapping on the hidden real hand and the visible rubber hand in the rubber hand illusion. Such methods have been used to induce ownership over a manikin and over a virtual body that substitutes for the real body, as seen from a first-person perspective through a head-mounted display. However, the perceptual and behavioral consequences of such transformed body ownership have hardly been explored. In Exp. 1, immersive virtual reality was used to embody 30 adults as a 4-y-old child (condition C) and as an adult body scaled to the same height as the child (condition A), experienced from the first-person perspective and with virtual and real body movements synchronized. The result was a strong body-ownership illusion, equally for C and A. Moreover, there was an overestimation of the sizes of objects compared with a nonembodied baseline, which was significantly greater for C than for A. An implicit association test showed that C resulted in significantly faster reaction times for the classification of self with child-like compared with adult-like attributes. Exp. 2, with an additional 16 participants, extinguished the ownership illusion by using visuomotor asynchrony, with all else equal. The size-estimation and implicit association test differences between C and A were also extinguished. We conclude that there are perceptual and probably behavioral correlates of body-ownership illusions that occur as a function of the type of body in which embodiment occurs.
Immersive virtual reality can be used to visually substitute a person's real body with a life-sized virtual body (VB) seen from a first-person perspective. Using real-time motion capture, the VB can be programmed to move synchronously with the real body (visuomotor synchrony), and virtual objects seen to strike the VB can be felt through corresponding vibrotactile stimulation on the actual body (visuotactile synchrony). This setup typically gives rise to a strong perceptual illusion of ownership over the VB. When the viewpoint is lifted up and out of the VB so that the VB is seen below, this may result in an out-of-body experience (OBE). In a two-factor between-groups experiment with 16 female participants per group, we tested how fear of death might be influenced by two different methods for producing an OBE. In an initial embodiment phase, where both groups experienced the same multisensory stimuli, there was a strong feeling of body ownership. Then the viewpoint was lifted up and behind the VB. In the experimental group, once the viewpoint was out of the VB there was no further connection with it (no visuomotor or visuotactile synchrony). In the control group, although the viewpoint was in the identical place as in the experimental group, visuomotor and visuotactile synchrony continued. While both groups reported high scores on a question about their OBE illusion, the experimental group had a greater feeling of disownership towards the VB below compared to the control group, in line with previous findings. Fear of death in the experimental group was found to be lower than in the control group. This is in line with previous reports that naturally occurring OBEs are often associated with enhanced belief in life after death.
Advances in virtual reality (VR) technologies allow the investigation of simulated moral actions in visually immersive environments. Using a robotic manipulandum and an interactive sculpture, we now also incorporate realistic haptic feedback into virtual moral simulations. In two experiments, we found that participants responded with greater utilitarian actions in virtual and haptic environments when compared to traditional questionnaire assessments of moral judgments. In experiment one, which incorporated a robotic manipulandum, we found that the physical power of simulated utilitarian responses (calculated as the product of force and speed) was predicted by individual levels of psychopathy. In experiment two, which integrated an interactive and life-like sculpture of a human into a VR simulation, greater utilitarian actions continued to be observed. Together, these results support a disparity between simulated moral action and moral judgment. Overall, this research combines state-of-the-art virtual reality, robotic movement simulations, and realistic human sculptures to enhance moral paradigms that are often contextually impoverished. As such, this combination provides a better assessment of simulated moral action and illustrates the embodied nature of morally relevant actions.
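The "physical power" measure described in that abstract is simply the product of exerted force and movement speed (P = F · v). A minimal sketch of the computation; the units and sample values below are illustrative assumptions, not data from the study:

```python
def response_power(force_newtons, speed_m_per_s):
    """Physical power of a simulated response, computed as the
    product of exerted force and movement speed (P = F * v)."""
    return force_newtons * speed_m_per_s

# Illustrative: a 12 N push delivered at 0.5 m/s corresponds to 6 W.
power = response_power(12.0, 0.5)
```

Because power scales with both how hard and how fast a participant acts, it captures the vigor of a response in a way that force or speed alone would not.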
Physical practice with one hand results in performance gains of the other (un-practiced) hand, yet the role of sensory feedback and underlying neurophysiology is unclear. Healthy subjects learned sequences of finger movements by physical training with their right hand while receiving real-time movement-based visual feedback via 3D virtual reality devices as if their immobile left hand was training. This manipulation resulted in significantly enhanced performance gain with the immobile hand, which was further increased when left-hand fingers were yoked to passively follow right-hand voluntary movements. Neuroimaging data show that, during training with manipulated visual feedback, activity in the left and right superior parietal lobule and their degree of coupling with motor and visual cortex, respectively, correlate with subsequent left-hand performance gain. These results point to a neural network subserving short-term motor skill learning and may have implications for developing new approaches for learning and rehabilitation in patients with unilateral motor deficits.