Concept: Visual perception
A brain-to-brain interface (BTBI) enabled real-time transfer of behaviorally meaningful sensorimotor information between the brains of two rats. In this BTBI, an “encoder” rat performed sensorimotor tasks that required it to select from two choices of tactile or visual stimuli. While the encoder rat performed the task, samples of its cortical activity were transmitted to matching cortical areas of a “decoder” rat using intracortical microstimulation (ICMS). The decoder rat learned to make similar behavioral selections, guided solely by the information provided by the encoder rat’s brain. These results demonstrated that a complex system was formed by coupling the animals’ brains, suggesting that BTBIs can enable dyads or networks of animals’ brains to exchange, process, and store information and, hence, serve as the basis for studies of novel types of social interaction and for biological computing devices.
We present, to our knowledge, the first demonstration that a non-invasive brain-to-brain interface (BBI) can be used to allow one human to guess what is on the mind of another human through an interactive question-and-answering paradigm similar to the “20 Questions” game. As in previous non-invasive BBI studies in humans, our interface uses electroencephalography (EEG) to detect specific patterns of brain activity from one participant (the “respondent”), and transcranial magnetic stimulation (TMS) to deliver functionally-relevant information to the brain of a second participant (the “inquirer”). Our results extend previous BBI research by (1) using stimulation of the visual cortex to convey visual stimuli that are privately experienced and consciously perceived by the inquirer; (2) exploiting real-time rather than off-line communication of information from one brain to another; and (3) employing an interactive task, in which the inquirer and respondent must exchange information bi-directionally to collaboratively solve the task. The results demonstrate that using the BBI, ten participants (five inquirer-respondent pairs) can successfully identify a “mystery item” using a true/false question-answering protocol similar to the “20 Questions” game, with high levels of accuracy that are significantly greater than a control condition in which participants were connected through a sham BBI.
A 19-year-old man presented with a mass in his right eye that had been present since birth but had gradually increased in size. The mass caused vision defects, mild discomfort on blinking, and an intermittent foreign-body sensation.
- Proceedings of the Royal Society B: Biological Sciences
The effects of selectively different experience of eye contact and gaze behaviour on the early development of five sighted infants of blind parents were investigated. Infants were assessed longitudinally at 6-10, 12-15 and 24-47 months. Face scanning and gaze following were assessed using eye tracking. In addition, established measures of autistic-like behaviours and standardized tests of cognitive, motor and linguistic development, as well as observations of naturalistic parent-child interaction were collected. These data were compared with those obtained from a larger group of sighted infants of sighted parents. Infants with blind parents did not show an overall decrease in eye contact or gaze following when they observed sighted adults on video or in live interactions, nor did they show any autistic-like behaviours. However, they directed their own eye gaze somewhat less frequently towards their blind mothers and also showed improved performance in visual memory and attention at younger ages. Being reared with significantly reduced experience of eye contact and gaze behaviour does not preclude sighted infants from developing typical gaze processing and other social-communication skills. Indeed, the need to switch between different types of communication strategy may actually enhance other skills during development.
In human vision, acuity and color sensitivity are greatest at the center of fixation and fall off rapidly as visual eccentricity increases. Humans exploit the high resolution of central vision by actively moving their eyes three to four times each second. Here we demonstrate that it is possible to classify the task that a person is engaged in from their eye movements using multivariate pattern classification. The results have important theoretical implications for computational and neural models of eye movement control. They also have important practical implications for using passively recorded eye movements to infer the cognitive state of a viewer, information that can be used as input for intelligent human-computer interfaces and related applications.
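The abstract above does not specify the authors' classifier, but the core idea of multivariate pattern classification on eye movements can be illustrated with a toy nearest-centroid classifier over summary gaze features. Everything here (the feature set, task labels, and numeric values) is an illustrative assumption, not the study's actual pipeline.

```python
# Toy sketch of task classification from eye-movement features.
# NOTE: feature names, task labels, and values are hypothetical; the
# original study's classifier and features may differ.

def centroid(rows):
    """Mean feature vector of a list of samples."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_centroid_predict(centroids, x):
    """Return the label whose class centroid is closest (Euclidean) to x."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(centroids[label], x))

# Each sample: [mean fixation duration (ms), saccade amplitude (deg), fixations/s]
train = {
    "reading": [[210, 2.1, 4.0], [220, 2.3, 3.8], [205, 1.9, 4.2]],
    "search":  [[180, 5.5, 3.1], [175, 6.0, 3.0], [190, 5.8, 2.9]],
}
centroids = {task: centroid(samples) for task, samples in train.items()}

print(nearest_centroid_predict(centroids, [215, 2.0, 4.1]))  # prints "reading"
```

A held-out gaze sample with short saccades and long fixations lands near the "reading" centroid; in practice one would cross-validate such a classifier over many viewers and tasks.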
The role of the motor system in the perception of visual art is not yet well understood. Earlier studies on the visual perception of abstract art (from Gestalt theory, as in Arnheim, 1954 and 1988, to balance preference studies such as Locher and Stappers, 2002, and more recent work by Locher et al., 2007; Redies, 2007; and Taylor et al., 2011) neglected the question, while the field of neuroesthetics (Ramachandran and Hirstein, 1999; Zeki, 1999) has mostly concentrated on figurative works. Much recent work has demonstrated the multimodality of vision, encompassing the activation of motor, somatosensory, and viscero-motor brain regions. The present study investigated whether the observation of high-resolution digitized static images of abstract paintings by Lucio Fontana is associated with specific cortical motor activation in the beholder’s brain. Mu rhythm suppression was evoked by the observation of the original art works but not by control stimuli (graphically modified versions of the same works). Most interestingly, previous visual exposure to the stimuli did not affect the mu rhythm suppression induced by their observation. The present results clearly show the involvement of the cortical motor system in the viewing of static abstract art works.
The purpose of this study was to evaluate the visual outcome of chronic occupational exposure to a mixture of organic solvents by measuring color discrimination, achromatic contrast sensitivity and visual fields in a group of gas station workers. We tested 25 workers (20 males) and 25 controls with no history of chronic exposure to solvents (10 males). All participants had normal ophthalmologic exams. Subjects had worked in gas stations for an average of 9.6 ± 6.2 years. Color vision was evaluated with the Lanthony D15d and Cambridge Colour Test (CCT). Visual field assessment consisted of white-on-white 24-2 automatic perimetry (Humphrey II-750i). Contrast sensitivity was measured for sinusoidal gratings of 0.2, 0.5, 1.0, 2.0, 5.0, 10.0 and 20.0 cycles per degree (cpd). Results from both groups were compared using the Mann-Whitney U test. The number of errors in the D15d was higher for workers relative to controls (p<0.01). Their CCT color discrimination thresholds were elevated compared to the control group along the protan, deutan and tritan confusion axes (p<0.01), and their ellipse area and ellipticity were higher (p<0.01). Genetic analysis of subjects with very elevated color discrimination thresholds excluded congenital causes for the visual losses. Automated perimetry thresholds showed elevation at 9°, 15° and 21° of eccentricity (p<0.01) and in the MD and PSD indexes (p<0.01). Contrast sensitivity losses were found for all spatial frequencies measured (p<0.01) except 0.5 cpd. Significant correlations were found between years worked and deutan axis thresholds (rho = 0.59; p<0.05), Lanthony D15d indexes (rho = 0.52; p<0.05), and perimetry results at the fovea (rho = -0.51; p<0.05) and at 3, 9 and 15 degrees of eccentricity (rho = -0.46; p<0.05). Extensive and diffuse visual changes were found, suggesting that specific occupational exposure limits should be created.
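The group comparisons above rely on the Mann-Whitney U test. As a minimal sketch of what that statistic computes, the following counts, for two samples, how many pairs favor the first sample (ties counted as one half); the error counts below are hypothetical and are not the study's data.

```python
# Minimal sketch of the Mann-Whitney U statistic (illustrative only;
# real analyses also derive a p-value, e.g. via a normal approximation).

def mann_whitney_u(a, b):
    """U for sample a: pairs (x in a, y in b) with x > y; ties count 0.5."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical Lanthony D15d error counts: exposed workers vs. controls.
workers  = [6, 8, 5, 9, 7, 10]
controls = [2, 3, 1, 4, 2, 3]

print(mann_whitney_u(workers, controls))  # prints 36.0 (every worker outranks every control; max U = 6*6)
```

A U at or near its maximum (here 36, the number of worker-control pairs) indicates near-complete separation of the two groups, consistent with the p<0.01 comparisons reported above.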
Stereopsis (3D vision) has become widely used as a model of perception. However, almost all our knowledge of possible underlying mechanisms comes from vertebrates. While stereopsis has been demonstrated in one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, “anaglyph” filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception.
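The anaglyph principle described above can be sketched in a few lines: the left-eye image is placed in one spectral channel and the right-eye image in another, so matched color filters deliver a different image to each eye. This is a generic illustration of the technique, not the authors' display code, and the channel assignment and pixel values are assumptions.

```python
# Sketch of anaglyph composition: left-eye image -> red channel,
# right-eye image -> green and blue channels. Illustrative only.

def make_anaglyph(left_gray, right_gray):
    """Combine two same-shape 2D grayscale images (0-255) into RGB pixels."""
    return [
        [(l, r, r) for l, r in zip(lrow, rrow)]  # (R, G, B) per pixel
        for lrow, rrow in zip(left_gray, right_gray)
    ]

left  = [[255, 0], [0, 255]]
right = [[0, 255], [255, 0]]
print(make_anaglyph(left, right)[0][0])  # prints (255, 0, 0)
```

A pixel visible only to the left eye comes out pure red, and one visible only to the right eye comes out cyan; introducing a horizontal offset (disparity) between the two input images is what creates the illusion of depth behind the filters.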
Are we able to infer what happened to a person from a brief sample of his/her behaviour? It has been proposed that mentalising skills can be used to retrodict as well as predict behaviour, that is, to determine what mental states of a target have already occurred. The current study aimed to develop a paradigm to explore these processes, which takes into account the intricacies of real-life situations in which reasoning about mental states, as embodied in behaviour, may be utilised. A novel task was devised which involved observing subtle and naturalistic reactions of others in order to determine the event that had previously taken place. Thirty-five participants viewed videos of real individuals reacting to the researcher behaving in one of four possible ways, and were asked to judge which of the four ‘scenarios’ they thought the individual was responding to. Their eye movements were recorded to establish the visual strategies used. Participants were able to deduce successfully from a small sample of behaviour which scenario had previously occurred. Surprisingly, looking at the eye region was associated with poorer identification of the scenarios, and eye movement strategy varied depending on the event experienced by the person in the video. This suggests people flexibly deploy their attention using a retrodictive mindreading process to infer events.
Transcranial magnetic stimulation (TMS) allows for non-invasive interference with ongoing neural processing. Applied in a chronometric design over early visual cortex (EVC), TMS has proved valuable in indicating at which particular time point EVC must remain unperturbed for (conscious) vision to be established. In the current study, we set out to examine the effect of EVC TMS across a broad range of time points, both before (pre-stimulus) and after (post-stimulus) the onset of symbolic visual stimuli. Behavioral priming studies have shown that the behavioral impact of a visual stimulus can be independent of its conscious perception, suggesting two independent neural signatures. To assess whether TMS-induced suppression of visual awareness can be dissociated from behavioral priming in the temporal domain, we implemented three different measures of visual processing: performance on a standard visual discrimination task, a subjective rating of stimulus visibility, and a visual priming task. To control for non-neural TMS effects, we performed electrooculographical recordings, placebo TMS (sham), and control-site TMS (vertex). Our results suggest that, when considering the appropriate control data, the temporal patterns of EVC TMS disruption on visual discrimination, subjective awareness and behavioral priming are not dissociable. Instead, TMS to EVC disrupts visual perception holistically, both when applied before and after the onset of a visual stimulus. The current findings are discussed in light of their implications for models of visual awareness and (subliminal) priming.