Concept: Visual system
The century-old idea that stripes make zebras cryptic to large carnivores has never been examined systematically. We evaluated this hypothesis by passing digital images of zebras through species-specific spatial and colour filters to simulate their appearance for the visual systems of zebras' primary predators and zebras themselves. We also measured stripe widths and luminance contrast to estimate the maximum distances from which lions, spotted hyaenas, and zebras can resolve stripes. We found that beyond ca. 50 m (daylight) and 30 m (twilight) zebra stripes are difficult for the estimated visual systems of large carnivores to resolve, but not humans. On moonless nights, stripes are difficult for all species to resolve beyond ca. 9 m. In open treeless habitats where zebras spend most time, zebras are as clearly identified by the lion visual system as are similar-sized ungulates, suggesting that stripes cannot confer crypsis by disrupting the zebra’s outline. Stripes confer a minor advantage over solid pelage in masking body shape in woodlands, but the effect is stronger for humans than for predators. Zebras appear to be less able than humans to resolve stripes although they are better than their chief predators. In conclusion, compared to the uniform pelage of other sympatric herbivores it appears highly unlikely that stripes are a form of anti-predator camouflage.
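The distance estimates above follow from simple visual-angle geometry: a stripe cycle stays resolvable only while it subtends at least the observer's minimum resolvable angle, the reciprocal of acuity in cycles per degree. A minimal sketch of that geometry, with illustrative acuity and stripe-width values rather than the paper's measured ones:

```python
import math

def max_resolvable_distance(cycle_width_m, acuity_cpd):
    """Distance (m) beyond which one stripe cycle subtends less than
    the minimum resolvable angle (1/acuity degrees)."""
    min_angle_rad = math.radians(1.0 / acuity_cpd)
    # small-angle approximation: subtended angle ~ width / distance
    return cycle_width_m / min_angle_rad

# Illustrative (assumed) values, not the study's measurements:
human_d = max_resolvable_distance(0.10, 65.0)  # human photopic acuity ~65 c/deg
lion_d = max_resolvable_distance(0.10, 13.0)   # assumed coarser felid acuity
print(round(human_d, 1), round(lion_d, 1))
```

Because maximum distance scales linearly with acuity, an observer with several-fold coarser acuity loses the stripes at a several-fold shorter range, which is the geometric core of daylight/twilight/night estimates of this kind.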
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 6 years ago
Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes.
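The linearity difference described above can be caricatured with a toy response model: an OFF channel that grows linearly with the luminance decrement, and an ON channel with a saturating (Naka-Rushton-style) dependence on the increment. This is a schematic sketch with assumed parameters, not the paper's fitted model:

```python
def off_response(delta, k=1.0):
    # OFF channel: roughly linear in the luminance decrement
    return k * max(0.0, -delta)

def on_response(delta, r_max=1.0, sigma=0.2):
    # ON channel: saturating (Naka-Rushton-style) in the increment;
    # sigma is an assumed semi-saturation constant
    d = max(0.0, delta)
    return r_max * d / (d + sigma)

# Doubling the decrement doubles the OFF response...
off_ratio = off_response(-1.0) / off_response(-0.5)
# ...but doubling the increment less than doubles the ON response.
on_ratio = on_response(1.0) / on_response(0.5)
print(off_ratio, round(on_ratio, 3))
```

The sublinear ON ratio is the signature the paper reports: ON responses compress quickly with luminance increments, while OFF responses track decrements nearly linearly.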
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
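The idea of scoring a representation by how well a simple decoder generalizes from it can be sketched with a linear-kernel ridge classifier, whose regularization term loosely plays the role of a complexity knob. Everything here (the synthetic data, the two-dimensional features, the specific classifier) is illustrative, not the paper's kernel-analysis implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def decoding_accuracy(features, labels, reg=1.0):
    """Train a linear-kernel ridge classifier on half the stimuli and
    report accuracy on the held-out half."""
    n = len(labels) // 2
    Xtr, ytr = features[:n], labels[:n]
    Xte, yte = features[n:], labels[n:]
    K = Xtr @ Xtr.T
    alpha = np.linalg.solve(K + reg * np.eye(n), ytr)
    pred = np.sign(Xte @ Xtr.T @ alpha)
    return (pred == yte).mean()

# Two synthetic "representations" of the same 80 stimuli (two object
# classes): one separates the classes cleanly, one is noise-dominated.
labels = rng.permutation(np.repeat([1.0, -1.0], 40))
good = labels[:, None] * np.array([2.0, 0.5]) + rng.normal(0, 0.5, (80, 2))
poor = labels[:, None] * np.array([0.1, 0.0]) + rng.normal(0, 0.5, (80, 2))
acc_good = decoding_accuracy(good, labels)
acc_poor = decoding_accuracy(poor, labels)
print(acc_good, acc_poor)
```

Under this kind of metric, a representation "rivals" another when its held-out decoding accuracy matches at comparable decoder complexity and training-set size, which is the comparison the abstract describes between DNN features and IT recordings.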
Non-human primates evaluate food quality based on the brightness of red and green shades of color, with red signaling higher energy or greater protein content in fruits and leaves. Despite the strong association between food and other sensory modalities, humans, too, estimate critical food features, such as calorie content, from vision. Previous research primarily focused on the effects of color on taste/flavor identification and intensity judgments. However, whether evaluations of perceived calorie content and arousal in humans are biased by color has received comparatively less attention. In this study we showed that the color content of food images predicts the arousal and perceived calorie content reported when viewing food, even when confounding variables were controlled for. Specifically, arousal positively co-varied with red-brightness, while green-brightness was negatively associated with arousal and perceived calorie content. This result holds for a large array of foods comprising natural food - where color likely predicts calorie content - and transformed food, where color is instead poorly diagnostic of energy content. Importantly, this pattern did not emerge with nonfood items. We conclude that in humans visual inspection of food is central to its evaluation and seems to partially engage the same basic system as in non-human primates.
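The core analysis idea, relating reported arousal to the color statistics of food images, can be sketched by extracting per-channel brightness and fitting a regression. The images, ratings, and effect size below are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_brightness(rgb_image):
    """Mean red and green channel intensity of an H x W x 3 array."""
    return rgb_image[..., 0].mean(), rgb_image[..., 1].mean()

# Synthetic food "images": redder images are paired with higher
# arousal ratings (illustrative relationship only).
red_levels = rng.uniform(0.2, 0.9, size=50)
images = np.stack([np.full((8, 8, 3), [r, 0.4, 0.3]) for r in red_levels])
arousal = 2.0 * red_levels + rng.normal(0.0, 0.05, size=50)

reds = np.array([channel_brightness(im)[0] for im in images])
slope, intercept = np.polyfit(reds, arousal, 1)
print(round(slope, 2))
```

A positive slope for red-brightness (and, in the study, a negative one for green-brightness) is the covariation pattern the abstract reports; a real analysis would additionally partial out the confounding variables mentioned above.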
Reversal of end-stage retinal degeneration and restoration of visual function by photoreceptor transplantation
- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 7 years ago
One strategy to restore vision in retinitis pigmentosa and age-related macular degeneration is cell replacement. Typically, patients lose vision when the outer retinal photoreceptor layer is lost, and so the therapeutic goal would be to restore vision at this stage of disease. It is not currently known if a degenerate retina lacking the outer nuclear layer of photoreceptor cells would allow the survival, maturation, and reconnection of replacement photoreceptors, as prior studies used hosts with a preexisting outer nuclear layer at the time of treatment. Here, using a murine model of severe human retinitis pigmentosa at a stage when no host rod cells remain, we show that transplanted rod precursors can reform an anatomically distinct and appropriately polarized outer nuclear layer. A trilaminar organization was returned to rd1 hosts that had only two retinal layers before treatment. The newly introduced precursors were able to resume their developmental program in the degenerate host niche to become mature rods with light-sensitive outer segments, reconnecting with host neurons downstream. Visual function, assayed in the same animals before and after transplantation, was restored in animals with zero rod function at baseline. These observations suggest that a cell therapy approach may reconstitute a light-sensitive cell layer de novo and hence repair a structurally damaged visual circuit. Rather than placing discrete photoreceptors among preexisting host outer retinal cells, total photoreceptor layer reconstruction may provide a clinically relevant model to investigate cell-based strategies for retinal repair.
While the different sensory modalities are sensitive to different stimulus energies, they are often charged with extracting analogous information about the environment. Neural systems may thus have evolved to implement similar algorithms across modalities to extract behaviorally relevant stimulus information, leading to the notion of a canonical computation. In both vision and touch, information about motion is extracted from a spatiotemporal pattern of activation across a sensory sheet (in the retina and in the skin, respectively), a process that has been extensively studied in both modalities. In this essay, we examine the processing of motion information as it ascends the primate visual and somatosensory neuraxes and conclude that similar computations are implemented in the two sensory systems.
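A minimal version of the shared computation described above, extracting motion from a spatiotemporal activation pattern, is to cross-correlate successive frames on a 1-D sensory sheet and take the best-matching shift. This is an illustrative sketch of the principle, not either system's actual circuit:

```python
import numpy as np

def estimate_shift(frame_t0, frame_t1):
    """Estimate displacement between two 1-D activation patterns by
    circular cross-correlation (argmax of correlation over shifts)."""
    n = len(frame_t0)
    corr = [np.dot(frame_t0, np.roll(frame_t1, -s)) for s in range(n)]
    shift = int(np.argmax(corr))
    return shift if shift <= n // 2 else shift - n  # signed shift

# A bump of activation on a 1-D "sensory sheet" (retina or skin),
# moving 3 samples between frames.
x = np.arange(40)
bump = np.exp(-0.5 * ((x - 10) / 2.0) ** 2)
moved = np.roll(bump, 3)
print(estimate_shift(bump, moved))
```

Dividing the recovered shift by the inter-frame interval gives a velocity estimate; the same correlation logic applies whether the sheet is the retina or the skin, which is the sense in which the computation is canonical.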
In human vision, acuity and color sensitivity are greatest at the center of fixation and fall off rapidly as visual eccentricity increases. Humans exploit the high resolution of central vision by actively moving their eyes three to four times each second. Here we demonstrate that it is possible to classify the task that a person is engaged in from their eye movements using multivariate pattern classification. The results have important theoretical implications for computational and neural models of eye movement control. They also have important practical implications for using passively recorded eye movements to infer the cognitive state of a viewer, information that can be used as input for intelligent human-computer interfaces and related applications.
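Classifying the viewer's task from eye movements can be illustrated with a deliberately tiny multivariate classifier over per-trial summary features. The feature values, task labels, and nearest-centroid rule here are assumptions for the sketch, not the study's classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-trial eye-movement features (illustrative values only):
# [mean fixation duration (s), mean saccade amplitude (deg)].
# Pretend "reading" trials have short fixations and small saccades,
# "search" trials longer fixations and larger saccades.
reading = rng.normal([0.22, 2.0], [0.03, 0.4], size=(40, 2))
search = rng.normal([0.30, 6.0], [0.03, 0.8], size=(40, 2))

def classify_trial(trial, class_a, class_b, names=("reading", "search")):
    """Nearest-centroid multivariate pattern classifier: assign the
    trial to the class whose mean feature vector is closer."""
    da = np.linalg.norm(trial - class_a.mean(axis=0))
    db = np.linalg.norm(trial - class_b.mean(axis=0))
    return names[0] if da < db else names[1]

label = classify_trial(np.array([0.21, 1.8]), reading, search)
print(label)
```

In practice one would cross-validate a richer classifier over many more features (fixation maps, saccade statistics, dwell times), but the decision principle, comparing a trial's multivariate signature against task-specific patterns, is the same.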
Myopia, or nearsightedness, is the most common eye disorder, resulting primarily from excess elongation of the eye. The etiology of myopia, although known to be complex, is poorly understood. Here we report the largest genome-wide association study of myopia to date (45,771 participants) in Europeans. We performed a survival analysis on age of myopia onset and identified 22 significant associations, two of which are replications of earlier associations with refractive error. Ten of the 20 novel associations replicate in a separate cohort of 8,323 participants who reported whether they had developed myopia before age 10. These 22 associations in total explain 2.9% of the variance in myopia age of onset and point toward a number of different mechanisms behind the development of myopia. One association is in a gene that has previously been linked to abnormally small eyes; one is in a gene that forms part of the extracellular matrix; two are in or near genes involved in the regeneration of 11-cis-retinal; two are near genes known to be involved in the growth and guidance of retinal ganglion cells; and five are in or near genes involved in neuronal signaling or development. These novel findings point toward multiple genetic factors involved in the development of myopia and suggest that complex interactions between extracellular matrix remodeling, neuronal development, and visual signals from the retina may underlie the development of myopia in humans.
The purpose of this study was to evaluate the visual outcome of chronic occupational exposure to a mixture of organic solvents by measuring color discrimination, achromatic contrast sensitivity, and visual fields in a group of gas station workers. We tested 25 workers (20 males) and 25 controls with no history of chronic exposure to solvents (10 males). All participants had normal ophthalmologic exams. Subjects had worked in gas stations for an average of 9.6 ± 6.2 years. Color vision was evaluated with the Lanthony D15d and the Cambridge Colour Test (CCT). Visual field assessment consisted of white-on-white 24-2 automatic perimetry (Humphrey II-750i). Contrast sensitivity was measured for sinusoidal gratings of 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, and 20.0 cycles per degree (cpd). Results from the two groups were compared using the Mann-Whitney U test. The number of errors on the D15d was higher for workers relative to controls (p<0.01). Their CCT color discrimination thresholds were elevated compared to the control group along the protan, deutan, and tritan confusion axes (p<0.01), and their ellipse area and ellipticity were higher (p<0.01). Genetic analysis of subjects with very elevated color discrimination thresholds excluded congenital causes for the visual losses. Automated perimetry thresholds showed elevation at 9°, 15°, and 21° of eccentricity (p<0.01) and in the MD and PSD indexes (p<0.01). Contrast sensitivity losses were found for all spatial frequencies measured (p<0.01) except 0.5 cpd. Significant correlations were found between years worked and deutan axis thresholds (rho = 0.59; p<0.05), Lanthony D15d indexes (rho = 0.52; p<0.05), and perimetry results at the fovea (rho = -0.51; p<0.05) and at 3, 9, and 15 degrees of eccentricity (rho = -0.46; p<0.05). Extensive and diffuse visual changes were found, suggesting that specific occupational exposure limits should be created.
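The group comparisons reported above rest on the Mann-Whitney U statistic, which can be computed from rank sums. A minimal sketch with synthetic error counts (not the study's data, and without the tie correction a real analysis would apply):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for two independent samples
    (rank-sum form; no tie correction in this minimal sketch)."""
    combined = np.concatenate([x, y])
    ranks = np.argsort(np.argsort(combined)) + 1  # 1-based ranks
    rank_sum_x = ranks[:len(x)].sum()
    return rank_sum_x - len(x) * (len(x) + 1) / 2

# Illustrative D15d-style error counts: "workers" make more errors
# than "controls" (synthetic values, not the study's).
workers = np.array([5, 7, 6, 8, 9, 6, 7])
controls = np.array([1, 2, 0, 3, 2, 1, 2])
print(mann_whitney_u(workers, controls))
```

Here U equals n1 * n2 = 49, its maximum, because every synthetic worker score exceeds every control score; the p-values reported in the abstract come from comparing U (with tie correction) against its null distribution.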
Humans and other animals are surprisingly adept at estimating the duration of temporal intervals, even without the use of watches and clocks. This ability is typically studied in the lab by asking observers to indicate their estimate of the time between two external sensory events. The results of such studies confirm that humans can accurately estimate durations on a variety of time scales. Although many brain areas are thought to contribute to the representation of elapsed time, recent neurophysiological studies have linked the parietal cortex in particular to the perception of sub-second time intervals. In this Primer, we describe previous work on parietal cortex and time perception, and we highlight the findings of a study published in this issue of PLOS Biology, in which Schneider and Ghose characterize single-neuron responses during performance of a novel “Temporal Production” task. During temporal production, the observer must track the passage of time without anticipating any external sensory event, and it appears that the parietal cortex may use a unique strategy to support this type of measurement.