Concept: Depth perception
A 19-year-old man presented with a mass in his right eye that had been present since birth but had gradually increased in size. The mass caused vision defects, mild discomfort on blinking, and the intermittent sensation of the presence of a foreign body.
Stereopsis is the ability to estimate distance based on the different views seen in the two eyes [1-5]. It is an important model perceptual system in neuroscience and a major area of machine vision. Mammalian, avian, and almost all machine stereo algorithms look for similarities between the luminance-defined images in the two eyes, using a series of computations to produce a map showing how depth varies across the scene [3, 4, 6-14]. Stereopsis has also evolved in at least one invertebrate, the praying mantis [15-17]. Mantis stereopsis is presumed to be simpler than vertebrates' [15, 18], but little is currently known about the underlying computations. Here, we show that mantis stereopsis uses a fundamentally different computational algorithm from vertebrate stereopsis: rather than comparing luminance in the two eyes' images directly, mantis stereopsis looks for regions of the images where luminance is changing. Thus, while there is no evidence that mantis stereopsis works at all with static images, it successfully reveals the distance to a moving target even in complex visual scenes with targets that are perfectly camouflaged against the background in terms of texture. Strikingly, these insects outperform human observers at judging stereoscopic distance when the pattern of luminance in the two eyes does not match. Insect stereopsis has thus evolved to be computationally efficient while being robust to poor image resolution and to discrepancies in the pattern of luminance between the two eyes.
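To illustrate the contrast the authors draw, here is a minimal, hypothetical sketch of change-based matching: each eye's input is reduced to a map of where luminance changed over time, and disparity is taken as the horizontal shift that best aligns the two change maps. The frame format, the correlation score, and the `max_shift` parameter are illustrative assumptions, not the authors' model.

```python
import numpy as np

def change_map(frames):
    """Total absolute luminance change between successive frames.

    frames: array of shape (time, height, width).
    """
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

def estimate_disparity(left_frames, right_frames, max_shift=8):
    """Horizontal shift (in pixels) that best aligns the two eyes' change maps.

    Unlike luminance-based matching, this ignores static image content
    entirely: only regions where luminance changes contribute to the score.
    """
    cl = change_map(left_frames)
    cr = change_map(right_frames)
    best_shift, best_score = 0, -np.inf
    for d in range(-max_shift, max_shift + 1):
        # Shift the right eye's change map and score the overlap.
        score = (cl * np.roll(cr, d, axis=1)).sum()
        if score > best_score:
            best_shift, best_score = d, score
    return best_shift
```

With a camouflaged but moving target, the change maps contain a clear matchable feature even when the raw luminance patterns in the two eyes do not correspond.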
Stereopsis (3D vision) has become widely used as a model of perception. However, our knowledge of possible underlying mechanisms comes almost exclusively from vertebrates. While stereopsis has been demonstrated for one invertebrate, the praying mantis, a lack of techniques to probe invertebrate stereopsis has prevented any further progress for three decades. We therefore developed a stereoscopic display system for insects, using miniature 3D glasses to present separate images to each eye, and tested our ability to deliver stereoscopic illusions to praying mantises. We find that while filtering by circular polarization failed due to excessive crosstalk, "anaglyph" filtering by spectral content clearly succeeded in giving the mantis the illusion of 3D depth. We thus definitively demonstrate stereopsis in mantises and also demonstrate that the anaglyph technique can be effectively used to deliver virtual 3D stimuli to insects. This method opens up broad avenues of research into the parallel evolution of stereoscopic computations and possible new algorithms for depth perception.
This paper demonstrates how multi-scale measures of rugosity, slope and aspect can be derived from fine-scale bathymetric reconstructions created from geo-referenced stereo imagery. We generate three-dimensional reconstructions over large spatial scales using data collected by Autonomous Underwater Vehicles (AUVs), Remotely Operated Vehicles (ROVs), manned submersibles and diver-held imaging systems. We propose a new method for calculating rugosity in a Delaunay triangulated surface mesh by projecting areas onto the plane of best fit using Principal Component Analysis (PCA). Slope and aspect can be calculated with very little extra effort, and fitting a plane serves to decouple rugosity from slope. We compare the results of the virtual terrain complexity calculations with experimental results using conventional in-situ measurement methods. We show that performing calculations over a digital terrain reconstruction is more flexible, robust and easily repeatable. In addition, the method is non-contact and has far less environmental impact than traditional survey techniques. For diver-based surveys, the time underwater needed to collect rugosity data is significantly reduced, and because the technique is image-based, robotic platforms that operate beyond diver depths can be used. Measurements can be calculated exhaustively at multiple scales for surveys with tens of thousands of images covering thousands of square metres. The technique is demonstrated on data gathered by a diver-rig and an AUV, on small single-transect surveys and on a larger, dense survey that covers over [Formula: see text]. Stereo images provide 3D structure as well as visual appearance, which could potentially feed into automated classification techniques. Our multi-scale rugosity, slope and aspect measures have already been adopted in a number of marine science studies.
This paper presents a detailed description of the method and thoroughly validates it against traditional in-situ measurements.
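The rugosity measure described above can be sketched as follows: fit a plane to the mesh vertices by PCA, project the surface onto it, and take the ratio of the true 3D surface area to the projected area. The function names and the vertex/triangle input format are assumptions for illustration; only the PCA plane-fit idea comes from the abstract.

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of a triangle in 3D from its vertex coordinates."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def rugosity(vertices, triangles):
    """Surface area divided by the area projected onto the PCA plane of best fit.

    vertices: (n, 3) array; triangles: iterable of (i, j, k) index triples.
    Fitting the plane first decouples rugosity from slope: a flat but
    tilted surface still yields a rugosity of 1.
    """
    centred = vertices - vertices.mean(axis=0)
    # The two leading principal axes span the plane of best fit.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    flat = centred @ vt[:2].T                     # 2D coordinates in the plane
    flat3 = np.column_stack([flat, np.zeros(len(flat))])
    area_3d = sum(triangle_area(vertices[a], vertices[b], vertices[c])
                  for a, b, c in triangles)
    area_2d = sum(triangle_area(flat3[a], flat3[b], flat3[c])
                  for a, b, c in triangles)
    return area_3d / area_2d
```

A perfectly flat transect gives a rugosity of exactly 1 regardless of its slope, while bumps and overhangs push the ratio above 1, which is the decoupling property the paper highlights.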
A stereoscope displays 2-D images with binocular disparities (stereograms), which fuse to form a 3-D stereoscopic object. But a stereoscopic object creates a conflict between vergence and accommodation. Also, motion in depth of a stereoscopic object simulated solely from change in target vergence produces anomalous motion parallax and anomalous changes in perspective. We describe a new instrument, which overcomes these problems. We call it the dichoptiscope. It resembles a mirror stereoscope, but instead of stereograms, it displays identical 2-D or 3-D physical objects to each eye. When a pair of these physical monocular objects is fused, it creates a dichoptic object that is visually identical to a real object. There is no conflict between vergence and accommodation, and motion parallax is normal. When the monocular objects move in real depth, the dichoptic object also moves in depth. The instrument allows the experimenter to control each of several cues to motion in depth independently. These cues include changes in the size of the images, changes in the vergence of the eyes, changes in binocular disparity within the moving object, and changes in the relative disparity between the moving object and a stationary object.
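The vergence and relative-disparity cues listed above follow from simple binocular geometry. A minimal sketch, assuming a typical 6.3 cm interocular distance (a textbook value, not one taken from the paper):

```python
import math

IPD = 0.063  # interocular distance in metres (assumed typical value)

def vergence_deg(distance):
    """Vergence angle (degrees) needed to fixate a point at the given distance.

    The two lines of sight converge on the target; the angle between them
    grows as the target approaches, which is the vergence cue to depth.
    """
    return math.degrees(2 * math.atan(IPD / (2 * distance)))

def relative_disparity_deg(d_target, d_reference):
    """Relative disparity (degrees) between a target and a stationary
    reference object: the difference of their vergence demands."""
    return vergence_deg(d_target) - vergence_deg(d_reference)
```

When a dichoptic object moves in real depth, its vergence demand and its relative disparity against a stationary object change together; an instrument like the dichoptiscope lets these cues be decoupled and varied one at a time.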
The ability to estimate the distance of objects from oneself and from each other is fundamental to a variety of behaviours, from grasping objects to navigating. The main cue to distance, stereopsis, relies on the slight offsets, termed disparities, between the images derived from our left and right eyes. Here we ask whether the precision of stereopsis varies with professional experience of precise manual tasks. We measured the stereo-acuities of dressmakers and non-dressmakers for both absolute and relative disparities, using a stereoscope and a computerized test that removes monocular cues. We also measured vergence noise and bias using the Nonius line technique. We demonstrate that dressmakers' stereoscopic acuities are better than those of non-dressmakers, for both absolute and relative disparities. In contrast, vergence noise and bias were comparable in the two groups. Two non-exclusive mechanisms may underlie the group difference we document: (i) self-selection, i.e. that good stereo-vision is functionally important for becoming a dressmaker, and (ii) plasticity, i.e. that training on demanding stereo-vision tasks improves stereo-acuity.
- Proceedings of the National Academy of Sciences of the United States of America
The ability to acquire images under low-light conditions is critical for many applications. However, to date, strategies for improving low-light imaging have focused primarily on developing electronic image sensors. Inspired by natural scotopic visual systems, we adopt an all-optical method to significantly improve the overall photosensitivity of imaging systems. Such an optical approach is independent of, and can effectively circumvent the physical and material limitations of, the electronic imagers used. We demonstrate an artificial eye inspired by superposition compound eyes and the retinal structure of the elephantnose fish. The bioinspired photosensitivity enhancer (BPE) that we have developed enhances image intensity without consuming power, which is achieved by three-dimensional, omnidirectionally aligned microphotocollectors with parabolic reflective sidewalls. Our work opens up a previously unidentified direction toward achieving high photosensitivity in imaging systems.
Fluorescence nanoscopy, or super-resolution microscopy, has become an important tool in cell biological research. However, because of its usually inferior resolution in the depth direction (50-80 nm) and rapidly deteriorating resolution in thick samples, its practical biological application has been effectively limited to two dimensions and thin samples. Here, we present the development of whole-cell 4Pi single-molecule switching nanoscopy (W-4PiSMSN), an optical nanoscope that allows imaging of three-dimensional (3D) structures at 10- to 20-nm resolution throughout entire mammalian cells. We demonstrate the wide applicability of W-4PiSMSN across diverse research fields by imaging complex molecular architectures ranging from bacteriophages to nuclear pores, cilia, and synaptonemal complexes in large 3D cellular volumes.
A 42-year-old male electrician presented to the eye clinic with decreasing vision 4 weeks after an electrical burn of 14,000 V to the left shoulder. His vision in both eyes was limited to perception of hand motions, with an intraocular pressure of 14 mm Hg in each eye.
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.