SciCombinator

Discover the latest and most talked-about scientific content & concepts.

Concept: IMAGE

169

We report Giga-pixel lensfree holographic microscopy and tomography using color sensor-arrays such as CMOS imagers that exhibit Bayer color filter patterns. Without physically removing these color filters coated on the sensor chip, we synthesize pixel super-resolved lensfree holograms, which are then reconstructed to achieve ∼350 nm lateral resolution, corresponding to a numerical aperture of ∼0.8, across a field-of-view of ∼20.5 mm². This constitutes a digital image with ∼0.7 billion effective pixels in both amplitude and phase channels (i.e., ∼1.4 Giga-pixels total). Furthermore, by changing the illumination angle (e.g., ±50°) and scanning a partially-coherent light source across two orthogonal axes, super-resolved images of the same specimen from different viewing angles are created, which are then digitally combined to synthesize tomographic images of the object. Using this dual-axis lensfree tomographic imager running on a color sensor-chip, we achieve a 3D spatial resolution of ∼0.35 µm × 0.35 µm × ∼2 µm in x, y and z, respectively, creating an effective voxel size of ∼0.03 µm³ across a sample volume of ∼5 mm³, which is equivalent to >150 billion voxels. We demonstrate the proof-of-concept of this lensfree optical tomographic microscopy platform on a color CMOS image sensor by creating tomograms of micro-particles as well as a wild-type C. elegans nematode.
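
A quick consistency check of the figures quoted above; the ∼0.55 µm illumination wavelength used below is an assumption (the abstract does not state it), chosen only to illustrate the standard resolution-NA relationship:

```latex
% Lateral resolution implied by the effective numerical aperture
% (assuming ~0.55 um illumination, which the abstract does not specify):
\Delta x \approx \frac{\lambda}{2\,\mathrm{NA}}
         \approx \frac{0.55~\mu\mathrm{m}}{2 \times 0.8}
         \approx 0.34~\mu\mathrm{m} \approx 350~\mathrm{nm}

% Voxel count implied by the stated voxel size and sample volume:
\frac{5~\mathrm{mm}^3}{0.03~\mu\mathrm{m}^3}
   = \frac{5 \times 10^{9}~\mu\mathrm{m}^3}{0.03~\mu\mathrm{m}^3}
   \approx 1.7 \times 10^{11} > 150~\text{billion voxels}
```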

Concepts: Optics, Computer graphics, Digital photography, IMAGE, Image sensor, Pixel, Color filter array, Bayer filter

151

Orthogonal polarized spectral (OPS) and sidestream dark field (SDF) imaging video microscope devices were introduced for observation of the microcirculation but, due to technical limitations, have remained research tools. Recently, a novel handheld microscope based on incident dark field illumination (IDF) has been introduced for clinical use. The Cytocam-IDF imaging device consists of a pen-like probe incorporating IDF illumination with a set of high-resolution lenses projecting images onto a computer-controlled image sensor synchronized with very short pulsed illumination light. This study was performed to validate Cytocam-IDF imaging by comparison to SDF imaging in volunteers.

Concepts: Optics, Light, Microscope, Computer graphics, Microscopy, IMAGE, Image processing, Dark field microscopy

108

The ability to acquire images under low-light conditions is critical for many applications. However, to date, strategies toward improving low-light imaging primarily focus on developing electronic image sensors. Inspired by natural scotopic visual systems, we adopt an all-optical method to significantly improve the overall photosensitivity of imaging systems. Such an all-optical approach is independent of, and can effectively circumvent the physical and material limitations of, the electronic imagers used. We demonstrate an artificial eye inspired by superposition compound eyes and the retinal structure of elephantnose fish. The bioinspired photosensitivity enhancer (BPE) that we have developed enhances the image intensity without consuming power, which is achieved by three-dimensional, omnidirectionally aligned microphotocollectors with parabolic reflective sidewalls. Our work opens up a previously unidentified direction toward achieving high photosensitivity in imaging systems.

Concepts: Optics, Retina, Eye, Vision, Visual perception, Visual system, Depth perception, IMAGE

46

High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of the specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in the wavelet domain to achieve at least a 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm² using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths, to increase the throughput and speed of coherent imaging.
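
For readers who want the general flavor of the algorithm, the sketch below alternates measured-amplitude constraints at two heights with soft-thresholding of wavelet coefficients. It is a minimal illustration assuming angular-spectrum propagation, 'db4' wavelets, and toy parameters; it is not the authors' implementation, and all function names are invented for this example.

```python
# Sketch: two-height phase recovery with a wavelet-sparsity step.
# Assumptions (not from the paper): angular-spectrum propagation, 'db4' wavelets,
# soft thresholding, square power-of-two-sized holograms, and the toy defaults below.
import numpy as np
import pywt

def propagate(field, dz, wavelength, dx):
    """Angular-spectrum propagation of a complex field over a distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fx2 = fx[None, :] ** 2 + fx[:, None] ** 2
    arg = 1.0 / wavelength ** 2 - fx2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)          # discard evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def wavelet_shrink(x, thresh, wavelet="db4", level=3):
    """Soft-threshold the wavelet coefficients of a real-valued array."""
    coeffs = pywt.wavedec2(x, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    arr = pywt.threshold(arr, thresh, mode="soft")
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

def recover(amp1, amp2, z1, z2, wavelength=530e-9, dx=1.12e-6, iters=50, thresh=0.01):
    """Alternate amplitude constraints at heights z1 and z2 with a sparsity step."""
    field1 = amp1.astype(complex)                        # start from hologram 1, flat phase
    for _ in range(iters):
        field2 = propagate(field1, z2 - z1, wavelength, dx)
        field2 = amp2 * np.exp(1j * np.angle(field2))    # enforce hologram 2 amplitude
        field1 = propagate(field2, z1 - z2, wavelength, dx)
        field1 = amp1 * np.exp(1j * np.angle(field1))    # enforce hologram 1 amplitude
        obj = propagate(field1, -z1, wavelength, dx)     # back-propagate to object plane
        obj = wavelet_shrink(obj.real, thresh) + 1j * wavelet_shrink(obj.imag, thresh)
        field1 = propagate(obj, z1, wavelength, dx)
    return propagate(field1, -z1, wavelength, dx)        # final object-plane estimate
```

In practice the threshold and iteration count would be tuned to the noise level; object-support handling and the full multi-height bookkeeping of the published method are omitted here.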

Concepts: Cancer, Breast cancer, Diffraction, Coherence, Holography, IMAGE, Image processing, Interferometric microscopy

41

In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because fully sampling a scene in this way requires at least as many correlation measurements as there are pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom: a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements.
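
As background for the measurement model above, the toy sketch below reconstructs a small image from single-pixel correlation measurements with random patterns. It shows only the fully sampled baseline (one measurement per pixel), not the foveated, compressive system described in the abstract; all sizes and names are illustrative.

```python
# Toy single-pixel imaging: y_i = <pattern_i, scene>, recovered by least squares.
# Fully sampled baseline only; the foveated/compressive scheme is not shown.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # 16 x 16 toy scene
scene = np.zeros((n, n))
scene[4:12, 6:10] = 1.0                  # a simple rectangular object

m = n * n                                # one measurement per pixel (full sampling)
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)   # random binary masks
measurements = patterns @ scene.ravel()  # single-pixel detector readings

# Recover the scene by solving the linear system patterns @ x = measurements.
recovered, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
recovered = recovered.reshape(n, n)

print("max reconstruction error:", np.abs(recovered - scene).max())
```

Compressive variants replace the square system above with an undersampled one (m < n*n) plus a sparsity prior, which is what the foveated strategy is designed to complement.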

Concepts: IMAGE, Immanuel Kant, Photography, A priori, A priori and a posteriori, Image processing, Digital camera, Frame rate

38

Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: as lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high-frame-rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies.
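
As a rough intuition for how a mask can stand in for a lens: if the sensor measurement is approximately the scene convolved with the mask's point-spread function, the scene can be recovered by regularized deconvolution. The sketch below is a generic 2D, shift-invariant Wiener-style inversion under that assumption; it does not reproduce FlatScope's actual 3D reconstruction algorithm or calibration, and the PSF, sizes, and regularization weight are invented for the example.

```python
# Generic mask-based lensless imaging sketch: measurement ~ PSF (*) scene,
# recovered by Tikhonov-regularized (Wiener-style) deconvolution in Fourier space.
import numpy as np

rng = np.random.default_rng(1)
n = 64
scene = np.zeros((n, n))
scene[20:30, 20:30] = 1.0                     # toy fluorescent object

psf = (rng.random((n, n)) < 0.05).astype(float)   # sparse pseudo-random mask PSF
psf /= psf.sum()

# Forward model: circular convolution of the scene with the PSF.
measurement = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))

# Inverse: divide in Fourier space with a small Tikhonov regularizer.
H = np.fft.fft2(psf)
eps = 1e-3                                    # regularization weight (assumed)
recovered = np.real(np.fft.ifft2(np.fft.fft2(measurement) * np.conj(H)
                                 / (np.abs(H) ** 2 + eps)))

print("peak of recovered object:", recovered.max())
```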

Concepts: Biology, Optics, Microscope, Microscopy, IMAGE, Confocal microscopy, Fluorescence microscope, Timeline of microscope technology

32

Computational imaging enables retrieval of the spatial information of an object with the use of single-pixel detectors. By projecting a series of known random patterns and measuring the backscattered intensity, it is possible to reconstruct a two-dimensional (2D) image. We used several single-pixel detectors in different locations to capture the 3D form of an object. From each detector we derived a 2D image that appeared to be illuminated from a different direction, even though only a single digital projector was used for illumination. From the shading of the images, the surface gradients could be derived and the 3D object reconstructed. We compare our result to that obtained from a stereophotogrammetric system using multiple cameras. Our simplified approach to 3D imaging can readily be extended to nonvisible wavebands.
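
The shading-to-gradient step described above is essentially photometric stereo: under a Lambertian reflectance assumption, per-pixel intensities measured under several known directions give the surface normal by least squares. A minimal sketch under those assumptions follows; the direction vectors and toy surface are illustrative, not the authors' calibration.

```python
# Minimal photometric-stereo sketch: per-pixel surface normals recovered from
# images shaded under several known directions (Lambertian model assumed).
import numpy as np

# Assumed unit illumination/detector direction vectors (illustrative only).
L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])

# Toy ground truth: normals of a smooth bump z = exp(-(x^2 + y^2)).
n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
z = np.exp(-(x ** 2 + y ** 2))
normals = np.dstack([2 * x * z, 2 * y * z, np.ones_like(z)])   # (-dz/dx, -dz/dy, 1)
normals /= np.linalg.norm(normals, axis=2, keepdims=True)

# Shade the surface from each direction (all dot products are positive here).
images = np.tensordot(normals, L.T, axes=([2], [0]))           # n x n x 3

# Least squares per pixel: L @ (albedo * normal) = measured intensities.
G = np.linalg.lstsq(L, images.reshape(-1, 3).T, rcond=None)[0].T.reshape(n, n, 3)
albedo = np.linalg.norm(G, axis=2, keepdims=True)
recovered = G / albedo

# Surface gradients that a subsequent integration step would turn into a height map.
p = -recovered[..., 0] / recovered[..., 2]
q = -recovered[..., 1] / recovered[..., 2]
print("max normal error:", np.abs(recovered - normals).max())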

Concepts: Optics, Dimension, Computer graphics, Space, IMAGE, 3D computer graphics, 2D computer graphics, Rendering

29

The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but instead represents their relative difference over time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream.

Concepts: Present, Time, Brain, Cerebrum, Visual perception, IMAGE, Thalamus, Lateral geniculate nucleus

29

This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon’s behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess its effectiveness. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard- and mouse-based interfaces.

Concepts: Psychology, Computer, Java, IMAGE, Behaviorism, Object-oriented programming, User interface, Applied behavior analysis

28

Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multicenter evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. This result suggests that correcting short-time B0 eddy currents that do not affect conventional clinical sequences may simplify the adoption of non-Cartesian methods.
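
For context, the phase error in question follows from the standard relation between a transient off-resonance field shift and accumulated signal phase; the notation below is generic, not the paper's:

```latex
% Phase accumulated by a transient field shift \Delta B_0(t) from eddy currents
% (\gamma is the gyromagnetic ratio), and its subtraction from the acquired signal S(t):
\Delta\phi(t) = \gamma \int_0^t \Delta B_0(\tau)\,\mathrm{d}\tau ,
\qquad
S_{\mathrm{corr}}(t) = S(t)\, e^{-i\,\Delta\phi(t)}
```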

Concepts: Measurement, Wave, IMAGE, Resonance, Image scanner, Eddy current, Scanners