Concept: Photographic lens
We demonstrate a new optical approach to generating high-frequency (>15 MHz), high-amplitude focused ultrasound, which can be used for non-invasive ultrasound therapy. A nano-composite film of carbon nanotubes (CNTs) and elastomeric polymer is formed on concave lenses and used as an efficient optoacoustic source, owing to the high optical absorption of the CNTs and the rapid heat transfer to the polymer upon excitation by pulsed laser irradiation. The CNT-coated lenses can generate unprecedented optoacoustic pressures of >50 MPa peak positive over a tight focal spot of 75 μm lateral and 400 μm axial width. This pressure amplitude is remarkably high in this frequency regime, producing pronounced shock effects and non-thermal pulsed cavitation at the focal zone. We demonstrate that the optoacoustic lens can be used for micro-scale ultrasonic fragmentation of solid materials and for single-cell surgery, detaching targeted cells from substrates and from neighboring cells.
Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.
We exploit the inherent dispersion in diffractive optics to demonstrate planar chromatic-aberration-corrected lenses. Specifically, we designed, fabricated and characterized cylindrical diffractive lenses that efficiently focus the entire visible band (450 nm to 700 nm) onto a single line. These devices are essentially pixelated, multi-level microstructures. Experiments confirm an average optical efficiency of 25% for a three-wavelength apochromatic lens whose chromatic focus shift is only 1.3 μm and 25 μm in the lateral and axial directions, respectively. Super-achromatic performance over the continuous visible band is also demonstrated, with average lateral and axial focus shifts of only 1.65 μm and 73.6 μm, respectively. These lenses are easy to fabricate using single-step grayscale lithography and can be inexpensively replicated. Furthermore, these devices are thin (<3 μm), error tolerant, have a low aspect ratio (<1:1) and offer polarization-insensitive focusing, all significant advantages compared to alternatives that rely on metasurfaces. Our design methodology offers high design flexibility in numerical aperture and focal length, and is readily extended to 2D.
Greek-ladder zone plates with diffraction-limited array foci offer a way to realize array imaging with equal intensity across foci. Here, taking the ancient Theon sequence as an example, we design the optical structure and measure its focusing properties by digital holography. We then experimentally verify multiplanar imaging at different magnifications, and the experimental results agree well with the theoretical analysis. In addition, a bi-Fourier-plane filtering technique is proposed to suppress crosstalk between different imaging planes and further improve the imaging resolution. The focal lengths of the bifocal lens can therefore be freely designed to achieve high-quality imaging at different resolutions. As an amplitude-only diffractive lens with multifocal imaging capability, this device holds promise for array biological imaging, ophthalmology, and optical zoom systems.
Spatially structured optical fields have been used to enhance the functionality of a wide variety of systems that use light for sensing or information transfer. As higher-dimensional modes become a solution of choice in optical systems, it is important to develop channel models that suitably predict the effect of atmospheric turbulence on these modes. We investigate the propagation of a set of orthogonal spatial modes across a free-space channel between two buildings separated by 1.6 km. Given the circular geometry of a common optical lens, the orthogonal mode set we choose to implement is that described by the Laguerre-Gaussian (LG) field equations. Our study focuses on the preservation of phase purity, which is vital for spatial multiplexing and any system requiring full quantum-state tomography. We present experimental data for the modal degradation in a real urban environment and draw a comparison to recognized theoretical predictions of the link. Our findings indicate that adaptations to channel models are required to simulate the effects of atmospheric turbulence placed on high-dimensional structured modes that propagate over a long distance. Our study indicates that with mitigation of vortex splitting, potentially through precorrection techniques, one could overcome the challenges in a real point-to-point free-space channel in an urban environment.
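As an illustrative sketch (not the authors' code) of the Laguerre-Gaussian mode set mentioned above, the following evaluates LG fields with radial index p = 0 at the beam waist on a numerical grid and checks that modes of different azimuthal index l are orthogonal, which is the property that makes them usable as independent spatial channels. The waist w0 = 1 and the grid extent are arbitrary choices for the demonstration.

```python
import numpy as np

def lg_mode(l, w0, X, Y):
    """Laguerre-Gaussian field LG_{p=0, l} at the beam waist.
    For p = 0 the generalized Laguerre polynomial equals 1, so the
    field reduces to an amplitude ring times a helical phase exp(i*l*phi)."""
    r = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    field = (np.sqrt(2) * r / w0) ** abs(l) \
        * np.exp(-r**2 / w0**2) * np.exp(1j * l * phi)
    # Normalize so the discrete inner product of a mode with itself is 1.
    return field / np.sqrt(np.sum(np.abs(field) ** 2))

n = 256
x = np.linspace(-3.0, 3.0, n)          # grid in units of the waist w0
X, Y = np.meshgrid(x, x)
u1 = lg_mode(1, 1.0, X, Y)
u2 = lg_mode(2, 1.0, X, Y)

# Different l -> helical phases differ by exp(i*phi); the azimuthal
# integral vanishes, so the overlap is ~0 (orthogonal channels).
overlap = abs(np.sum(np.conj(u1) * u2))
```

Atmospheric turbulence degrades exactly this orthogonality: a turbulent phase screen scatters power from one l into its neighbors, which is the modal degradation the link experiment quantifies.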
Cellphones equipped with high-quality cameras and powerful CPUs as well as GPUs are widespread. This opens new prospects for using such existing computational and imaging resources to perform medical diagnosis in developing countries at very low cost. Many relevant samples, like biological cells or waterborne parasites, are almost fully transparent. As they do not exhibit absorption but alter the light's phase only, they are almost invisible in brightfield microscopy, and expensive equipment and procedures for microscopic contrasting or sample staining are often not available. Dedicated illumination approaches, tailored to the sample under investigation, help to boost the contrast. This is achieved with a programmable illumination source, which also allows measurement of the phase gradient using differential phase contrast (DPC) [1, 2] or even the quantitative phase using the derived qDPC approach. By applying machine-learning techniques, such as a convolutional neural network (CNN), it is possible to learn from a given dataset the relationship between the samples to be examined and their optimal light-source shapes, in order to increase e.g. phase contrast and enable real-time applications. For the experimental setup, we developed a 3D-printed smartphone microscope for less than $100 using only off-the-shelf components, such as a low-cost video projector. The fully automated system assures true Koehler illumination with an LCD as the condenser aperture and a reversed smartphone lens as the microscope objective. We show by measurement that varying the light-source shape using the pre-trained CNN not only improves the phase contrast but also gives the impression of improved optical resolution, without adding any special optics.
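The DPC measurement referenced above reduces, in its simplest form, to a normalized difference of two images captured under complementary half-circle illumination patterns. A minimal sketch (the function name and regularization constant are illustrative, not from the paper):

```python
import numpy as np

def dpc_image(i_top, i_bottom, eps=1e-9):
    """Differential phase contrast from two brightfield images taken with
    complementary half-circle illumination (e.g. top and bottom halves of
    the LCD condenser aperture). The normalized difference approximates
    the sample's phase gradient along the split axis; a purely absorbing
    or empty sample gives ~0 everywhere."""
    i_top = np.asarray(i_top, dtype=float)
    i_bottom = np.asarray(i_bottom, dtype=float)
    # eps guards against division by zero in dark regions.
    return (i_top - i_bottom) / (i_top + i_bottom + eps)
```

Two such axes (top/bottom and left/right) give both components of the phase gradient, which the qDPC approach then inverts to recover quantitative phase.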
We demonstrate a digital sensing platform, termed Albumin Tester, running on a smart-phone that images and automatically analyses fluorescent assays confined within disposable test tubes for sensitive and specific detection of albumin in urine. This light-weight and compact Albumin Tester attachment, weighing approximately 148 grams, is mechanically installed on the existing camera unit of a smart-phone, where test and control tubes are inserted from the side and are excited by a battery powered laser diode. This excitation beam, after probing the sample of interest located within the test tube, interacts with the control tube, and the resulting fluorescent emission is collected perpendicular to the direction of the excitation, where the cellphone camera captures the images of the fluorescent tubes through the use of an external plastic lens that is inserted between the sample and the camera lens. The acquired fluorescent images of the sample and control tubes are digitally processed within one second through an Android application running on the same cellphone for quantification of albumin concentration in the urine specimen of interest. Using a simple sample preparation approach which takes ∼5 min per test (including the incubation time), we experimentally confirmed the detection limit of our sensing platform as 5–10 μg mL⁻¹ (which is more than 3 times lower than the clinically accepted normal range) in buffer as well as urine samples. This automated albumin testing tool running on a smart-phone could be useful for early diagnosis of kidney disease or for monitoring of chronic patients, especially those suffering from diabetes, hypertension, and/or cardiovascular diseases.
Pixel count is the ratio of the solid angle within a camera’s field of view to the solid angle covered by a single detector element. Because the size of the smallest resolvable pixel is proportional to aperture diameter and the maximum field of view is scale independent, the diffraction-limited pixel count is proportional to aperture area. At present, digital cameras operate near the fundamental limit of 1-10 megapixels for millimetre-scale apertures, but few approach the corresponding limits of 1-100 gigapixels for centimetre-scale apertures. Barriers to high-pixel-count imaging include scale-dependent geometric aberrations, the cost and complexity of gigapixel sensor arrays, and the computational and communications challenge of gigapixel image management. Here we describe the AWARE-2 camera, which uses a 16-mm entrance aperture to capture snapshot, one-gigapixel images at three frames per minute. AWARE-2 uses a parallel array of microcameras to reduce the problems of gigapixel imaging to those of megapixel imaging, which are more tractable. In cameras of conventional design, lens speed and field of view decrease as lens scale increases, but with the experimental system described here we confirm previous theoretical results suggesting that lens speed and field of view can be scale independent in microcamera-based imagers resolving up to 50 gigapixels. Ubiquitous gigapixel cameras may transform the central challenge of photography from the question of where to point the camera to that of how to mine the data.
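The scaling argument above (diffraction-limited pixel count proportional to aperture area) can be sketched numerically. This is an illustrative back-of-envelope estimate, not the AWARE-2 design calculation: the wavelength (0.5 μm) and field of view (a 120° cone) are assumed values, and the Rayleigh criterion is used for the angular resolution.

```python
import math

def diffraction_limited_pixels(aperture_m, wavelength_m=0.5e-6, fov_deg=120.0):
    """Ratio of the field-of-view solid angle to the solid angle of one
    diffraction-limited resolvable spot (~ the pixel count defined above)."""
    # Rayleigh angular resolution of the aperture (radians).
    theta = 1.22 * wavelength_m / aperture_m
    # Solid angle of a cone with half-angle fov/2 (steradians).
    half = math.radians(fov_deg / 2.0)
    omega_fov = 2.0 * math.pi * (1.0 - math.cos(half))
    # Solid angle per resolvable spot ~ theta^2; N scales as (D/lambda)^2,
    # i.e. with aperture area.
    return omega_fov / theta ** 2

n_mm = diffraction_limited_pixels(1e-3)    # millimetre-scale aperture: ~megapixels
n_cm = diffraction_limited_pixels(16e-3)   # AWARE-2's 16-mm aperture: ~gigapixels
```

Under these assumptions a 1 mm aperture lands in the 1-10 megapixel range and a 16 mm aperture in the gigapixel range, consistent with the limits quoted in the abstract.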
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
A planar metalens for achieving super-resolution imaging in the far field is proposed. This metalens, which has a non-sub-wavelength feature size, can be fabricated with a conventional laser pattern generator. The imaging process is purely physical and captured in real time, without any pre- or post-processing.