Concept: Photographic lens
We demonstrate a new optical approach to generating high-frequency (>15 MHz), high-amplitude focused ultrasound, which can be used for non-invasive ultrasound therapy. A nano-composite film of carbon nanotubes (CNTs) and elastomeric polymer is formed on concave lenses and used as an efficient optoacoustic source, owing to the high optical absorption of the CNTs and the rapid heat transfer to the polymer upon excitation by pulsed laser irradiation. The CNT-coated lenses can generate unprecedented peak positive optoacoustic pressures of >50 MPa at a tight focal spot measuring 75 μm laterally and 400 μm axially. This pressure amplitude is remarkably high for this frequency regime, producing pronounced shock effects and non-thermal pulsed cavitation at the focal zone. We demonstrate that the optoacoustic lens can be used for micro-scale ultrasonic fragmentation of solid materials and for single-cell surgery, detaching targeted cells from substrates and from neighboring cells.
Focal adjustment and zooming are universal features of cameras and advanced optical systems. Such tuning is usually performed longitudinally along the optical axis by mechanical or electrical control of focal length. However, the recent advent of ultrathin planar lenses based on metasurfaces (metalenses), which opens the door to future drastic miniaturization of mobile devices such as cell phones and wearable displays, mandates fundamentally different forms of tuning based on lateral motion rather than longitudinal motion. Theory shows that the strain field of a metalens substrate can be directly mapped into the outgoing optical wavefront to achieve large diffraction-limited focal length tuning and control of aberrations. We demonstrate electrically tunable large-area metalenses controlled by artificial muscles capable of simultaneously performing focal length tuning (>100%) as well as on-the-fly astigmatism and image shift corrections, which until now were only possible in electron optics. The device thickness is only 30 μm. Our results demonstrate the possibility of future optical microscopes that fully operate electronically, as well as compact optical systems that use the principles of adaptive optics to correct many orders of aberrations simultaneously.
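The strain-to-focal-length mapping described above can be illustrated with a minimal sketch. The quadratic scaling law used here is the standard result for a uniformly stretched flat lens (a radial stretch by a factor 1 + ε rescales the hyperbolic phase profile so that f grows as (1 + ε)²); the rest focal length and strain values below are hypothetical, not taken from the abstract:

```python
# Sketch: focal-length tuning of a uniformly stretched metalens.
# Assumption: a radial stretch by (1 + eps) rescales the phase profile so
# that the focal length scales as f' = f * (1 + eps)**2.

def tuned_focal_length(f0_mm: float, strain: float) -> float:
    """Focal length after a uniform radial strain (e.g. 0.42 = 42 % stretch)."""
    return f0_mm * (1.0 + strain) ** 2

f0 = 10.0  # hypothetical rest focal length in mm
for eps in (0.0, 0.2, 0.42):
    f = tuned_focal_length(f0, eps)
    print(f"strain {eps:4.0%}: f = {f:5.2f} mm ({(f - f0) / f0:.0%} tuning)")
```

Note that a ~42 % stretch already yields just over 100 % focal-length tuning, consistent with the >100 % tuning figure quoted in the abstract.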
We exploit the inherent dispersion in diffractive optics to demonstrate planar chromatic-aberration-corrected lenses. Specifically, we designed, fabricated and characterized cylindrical diffractive lenses that efficiently focus the entire visible band (450 nm to 700 nm) onto a single line. These devices are essentially pixelated, multi-level microstructures. Experiments confirm an average optical efficiency of 25% for a three-wavelength apochromatic lens whose chromatic focus shift is only 1.3 μm and 25 μm in the lateral and axial directions, respectively. Super-achromatic performance over the continuous visible band is also demonstrated, with averaged lateral and axial focus shifts of only 1.65 μm and 73.6 μm, respectively. These lenses are easy to fabricate using single-step grayscale lithography and can be inexpensively replicated. Furthermore, these devices are thin (<3 μm), error-tolerant, have a low aspect ratio (<1:1) and offer polarization-insensitive focusing, all significant advantages compared to alternatives that rely on metasurfaces. Our design methodology offers high design flexibility in numerical aperture and focal length, and is readily extended to 2D.
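The "inherent dispersion" these multi-level microstructures exploit can be seen from the thin-element phase relation: a pixel of fixed height imparts a different number of waves of delay at each wavelength. A minimal sketch, assuming a hypothetical dispersionless refractive index n = 1.5 for simplicity:

```python
import math

# Sketch: phase imparted by one pixel of a multilevel diffractive lens
# under the thin-element approximation: phi = 2*pi*(n - 1)*h / lambda.
# The index n = 1.5 and the 3-um pixel height are illustrative assumptions
# (the abstract only states the devices are < 3 um thick).

def pixel_phase(height_m: float, wavelength_m: float, n: float = 1.5) -> float:
    """Phase delay (radians) of a dielectric pixel of the given height."""
    return 2 * math.pi * (n - 1) * height_m / wavelength_m

# The same 3-um pixel at the two edges of the visible band:
for lam in (450e-9, 700e-9):
    waves = pixel_phase(3e-6, lam) / (2 * math.pi)
    print(f"{lam * 1e9:.0f} nm: {waves:.2f} waves")
```

The same structure delays 450 nm light by about 3.33 waves but 700 nm light by only about 2.14 waves; it is this wavelength-dependent response that the design optimization can steer to bring all bands to a common focus.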
Spatially structured optical fields have been used to enhance the functionality of a wide variety of systems that use light for sensing or information transfer. As higher-dimensional modes become a solution of choice in optical systems, it is important to develop channel models that suitably predict the effect of atmospheric turbulence on these modes. We investigate the propagation of a set of orthogonal spatial modes across a free-space channel between two buildings separated by 1.6 km. Given the circular geometry of a common optical lens, the orthogonal mode set we choose to implement is that described by the Laguerre-Gaussian (LG) field equations. Our study focuses on the preservation of phase purity, which is vital for spatial multiplexing and any system requiring full quantum-state tomography. We present experimental data for the modal degradation in a real urban environment and draw a comparison to recognized theoretical predictions for the link. Our findings indicate that adaptations to channel models are required to simulate the effects of atmospheric turbulence on high-dimensional structured modes propagating over long distances. Our study indicates that, with mitigation of vortex splitting, potentially through precorrection techniques, the challenges of a real point-to-point free-space channel in an urban environment could be overcome.
Cellphones equipped with high-quality cameras and powerful CPUs as well as GPUs are widespread. This opens new prospects to use such existing computational and imaging resources to perform medical diagnosis in developing countries at very low cost. Many relevant samples, like biological cells or waterborne parasites, are almost fully transparent. As they do not exhibit absorption, but alter the light’s phase only, they are almost invisible in brightfield microscopy. Expensive equipment and procedures for microscopic contrasting or sample staining are often not available. Dedicated illumination approaches, tailored to the sample under investigation, help to boost the contrast. This is achieved by a programmable illumination source, which also allows measurement of the phase gradient using differential phase contrast (DPC) [1, 2] or even of the quantitative phase using the derived qDPC approach. By applying machine-learning techniques, such as a convolutional neural network (CNN), it is possible to learn, from a given dataset, a relationship between the samples to be examined and their optimal light-source shapes, in order to increase e.g. phase contrast, and thereby enable real-time applications. For the experimental setup, we developed a 3D-printed smartphone microscope for less than $100 using only off-the-shelf components, such as a low-cost video projector. The fully automated system assures true Koehler illumination with an LCD as the condenser aperture and a reversed smartphone lens as the microscope objective. We show by measurements that varying the light-source shape using the pre-trained CNN not only improves the phase contrast but also gives the impression of improved optical resolution, without adding any special optics.
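The DPC contrast mechanism referenced above can be sketched in a few lines: with two images taken under complementary half-pupil illumination, the normalized difference is proportional to the sample's phase gradient along the split axis. The toy intensity frames below are hypothetical, purely for illustration:

```python
import numpy as np

def dpc_image(i_left: np.ndarray, i_right: np.ndarray) -> np.ndarray:
    """Differential phase contrast: normalized difference of two images
    acquired under complementary half-pupil illumination."""
    total = i_left + i_right
    # Guard against division by zero in fully dark regions.
    return np.where(total > 0, (i_left - i_right) / np.maximum(total, 1e-12), 0.0)

# Hypothetical toy frames: a phase object redistributes intensity
# between the two illumination halves while absorbing nothing.
left = np.array([[1.0, 1.2],
                 [0.8, 1.0]])
right = np.array([[1.0, 0.8],
                  [1.2, 1.0]])
print(dpc_image(left, right))  # zero where flat, +/-0.2 at the phase gradient
```

The normalization by the total intensity is what makes the result (to first order) independent of absorption and proportional to the phase gradient alone.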
We demonstrate a digital sensing platform, termed Albumin Tester, running on a smart-phone that images and automatically analyses fluorescent assays confined within disposable test tubes for sensitive and specific detection of albumin in urine. This lightweight and compact Albumin Tester attachment, weighing approximately 148 grams, is mechanically installed on the existing camera unit of a smart-phone, where test and control tubes are inserted from the side and are excited by a battery-powered laser diode. This excitation beam, after probing the sample of interest located within the test tube, interacts with the control tube, and the resulting fluorescent emission is collected perpendicular to the direction of the excitation, where the cellphone camera captures the images of the fluorescent tubes through an external plastic lens inserted between the sample and the camera lens. The acquired fluorescent images of the sample and control tubes are digitally processed within one second through an Android application running on the same cellphone for quantification of albumin concentration in the urine specimen of interest. Using a simple sample preparation approach which takes ∼5 min per test (including the incubation time), we experimentally confirmed the detection limit of our sensing platform as 5–10 μg mL⁻¹ (which is more than 3 times lower than the clinically accepted normal range) in buffer as well as urine samples. This automated albumin testing tool running on a smart-phone could be useful for early diagnosis of kidney disease or for monitoring of chronic patients, especially those suffering from diabetes, hypertension, and/or cardiovascular diseases.
Pixel count is the ratio of the solid angle within a camera’s field of view to the solid angle covered by a single detector element. Because the size of the smallest resolvable pixel is proportional to aperture diameter and the maximum field of view is scale independent, the diffraction-limited pixel count is proportional to aperture area. At present, digital cameras operate near the fundamental limit of 1-10 megapixels for millimetre-scale apertures, but few approach the corresponding limits of 1-100 gigapixels for centimetre-scale apertures. Barriers to high-pixel-count imaging include scale-dependent geometric aberrations, the cost and complexity of gigapixel sensor arrays, and the computational and communications challenge of gigapixel image management. Here we describe the AWARE-2 camera, which uses a 16-mm entrance aperture to capture snapshot, one-gigapixel images at three frames per minute. AWARE-2 uses a parallel array of microcameras to reduce the problems of gigapixel imaging to those of megapixel imaging, which are more tractable. In cameras of conventional design, lens speed and field of view decrease as lens scale increases, but with the experimental system described here we confirm previous theoretical results suggesting that lens speed and field of view can be scale independent in microcamera-based imagers resolving up to 50 gigapixels. Ubiquitous gigapixel cameras may transform the central challenge of photography from the question of where to point the camera to that of how to mine the data.
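The scaling argument in the opening sentences can be turned into a back-of-envelope estimate. The formulas below are illustrative assumptions, not taken from the paper: angular resolution ~ λ/D, each diffraction-limited "pixel" subtending a solid angle of roughly (λ/D)², and a hypothetical 120° full field of view:

```python
import math

# Back-of-envelope diffraction-limited pixel count for a given aperture.
# Assumptions (illustrative): per-pixel solid angle ~ (lambda/D)**2,
# field of view modeled as a cone of the given full angle.

def diffraction_limited_pixels(aperture_m: float, fov_full_deg: float,
                               wavelength_m: float = 550e-9) -> float:
    half = math.radians(fov_full_deg / 2)
    omega_fov = 2 * math.pi * (1 - math.cos(half))   # solid angle of the FOV cone
    omega_pixel = (wavelength_m / aperture_m) ** 2   # per-pixel solid angle
    return omega_fov / omega_pixel

# A 16-mm aperture (as in AWARE-2) over a hypothetical 120-degree field:
print(f"{diffraction_limited_pixels(0.016, 120):.1e}")  # 2.7e+09
```

The estimate lands at a few gigapixels for a 16-mm aperture, consistent with the abstract's claim that centimetre-scale apertures correspond to gigapixel-class limits, and it scales with aperture area since the aperture enters squared in the denominator.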
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
A planar metalens for achieving super-resolution imaging in the far field is proposed. This metalens, which has a non-subwavelength feature size, can be fabricated with a conventional laser pattern generator. The imaging process is purely physical and occurs in real time, without any pre- or post-processing.
Information on contacts between individuals within a population is crucial for informing disease control strategies, via parameterisation of disease spread models. In this study we investigated the use of dog-borne video cameras, in conjunction with global positioning system (GPS) loggers, to both characterise dog-to-dog contacts and estimate contact rates. We customised miniaturised video cameras, enclosed within 3D-printed plastic cases, and attached these to nylon dog collars. Using two 3400 mAh NCR lithium-ion batteries, cameras could record a maximum of 22 hr of continuous video footage. Together with a GPS logger, collars were attached to six free-roaming domestic dogs (FRDDs) in two remote Indigenous communities in northern Australia. We recorded a total of 97 hr of video footage, ranging from 4.5 to 22 hr (mean 19.1) per dog, and observed a wide range of social behaviours. The majority (69%) of all observed interactions between community dogs involved direct physical contact. Direct contact behaviours included sniffing, licking, mouthing and play fighting. No contacts appeared to be aggressive; however, multiple teeth-baring incidents were observed during play fights. We identified a total of 153 contacts (equating to 8 to 147 contacts per dog per 24 hr) from the videos of the five dogs with camera data that could be analysed. These contacts were attributed to 42 unique dogs (range 1 to 19 per video) that could be identified based on colour patterns and markings. Most dog activity was observed in urban (houses and roads) environments, but contacts were more common in bushland and beach environments. A variety of foraging behaviours were observed, including scavenging through rubbish and rolling on dead animal carcasses. Identified foods consumed included chicken, raw bones, animal carcasses, rubbish, grass and cheese.
For characterising contacts between FRDDs, several benefits of analysing videos compared to GPS fixes alone were identified in this study, including visualisation of the nature of the contact between two dogs, and inclusion of a greater number of dogs in the study (which do not need to be wearing video or GPS collars). Limitations identified included visualisation of contacts only during daylight hours; the camera lens being obscured on occasion by the dog’s mandible or by the dog resting on the camera; an insufficiently wide viewing angle (36°); the battery life and robustness of the deployments; the high cost of deployment; and the analysis of large volumes of often unsteady video footage. This study demonstrates that dog-borne video cameras are a feasible technology for estimating and characterising contacts between FRDDs. Modifying camera specifications and developing new analytical methods will improve the applicability of this technology for monitoring FRDD populations, providing insights into dog-to-dog contacts and therefore into how disease might spread within these populations.