We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high-speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft.
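The two reported strategies amount to a pair of local rules that each agent can apply independently, without coordination. A minimal sketch in Python, with function names, veer angle, and climb step that are our own illustrative choices (the study reports behavioural strategies, not code):

```python
def head_on_avoidance(heading_deg, altitude, other_altitude,
                      prefers_high, veer_deg=15.0, climb_step=0.2):
    """Adjust one agent's (heading, altitude) in a head-on encounter:
    (a) always veer to the right;
    (b) move toward a preset altitude preference relative to the other agent.
    All parameter magnitudes are illustrative, not values from the study.
    """
    new_heading = (heading_deg + veer_deg) % 360.0  # rule (a): veer right
    if prefers_high:                                # rule (b): preset preference
        new_altitude = max(altitude, other_altitude) + climb_step
    else:
        new_altitude = min(altitude, other_altitude) - climb_step
    return new_heading, new_altitude

# Two agents approaching head-on with opposite altitude preferences
# separate both laterally and vertically.
h1, a1 = head_on_avoidance(0.0, 10.0, 10.0, prefers_high=True)
h2, a2 = head_on_avoidance(180.0, 10.0, 10.0, prefers_high=False)
```

Because each rule uses only locally available information, the same pair of conventions could in principle be adopted by simple autonomous vehicles.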
Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact on biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.
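The time-resolved Monte Carlo method referenced above tracks individual photon random walks through the scattering medium and records each photon's time of flight. The sketch below is a deliberately simplified 1-D version (isotropic scattering in a slab, no absorption, no anisotropy factor); all parameter names and values are illustrative assumptions, not the paper's simulation:

```python
import math
import random

def time_resolved_mc(n_photons, mus=10.0, c=0.3, slab=1.0, seed=1):
    """Toy time-resolved Monte Carlo: photons random-walk through a slab.
    mus: scattering coefficient [1/mm], c: speed of light in the medium
    [mm/ps], slab: thickness [mm]. Returns times of flight of transmitted
    photons. A real simulation would track 3-D direction vectors,
    absorption, and the scattering anisotropy g.
    """
    rng = random.Random(seed)
    arrival_times = []
    for _ in range(n_photons):
        z, mu_z, t = 0.0, 1.0, 0.0          # start at the front face, forward
        while 0.0 <= z < slab:
            step = -math.log(rng.random()) / mus  # free path ~ Exp(mus)
            z += mu_z * step                      # move along current direction
            t += step / c                         # accumulate time of flight
            mu_z = rng.uniform(-1.0, 1.0)         # isotropic re-scatter
        if z >= slab:
            arrival_times.append(t)               # transmitted photon
    return arrival_times

times = time_resolved_mc(300)
```

Binning `times` into a histogram gives the transient transmittance that such a simulation compares against the measured light-scattering movie.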
BACKGROUND: In recent years, several methods and devices have been proposed to record human mandibular movements, since they provide quantitative parameters that support the diagnosis and treatment of temporomandibular disorders. The techniques currently employed suffer from a number of drawbacks, including high cost, unnatural operation, lack of support for real-time analysis, and the recording of mandibular movements as a pure rotation. In this paper, we propose a specialized optical motion capture system that causes minimal obstruction and supports 3D mandibular movement analysis in real time. METHODS: We used three infrared cameras together with nine reflective markers placed at key points of the face. Classical techniques are employed for camera calibration and three-dimensional reconstruction, and we propose specialized algorithms to automatically recognize our set of markers and track them throughout a motion capture session. RESULTS: To test the system, we developed prototype software and performed a clinical experiment on a group of 22 subjects. They were instructed to execute several movements for the functional evaluation of the mandible while the system recorded them. The acquired parameters and the reconstructed trajectories were used to confirm typical temporomandibular joint function in some subjects and to highlight abnormal behavior in others. CONCLUSIONS: The proposed system is an alternative to existing optical, mechanical, electromagnetic and ultrasonic methods, and addresses some drawbacks of currently available solutions. Its main goal is to assist specialists in the diagnosis and treatment of temporomandibular disorders, since simple visual inspection may not be sufficient for a precise assessment of the temporomandibular joint and associated muscles.
This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. OIS is now an important feature of commercial camera phones; it aims to mechanically reduce the image blur caused by hand shaking while shooting photos. The OIS developed in this study moves the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for image blur caused by hand shaking. The proposed compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, and then designing a simple lead-lag controller based on these EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation shows the favorable performance of the designed OIS: it stabilizes the lens holder to the desired position within 0.02 s, much less than previously reported times of around 0.1 s. The resulting residual vibration is less than 2.2-2.5 μm, commensurate with the very small pixel sizes found in most commercial image sensors, thus significantly reducing image blur caused by hand shaking.
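As a rough illustration of what a digitally implemented lead-lag compensator looks like, the sketch below discretizes a generic first-order lead-lag transfer function C(s) = K(s + z)/(s + p) with the Tustin (bilinear) transform. The gains and sample time are placeholders, not the paper's values, and a real FPGA implementation would typically use fixed-point rather than floating-point arithmetic:

```python
class LeadLag:
    """First-order lead-lag compensator C(s) = K*(s + z)/(s + p),
    discretized at sample time T via the Tustin transform
    s -> (2/T)*(1 - q)/(1 + q), where q is the unit delay."""

    def __init__(self, K, z, p, T):
        c = 2.0 / T
        den = c + p
        self.b0 = K * (c + z) / den
        self.b1 = K * (z - c) / den
        self.a1 = (p - c) / den   # u[k] = -a1*u[k-1] + b0*e[k] + b1*e[k-1]
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, e):
        """Advance the difference equation by one sample."""
        u = -self.a1 * self.u_prev + self.b0 * e + self.b1 * self.e_prev
        self.e_prev, self.u_prev = e, u
        return u

# Placeholder gains: for a constant error, the output settles to the
# compensator's DC gain K*z/p (here 1.0 * 10 / 100 = 0.1).
ctrl = LeadLag(K=1.0, z=10.0, p=100.0, T=0.001)
u = 0.0
for _ in range(3000):
    u = ctrl.step(1.0)
```

The recurrence needs only one multiply-accumulate chain per sample, which is what makes such a controller cheap to compute on an FPGA at high sample rates.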
New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, largely owing to the difficulty of observing these shy forest birds. To obtain first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time and accounted for 19% of all foraging behaviour. Our video-loggers provided the first footage of crows manufacturing, and using, one of their most complex tool types (hooked stick tools) under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an 'expanded' foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging.
Ingestion of artificial debris is considered a significant stressor for wildlife, including sea turtles. To investigate how turtles react to artificial debris under natural conditions, we deployed animal-borne video cameras on loggerhead and green turtles, in addition to analysing feces and gut contents, from 2007 to 2015. The frequencies of occurrence of artificial debris in feces and gut contents collected from loggerhead turtles were 35.7% (10/28) and 84.6% (11/13), respectively. Artificial debris appeared in the feces (25/25) and gut contents (10/10) of all green turtles, and green turtles ingested more debris (feces: 15.8 ± 33.4 g; gut: 39.8 ± 51.2 g) than loggerhead turtles (feces: 1.6 ± 3.7 g; gut: 9.7 ± 15.0 g). In the video records (60 and 52.5 hours from 10 loggerhead and 6 green turtles, respectively), turtles encountered 46 items of artificial debris and ingested 23 of them. The encounter-ingestion ratio of artificial debris in green turtles (61.8%) was significantly higher than that in loggerhead turtles (16.7%). Loggerhead turtles frequently fed on gelatinous prey (78/84), whereas green turtles mainly fed on marine algae (156/210) and only partly consumed gelatinous prey (10/210). Turtles appeared to confuse solitary drifting debris with their prey, and omnivorous green turtles were more attracted to artificial debris.
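The encounter-ingestion comparison can be checked arithmetically. The per-species encounter totals below (34 green, 12 loggerhead) are our reconstruction from the reported percentages and the overall 46/23 counts, not figures stated in the abstract, and the significance test is a generic Fisher exact test rather than necessarily the authors' method:

```python
from math import comb

# Inferred counts: 61.8% ~ 21/34 (green), 16.7% = 2/12 (loggerhead);
# they sum to the reported 23 ingestions out of 46 encounters.
green_ingested, green_enc = 21, 34
logg_ingested, logg_enc = 2, 12

green_ratio = green_ingested / green_enc   # ~0.618
logg_ratio = logg_ingested / logg_enc      # ~0.167

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test on the 2x2 table [[a, b], [c, d]],
    summing all hypergeometric outcomes no more likely than the observed."""
    n, row1, col1 = a + b + c + d, a + b, a + c
    def hyper(x):
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = hyper(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(hyper(x) for x in range(lo, hi + 1)
               if hyper(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(green_ingested, green_enc - green_ingested,
                           logg_ingested, logg_enc - logg_ingested)
```

Under these reconstructed counts the species difference is indeed significant at the conventional 0.05 level.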
Two studies examined whether photographing objects impacts what is remembered about them. Participants were led on a guided tour of an art museum and were directed to observe some objects and to photograph others. Results showed a photo-taking-impairment effect: If participants took a photo of each object as a whole, they remembered fewer objects and remembered fewer details about the objects and the objects' locations in the museum than if they instead only observed the objects and did not photograph them. However, when participants zoomed in to photograph a specific part of the object, their subsequent recognition and detail memory were not impaired, and, in fact, memory for features that were not zoomed in on was just as strong as memory for features that were zoomed in on. This finding highlights key differences between people’s memory and the camera’s “memory” and suggests that the additional attentional and cognitive processes engaged by this focused activity can eliminate the photo-taking-impairment effect.
Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trials. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement, to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system used in this study includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In braille, individual letters are formed by a subset of dots from a 3 × 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were presented in an alternative forced-choice (AFC) paradigm, and short 2- to 4-letter words were presented (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter words, 60% of 3-letter words, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.
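The letter-to-electrode mapping described above can be sketched directly. The braille dot patterns below are the standard ones (dots 1-3 in the left column, 4-6 in the right), but the (row, col) electrode coordinates are hypothetical: the abstract does not specify which six of the 10 × 6 electrodes were used.

```python
# Standard braille dot patterns for a few letters (dot numbering:
# 1-3 top-to-bottom in the left column, 4-6 in the right column).
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5},
    "e": {1, 5}, "h": {1, 2, 5}, "i": {2, 4},
}

# Hypothetical (row, col) positions of the six chosen electrodes on the
# 10 x 6 array, indexed by braille dot number.
DOT_TO_ELECTRODE = {1: (2, 2), 2: (3, 2), 3: (4, 2),
                    4: (2, 3), 5: (3, 3), 6: (4, 3)}

def electrodes_for_letter(letter):
    """Return the sorted list of electrodes to stimulate for one letter."""
    return sorted(DOT_TO_ELECTRODE[d] for d in BRAILLE[letter])
```

Stimulating each letter's electrode subset in sequence, one letter at a time, is what allows a word to be spelled out to the subject without involving the camera.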
Pokémon GO is a location-based augmented reality game. Using GPS and the camera on a smartphone, the game requires players to travel in the real world to capture animated creatures, called Pokémon. We examined the impact of Pokémon GO on physical activity (PA).
- Proceedings of the National Academy of Sciences of the United States of America
The accurate evaluation of crash causal factors can provide fundamental information for effective transportation policy, vehicle design, and driver education. Naturalistic driving (ND) data collected with multiple onboard video cameras and sensors provide a unique opportunity to evaluate risk factors during the seconds leading up to a crash. This paper uses a National Academy of Sciences-sponsored ND dataset comprising 905 injurious and property damage crash events, the magnitude of which allows the first direct analysis (to our knowledge) of causal factors using crashes only. The results show that crash causation has shifted dramatically in recent years, with driver-related factors (i.e., error, impairment, fatigue, and distraction) present in almost 90% of crashes. The results also definitively show that distraction is detrimental to driver safety, with handheld electronic devices having high use rates and risk.