SciCombinator

Discover the most talked about and latest scientific content & concepts.

Concept: Camera

250

We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high-speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft.

Concepts: Animal, Bird, Introductory physics, Projectile, Camera, Parrot, Animals, Traffic Collision Avoidance System
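The two avoidance rules from the abstract can be sketched as code. This is a minimal illustration under assumed names (`avoidance_maneuver`, the 15° veer magnitude, and the `'up'`/`'down'` preference encoding are all hypothetical, not from the paper):

```python
def avoidance_maneuver(heading_deg: float, altitude_pref: str) -> tuple:
    """Return (new_heading_deg, climb_direction) for a head-on encounter.

    Rule (a): veer to the right.
    Rule (b): change altitude according to a preset preference
    ('up' or 'down'), encoded here as +1 / -1.
    """
    VEER_ANGLE = 15.0  # illustrative veer magnitude, not from the paper
    new_heading = (heading_deg + VEER_ANGLE) % 360.0  # veer right
    climb = 1 if altitude_pref == "up" else -1        # preset preference
    return new_heading, climb

# Two agents approaching head-on with different altitude preferences
# end up laterally and vertically separated:
a = avoidance_maneuver(0.0, "up")
b = avoidance_maneuver(180.0, "down")
```

Because both rules are fixed and symmetric, neither agent needs to communicate with or predict the other, which is what makes them attractive for simple automated guidance.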

225

Ultrafast video recording of spatiotemporal light distribution in a scattering medium has a significant impact in biomedicine. Although many simulation tools have been implemented to model light propagation in scattering media, existing experimental instruments still lack sufficient imaging speed to record transient light-scattering events in real time. We report single-shot ultrafast video recording of a light-induced photonic Mach cone propagating in an engineered scattering plate assembly. This dynamic light-scattering event was captured in a single camera exposure by lossless-encoding compressed ultrafast photography at 100 billion frames per second. Our experimental results are in excellent agreement with theoretical predictions by time-resolved Monte Carlo simulation. This technology holds great promise for next-generation biomedical imaging instrumentation.

Concepts: Optics, Simulation, Monte Carlo, Monte Carlo method, Monte Carlo methods in finance, Camera, Video, NTSC
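The time-resolved Monte Carlo simulation mentioned above works by propagating photons through random exponential free paths between scattering events and recording transit times. A crude single-photon sketch, with all parameter values and function names purely illustrative (not the paper's model), might look like:

```python
import math
import random


def photon_transit_time_ps(mu_s=10.0, n=1.33, slab_mm=1.0, rng=random):
    """Propagate one photon through an isotropically scattering slab and
    return its total transit time in picoseconds.

    mu_s: scattering coefficient per mm; n: refractive index.
    Illustrative sketch only, not the paper's simulation.
    """
    c_mm_per_ps = 0.2998 / n              # speed of light in the medium
    z, path, cos_theta = 0.0, 0.0, 1.0    # launched straight into the slab
    while 0.0 <= z < slab_mm:             # until the photon exits either face
        step = -math.log(1.0 - rng.random()) / mu_s  # exponential free path
        z += step * cos_theta
        path += step
        cos_theta = 2.0 * rng.random() - 1.0         # isotropic rescattering
    return path / c_mm_per_ps


random.seed(0)
times = [photon_transit_time_ps() for _ in range(1000)]
```

Histogramming `times` gives the temporal point-spread function that such simulations compare against time-resolved measurements.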

166

BACKGROUND: In recent years, several methods and devices have been proposed to record human mandibular movements, since they provide quantitative parameters that support the diagnosis and treatment of temporomandibular disorders. The techniques currently employed suffer from a number of drawbacks, including high price, unnatural use, lack of support for real-time analysis, and recording of mandibular movements as a pure rotation. In this paper, we propose a specialized optical motion capture system, which causes minimal obstruction and can support 3D mandibular movement analysis in real-time. METHODS: We used three infrared cameras together with nine reflective markers that were placed at key points of the face. Classical techniques are used for camera calibration and three-dimensional reconstruction, and we propose specialized algorithms to automatically recognize our set of markers and track them along a motion capture session. RESULTS: To test the system, we developed a prototype software and performed a clinical experiment in a group of 22 subjects. They were instructed to execute several movements for the functional evaluation of the mandible while the system was employed to record them. The acquired parameters and the reconstructed trajectories were used to confirm the typical function of the temporomandibular joint in some subjects and to highlight its abnormal behavior in others. CONCLUSIONS: The proposed system is an alternative to the existing optical, mechanical, electromagnetic and ultrasonic-based methods, and intends to address some drawbacks of currently available solutions. Its main goal is to assist specialists in the diagnosis and treatment of temporomandibular disorders, since simple visual inspection may not be sufficient for a precise assessment of the temporomandibular joint and associated muscles.

Concepts: Mandible, Joint, Motion capture, Temporomandibular joint, Temporomandibular joint disorder, Camera, Masseteric nerve

106

New Caledonian crows are renowned for their unusually sophisticated tool behaviour. Despite decades of fieldwork, however, very little is known about how they make and use their foraging tools in the wild, which is largely owing to the difficulties in observing these shy forest birds. To obtain the first estimates of activity budgets, as well as close-up observations of tool-assisted foraging, we equipped 19 wild crows with self-developed miniature video cameras, yielding more than 10 h of analysable video footage for 10 subjects. While only four crows used tools during recording sessions, they did so extensively: across all 10 birds, we conservatively estimate that tool-related behaviour occurred in 3% of total observation time, and accounted for 19% of all foraging behaviour. Our video-loggers provided the first footage of crows manufacturing, and using, one of their most complex tool types, hooked stick tools, under completely natural foraging conditions. We recorded manufacture from live branches of paperbark (Melaleuca sp.) and another tree species (thought to be Acacia spirorbis), and deployment of tools in a range of contexts, including on the forest floor. Taken together, our video recordings reveal an ‘expanded’ foraging niche for hooked stick tools, and highlight more generally how crows routinely switch between tool- and bill-assisted foraging.

Concepts: Scientific method, Observation, Recording, Hypothesis, Manufacturing, Media technology, Camera, Tool
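Activity-budget figures like the 3% and 19% quoted above come from aggregating time-stamped behaviour annotations over the total footage. A minimal sketch, assuming a hypothetical `(category, start_s, end_s)` annotation format (the event data below is invented, not the study's):

```python
def activity_budget(events, total_s):
    """Fraction of total observation time per behaviour category,
    aggregated from time-stamped video annotations.

    events: list of (category, start_s, end_s) tuples.
    """
    totals = {}
    for category, start_s, end_s in events:
        totals[category] = totals.get(category, 0.0) + (end_s - start_s)
    return {c: t / total_s for c, t in totals.items()}


# Illustrative annotations, not the study's data:
events = [("tool_foraging", 0, 30),
          ("bill_foraging", 40, 160),
          ("rest", 200, 800)]
budget = activity_budget(events, total_s=1000.0)
```

The share of foraging done with tools then falls out as `tool_foraging / (tool_foraging + bill_foraging)`.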

58

Two studies examined whether photographing objects impacts what is remembered about them. Participants were led on a guided tour of an art museum and were directed to observe some objects and to photograph others. Results showed a photo-taking-impairment effect: If participants took a photo of each object as a whole, they remembered fewer objects and remembered fewer details about the objects and the objects' locations in the museum than if they instead only observed the objects and did not photograph them. However, when participants zoomed in to photograph a specific part of the object, their subsequent recognition and detail memory was not impaired, and, in fact, memory for features that were not zoomed in on was just as strong as memory for features that were zoomed in on. This finding highlights key differences between people’s memory and the camera’s “memory” and suggests that the additional attentional and cognitive processes engaged by this focused activity can eliminate the photo-taking-impairment effect.

Concepts: Psychology, Cognitive psychology, Cognition, Memory, Object, Photography, Camera, Photograph

37

Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 × 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced-choice (AFC) paradigm, and short 2-4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.

Concepts: Psychology, Visual perception, Prosthetics, Book of Optics, Wearable computer, Blindness, Camera, Braille
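The mapping from a braille letter to the subset of six electrodes to stimulate can be sketched directly. The dot patterns below follow standard braille cell numbering (dots 1-3 in the left column, 4-6 in the right), but the electrode assignment `GRID` is a hypothetical placement on the 10 × 6 array, not the one used in the study:

```python
# Standard braille dot patterns for a few letters (dots numbered 1-6,
# arranged as a 3-row x 2-column cell):
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "l": {1, 2, 3},
}


def electrodes_for_letter(letter, grid):
    """Return the sorted subset of the six chosen electrodes to stimulate
    for one braille letter.

    grid: dict mapping dot number -> electrode id on the 10 x 6 array
    (a hypothetical assignment, not the study's).
    """
    return sorted(grid[d] for d in BRAILLE[letter])


# Hypothetical 3 x 2 block of electrodes, indexed (row, col):
GRID = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (0, 1), 5: (1, 1), 6: (2, 1)}
```

Stimulating the returned electrode group (bypassing the camera, as in the study) would then evoke the percept of a single braille letter.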

35

Pokémon GO is a location-based augmented reality game. Using GPS and the camera on a smartphone, the game requires players to travel in the real world to capture animated creatures called Pokémon. We examined the impact of Pokémon GO on physical activity (PA).

Concepts: Mind, Simulated reality, Reality, Virtual reality, Universe, 2000s American television series, Camera, Augmented reality

28

The accurate evaluation of crash causal factors can provide fundamental information for effective transportation policy, vehicle design, and driver education. Naturalistic driving (ND) data collected with multiple onboard video cameras and sensors provide a unique opportunity to evaluate risk factors during the seconds leading up to a crash. This paper uses a National Academy of Sciences-sponsored ND dataset comprising 905 injurious and property damage crash events, the magnitude of which allows the first direct analysis (to our knowledge) of causal factors using crashes only. The results show that crash causation has shifted dramatically in recent years, with driver-related factors (i.e., error, impairment, fatigue, and distraction) present in almost 90% of crashes. The results also definitively show that distraction is detrimental to driver safety, with handheld electronic devices having high use rates and risk.

Concepts: Data, Electronics, Driving, Crash, Driver's license, Driver's education, Camera, Crashing
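Risk figures of the kind reported above are typically expressed as odds ratios: the odds of a crash epoch containing a factor versus the odds in matched baseline (non-crash) driving epochs. A minimal sketch with invented counts (the numbers below are illustrative, not the paper's data):

```python
def odds_ratio(exposed_crash, exposed_baseline,
               unexposed_crash, unexposed_baseline):
    """Case-control style odds ratio for a risk factor: the odds of the
    factor being present in crash epochs versus baseline epochs.

    All counts here are illustrative, not the study's data.
    """
    return ((exposed_crash / exposed_baseline)
            / (unexposed_crash / unexposed_baseline))


# Hypothetical counts of crash and matched baseline epochs with and
# without handheld-device use:
or_handheld = odds_ratio(exposed_crash=90, exposed_baseline=30,
                         unexposed_crash=815, unexposed_baseline=970)
```

An odds ratio above 1 indicates elevated crash risk while the factor is present; confidence intervals would normally accompany the point estimate.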

27

Gait recognition can potentially provide a noninvasive and effective biometric authentication from a distance. However, the performance of gait recognition systems will suffer in real surveillance scenarios with multiple interacting individuals and where the camera is usually placed at a significant angle and distance from the floor. We present a methodology for view-invariant monocular 3-D human pose tracking in man-made environments in which we assume that observed people move on a known ground plane. First, we model 3-D body poses and camera viewpoints with a low dimensional manifold and learn a generative model of the silhouette from this manifold to a reduced set of training views. During the online stage, 3-D body poses are tracked using recursive Bayesian sampling conducted jointly over the scene’s ground plane and the pose-viewpoint manifold. For each sample, the homography that relates the corresponding training plane to the image points is calculated using the dominant 3-D directions of the scene, the sampled location on the ground plane and the sampled camera view. Each regressed silhouette shape is projected using this homographic transformation and is matched in the image to estimate its likelihood. Our framework successfully tracks 3-D human walking poses in a 3-D environment while exploring only a 4-D state space. In our experimental evaluation, we demonstrate the significant improvements of the homographic alignment over a commonly used similarity transformation and provide quantitative pose tracking results for the monocular sequences with a high perspective effect from the CAVIAR dataset.

Concepts: Dimension, Manifold, Track and field athletics, Camera, Gait analysis, Facial recognition system, Biometrics, Möbius transformation
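The homographic projection step above amounts to mapping 2-D points through a 3 × 3 matrix in homogeneous coordinates, followed by a perspective divide. A self-contained sketch (the function name and the example matrix are assumptions for illustration):

```python
def apply_homography(H, point):
    """Map a 2-D point through a 3 x 3 homography H in homogeneous
    coordinates, as in projecting a regressed silhouette into the image.

    H: 3 x 3 nested list; point: (x, y).
    """
    x, y = point
    v = [H[i][0] * x + H[i][1] * y + H[i][2] for i in range(3)]
    return (v[0] / v[2], v[1] / v[2])  # perspective divide


# A pure translation is the simplest homography: the point shifts
# without any perspective distortion.
H = [[1.0, 0.0, 2.0],
     [0.0, 1.0, -1.0],
     [0.0, 0.0, 1.0]]
```

Unlike the similarity transformation it is compared against in the paper, a general homography has a non-trivial third row and can therefore model the perspective foreshortening of a tilted ground plane.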

26

Diabetic foot ulcers represent a significant health issue. Currently, clinicians and nurses mainly base their wound assessment on visual examination of wound size and healing status, while the patients themselves seldom have an opportunity to play an active role. Hence, a more quantitative and cost-effective examination method that enables the patients and their caregivers to take a more active role in daily wound care potentially can accelerate wound healing, save travel costs and reduce healthcare expenses. Considering the prevalence of smartphones with a high-resolution digital camera, assessing wounds by analyzing images of chronic foot ulcers is an attractive option. In this paper, we propose a novel wound image analysis system implemented solely on the Android smartphone. The wound image is captured by the camera on the smartphone with the assistance of an image capture box. After that, the smartphone performs wound segmentation by applying the accelerated mean shift algorithm. Specifically, the outline of the foot is determined based on skin color, and the wound boundary is found using a simple connected region detection method. Within the wound boundary, the healing status is next assessed based on the red-yellow-black color evaluation model. Moreover, the healing status is quantitatively assessed based on trend analysis of time records for a given patient. Experimental results on wound images collected in UMass Memorial Health Center Wound Clinic (Worcester, MA) following an IRB (Institutional Review Board) approved protocol show that our system can be efficiently used to analyze the wound healing status with promising accuracy.

Concepts: Wound healing, Sociology, Wound, Institutional review board, Camera, Windows Mobile, Diabetic foot, Digital cameras
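The red-yellow-black evaluation model mentioned above labels wound tissue as granulation (red), slough (yellow), or necrotic (black) from pixel color. A crude sketch with illustrative RGB thresholds (the threshold values and function names are assumptions, not the paper's classifier):

```python
def classify_wound_pixel(r, g, b):
    """Label a wound pixel under the red-yellow-black model.

    Thresholds are illustrative only, not the paper's.
    """
    if r < 60 and g < 60 and b < 60:
        return "black"   # necrotic tissue
    if r > 150 and g > 120 and b < 100:
        return "yellow"  # slough
    if r > 120 and g < 100 and b < 100:
        return "red"     # granulation tissue
    return "other"


def healing_profile(pixels):
    """Fraction of wound pixels in each tissue class."""
    counts = {}
    for p in pixels:
        label = classify_wound_pixel(*p)
        counts[label] = counts.get(label, 0) + 1
    n = len(pixels)
    return {k: v / n for k, v in counts.items()}


# Four illustrative wound pixels:
pixels = [(200, 40, 40), (200, 40, 40), (180, 160, 60), (30, 30, 30)]
profile = healing_profile(pixels)
```

Tracking such a profile over a series of dated images is what enables the trend analysis of healing status described in the abstract.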