Concept: Frame of reference
Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.
When using our arms to interact with the world, unintended body motion can introduce movement error. A mechanism that could detect and compensate for such motion would be beneficial. Observations of arm movements evoked by vestibular stimulation provide some support for this mechanism. However, the physiological function underlying these artificially evoked movements is unclear from previous research. For such a mechanism to be functional, it should operate only when the arm is being controlled in an earth-fixed rather than a body-fixed reference frame; in the latter case, compensation would be unnecessary and even deleterious. To test this hypothesis, subjects were gently rotated in a chair while asked to keep their outstretched arm pointing towards either earth-fixed (EF) or body-fixed (BF) memorised targets. Galvanic vestibular stimulation (GVS) was applied concurrently during rotation to isolate the influence of vestibular input, uncontaminated by inertial factors. During the EF task, GVS produced large polarity-dependent corrections in arm position. These corrections mimicked those evoked when chair velocity was altered without any GVS, indicating a compensatory arm response to a sensation of altered body motion. In stark contrast, corrections were completely absent during the BF task, despite the same chair movement profile and arm posture. These effects persisted when we controlled for differences in limb kinematics between the two tasks. Our results demonstrate that vestibular control of the upper limb maintains reaching accuracy during unpredictable body motion. The observation that such responses occurred only when reaching within an EF reference frame confirms the functional nature of vestibular-evoked arm movement.
This paper presents an autonomous vehicle localization method for local areas of coal mine tunnels based on vision sensors and ultrasonic sensors. Barcode tags are deployed in pairs on both sides of the tunnel walls at certain intervals as artificial landmarks. The barcode coding is designed based on the UPC-A code, and the global coordinates of the upper-left inner corner point of each tag's feature frame are uniquely represented by the barcode. Two on-board vision sensors recognize each pair of barcode tags on both sides of the tunnel walls, and the distance between each tag's upper-left inner corner point and the vehicle center point is determined using a visual distance projection model. On-board ultrasonic sensors measure the distance from the vehicle center point to the left tunnel wall. Once the spatial geometric relationship between the barcode tags and the vehicle center point is established, the 3D coordinates of the vehicle center point in the tunnel's global coordinate system can be calculated. Experiments in a straight corridor and an underground tunnel show that the proposed method not only quickly recognizes the barcode tags affixed to the tunnel walls, but also achieves relatively small average localization errors in the vehicle center point's plane and vertical coordinates, meeting the positioning requirements of autonomous unmanned vehicles in local areas of coal mine tunnels.
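As a minimal geometric sketch of the final step (not the paper's actual projection model), the following assumes the vision system recovers, for one decoded tag, a full displacement vector from the vehicle center to the tag's corner point in the vehicle frame (range plus bearing), and that the vehicle's yaw relative to the tunnel axis is known, e.g. from the paired ultrasonic wall distances. All names and numbers here are hypothetical.

```python
import numpy as np

def vehicle_center(tag_global, offset_vehicle, heading_rad):
    """Locate the vehicle center in the tunnel's global frame.

    tag_global     : (3,) global coordinates of the tag's upper-left inner
                     corner point, as decoded from the UPC-A-based barcode.
    offset_vehicle : (3,) displacement from the vehicle center to that corner,
                     expressed in the vehicle frame (assumed to come from the
                     visual distance projection model).
    heading_rad    : vehicle yaw relative to the tunnel axis (hypothetical
                     input, e.g. derived from ultrasonic wall distances).
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    # Yaw-only rotation taking vehicle-frame vectors into the global frame.
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    # Vehicle center = known corner position minus the rotated offset.
    return np.asarray(tag_global, dtype=float) - R @ np.asarray(offset_vehicle, dtype=float)

# Example: tag corner 120 m along the tunnel, on the left wall (y = 0), 2 m up;
# the corner is seen 3 m ahead, 1.5 m left, 1 m above the vehicle center.
p = vehicle_center([120.0, 0.0, 2.0], [3.0, 1.5, 1.0], 0.0)
# → array([117. , -1.5,  1. ])
```

With both tags of a pair decoded, the two independent estimates could be averaged or cross-checked, which is presumably part of why the tags are deployed in pairs.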
An experimental investigation of the near-field aerodynamics of wind-dispersed rotary seeds has been performed using stereoscopic digital particle image velocimetry (DPIV). The detailed three-dimensional (3D) flow structure of the leading-edge vortex (LEV) of autorotating mahogany seeds (Swietenia macrophylla) in a low-speed vertical wind tunnel is revealed for the first time. The results confirm that the presence of strong spanwise flow and strain produced by centrifugal forces through a spiral vortex is responsible for the attachment and stability of the LEV, with its core forming a cone pattern with a gradual increase in vortex size. The LEV appears at 25% of the wingspan, increases in size and strength outboard along the wing, and reaches its maximum stability and spanwise velocity at 75% of the wingspan. In a region between 90% and 100% of the wingspan, the strength and stability of the vortex core decrease and the LEV re-orientation/inflection with the tip vortex takes place. In this study, the instantaneous flow structure and the instantaneous velocity and vorticity fields measured in planes parallel to the free-stream direction are presented as contour plots using an inertial and a non-inertial frame of reference. Results for the mean aerodynamic thrust coefficients as a function of the Reynolds number are presented to supplement the DPIV data.
Manipulation of hand posture, such as crossing the hands, has been frequently used to study how the body and its immediately surrounding space are represented in the brain. Abundant data show that crossed arms posture impairs remapping of tactile stimuli from somatotopic to external space reference frame and deteriorates performance on several tactile processing tasks. Here we investigated how impaired tactile remapping affects the illusory self-touch, induced by the non-visual variant of the rubber hand illusion (RHI) paradigm. In this paradigm blindfolded participants (Experiment 1) had their hands either uncrossed or crossed over the body midline. The strength of illusory self-touch was measured with questionnaire ratings and proprioceptive drift. Our results showed that, during synchronous tactile stimulation, the strength of illusory self-touch increased when hands were crossed compared to the uncrossed posture. Follow-up experiments showed that the increase in illusion strength was not related to unfamiliar hand position (Experiment 2) and that it was equally strengthened regardless of where in the peripersonal space the hands were crossed (Experiment 3). However, while the boosting effect of crossing the hands was evident from subjective ratings, the proprioceptive drift was not modulated by crossed posture. Finally, in contrast to the illusion increase in the non-visual RHI, the crossed hand postures did not alter illusory ownership or proprioceptive drift in the classical, visuo-tactile version of RHI (Experiment 4). We argue that the increase in illusory self-touch is related to misalignment of somatotopic and external reference frames and consequently inadequate tactile-proprioceptive integration, leading to re-weighting of the tactile and proprioceptive signals. The present study not only shows that illusory self-touch can be induced by crossing the hands, but importantly, that this posture is associated with a stronger illusion.
- IEEE Transactions on Visualization and Computer Graphics
We present a method for large-scale geo-localization and global tracking of mobile devices in urban outdoor environments. In contrast to existing methods, we instantaneously initialize and globally register a SLAM map by localizing the first keyframe with respect to widely available untextured 2.5D maps. Given a single image frame and a coarse sensor pose prior, our localization method estimates the absolute camera orientation from straight line segments and the translation by aligning the city map model with a semantic segmentation of the image. We use the resulting 6DOF pose, together with information inferred from the city map model, to reliably initialize and extend a 3D SLAM map in a global coordinate system, applying a model-supported SLAM mapping approach. We show the robustness and accuracy of our localization approach on a challenging dataset, and demonstrate unconstrained global SLAM mapping and tracking of arbitrary camera motion on several sequences.
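To make the registration step concrete: once the absolute orientation (from straight line segments) and the translation (from aligning the city map model with the semantic segmentation) are available, the 6DOF pose is a rigid transform that maps SLAM landmarks from the first keyframe into the map's global frame. A hedged sketch with made-up values, not the paper's implementation:

```python
import numpy as np

def compose_pose(R_world_cam, t_world):
    """Build a 4x4 rigid transform (camera -> world) from a rotation and
    a translation estimated independently, as in the described method."""
    T = np.eye(4)
    T[:3, :3] = R_world_cam
    T[:3, 3] = t_world
    return T

def register_points(points_cam, T_world_cam):
    """Map SLAM landmarks from the keyframe's camera frame into the
    global coordinate system of the 2.5D city map."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ pts_h.T).T[:, :3]

# Example: a 90-degree yaw and a 10 m easting offset (illustrative numbers).
yaw = np.pi / 2
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
T = compose_pose(R, [10.0, 0.0, 0.0])
world = register_points(np.array([[1.0, 0.0, 0.0]]), T)
# → array([[10.,  1.,  0.]])
```

Because the pose is estimated against an absolute map rather than relative to earlier frames, every subsequent SLAM extension inherits global coordinates, which is what enables the "instantaneous" initialization the abstract describes.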
This paper presents a robust and fully automatic filter-based approach for retinal vessel segmentation. We propose new filters based on 3D rotating frames in so-called orientation scores, which are functions on the Lie-group domain of positions and orientations R^2 ⋊ S^1. By means of a wavelet-type transform, a 2D image is lifted to a 3D orientation score, where elongated structures are disentangled into their corresponding orientation planes. In the lifted domain R^2 ⋊ S^1, vessels are enhanced by means of multi-scale second-order Gaussian derivatives perpendicular to the line structures. More precisely, we use a left-invariant rotating derivative (LID) frame and a locally adaptive derivative (LAD) frame. The LAD is adaptive to the local line structures and is found by eigensystem analysis of the left-invariant Hessian matrix (computed with the LID). After multi-scale filtering via the LID or LAD in the orientation score domain, the results are projected back to the 2D image plane, giving us the enhanced vessels. Then a binary segmentation is obtained through thresholding. The proposed methods are validated on six retinal image datasets with different image types, on which competitive segmentation performances are achieved. In particular, the proposed algorithm of applying the LAD filter on orientation scores (LAD-OS) outperforms most of the state-of-the-art methods. The LAD-OS is capable of dealing with typically difficult cases like crossings, central arterial reflex, closely parallel and tiny vessels. The high computational speed of the proposed methods allows processing of large datasets in a screening setting.
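The lift-filter-project pipeline can be caricatured with plain oriented Gaussian derivative filters: decompose the image over a set of orientations, apply a second-order derivative perpendicular to each candidate line direction, and take the maximum over orientations before thresholding. This is a simplified stand-in, not the paper's left-invariant LID/LAD machinery, and all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def vessel_response(image, n_orientations=12, sigma=2.0):
    """Simplified orientation-decomposed line filter: for each sampled
    orientation, rotate the image so candidate lines become horizontal,
    take the second-order Gaussian derivative across the line direction,
    and rotate the response back. Max over orientations approximates the
    projection from the orientation-score domain back to the image plane."""
    responses = []
    for k in range(n_orientations):
        theta = 180.0 * k / n_orientations
        rot = rotate(image, theta, reshape=False, mode='nearest')
        # Second derivative along axis 0, i.e. perpendicular to a horizontal line.
        d2 = gaussian_filter(rot, sigma=sigma, order=(2, 0))
        # Sign chosen to highlight bright lines on a dark background;
        # flip it for dark retinal vessels on a bright fundus.
        responses.append(rotate(-d2, -theta, reshape=False, mode='nearest'))
    return np.max(responses, axis=0)

def segment(image, threshold):
    """Binary segmentation by thresholding the enhanced response."""
    return vessel_response(image) > threshold
```

The orientation decomposition is what lets crossings survive: two vessels crossing at different angles respond strongly in two different orientation channels instead of cancelling in a single isotropic filter, which is the intuition behind the paper's orientation scores.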
- The British Journal of Psychiatry: The Journal of Mental Science
Background: Service user (patient) involvement in care planning is a principle enshrined in mental health policy, yet it often attracts criticism from patients and carers in practice.
Aims: To examine how user-involved care planning is operationalised within mental health services and to establish where, how and why challenges to service user involvement occur.
Method: Systematic evidence synthesis.
Results: Synthesis of data from 117 studies suggests that service user involvement fails because the patients' frame of reference diverges from that of providers. Service users and carers attributed the highest value to the relational aspects of care planning. Health professionals inconsistently acknowledged the quality of the care planning process, tending instead to define service user involvement in terms of quantifiable service-led outcomes.
Conclusions: Service user-involved care planning is typically operationalised as a series of practice-based activities compliant with auditor standards. Meaningful involvement demands new patient-centred definitions of care planning quality. New organisational initiatives should validate time spent with service users and display more tangible and flexible commitments to meeting their needs.
Neurophysiological studies focus on memory retrieval as a reproduction of what was experienced and have established that neural discharge is replayed to express memory. However, cognitive psychology has established that recollection is not a verbatim replay of stored information. Recollection is constructive, the product of memory retrieval cues, the information stored in memory, and the subject’s state of mind. We discovered key features of constructive recollection embedded in the rat CA1 ensemble discharge during an active avoidance task. Rats learned two task variants, one with the arena stable, the other with it rotating; each variant defined a distinct behavioral episode. During the rotating episode, the ensemble discharge of CA1 principal neurons was dynamically organized to concurrently represent space in two distinct codes. The code for spatial reference frame switched rapidly between representing the rat’s current location in either the stationary spatial frame of the room or the rotating frame of the arena. The code for task variant switched less frequently between a representation of the current rotating episode and the stable episode from the rat’s past. The characteristics and interplay of these two hippocampal codes revealed three key properties of constructive recollection. (1) Although the ensemble representations of the stable and rotating episodes were distinct, ensemble discharge during rotation occasionally resembled the stable condition, demonstrating cross-episode retrieval of the representation of the remote, stable episode. (2) This cross-episode retrieval at the level of the code for task variant was more likely when the rotating arena was about to match its orientation in the stable episode. (3) The likelihood of cross-episode retrieval was influenced by preretrieval information that was signaled at the level of the code for spatial reference frame. 
Thus, key features of episodic recollection are manifest in rat hippocampal representations of space.
The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation, imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes.