We investigated puppies' responsiveness to hand points differing in salience. Experiment 1 compared the performance of younger (8-week-old) and older (12-week-old) shelter pups in following pointing gestures. We hypothesized that older puppies would show better performance. Both groups followed the easy and moderate but not the difficult pointing cues. Surprisingly, the younger pups outperformed the older ones in following the moderate and difficult points. Investigation of the subjects' backgrounds revealed that significantly more younger pups had experience living in human homes than did the older pups. We therefore conducted a second experiment to isolate the variable of experience. We collected additional data from older pet pups living in human homes on the same three point types and compared their performance with that of the shelter pups from Experiment 1. The pups living in homes accurately followed all three pointing cues. When the two experienced groups were compared, the older pet pups outperformed the younger shelter pups, as predicted. When the two same-age groups differing in background experience were compared, the pups living in homes outperformed the shelter pups. A significant correlation was found between experience with humans and success in following less salient cues. The importance of ontogenetic learning in puppies' responsiveness to certain human social cues is discussed.
- Journal of Experimental Psychology: Human Perception and Performance
Pointing gestures are a vital aspect of human communication. Nevertheless, observers consistently fail to determine the exact location to which another person points when that location lies in the distance. Here we explore the reasons for this misunderstanding. Humans usually point by extending the arm and finger. We show that observers interpret these gestures by nonlinear extrapolation of the pointer’s arm-finger line. The nonlinearity can be adequately described as the Bayesian-optimal integration of a linear extrapolation of the arm-finger line and observers' prior assumptions about likely referent positions. Surprisingly, the spatial rule describing the interpretation of pointing gestures differed from the rules describing the production of these gestures. In the latter case, the eye, index finger, and referent were aligned. We show that this difference between the production and interpretation of pointing gestures accounts for the systematic spatial misunderstanding of pointing gestures to distant referents. No evidence was found for the hypothesis that action-related processes are involved in the perception of pointing gestures. How participants interpreted pointing gestures was independent of how they produced these gestures and of whether they had practiced pointing movements beforehand. By contrast, both the production and interpretation of pointing gestures seem to be primarily determined by salient visual cues.
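The Bayesian-optimal integration described in this abstract is commonly formalized as a precision-weighted combination of two Gaussian estimates: the linear extrapolation of the arm-finger line and the prior over likely referent positions. The following is a minimal sketch of that standard formalization, not the authors' fitted model; all function names and numbers are illustrative.

```python
# Hedged sketch: Bayesian-optimal (precision-weighted) fusion of a
# linear-extrapolation estimate with a prior over referent positions,
# assuming both are Gaussian. Numbers are illustrative only.

def fuse_gaussian(mu_ext, var_ext, mu_prior, var_prior):
    """Posterior mean is the precision-weighted average of the two
    estimates; posterior variance is the inverse summed precision."""
    w_ext = 1.0 / var_ext        # precision of the extrapolation
    w_prior = 1.0 / var_prior    # precision of the prior
    mu_post = (w_ext * mu_ext + w_prior * mu_prior) / (w_ext + w_prior)
    var_post = 1.0 / (w_ext + w_prior)
    return mu_post, var_post

# Example: extrapolated arm-finger line indicates 10 m, prior expects 6 m.
mu, var = fuse_gaussian(10.0, 4.0, 6.0, 4.0)
print(mu, var)  # equal variances -> posterior mean halfway: 8.0 2.0
```

With equal variances the posterior lands midway between the two estimates; as the referent distance grows and the extrapolation becomes less reliable (larger `var_ext`), the posterior is pulled toward the prior, which is one way the nonlinearity in interpretation can arise.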
In everyday discourse, people describe and point at things, but they also depict things with their hands, arms, head, face, eyes, voice, and body, with and without props. Examples are iconic gestures, facial gestures, quotations of many kinds, full-scale demonstrations, and make-believe play. Depicting, it is argued, is a basic method of communication. It is on a par with describing and pointing, but it works by different principles. The proposal here, called staging theory, is that depictions are physical scenes that people stage for others to use in imagining the scenes they are depicting. Staging a scene is the same type of act that is used by children in make-believe play and by the cast and crew in stage plays. This theory accounts for a diverse set of features of everyday depictions. Although depictions are integral parts of everyday utterances, they are absent from standard models of language processing. To be complete, these models will have to account for depicting as well as describing and pointing.
Evidence from the literature indicates that dogs' choices can be influenced by human-delivered social cues, such as pointing, and pointing combined with facial expression, intonation (i.e., rising and falling voice pitch), and/or words. The present study used an object choice task to investigate whether intonation conveys unique information in the absence of other salient cues. We removed facial expression cues and speech information by delivering cues with the experimenter’s back to the dog and by using nonword vocalizations. During each trial, the dog was presented with pairs of the following three vocal cues: Positive (happy-sounding), Negative (sad-sounding), and Breath (neutral control). In Experiment 1, where dogs received only these vocal cue pairings, dogs preferred the Positive intonation, and there was no difference in choice behavior between the Negative and Breath cues. In Experiment 2, we included a point cue with one of the two vocal cues in each pairing. Here, dogs preferred containers receiving pointing cues as well as those receiving Negative intonation, and preference was greatest when both of these cues were presented together. Taken together, these findings indicate that dogs can indeed extract information from vocal intonation alone, and may use intonation as a social referencing cue. However, the effect of intonation on behavior appears to be strongly influenced by the presence of pointing, which is known to be a highly salient visual cue for dogs. It is possible that in the presence of a point cue, intonation may shift from informative to instructive.
We compared 24-month-old children’s learning when their exposure to words came either in an interactive (coupled) context or in a nonsocial (decoupled) context. We measured the children’s learning with two different methods: one in which they were asked to point to the referent for the experimenter, and the other a preferential looking task in which they were encouraged to look to the referent. In the pointing test, children chose the correct referents for words encountered in the coupled condition but not in the decoupled condition. In the looking time test, however, they looked to the targets regardless of condition. We explore possible explanations for this dissociation and propose that the two response measures reflect two different kinds of learning.
In this review, we aim to describe the results obtained in recent years on the dynamic features defining NF-κB regulatory functions, as we believe that these developments might have a transformative effect on the way in which NF-κB involvement in cancer is studied. We also describe technical aspects of the studies performed in this context, including the use of different cellular models, culture conditions, microscopy approaches, and quantification of the imaging data, weighing their strengths and limitations and pointing out common features and some open questions. Our emphasis on methodology allows a critical overview of the literature and shows how these cutting-edge approaches can contribute to shedding light on the involvement of NF-κB deregulation in tumour onset and progression. We hypothesize that this “dynamic point of view” can be fruitfully applied to untangle the complex relationship between NF-κB and cancer and to find new targets to restrain cancer growth.
A laser-induced acoustic desorption source, developed for use at central facilities, such as free-electron lasers, is presented. It features prolonged measurement times and a fixed interaction point. A novel sample deposition method using aerosol spraying provides uniform sample coverage and hence stable signal intensity. Using strong-field ionization as a universal detection scheme, we characterize the produced molecular plume in terms of number density, spatial extent, fragmentation, temporal distribution, translational velocity, and translational temperature. The effect of desorption laser intensity on these plume properties is evaluated. While translational velocity is invariant across desorption laser intensities, pointing to a non-thermal desorption mechanism, the translational temperature increases significantly and higher fragmentation is observed with increased desorption laser fluence.
Evidence concerning the representation of space by blind individuals is still unclear: sometimes blind people behave as sighted people do, while at other times they show difficulties. A better understanding of blind people’s difficulties, especially with reference to the strategies used to form a representation of the environment, may help to enhance knowledge of the consequences of the absence of vision. The present study examined the representation of the locations of landmarks of a real town by using pointing tasks that entailed either allocentric points of reference with mental rotations of different degrees, or contra-aligned representations. Results showed that, in general, people had difficulty when they had to point from a different perspective to aligned landmarks or from the original perspective to contra-aligned landmarks, and this difficulty was particularly evident for the blind participants. Examination of the strategies adopted to perform the tasks showed that only a small group of blind participants used a survey strategy, and that this group performed better than participants who adopted route or verbal strategies. Implications for understanding the consequences of the absence of visual experience for spatial cognition are discussed, focusing in particular on conceivable interventions.
Finger pointing is a natural human behavior frequently used to draw attention to specific parts of sensory input. Since this pointing behavior is likely preceded and/or accompanied by the deployment of attention by the pointing person, we hypothesize that pointing can be used as a natural means of providing self-reports of attention and, in the case of visual input, visual salience. We here introduce a new method for assessing attentional choice by asking subjects to point to and tap the first place they look at on an image appearing on an electronic tablet screen. Our findings show that the tap data are well-correlated with other measures of attention, including eye fixations and selections of interesting image points, as well as with predictions of a saliency map model. We also develop an analysis method for comparing attentional maps (including fixations, reported points of interest, finger pointing, and computed salience) that takes into account the error in estimating those maps from a finite number of data points. This analysis strengthens our original findings by showing that the measured correlation between attentional maps drawn from identical underlying processes is systematically underestimated. The underestimation is strongest when the number of samples is small but it is always present. Our analysis method is not limited to data from attentional paradigms but, instead, it is broadly applicable to measures of similarity made between counts of multinomial data or probability distributions.
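The comparison of attentional maps described above hinges on estimating a correlation from a finite number of sampled points (taps, fixations), which is exactly where the underestimation the authors analyze arises. The following is a minimal illustrative sketch, not the authors' exact analysis method: it bins two sets of sampled (x, y) points into grid-cell count maps and correlates them; even when both samples come from the same underlying process, the measured correlation falls short of 1 because of sampling noise. All names and parameters are assumptions for illustration.

```python
# Hedged sketch (not the study's exact pipeline): correlate two
# attentional maps built from finite samples of (x, y) points.
import random

def to_map(points, grid=8):
    """Bin points with coordinates in [0, 1) into a grid of counts,
    returned flattened so it can be correlated cell-by-cell."""
    counts = [[0] * grid for _ in range(grid)]
    for x, y in points:
        counts[int(y * grid)][int(x * grid)] += 1
    return [c for row in counts for c in row]

def pearson(a, b):
    """Plain Pearson correlation between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

random.seed(0)
# Two independent samples drawn from the SAME "salient" region:
pts1 = [(random.gauss(0.5, 0.1) % 1, random.gauss(0.5, 0.1) % 1)
        for _ in range(200)]
pts2 = [(random.gauss(0.5, 0.1) % 1, random.gauss(0.5, 0.1) % 1)
        for _ in range(200)]
r = pearson(to_map(pts1), to_map(pts2))
print(0.0 < r < 1.0)  # same underlying process, yet r < 1 from sampling noise
```

Shrinking the sample size from 200 toward a handful of points drives `r` further below 1, which is the small-sample bias the abstract's analysis method corrects for.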
According to Event Segmentation Theory (EST), adult memory is enhanced at event boundaries (EBs). The present study set out to explore this in infancy. Sixty-eight 21-month-olds watched a cartoon with one of two objects (counterbalanced) inserted for 3 s either at an EB or between EBs. Ten minutes later they watched both objects (familiar and novel) in a 10-s Visual Paired Comparison (VPC) test while being eye-tracked. They were then asked to point to the previously seen object. Based on EST, we hypothesized that objects inserted at EBs would be processed more fully, resulting in improved memory compared with objects inserted between EBs. Only infants presented with objects at EBs exhibited memory, evidenced by a transient familiarity preference during the first 3 s of the test. Only 18 infants completed the pointing test, but all infants presented with objects at EBs (10/10) pointed to the correct (familiar) object, which was not the case for infants presented with objects between EBs (5/8).