- Proceedings of the National Academy of Sciences of the United States of America
- Published over 5 years ago
Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period, suggest that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337:a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others' positive experiences constitutes a positive experience for people.
Conspiracist ideation has been repeatedly implicated in the rejection of scientific propositions, although empirical evidence to date has been sparse. A recent study involving visitors to climate blogs found that conspiracist ideation was associated with the rejection of climate science and the rejection of other scientific propositions such as the link between lung cancer and smoking, and between HIV and AIDS (Lewandowsky et al., in press; LOG12 from here on). This article analyses the response of the climate blogosphere to the publication of LOG12. We identify and trace the hypotheses that emerged in response to LOG12 and that questioned the validity of the paper’s conclusions. Using established criteria to identify conspiracist ideation, we show that many of the hypotheses exhibited conspiratorial content and counterfactual thinking. For example, whereas hypotheses were initially narrowly focused on LOG12, some ultimately grew in scope to include actors beyond the authors of LOG12, such as university executives, a media organization, and the Australian government. The overall pattern of the blogosphere’s response to LOG12 illustrates the possible role of conspiracist ideation in the rejection of science, although alternative scholarly interpretations may be advanced in the future.
- Journal of the Royal Society, Interface / the Royal Society
- Published over 6 years ago
The study of social identity and crowd psychology looks at how and why individual people change their behaviour in response to others. Within a group, a new behaviour can emerge first in a few individuals before it spreads rapidly to all other members. A number of mathematical models have been hypothesized to describe these social contagion phenomena, but these models remain largely untested against empirical data. We used Bayesian model selection to test between various hypotheses about the spread of a simple social behaviour, applause after an academic presentation. Individuals' probability of starting clapping increased in proportion to the number of other audience members already ‘infected’ by this social contagion, regardless of their spatial proximity. The cessation of applause is similarly socially mediated, but is controlled to a lesser degree by the reluctance of individuals to clap too many times. We also found consistent differences between individuals in their willingness to start and stop clapping. The social contagion model arising from our analysis predicts that the time the audience spends clapping can vary considerably, even in the absence of any differences in the quality of the presentations they have heard.
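The contagion dynamic described above - each silent audience member's chance of starting to clap grows with the number already clapping, while clappers stop at their own rate - can be sketched as a toy simulation. This is a minimal illustration of that class of model, not the fitted model from the study; the function name, rate parameters, and their values are all illustrative assumptions.

```python
import random

def simulate_applause(n=50, start_rate=0.02, social_rate=0.1,
                      stop_rate=0.05, steps=200, seed=0):
    """Toy simulation of socially contagious applause.

    Each silent audience member starts clapping with probability
    proportional to the fraction already clapping (plus a small
    spontaneous rate); clappers stop at a constant rate. All rates
    are illustrative, not fitted values from the study.
    """
    rng = random.Random(seed)
    clapping = [False] * n
    history = []  # number clapping at each time step
    for _ in range(steps):
        n_clapping = sum(clapping)
        for i in range(n):
            if not clapping[i]:
                # Social pressure: more clappers, higher start probability.
                p = start_rate + social_rate * n_clapping / n
                if rng.random() < min(p, 1.0):
                    clapping[i] = True
            elif rng.random() < stop_rate:
                clapping[i] = False
        history.append(sum(clapping))
    return history

history = simulate_applause()
peak = max(history)
```

Rerunning with different seeds (or presentation-independent parameters) shows how the total clapping time can vary widely even when the underlying rates are identical, echoing the paper's prediction.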
Willingness to lay down one’s life for a group of non-kin, well documented historically and ethnographically, represents an evolutionary puzzle. Building on research in social psychology, we develop a mathematical model showing how conditioning cooperation on previous shared experience can allow individually costly pro-group behavior to evolve. The model generates a series of predictions that we then test empirically in a range of special sample populations (including military veterans, college fraternity/sorority members, football fans, martial arts practitioners, and twins). Our empirical results show that sharing painful experiences produces “identity fusion” - a visceral sense of oneness - which in turn can motivate self-sacrifice, including willingness to fight and die for the group. Practically, our account of how shared dysphoric experiences produce identity fusion helps us better understand such pressing social issues as suicide terrorism, holy wars, sectarian violence, gang-related violence, and other forms of intergroup conflict.
For infants, the first problem in learning a word is to map the word to its referent; a second problem is to remember that mapping when the word and/or referent are again encountered. Recent infant studies suggest that spatial location plays a key role in how infants solve both problems. Here we provide a new theoretical model and new empirical evidence on how the body - and its momentary posture - may be central to these processes. The present study uses a name-object mapping task in which names are either encountered in the absence of their target (experiments 1-3, 6 & 7), or when their target is present but in a location previously associated with a foil (experiments 4, 5, 8 & 9). A humanoid robot model (experiments 1-5) is used to instantiate and test the hypothesis that body-centric spatial location, and thus the body's momentary posture, is used to centrally bind the multimodal features of heard names and visual objects. The robot model is shown to replicate existing infant data and then to generate novel predictions, which are tested in new infant studies (experiments 6-9). Despite spatial location being task-irrelevant in this second set of experiments, infants use body-centric spatial contingency over temporal contingency to map the name to the object. Both infants and the robot remember the name-object mapping even in new spatial locations. However, the robot model shows how this memory can emerge - not from separating bodily information from the word-object mapping, as proposed in previous models of the role of space in word-object mapping - but through the body's momentary disposition in space.
Increasing an individual’s awareness and understanding of their dietary habits and reasons for eating may facilitate positive dietary changes. Mobile technologies allow individuals to record diet-related behavior in real time from any location; however, the most popular software applications lack empirical evidence supporting their efficacy as health promotion tools.
Interventions of central, top-down planning seriously limit the possibility of modelling the dynamics of cities. An example is the city of Paris (France), which during the 19th century experienced large modifications supervised by a central authority, the ‘Haussmann period’. In this article, we report an empirical analysis of more than 200 years (1789-2010) of the evolution of the street network of Paris. We show that the usual network measures display a smooth behavior and that the most important quantitative signatures of central planning are the spatial reorganization of centrality and the modification of the block shape distribution. Such effects can only be obtained by structural modifications at a large-scale level, with the creation of new roads not constrained by the existing geometry. The evolution of a city thus seems to result from the superimposition of continuous, local growth processes and punctual changes operating at large spatial scales.
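The notion of centrality in a street network, and how a new road can reorganize it, can be illustrated on a toy graph. The sketch below computes closeness centrality (one standard network measure; the paper's analysis uses more elaborate measures on the real Paris street network) for intersections of a small grid, treating intersections as nodes and street segments as unweighted edges. The function names and grid size are illustrative assumptions.

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality: (number of other reachable nodes) divided
    by the sum of shortest-path distances to them, via unweighted BFS."""
    dist = {node: 0}
    queue = deque([node])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

def grid_graph(n):
    """Toy n-by-n grid 'street network': nodes are intersections,
    edges join orthogonally adjacent intersections."""
    g = {(i, j): [] for i in range(n) for j in range(n)}
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):
                if i + di < n and j + dj < n:
                    g[(i, j)].append((i + di, j + dj))
                    g[(i + di, j + dj)].append((i, j))
    return g

g = grid_graph(4)
corner = closeness(g, (0, 0))   # peripheral intersection
centre = closeness(g, (1, 1))   # interior intersection
```

On this grid, interior intersections are more central than corners; adding a long diagonal edge (a 'new avenue' cutting across the existing geometry) shifts which nodes are most central, which is the kind of spatial reorganization of centrality the abstract attributes to Haussmann-era planning.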
BACKGROUND: Threatening health messages that focus on severity are popular, but frequently have no effect or even a counterproductive effect on behavior change. This paradox (i.e. wide application despite low effectiveness) may be partly explained by the intuitive appeal of threatening communication: it may be hard to predict the defensive reactions occurring in response to fear appeals. We examine this hypothesis by using two studies by Brown and colleagues, which provide evidence that threatening health messages in the form of distressing imagery in anti-smoking and anti-alcohol campaigns cause defensive reactions. METHODS: We simulated both Brown et al. experiments, asking participants to estimate the reactions of the original study subjects to the threatening health information (n = 93). Afterwards, we presented the actual original study outcomes. One week later, we assessed whether this knowledge of the actual study outcomes helped participants to more successfully estimate the effectiveness of the threatening health information (n = 72). RESULTS: Results showed that participants were initially convinced of the effectiveness of threatening health messages and were unable to anticipate the defensive reactions that in fact occurred. Furthermore, these estimates did not improve after the dynamics of threatening communication, as well as its actual effects, had been explained to participants. CONCLUSIONS: These findings are consistent with the hypothesis that the effectiveness of threatening health messages is intuitively appealing. What is more, providing empirical evidence against the use of threatening health messages has very little effect on this intuitive appeal.
Personal health records (PHRs) have emerged as an important tool with which patients can electronically communicate with their doctors and doctors’ offices. However, there is a lack of theoretical and empirical research on how patients perceive the PHR and on the differences in perceptions between users and non-users of the PHR.
Improvements in software and design and reduction in cost have made virtual reality (VR) a practical tool for immersive, three-dimensional (3D), multisensory experiences that distract patients from painful stimuli.