People often discount evidence that contradicts their firmly held beliefs. However, little is known about the neural mechanisms that govern this behavior. We used neuroimaging to investigate the neural systems involved in maintaining belief in the face of counterevidence, presenting 40 liberals with arguments that contradicted their strongly held political and non-political views. Challenges to political beliefs produced increased activity in the default mode network, a set of interconnected structures associated with self-representation and disengagement from the external world. Trials with greater belief resistance showed increased response in the dorsomedial prefrontal cortex and decreased activity in the orbitofrontal cortex. We also found that participants who changed their minds more showed less BOLD signal in the insula and the amygdala when evaluating counterevidence. These results highlight the role of emotion in belief-change resistance and offer insight into the neural systems involved in belief maintenance, motivated reasoning, and related phenomena.
Proponents of Neuro-Linguistic Programming (NLP) claim that certain eye movements are reliable indicators of lying. According to this notion, a person looking up to their right suggests a lie, whereas looking up to their left is indicative of truth telling. Despite widespread belief in this claim, no previous research has examined its validity. In Study 1, the eye movements of participants who were lying or telling the truth were coded; they did not match the NLP patterning. In Study 2, one group of participants was told about the NLP eye-movement hypothesis whilst a second, control group was not. Both groups then undertook a lie detection test. No significant differences emerged between the two groups. Study 3 involved coding the eye movements of both liars and truth tellers taking part in high-profile press conferences. Once again, no significant differences were discovered. Taken together, the results of the three studies fail to support the claims of NLP. The theoretical and practical implications of these findings are discussed.
Though religion has been shown to have generally positive effects on normative ‘prosocial’ behavior, recent laboratory research suggests that these effects may be driven primarily by supernatural punishment. Supernatural benevolence, on the other hand, may actually be associated with less prosocial behavior. Here, we investigate these effects at the societal level, showing that the proportion of people who believe in hell negatively predicts national crime rates, whereas belief in heaven predicts higher crime rates. These effects remain after accounting for a host of covariates, and ultimately prove to be stronger predictors of national crime rates than economic variables such as GDP and income inequality. Expanding on laboratory research on religious prosociality, this is the first study to tie religious beliefs to large-scale cross-national trends in pro- and anti-social behavior.
Recent research into the psychology of conspiracy belief has highlighted the importance of belief systems in the acceptance or rejection of conspiracy theories. We examined a large sample of conspiracist (pro-conspiracy-theory) and conventionalist (anti-conspiracy-theory) comments on news websites in order to investigate the relative importance of promoting alternative explanations vs. rejecting conventional explanations for events. In accordance with our hypotheses, we found that conspiracist commenters were more likely to argue against the opposing interpretation and less likely to argue in favor of their own interpretation, while the opposite was true of conventionalist commenters. However, conspiracist comments were more likely to explicitly put forward an account than conventionalist comments were. In addition, conspiracists were more likely to express mistrust and made more positive and fewer negative references to other conspiracy theories. The data also indicate that conspiracists were largely unwilling to apply the “conspiracy theory” label to their own beliefs and objected when others did so, lending support to the long-held suggestion that conspiracy belief carries a social stigma. Finally, conventionalist arguments tended to have a more hostile tone. These tendencies in persuasive communication can be understood as a reflection of an underlying conspiracist worldview in which the details of individual conspiracy theories are less important than a generalized rejection of official explanations.
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
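The shortest-path idea can be sketched in a few lines. Below is a minimal, self-contained illustration, not the study's actual system: the mini-graph, the entity names, and the log-degree path cost are all illustrative assumptions. A claim (subject, object) scores highly when the two concepts are linked by a short path that avoids overly general hub nodes.

```python
import heapq
import math
from collections import defaultdict

# Hypothetical mini knowledge graph (undirected). Nodes and edges are
# illustrative stand-ins for a graph mined from a source such as
# Wikipedia infoboxes.
edges = [
    ("Barack Obama", "United States"),
    ("United States", "Washington, D.C."),
    ("Barack Obama", "Honolulu"),
    ("Honolulu", "United States"),
    ("United States", "Canada"),
    ("Canada", "Ottawa"),
]
graph = defaultdict(set)
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

def truth_score(graph, subject, obj):
    """Score the claim (subject, obj) by the cheapest connecting path.

    Entering an intermediate node costs log(degree), so paths through
    very general hub concepts are penalized: one plausible semantic
    proximity metric; the exact metric varies across implementations.
    """
    if obj in graph[subject]:
        return 1.0  # the claim is stated directly in the graph
    pq = [(0.0, subject)]          # Dijkstra frontier: (cost, node)
    best = {subject: 0.0}
    while pq:
        cost, node = heapq.heappop(pq)
        if node == obj:
            return 1.0 / (1.0 + cost)
        if cost > best.get(node, float("inf")):
            continue
        for nxt in graph[node]:
            # Reaching the target itself is free; intermediates cost log(degree).
            step = 0.0 if nxt == obj else math.log(len(graph[nxt]))
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(pq, (new_cost, nxt))
    return 0.0  # no connecting path: the graph offers no support

print(truth_score(graph, "Barack Obama", "Washington, D.C."))
print(truth_score(graph, "Barack Obama", "Ottawa"))
```

On this toy graph, the indirect but closely related claim linking Barack Obama to Washington, D.C. outscores the more distant pairing with Ottawa, mirroring the paper's finding that true statements receive higher support than false ones.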
Widespread misperceptions undermine citizens' decision-making ability. Conclusions based on falsehoods and conspiracy theories are by definition flawed. This article demonstrates that individuals' epistemic beliefs (beliefs about the nature of knowledge and how one comes to know) have important implications for perception accuracy. The present study uses a series of large, nationally representative surveys of the U.S. population to produce valid and reliable measures of three aspects of epistemic beliefs: reliance on intuition for factual beliefs (Faith in Intuition for facts), importance of consistency between empirical evidence and beliefs (Need for evidence), and conviction that “facts” are politically constructed (Truth is political). Analyses confirm that these factors complement established predictors of misperception, substantively increasing our ability to explain both individuals' propensity to engage in conspiracist ideation, and their willingness to embrace falsehoods about high-profile scientific and political issues. Individuals who view reality as a political construct are significantly more likely to embrace falsehoods, whereas those who believe that their conclusions must hew to available evidence tend to hold more accurate beliefs. Confidence in the ability to intuitively recognize truth is a uniquely important predictor of conspiracist ideation. Results suggest that efforts to counter misperceptions may be helped by promoting epistemic beliefs emphasizing the importance of evidence, cautious use of feelings, and trust that rigorous assessment by knowledgeable specialists is an effective guard against political manipulation.
Cognitive theories on deception posit that lying requires more cognitive resources than telling the truth. In line with this idea, it has been demonstrated that deceptive responses are typically associated with increased response times and higher error rates compared to truthful responses. Although the cognitive cost of lying has been assumed to be resistant to practice, it has recently been shown that people who are trained to lie can reduce this cost. In the present study (n = 42), we further explored the effects of practice on one’s ability to lie by manipulating the proportions of lie and truth trials in a Sheffield lie test across three phases: Baseline (50% lie, 50% truth), Training (frequent-lie group: 75% lie, 25% truth; control group: 50% lie, 50% truth; and frequent-truth group: 25% lie, 75% truth), and Test (50% lie, 50% truth). The results showed that lying became easier when participants were trained to lie more often, and more difficult when they were trained to tell the truth more often. Furthermore, these effects carried over to the test phase, but only for the specific items that were used for the training manipulation. Hence, our study confirms that relatively little practice is enough to alter the cognitive cost of lying, although this effect does not transfer to non-practiced items.
Science is facing a “replication crisis” in which many experimental findings cannot be replicated and are likely to be false. Does this imply that many scientific facts are false as well? To find out, we explore the process by which a claim becomes fact. We model the community’s confidence in a claim as a Markov process with successive published results shifting the degree of belief. Publication bias in favor of positive findings influences the distribution of published results. We find that unless a sufficient fraction of negative results are published, false claims frequently can become canonized as fact. Data-dredging, p-hacking, and similar behaviors exacerbate the problem. Should negative results become easier to publish as a claim approaches acceptance as a fact, however, true and false claims would be more readily distinguished. To the degree that the model reflects the real world, there may be serious concerns about the validity of purported facts in some disciplines.
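The belief dynamics described above are straightforward to simulate. The sketch below is a toy version of such a model, not the paper's fitted model: the error rates, publication probability, and ±1 belief steps are illustrative assumptions. Each experiment yields a positive or negative result, negative results reach print only with some probability, and a claim is canonized as fact (or rejected) when community belief crosses a threshold.

```python
import random

def claim_trajectory(true_claim, p_pos_if_true=0.8, p_pos_if_false=0.2,
                     publish_neg=0.3, canonize_at=6, reject_at=-6,
                     max_steps=10_000, rng=random):
    """Random walk on community belief in a claim.

    Each published result shifts belief by +/-1; negative results are
    published only with probability `publish_neg` (publication bias).
    All parameter values are illustrative, not taken from the paper.
    """
    p_pos = p_pos_if_true if true_claim else p_pos_if_false
    belief = 0
    for _ in range(max_steps):
        if rng.random() < p_pos:
            belief += 1        # positive results are always published
        elif rng.random() < publish_neg:
            belief -= 1        # negative result escapes the file drawer
        if belief >= canonize_at:
            return "fact"
        if belief <= reject_at:
            return "rejected"
    return "undecided"

rng = random.Random(0)
n = 2000
false_canonized = sum(claim_trajectory(False, rng=rng) == "fact"
                      for _ in range(n)) / n
print(f"false claims canonized as fact: {false_canonized:.0%}")
```

Under this (assumed) parameterization, a substantial fraction of false claims end up canonized, while raising `publish_neg` toward 1 drives that fraction toward zero, illustrating the paper's point that publishing more negative results helps true and false claims be distinguished.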
A School-Based Intervention to Increase Lyme Disease Preventive Measures Among Elementary School-Aged Children
- Vector-Borne and Zoonotic Diseases (Larchmont, N.Y.)
Educational interventions to reduce Lyme disease (LD) among at-risk schoolchildren have received little study. The purpose of this study was to evaluate whether a short in-class LD education program, based on social learning theory and the Health Belief Model (HBM), affected children's knowledge, attitudes, and preventive behavior.
Periodization theory has, over the past seven decades, emerged as the preeminent training planning paradigm. The philosophical underpinnings of periodization theory can be traced back to the integration of diverse shaping influences, whereby coaching beliefs and traditions were blended with historically available scientific insights and contextualized against pervading social planning models. Since then, many dimensions of elite preparation have evolved significantly, as driven by a combination of coaching innovations and science-led advances in training theory, techniques, and technologies. These advances have been incorporated into the fabric of the pre-existing periodization planning framework, yet the philosophical assumptions underpinning periodization remain largely unchallenged and unchanged. One especially influential academic sphere of study, the science of stress, particularly the work of Hans Selye, is repeatedly cited by theorists as a central pillar upon which periodization theory is founded. A fundamental assumption emanating from the early stress research is that physical stress is primarily a biologically mediated phenomenon: a presumption translated to athletic performance contexts as evidence that mechanical training stress directly regulates the magnitude of subsequent ‘fitness’ adaptations. Interestingly, however, since periodization theory first emerged, the science of stress has evolved extensively from its historical roots. This raises a fundamental question: if the original scientific platform upon which periodization theory was founded has disintegrated, should we critically re-evaluate conventional perspectives through an updated conceptual lens? Realigning periodization philosophy with contemporary stress theory presents an opportunity to recalibrate training planning models in line with both current scientific insight and progressive coaching practice.