Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions (the belief that absolutely nobody or everybody should engage in an action) than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggest that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.
Are people more moral in the morning than in the afternoon? We propose that the normal, unremarkable experiences associated with everyday living can deplete one’s capacity to resist moral temptations. In a series of four experiments, both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.
The nature of moral action versus moral judgment has been extensively debated in numerous disciplines. We introduce Virtual Reality (VR) moral paradigms that examine the actions individuals take in highly emotionally arousing, direct action-focused moral scenarios. In two studies involving qualitatively different populations, we found greater endorsement of utilitarian responses (killing one in order to save many others) when action was required in virtual moral dilemmas than in their judgment counterparts. Heart rate in virtual moral dilemmas was significantly elevated compared with both judgment counterparts and control virtual tasks. Our research suggests that moral action may be viewed as a construct independent of moral judgment, with VR methods delivering new prospects for investigating and assessing moral behaviour.
The science of morality has drawn heavily on well-controlled but artificial laboratory settings. To study everyday morality, we repeatedly assessed moral or immoral acts and experiences in a large (N = 1252) sample using ecological momentary assessment. Moral experiences were surprisingly frequent and manifold. Liberals and conservatives emphasized somewhat different moral dimensions. Religious and nonreligious participants did not differ in the likelihood or quality of committed moral and immoral acts. Being the target of moral or immoral deeds had the strongest impact on happiness, whereas committing moral or immoral deeds had the strongest impact on sense of purpose. Analyses of daily dynamics revealed evidence for both moral contagion and moral licensing. In sum, morality science may benefit from a closer look at the antecedents, dynamics, and consequences of everyday moral experience.
In the present article we demonstrate stable individual differences in the extent to which a reliance on logic and evidence in the formation and evaluation of beliefs is perceived as a moral virtue, and a reliance on less rational processes is perceived as a vice. We refer to this individual difference variable as moralized rationality. Eight studies are reported in which an instrument to measure individual differences in moralized rationality is validated. Results show that the Moralized Rationality Scale (MRS) is internally consistent and captures something distinct from the personal importance people attach to being rational (Studies 1-3). Furthermore, the MRS has high test-retest reliability (Study 4), is conceptually distinct from frequently used measures of individual differences in moral values, and is negatively related to common beliefs that are not supported by scientific evidence (Study 5). We further demonstrate that the MRS predicts morally laden reactions, such as a desire for punishment, toward people who rely on irrational (vs. rational) ways of forming and evaluating beliefs (Studies 6 and 7). Finally, we show that the MRS uniquely predicts motivation to contribute to a charity that works to prevent the spread of irrational beliefs (Study 8). We conclude that (1) there are stable individual differences in the extent to which people moralize a reliance on rationality in the formation and evaluation of beliefs, (2) these individual differences do not reduce to the personal importance attached to rationality, and (3) individual differences in moralized rationality have important motivational and interpersonal consequences.
Do people think that scientists are bad people? Although surveys find that science is a highly respected profession, a growing discourse has emerged regarding how science is often judged negatively. We report ten studies (N = 2328) that investigated morality judgments of scientists and compared them with judgments of various control groups, including atheists. A persistent intuitive association between scientists and disturbing immoral conduct emerged for violations of the binding moral foundations, particularly violations of purity. However, there was no association in the context of the individualizing moral foundations related to fairness and care. Other evidence found that scientists were perceived as similar to others in their concerns with the individualizing moral foundations of fairness and care, yet as departing from others on all of the binding foundations of loyalty, authority, and purity. Furthermore, participants stereotyped scientists as robot-like and lacking emotions, as valuing knowledge over morality, and as potentially dangerous. The observed intuitive immorality associations are partially due to these explicit stereotypes but are unrelated to perceived atheism. We conclude that scientists are perceived not as inherently immoral, but as capable of immoral conduct.
Why do people judge hypocrites, who condemn immoral behaviors that they in fact engage in, so negatively? We propose that hypocrites are disliked because their condemnation sends a false signal about their personal conduct, deceptively suggesting that they behave morally. We show that verbal condemnation signals moral goodness (Study 1) and does so even more convincingly than directly stating that one behaves morally (Study 2). We then demonstrate that people judge hypocrites negatively, even more negatively than people who directly make false statements about their morality (Study 3). Finally, we show that "honest" hypocrites, who avoid false signaling by admitting to committing the condemned transgression, are not perceived negatively even though their actions contradict their stated values (Study 4). Critically, the same is not true of hypocrites who engage in false signaling but admit to unrelated transgressions (Study 5). Together, our results support a false-signaling theory of hypocrisy.
Several researchers have demonstrated that the virtual behaviors committed in a video game can elicit feelings of guilt. Researchers have proposed that such guilt could have prosocial consequences. However, this proposition has not been supported with empirical evidence. The current study examined this issue in a 2 × 2 (video game play vs. real-world recollection × guilt vs. control) experiment. Participants were first randomly assigned to either play a video game or complete a memory recall task. Next, participants were randomly assigned to either a guilt-inducing condition (game play as a terrorist/recall of acts that induce guilt) or a control condition (game play as a UN soldier/recall of acts that do not induce guilt). Results of the study indicate several important findings. First, the current results replicate previous research indicating that immoral virtual behaviors are capable of eliciting guilt. Second, and more importantly, the guilt elicited by game play led to intuition-specific increases in the salience of violated moral foundations. These findings indicate that committing "immoral" virtual behaviors in a video game can lead to increased moral sensitivity in the player. The potential prosocial benefits of these findings are discussed.
It is often thought that judgments about what we ought to do are limited by judgments about what we can do, or that “ought implies can.” We conducted eight experiments to test the link between a range of moral requirements and abilities in ordinary moral evaluations. Moral obligations were repeatedly attributed in tandem with inability, regardless of the type (Experiments 1-3), temporal duration (Experiment 5), or scope (Experiment 6) of inability. This pattern was consistently observed using a variety of moral vocabulary to probe moral judgments and was insensitive to different levels of seriousness for the consequences of inaction (Experiment 4). Judgments about moral obligation were no different for individuals who can or cannot perform physical actions, and these judgments differed from evaluations of a non-moral obligation (Experiment 7). Together these results demonstrate that commonsense morality rejects the “ought implies can” principle for moral requirements, and that judgments about moral obligation are made independently of considerations about ability. By contrast, judgments of blame were highly sensitive to considerations about ability (Experiment 8), which suggests that commonsense morality might accept a “blame implies can” principle.
The implications of sleep for morality are only starting to be explored. Extending the ethics literature, we contend that because bringing morality to conscious attention requires effort, a lack of sleep leads to low moral awareness. We test this prediction with three studies. A laboratory study with a manipulation of sleep across 90 participants judging a scenario for moral content indicates that a lack of sleep leads to low moral awareness. An archival study of Google Trends data across six years highlights a national dip in Web searches for moral topics (but not other topics) on the Monday after the spring time change, which tends to deprive people of sleep. Finally, a diary study of 127 participants indicates that (within participants) nights with a lack of sleep are associated with low moral awareness the next day. Together, these three studies suggest that a lack of sleep leaves people less morally aware, with important implications for the recognition of morality in others.