The aerial view of the concept of data sharing is beautiful. What could be better than having high-quality information carefully reexamined for the possibility that new nuggets of useful data are lying there, previously unseen? The potential for leveraging existing results for even more benefit pays appropriate increased tribute to the patients who put themselves at risk to generate the data. The moral imperative to honor their collective sacrifice is the trump card that takes this trick. However, many of us who have actually conducted clinical research, managed clinical studies and data collection and analysis, and curated data sets have . . .
Libertarians are an increasingly prominent ideological group in U.S. politics, yet they have been largely unstudied. Across 16 measures in a large web-based sample that included 11,994 self-identified libertarians, we sought to understand the moral and psychological characteristics of self-described libertarians. Based on an intuitionist view of moral judgment, we focused on the underlying affective and cognitive dispositions that accompany this unique worldview. Compared to self-identified liberals and conservatives, libertarians showed 1) stronger endorsement of individual liberty as their foremost guiding principle, and weaker endorsement of all other moral principles; 2) a relatively cerebral as opposed to emotional cognitive style; and 3) lower interdependence and social relatedness. As predicted by intuitionist theories concerning the origins of moral reasoning, libertarian values showed convergent relationships with libertarian emotional dispositions and social preferences. Our findings add to a growing recognition of the role of personality differences in the organization of political attitudes.
Guilt is an important social and moral emotion. In addition to feeling unpleasant, guilt is metaphorically described as a “weight on one’s conscience.” Evidence from the field of embodied cognition suggests that abstract metaphors may be grounded in bodily experiences, but no prior research has examined the embodiment of guilt. Across four studies we examine whether i) unethical acts increase subjective experiences of weight, ii) feelings of guilt explain this effect, and iii) the weight of guilt has perceptual consequences. Studies 1-3 demonstrated that unethical acts led to greater subjective body weight compared to control conditions. Studies 2 and 3 indicated that heightened feelings of guilt mediated the effect, whereas other negative emotions did not. Study 4 demonstrated a perceptual consequence. Specifically, an induction of guilt increased the perceived effort necessary to complete tasks that were physical in nature, compared to minimally physical tasks.
Every day, thousands of polls, surveys, and rating scales are employed to elicit the attitudes of humankind. Given the ubiquitous use of these instruments, it seems we ought to have firm answers to what is measured by them, but unfortunately we do not. To help remedy this situation, we present a novel approach to investigate the nature of attitudes. We created a self-transforming paper survey of moral opinions, covering both foundational principles and current dilemmas hotly debated in the media. This survey used a magic trick to expose participants to a reversal of their previously stated attitudes, allowing us to record whether they were prepared to endorse and argue for the opposite of the view they had stated only moments ago. The results showed that the majority of the reversals remained undetected, and a full 69% of the participants failed to detect at least one of two changes. In addition, participants often constructed coherent and unequivocal arguments supporting the opposite of their original position. These results suggest a dramatic potential for flexibility in our moral attitudes, and indicate a clear role for self-attribution and post-hoc rationalization in attitude formation and change.
- Proceedings of the National Academy of Sciences of the United States of America
Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
An aversion to harming others is a core component of human morality and is disturbed in antisocial behavior [1-4]. Deficient harm aversion may underlie instrumental and reactive aggression, which both feature in psychopathy. Past work has highlighted monoaminergic influences on aggression [6-11], but a mechanistic account of how monoamines regulate antisocial motives remains elusive. We previously observed that most people show a greater aversion to inflicting pain on others than on themselves. Here, we investigated whether this hyperaltruistic disposition is susceptible to monoaminergic control. We observed dissociable effects of the serotonin reuptake inhibitor citalopram and the dopamine precursor levodopa on decisions to inflict pain on oneself and others for financial gain. Computational models of choice behavior showed that citalopram increased harm aversion for both self and others, while levodopa reduced hyperaltruism. The effects of citalopram were stronger than those of levodopa. Crucially, neither drug influenced the physical perception of pain or other components of choice such as motor impulsivity or loss aversion [13, 14], suggesting a direct and specific influence of serotonin and dopamine on the valuation of harm. We also found evidence for dose dependency of these effects. Finally, the drugs had dissociable effects on response times, with citalopram enhancing behavioral inhibition and levodopa reducing slowing related to being responsible for another’s fate. These distinct roles of serotonin and dopamine in modulating moral behavior have implications for potential treatments of social dysfunction, which is a common feature of, as well as a risk factor for, many psychiatric disorders.
The punishment of social misconduct is a powerful mechanism for stabilizing high levels of cooperation among unrelated individuals. It is regularly assumed that humans have a universal disposition to punish social norm violators, which is sometimes labelled “universal structure of human morality” or “pure aversion to social betrayal”. Here we present evidence that, contrary to this hypothesis, the propensity to punish a moral norm violator varies among participants with different career trajectories. In anonymous real-life conditions, future teachers punished a talented but immoral young violinist: they voted against her in an important music competition when they had been informed of her previous blatant misconduct toward fellow violin students. In contrast, future police officers and high school students did not punish. This variation among socio-professional categories indicates that the punishment of norm violators is not entirely explained by an aversion to social betrayal. We suggest that context specificity plays an important role in normative behaviour; people seem inclined to enforce social norms only in situations that are familiar, relevant for their social category, and possibly strategically advantageous.
Over the past decade, intuitionist models of morality have challenged the view that moral reasoning is the sole or even primary means by which moral judgments are made. Rather, intuitionist models posit that certain situations automatically elicit moral intuitions, which guide moral judgments. We present three experiments showing that evaluations are also susceptible to the influence of moral versus non-moral construal. We had participants make moral evaluations (rating whether actions were morally good or bad) or non-moral evaluations (rating whether actions were pragmatically or hedonically good or bad) of a wide variety of actions. As predicted, moral evaluations were faster, more extreme, and more strongly associated with universal prescriptions (the belief that absolutely nobody or everybody should engage in an action) than non-moral (pragmatic or hedonic) evaluations of the same actions. Further, we show that people are capable of flexibly shifting from moral to non-moral evaluations on a trial-by-trial basis. Taken together, these experiments provide evidence that moral versus non-moral construal has an important influence on evaluation and suggest that effects of construal are highly flexible. We discuss the implications of these experiments for models of moral judgment and decision-making.
BACKGROUND: Due to the important role of depression in major illnesses, screening measures for depression are commonly used in medical research. The protocol for managing participants with positive screens is unclear and raises ethical concerns. The aim of this article is to identify and critically discuss the ethical issues that arise when a positive screen for depression is detected, and to offer some guidance on managing these issues. DISCUSSION: Deciding whether to report positive screens to healthcare practitioners is both an ethical and a pragmatic dilemma. Evidence suggests that reporting positive depression screens should only be considered in the context of collaborative care. Possible adverse effects, such as the impact of false-positive results, potentially inappropriate labelling, and potentially inappropriate treatment, also need to be considered. If possible, the psychometric properties of the selected screening measure should be determined in the target population, and a threshold for depression that minimises the rate of false-positive results should be chosen. It should be clearly communicated to practitioners that screening scores are not diagnostic for depression, and they should be informed about the diagnostic accuracy of the measure. Research participants need to be made aware of the consequences of the detection of high scores on screening measures, and to be fully informed about the implications of the research protocol. SUMMARY: Further research is needed, and the experiences of researchers, participants, and practitioners need to be collated, before the value of reporting positive screens for depression can be ascertained. In developing research protocols, the ethical challenges highlighted should be considered. Participants must consent to the agreed protocol, and efforts should be made to minimise potentially adverse effects.
BACKGROUND: Previous work has noted that science stands as an ideological force insofar as it offers answers to a variety of fundamental questions and concerns; as such, those who pursue scientific inquiry have been shown to be concerned with the moral and social ramifications of their scientific endeavors. No studies to date have directly investigated the links between exposure to science and moral or prosocial behaviors. METHODOLOGY/PRINCIPAL FINDINGS: Across four studies, both naturalistic measures of science exposure and experimental primes of science led to increased adherence to moral norms and more morally normative behaviors across domains. Study 1 (n = 36) tested the natural correlation between exposure to science and likelihood of enforcing moral norms. Studies 2 (n = 49), 3 (n = 52), and 4 (n = 43) manipulated thoughts about science and examined the causal impact of such thoughts on imagined and actual moral behavior. Across studies, thinking about science had a moralizing effect on a broad array of domains, including interpersonal violations (Studies 1, 2), prosocial intentions (Study 3), and economic exploitation (Study 4). CONCLUSIONS/SIGNIFICANCE: These studies demonstrated the morally normative effects of lay notions of science. Thinking about science leads individuals to endorse more stringent moral norms and exhibit more morally normative behavior. These studies are the first of their kind to systematically and empirically test the relationship between science and morality. The present findings speak to this question and elucidate the value-laden outcomes of the notion of science.