Concept: Moral psychology
BACKGROUND: Due to the important role of depression in major illnesses, screening measures for depression are commonly used in medical research. The protocol for managing participants with positive screens is unclear and raises ethical concerns. The aim of this article is to identify and critically discuss the ethical issues that arise when a positive screen for depression is detected, and offer some guidance on managing these issues. DISCUSSION: Deciding on whether to report positive screens to healthcare practitioners is both an ethical and a pragmatic dilemma. Evidence suggests that reporting positive depression screens should only be considered in the context of collaborative care. Possible adverse effects, such as the impact of false-positive results, potentially inappropriate labelling, and potentially inappropriate treatment, also need to be considered. If possible, the psychometric properties of the selected screening measure should be determined in the target population, and a threshold for depression that minimises the rate of false-positive results should be chosen. It should be clearly communicated to practitioners that screening scores are not diagnostic for depression, and they should be informed about the diagnostic accuracy of the measure. Research participants need to be made aware of the consequences of the detection of high scores on screening measures, and to be fully informed about the implications of the research protocol. SUMMARY: Further research is needed and the experiences of researchers, participants, and practitioners need to be collated before the value of reporting positive screens for depression can be ascertained. In developing research protocols, the ethical challenges highlighted should be considered. Participants must agree to the protocol, and efforts should be made to minimise potentially adverse effects.
Are people more moral in the morning than in the afternoon? We propose that the normal, unremarkable experiences associated with everyday living can deplete one’s capacity to resist moral temptations. In a series of four experiments, both undergraduate students and a sample of U.S. adults engaged in less unethical behavior (e.g., less lying and cheating) on tasks performed in the morning than on the same tasks performed in the afternoon. This morning morality effect was mediated by decreases in moral awareness and self-control in the afternoon. Furthermore, the effect of time of day on unethical behavior was found to be stronger for people with a lower propensity to morally disengage. These findings highlight a simple yet pervasive factor (i.e., the time of day) that has important implications for moral behavior.
Our moral motivations might include a drive towards maximizing overall welfare, consistent with an ethical theory called “utilitarianism.” However, people show non-utilitarian judgments in domains as diverse as healthcare decisions, income distributions, and penal laws. Rather than these being deviations from a fundamentally utilitarian psychology, we suggest that our moral judgments are generally non-utilitarian, even for cases that are typically seen as prototypically utilitarian. We show two separate deviations from utilitarianism in such cases: people do not think maximizing welfare is required (they think it is merely acceptable, in some circumstances), and people do not think that equal welfare tradeoffs are even acceptable. We end by discussing how utilitarian reasoning might play a restricted role within a non-utilitarian moral psychology.
As with other cognitive faculties, the etiology of moral judgment and its connection to early development is complex. Because research is limited, the causative and contributory factors in the development of moral judgment in preverbal infants are unclear. However, evidence is emerging from studies within both infant research and moral psychology that may contribute to our understanding of the early development of moral judgments. Though its findings are preliminary, this proposed paradigm synthesizes these findings to generate an overarching model of the process that appears to contribute to the development of moral judgment in the first year of life. I will propose that through early interactions with the caregiver, the child acquires an internal representation of a system of rules that determine how right/wrong judgments are to be construed, used, and understood. By breaking moral situations down into their defining features, the attachment model of moral judgment outlines a framework for a universal moral faculty based on an innate deep structure that appears uniformly in almost all moral judgments regardless of their content. The implications of the model for our understanding of innateness, universal morality, and the representations of moral situations are discussed.
Although expectations for appropriate animal care are present in most developed countries, significant animal welfare challenges continue to be seen on a regular basis in all areas of veterinary practice. Veterinary ethics is a relatively new area of educational focus but is thought to be critically important in helping veterinarians formulate their approach to clinical case management and in determining the overall acceptability of practices towards animals. An overview is provided of how veterinary ethics are taught and how common ethical frameworks and approaches are employed, along with legislation, guidelines, and codes of professional conduct, to address animal welfare issues. Insufficiently mature ethical reasoning or a lack of veterinary ethical sensitivity can make it difficult for veterinarians to speak up about concerns with clients and can ultimately lead to a failure in their duty of care to animals, resulting in poor animal welfare outcomes. A number of examples are provided to illustrate this point. Ensuring that robust ethical frameworks are employed will ultimately help veterinarians to “speak up” to address animal welfare concerns and prevent future harms.
Mobile phone coverage has grown, particularly within low- and middle-income countries (LMICs), presenting an opportunity to augment routine health surveillance programs. Several LMICs and global health partners are seeking opportunities to launch basic mobile phone-based surveys of noncommunicable diseases (NCDs). The increasing use of such technology in LMICs brings forth a cluster of ethical challenges; however, much of the existing literature regarding the ethics of mobile or digital health focuses on the use of technologies in high-income countries and does not consider directly the specific ethical issues associated with the conduct of mobile phone surveys (MPS) for NCD risk factor surveillance in LMICs. In this paper, we explore conceptually several of the central ethics issues in this domain, which mainly track the three phases of the MPS process: pre-data collection, data collection, and post-data collection. These include identifying the nature of the activity; stakeholder engagement; appropriate design; anticipating and managing potential harms and benefits; consent; reaching intended respondents; data ownership, access and use; and ensuring LMIC sustainability. We call for future work to develop an ethics framework and guidance for the use of mobile phones for disease surveillance globally.
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs include: preventing harm, establishing public trust, preventing immoral use, the claim that such machines are better moral reasoners than humans, and the hope that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk) coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would require answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
A central tenet of contemporary moral psychology is that people typically reject active forms of utilitarian sacrifice. Yet, evidence for secularization and declining empathic concern in recent decades suggests the possibility of systematic change in this attitude. In the present study, we employ hypothetical dilemmas to investigate whether judgments of utilitarian sacrifice are becoming more permissive over time. In a cross-sectional design, age negatively predicted utilitarian moral judgment (Study 1). To examine whether this pattern reflected processes of maturation, we asked a panel to re-evaluate several moral dilemmas after an eight-year interval but observed no overall change (Study 2). In contrast, a more recent age-matched sample revealed greater endorsement of utilitarian sacrifice in a time-lag design (Study 3). Taken together, these results suggest that today’s younger cohorts increasingly endorse a utilitarian resolution of sacrificial moral dilemmas.
To facilitate informed consent, consent forms should use language at or below a grade-eight reading level. Research Ethics Boards (REBs) provide consent form templates to facilitate this goal. Templates with inappropriate language could promote consent forms that participants find difficult to understand. However, a linguistic analysis of templates is lacking.
Implicit moral evaluations (i.e., immediate, unintentional assessments of the wrongness of actions or persons) play a central role in supporting moral behavior in everyday life. Yet little research has employed methods that rigorously measure individual differences in implicit moral evaluations. In five experiments, we develop a new sequential priming measure, the Moral Categorization Task, and a multinomial model that decomposes judgment on this task into multiple component processes. These include implicit moral evaluations of moral transgression primes (Unintentional Judgment), accurate moral judgments about target actions (Intentional Judgment), and a directional tendency to judge actions as morally wrong (Response Bias). Speeded response deadlines reduced Intentional Judgment but not Unintentional Judgment (Experiment 1). Unintentional Judgment was stronger toward moral transgression primes than non-moral negative primes (Experiments 2-4). Intentional Judgment was associated with increased error-related negativity, a neurophysiological indicator of behavioral control (Experiment 4). Finally, people who voted for an anti-gay marriage amendment had stronger Unintentional Judgment toward gay marriage primes (Experiment 5). Across Experiments 1-4, implicit moral evaluations converged with moral personality: Unintentional Judgment about wrong primes, but not negative primes, was negatively associated with psychopathic tendencies and positively associated with moral identity and guilt proneness. Theoretical and practical applications of formal modeling for moral psychology are discussed.