- Proceedings of the National Academy of Sciences of the United States of America
- Published almost 4 years ago
Selecting appropriate foods is a complex and evolutionarily ancient problem, yet past studies have revealed little evidence of adaptations present in infancy that support sophisticated reasoning about perceptual properties of food. We propose that humans have an early-emerging system for reasoning about the social nature of food selection. Specifically, infants' reasoning about food choice is tied to their thinking about agents' intentions and social relationships. Whereas infants do not expect people to like the same objects, infants view food preferences as meaningfully shared across individuals. Infants' reasoning about food preferences is fundamentally social: They generalize food preferences across individuals who affiliate, or who speak a common language, but not across individuals who socially disengage or who speak different languages. Importantly, infants' reasoning about food preferences is flexibly calibrated to their own experiences: Tests of bilingual babies reveal that an infant’s sociolinguistic background influences whether she will constrain her generalization of food preferences to people who speak the same language. Additionally, infants' system for reasoning about food is differentially responsive to positive and negative information. Infants generalize information about food disgust across all people, regardless of those people’s social identities. Thus, whereas food preferences are seen as embedded within social groups, disgust is interpreted as socially universal, which could help infants avoid potentially dangerous foods. These studies reveal an early-emerging system for thinking about food that incorporates social reasoning about agents and their relationships, and allows infants to make abstract, flexible, adaptive inferences to interpret others' food choices.
Are we able to infer what happened to a person from a brief sample of his/her behaviour? It has been proposed that mentalising skills can be used to retrodict as well as predict behaviour, that is, to determine what mental states of a target have already occurred. The current study aimed to develop a paradigm to explore these processes, which takes into account the intricacies of real-life situations in which reasoning about mental states, as embodied in behaviour, may be utilised. A novel task was devised which involved observing subtle and naturalistic reactions of others in order to determine the event that had previously taken place. Thirty-five participants viewed videos of real individuals reacting to the researcher behaving in one of four possible ways, and were asked to judge which of the four ‘scenarios’ they thought the individual was responding to. Their eye movements were recorded to establish the visual strategies used. Participants were able to deduce successfully from a small sample of behaviour which scenario had previously occurred. Surprisingly, looking at the eye region was associated with poorer identification of the scenarios, and eye movement strategy varied depending on the event experienced by the person in the video. This suggests people flexibly deploy their attention using a retrodictive mindreading process to infer events.
Gilbert et al. conclude that evidence from the Open Science Collaboration’s Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither is yet warranted.
Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks, recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrate our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature. By analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration.
This method provides an automated, highly generalizable framework for identifying the underlying control mechanisms responsible for the dynamic regulation of growth and form.
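The general strategy described above (searching for regulatory-network parameters whose simulated outcomes reproduce a set of experimental phenotypes) can be illustrated with a deliberately minimal sketch. The toy two-link network, the perturbation names, and the target values below are all invented for illustration; the authors' actual system evolves full network topologies and dynamics, not just two parameters.

```python
import random

random.seed(0)

# Hypothetical target "phenotypes": desired steady-state outputs under
# three experimental perturbations (values invented for illustration).
TARGETS = {"wildtype": 1.0, "knockdown_A": 0.2, "knockdown_B": 0.6}

def simulate(params, perturbation):
    """Toy steady-state response of a two-link regulatory network."""
    a, b = params  # strengths of two regulatory links
    if perturbation == "knockdown_A":
        a = 0.0  # knocking down gene A silences its link
    if perturbation == "knockdown_B":
        b = 0.0
    return a * 0.7 + b * 0.3

def error(params):
    """Squared mismatch between simulated and observed phenotypes."""
    return sum((simulate(params, p) - t) ** 2 for p, t in TARGETS.items())

# Random-restart-free hill climbing over the two link strengths:
# accept a Gaussian perturbation of the parameters only if it fits better.
best = (random.random(), random.random())
for _ in range(5000):
    cand = tuple(max(0.0, x + random.gauss(0, 0.1)) for x in best)
    if error(cand) < error(best):
        best = cand
```

In this toy setup no parameter choice fits all three perturbations exactly, so the search settles on a least-squares compromise; the real method's contribution is doing this over network structure as well as parameters, against the full phenotype dataset.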
It is tempting to treat frequency trends from the Google Books data sets as indicators of the “true” popularity of various words and phrases. Doing so allows us to draw quantitatively strong conclusions about the evolution of cultural perception of a given topic, such as time or gender. However, the Google Books corpus suffers from a number of limitations which make it an obscure mask of cultural popularity. A primary issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not. With this understood, the Google Books corpus remains an important data set to be considered more lexicon-like than text-like. Here, we show that a distinct problematic feature arises from the inclusion of scientific texts, which have become an increasingly substantive portion of the corpus throughout the 1900s. The result is a surge of phrases typical to academic articles but less common in general, such as references to time in the form of citations. We use information theoretic methods to highlight these dynamics by examining and comparing major contributions via a divergence measure of English data sets between decades in the period 1800-2000. We find that only the English Fiction data set from the second version of the corpus is not heavily affected by professional texts. Overall, our findings call into question the vast majority of existing claims drawn from the Google Books corpus, and point to the need to fully characterize the dynamics of the corpus before using these data sets to draw broad conclusions about cultural and linguistic evolution.
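The decade-to-decade comparison sketched above relies on a divergence measure between word-frequency distributions. The abstract does not specify the exact measure, so the following uses the Jensen-Shannon divergence, a standard symmetric choice; the two toy "decade" samples (one salted with citation-style year tokens) are invented for illustration.

```python
from collections import Counter
from math import log2

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2, bounded by 1) between two
    distributions given as {word: probability} dicts."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a, b):
        return sum(a[w] * log2(a[w] / b[w]) for w in vocab if a.get(w, 0.0) > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def freqs(tokens):
    """Normalize raw token counts into a probability distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Toy "decade" samples: the later one includes citation-like year references
# of the kind that scientific texts inject into the corpus.
decade_1900s = freqs("the of and time story river the and of garden".split())
decade_1990s = freqs("the of and 1992 1995 figure the results of data".split())

jsd = jensen_shannon(decade_1900s, decade_1990s)
```

Applied to real per-decade frequency distributions, the words contributing most to such a divergence reveal exactly the academic-phrase surge the abstract describes.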
Scientific interest in the cognitive underpinnings of religious belief has grown in recent years. However, to date, little experimental research has focused on the cognitive processes that may promote religious disbelief. The present studies apply a dual-process model of cognitive processing to this problem, testing the hypothesis that analytic processing promotes religious disbelief. Individual differences in the tendency to analytically override initially flawed intuitions in reasoning were associated with increased religious disbelief. Four additional experiments provided evidence of causation, as subtle manipulations known to trigger analytic processing also encouraged religious disbelief. Combined, these studies indicate that analytic processing is one factor (presumably among several) that promotes religious disbelief. Although these findings do not speak directly to conversations about the inherent rationality, value, or truth of religious beliefs, they illuminate one cognitive factor that may influence such discussions.
In the present work, we investigated the pop cultural idea that people have a sixth sense, called “gaydar,” to detect who is gay. We propose that “gaydar” is an alternate label for using stereotypes to infer orientation (e.g., inferring that fashionable men are gay). Another account, however, argues that people possess a facial perception process that enables them to identify sexual orientation from facial structure. We report five experiments testing these accounts. Participants made gay-or-straight judgments about fictional targets that were constructed using experimentally manipulated stereotypic cues and real gay/straight people’s face cues. These studies revealed that orientation is not visible from the face; purportedly “face-based” gaydar arises from a third-variable confound. People do, however, readily infer orientation from stereotypic attributes (e.g., fashion, career). Furthermore, the folk concept of gaydar serves as a legitimizing myth: Compared to a control group, people stereotyped more often when led to believe in gaydar, whereas people stereotyped less when told gaydar is an alternate label for stereotyping. Discussion focuses on the implications of the gaydar myth and why, contrary to some prior claims, stereotyping is highly unlikely to result in accurate judgments about orientation.
Cooperation is central to human social behaviour. However, choosing to cooperate requires individuals to incur a personal cost to benefit others. Here we explore the cognitive basis of cooperative decision-making in humans using a dual-process framework. We ask whether people are predisposed towards selfishness, behaving cooperatively only through active self-control; or whether they are intuitively cooperative, with reflection and prospective reasoning favouring ‘rational’ self-interest. To investigate this issue, we perform ten studies using economic games. We find that across a range of experimental designs, subjects who reach their decisions more quickly are more cooperative. Furthermore, forcing subjects to decide quickly increases contributions, whereas instructing them to reflect and forcing them to decide slowly decreases contributions. Finally, an induction that primes subjects to trust their intuitions increases contributions compared with an induction that promotes greater reflection. To explain these results, we propose that cooperation is intuitive because cooperative heuristics are developed in daily life where cooperation is typically advantageous. We then validate predictions generated by this proposed mechanism. Our results provide convergent evidence that intuition supports cooperation in social dilemmas, and that reflection can undermine these cooperative impulses.
Background Survivors of critical illness often have a prolonged and disabling form of cognitive impairment that remains inadequately characterized. Methods We enrolled adults with respiratory failure or shock in the medical or surgical intensive care unit (ICU), evaluated them for in-hospital delirium, and assessed global cognition and executive function 3 and 12 months after discharge with the use of the Repeatable Battery for the Assessment of Neuropsychological Status (population age-adjusted mean [±SD] score, 100±15, with lower values indicating worse global cognition) and the Trail Making Test, Part B (population age-, sex-, and education-adjusted mean score, 50±10, with lower scores indicating worse executive function). Associations of the duration of delirium and the use of sedative or analgesic agents with the outcomes were assessed with the use of linear regression, with adjustment for potential confounders. Results Of the 821 patients enrolled, 6% had cognitive impairment at baseline, and delirium developed in 74% during the hospital stay. At 3 months, 40% of the patients had global cognition scores that were 1.5 SD below the population means (similar to scores for patients with moderate traumatic brain injury), and 26% had scores 2 SD below the population means (similar to scores for patients with mild Alzheimer’s disease). Deficits occurred in both older and younger patients and persisted: at 12 months, 34% of assessed patients had scores similar to those of patients with moderate traumatic brain injury and 24% had scores similar to those of patients with mild Alzheimer’s disease. A longer duration of delirium was independently associated with worse global cognition at 3 and 12 months (P=0.001 and P=0.04, respectively) and worse executive function at 3 and 12 months (P=0.004 and P=0.007, respectively). Use of sedative or analgesic medications was not consistently associated with cognitive impairment at 3 and 12 months.
Conclusions Patients in medical and surgical ICUs are at high risk for long-term cognitive impairment. A longer duration of delirium in the hospital was associated with worse global cognition and executive function scores at 3 and 12 months. (Funded by the National Institutes of Health and others; BRAIN-ICU ClinicalTrials.gov number, NCT00392795.)
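The impairment thresholds reported in this abstract are defined relative to the stated population norms, so the raw cutoff scores follow from simple arithmetic. The sketch below assumes the global-cognition norms given above (mean 100, SD 15); the function name is ours, not the study's.

```python
# Population norms for the global cognition measure, as stated in the abstract.
POP_MEAN, POP_SD = 100.0, 15.0

def cutoff(sd_below, mean=POP_MEAN, sd=POP_SD):
    """Score lying `sd_below` standard deviations below the population mean."""
    return mean - sd_below * sd

# "1.5 SD below the population mean" (comparable to moderate traumatic brain injury)
print(cutoff(1.5))  # 77.5
# "2 SD below the population mean" (comparable to mild Alzheimer's disease)
print(cutoff(2.0))  # 70.0
```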
- Proceedings of the National Academy of Sciences of the United States of America
- Published about 3 years ago
The capacity for representing and reasoning over sets of possibilities, or modal cognition, supports diverse kinds of high-level judgments: causal reasoning, moral judgment, language comprehension, and more. Prior research on modal cognition asks how humans explicitly and deliberatively reason about what is possible but has not investigated whether or how people have a default, implicit representation of which events are possible. We present three studies that characterize the role of implicit representations of possibility in cognition. Collectively, these studies differentiate explicit reasoning about possibilities from default implicit representations, demonstrate that human adults often default to treating immoral and irrational events as impossible, and provide a case study of high-level cognitive judgments relying on default implicit representations of possibility rather than explicit deliberation.