- Proceedings of the National Academy of Sciences of the United States of America
Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.
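The accuracy figures quoted above (r = 0.56 vs. r = 0.49) are Pearson correlations between a judge’s ratings and the targets’ self-reported trait scores. A minimal sketch of how such judge-accuracy correlations can be computed; the trait scores below are made-up illustrative numbers, not the study’s data:

```python
import numpy as np

def judge_accuracy(judged, self_report):
    """Pearson correlation between one judge's trait ratings and the
    targets' self-reported scores (the accuracy metric used in the study)."""
    return float(np.corrcoef(judged, self_report)[0, 1])

# Hypothetical self-reported extraversion scores for five targets
self_report = np.array([3.2, 4.1, 2.5, 3.8, 4.6])
# Hypothetical computer-based and friend-based judgments of the same targets
computer_judgment = np.array([3.0, 4.3, 2.7, 3.6, 4.4])
friend_judgment = np.array([3.5, 3.9, 3.4, 3.1, 4.0])

print("computer r =", round(judge_accuracy(computer_judgment, self_report), 2))
print("friend r   =", round(judge_accuracy(friend_judgment, self_report), 2))
```

In the study this correlation would be computed per trait and averaged across the Big Five; the sketch shows only the core comparison for a single trait.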
Two rival theories of how humans recognize faces exist: (i) recognition is innate, relying on specialized neocortical circuitry, and (ii) recognition is a learned expertise, relying on general object recognition pathways. Here, we explore whether animals without a neocortex can learn to recognize human faces. Human facial recognition has previously been demonstrated in birds; however, birds are now known to possess neocortex-like structures, and with much of the work done in domesticated pigeons, one cannot rule out the possibility that they have developed adaptations for human face recognition. Fish do not appear to possess neocortex-like cells and, given their lack of direct exposure to humans, are unlikely to have evolved any specialized capabilities for human facial recognition. Using a two-alternative forced-choice procedure, we show that archerfish (Toxotes chatareus) can learn to discriminate a large number of human face images (Experiment 1, 44 faces), even after controlling for colour, head shape and brightness (Experiment 2, 18 faces). This study not only demonstrates that archerfish have impressive pattern discrimination abilities, but also provides evidence that a vertebrate lacking a neocortex, and without an evolutionary prerogative to discriminate human faces, can nonetheless do so to a high degree of accuracy.
Social insects make elaborate use of simple mechanisms to achieve seemingly complex behavior and may thus provide a unique resource to discover the basic cognitive elements required for culture, i.e., group-specific behaviors that spread from “innovators” to others in the group via social learning. We first explored whether bumblebees can learn a nonnatural object manipulation task by using string pulling to access a reward that was presented out of reach. Only a small minority “innovated” and solved the task spontaneously, but most bees were able to learn to pull a string when trained in a stepwise manner. In addition, naïve bees learnt the task by observing a trained demonstrator from a distance. Learning the behavior relied on a combination of simple associative mechanisms and trial-and-error learning and did not require “insight”: naïve bees failed a “coiled-string experiment,” in which they did not receive instant visual feedback of the target moving closer when tugging on the string. In cultural diffusion experiments, the skill spread rapidly from a single knowledgeable individual to the majority of a colony’s foragers. We observed several sequential sets (“generations”) of learners: previously naïve observers first acquired the technique by interacting with skilled individuals and subsequently became demonstrators for the next “generation” of learners, so that the longevity of the skill in the population could outlast the lives of informed foragers. This suggests that, so long as animals have a basic toolkit of associative and motor learning processes, the key ingredients for the cultural spread of unusual skills are already in place and do not require sophisticated cognition.
Motor skill memory is first encoded online in a fragile form during practice and then converted into a stable form by offline consolidation, the behavioral stage critical for successful learning. Praise, a social reward, is thought to boost motor skill learning by increasing motivation, which leads to increased practice. However, the effect of praise on consolidation is unknown. Here, we tested the hypothesis that praise following motor training directly facilitates skill consolidation. Forty-eight healthy participants were trained on a sequential finger-tapping task. Immediately after training, participants were divided into three groups according to whether they received praise for their own training performance, praise for another participant’s performance, or no praise. Participants who received praise for their own performance showed a significantly higher rate of offline improvement relative to other participants when performing a surprise recall test of the learned sequence. In contrast, average performance on a novel sequence and on randomly ordered tapping did not differ between the three experimental groups. These results are the first to indicate that praise-related improvements in motor skill memory are not due to a feedback-incentive mechanism, but instead involve direct effects on the offline consolidation process.
Misalignments between endogenous circadian rhythms and the built environment (i.e., social jet lag, SJL) result in learning and attention deficits. Currently, there is no way to assess the impact of SJL on learning outcomes of large populations as a response to schedule choices, let alone to assess which individuals are most negatively impacted by these choices. We analyzed two years of learning management system login events for 14,894 Northeastern Illinois University (NEIU) students to investigate the capacity of such systems as tools for mapping the impact of SJL over large populations while maintaining the ability to generate insights about individuals. Personal daily activity profiles were validated against known biological timing effects and revealed that a majority of students experience more than 30 minutes of SJL on average, with greater SJL amplitude correlating strongly with a significant decrease in academic performance, especially in students with later apparent chronotypes. Our findings demonstrate that online records can be used to map individual- and population-level SJL, allow deep mining for patterns across demographics, and could guide schedule choices in an effort to minimize SJL’s negative impact on learning outcomes.
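Social jet lag is commonly operationalized as the shift between a person’s activity (or sleep) midpoint on free days versus scheduled days. A minimal sketch of that idea applied to login timestamps; the function names and the events below are illustrative assumptions, not the authors’ pipeline:

```python
from datetime import datetime
from statistics import mean

def activity_midpoint(timestamps):
    """Mean clock time (in hours) of a user's login events."""
    return mean(t.hour + t.minute / 60 for t in timestamps)

def social_jet_lag(scheduled_day_logins, free_day_logins):
    """One common operationalization of SJL: the absolute shift between
    activity midpoints on free days vs. scheduled (class/work) days."""
    return abs(activity_midpoint(free_day_logins)
               - activity_midpoint(scheduled_day_logins))

# Hypothetical login events for one student
workdays = [datetime(2020, 1, 6, 9, 30), datetime(2020, 1, 7, 10, 0)]
free_days = [datetime(2020, 1, 11, 12, 0), datetime(2020, 1, 12, 13, 30)]
print(social_jet_lag(workdays, free_days), "hours")  # shift in hours
```

A real analysis would aggregate over two years of events per student and handle midnight wrap-around; the sketch only shows the core midpoint-shift comparison.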
Precision medicine approaches rely on obtaining precise knowledge of the true state of health of an individual patient, which results from a combination of their genetic risks and environmental exposures. This approach is currently limited by the lack of effective and efficient non-invasive medical tests to define the full range of phenotypic variation associated with individual health. Such knowledge is critical for improved early intervention, for better treatment decisions, and for ameliorating the steadily worsening epidemic of chronic disease. We present proof-of-concept experiments to demonstrate how routinely acquired cross-sectional CT imaging may be used to predict patient longevity as a proxy for overall individual health and disease status using computer image analysis techniques. Despite the limitations of a modest dataset and the use of off-the-shelf machine learning methods, our results are comparable to previous ‘manual’ clinical methods for longevity prediction. This work demonstrates that radiomics techniques can be used to extract biomarkers relevant to mortality, one of the most widely used outcomes in epidemiological and clinical research, and that deep learning with convolutional neural networks can be usefully applied to radiomics research. Computer image analysis applied to routinely collected medical images offers substantial potential to enhance precision medicine initiatives.
Alcohol is known to facilitate memory if given after learning information in the laboratory; we aimed to investigate whether this effect can be found when alcohol is consumed in a naturalistic setting. Eighty-eight social drinkers were randomly allocated to either an alcohol self-dosing or a sober condition. The study assessed both retrograde facilitation and alcohol-induced memory impairment using two independent tasks. In the retrograde task, participants learnt information in their own homes and then consumed alcohol ad libitum. Participants then undertook an anterograde memory task while intoxicated, to assess alcohol-induced impairment. Both memory tasks were completed again the following day. The mean amount of alcohol consumed was 82.59 grams over the evening. For the retrograde task, as predicted, both conditions exhibited similar performance on the memory task immediately following learning (before intoxication), yet performance was better when tested the morning after encoding in the alcohol condition only. The anterograde task did not reveal significant differences in memory performance post-drinking. Units of alcohol drunk were positively correlated with the amount of retrograde facilitation the following morning. These findings demonstrate the retrograde facilitation effect in a naturalistic setting and show it to be related to the self-administered grams of alcohol.
There is a gap in knowledge about the mechanisms of sports-related brain injuries. The objective of this study was to determine the mechanisms of brain injuries among children and youth participating in team sports.
Patient safety measurement remains a global challenge. Patients are an important but neglected source of learning; however, little is known about what patients can add to our understanding of safety. We sought to understand the incidence and nature of patient-reported safety concerns in hospital.
Cognitive science has long shown interest in expertise, in part because predicting and controlling expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. This reliance on contrastive samples yields deep insight into development only where the same differences matter throughout skill acquisition, and may be pernicious where the predictive importance of variables is not constant across levels of expertise. Before the development of sophisticated machine learning tools for mining larger samples, and indeed before such samples were available, it was difficult to test the implicit assumption of static variable importance in expertise development. To investigate whether this reliance may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method: the online acquisition of telemetry data from an activity that is a common part of daily life for many, video gaming. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3,360 Real-Time Strategy players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false: the predictive importance of these variables shifted as expertise increased, and, at least in our dataset, a contrastive approach would have been misleading. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts. We also identify plausible cognitive markers of expertise.
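The core claim can be illustrated on synthetic data: if one variable predicts performance for novices and a different one for experts, a contrastive novice-vs-expert comparison hides the crossover. The variable names, effect sizes, and data below are invented for illustration and are not the study’s measures:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500  # simulated players per expertise bracket

def simulate(bracket):
    """Toy data where action speed drives performance for novices,
    while attention allocation drives it for experts (an assumption
    made purely to illustrate shifting variable importance)."""
    speed = rng.normal(size=N)
    attention = rng.normal(size=N)
    noise = rng.normal(scale=0.5, size=N)
    if bracket == "novice":
        performance = 1.0 * speed + 0.1 * attention + noise
    else:  # expert
        performance = 0.1 * speed + 1.0 * attention + noise
    return speed, attention, performance

# Within-bracket predictive importance, measured here as a simple correlation
for bracket in ("novice", "expert"):
    speed, attention, perf = simulate(bracket)
    r_speed = np.corrcoef(speed, perf)[0, 1]
    r_attention = np.corrcoef(attention, perf)[0, 1]
    print(f"{bracket}: r(speed)={r_speed:.2f}, r(attention)={r_attention:.2f}")
```

Pooling only the two extreme brackets would suggest both variables matter about equally; estimating importance within each level of expertise, as the study does across seven levels, reveals the shift.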