Concept: Cognitive science
The most exciting hypothesis in cognitive science right now is the theory that cognition is embodied. Like all good ideas in cognitive science, however, embodiment immediately came to mean six different things. The most common definitions involve the straightforward claim that “states of the body modify states of the mind.” However, the implications of embodiment are actually much more radical than this. If cognition can span the brain, body, and the environment, then the “states of mind” of disembodied cognitive science won’t exist to be modified. Cognition will instead be an extended system assembled from a broad array of resources. Taking embodiment seriously therefore requires both new methods and new theory. Here we outline four key steps that research programs should follow in order to fully engage with the implications of embodiment. The first step is to conduct a task analysis, which characterizes from a first-person perspective the specific task that a perceiving-acting cognitive agent is faced with. The second step is to identify the task-relevant resources the agent has access to in order to solve the task. These resources can span brain, body, and environment. The third step is to identify how the agent can assemble these resources into a system capable of solving the problem at hand. The last step is to test the agent’s performance to confirm that the agent is actually using the solution identified in step 3. We explore these steps in more detail with reference to two useful examples (the outfielder problem and the A-not-B error), and introduce how to apply this analysis to the thorny question of language use. Embodied cognition is more than we think it is, and we have the tools we need to realize its full potential.
There is a growing trend of inactivity among children, which may result not only in poorer physical health but also in poorer cognitive health. Previous research has shown that lower fitness is related to decreased cognitive function on tasks requiring perception, memory, and cognitive control, as well as to lower academic achievement.
Humans in all societies form and participate in cooperative alliances. To successfully navigate an alliance-laced world, the human mind needs to detect new coalitions and alliances as they emerge, and predict which of many potential alliance categories are currently organizing an interaction. We propose that evolution has equipped the mind with cognitive machinery that is specialized for performing these functions: an alliance detection system. In this view, racial categories do not exist because skin color is perceptually salient; they are constructed and regulated by the alliance system in environments where race predicts social alliances and divisions. Early tests using adversarial alliances showed that the mind spontaneously detects which individuals are cooperating against a common enemy, implicitly assigning people to rival alliance categories based on patterns of cooperation and competition. But is social antagonism necessary to trigger the categorization of people by alliance? That is, do we cognitively link A and B into an alliance category only because they are jointly in conflict with C and D? We report new studies demonstrating that peaceful cooperation can trigger the detection of new coalitional alliances and make race fade in relevance. Alliances did not need to be marked by team colors or other perceptually salient cues. When race did not predict the ongoing alliance structure, behavioral cues about cooperative activities up-regulated categorization by coalition and down-regulated categorization by race, sometimes eliminating it. Alliance cues that sensitively regulated categorization by coalition and race had no effect on categorization by sex, eliminating many alternative explanations for the results. The results support the hypothesis that categorizing people by their race is a reversible product of a cognitive system specialized for detecting alliance categories and regulating their use.
Common enemies are not necessary to erase important social boundaries; peaceful cooperation can have the same effect.
Making new breakthroughs in understanding the processes underlying human cognition may depend on the availability of very large datasets that have not historically existed in psychology and neuroscience. Lumosity is a web-based cognitive training platform that has grown to include over 600 million cognitive training task results from over 35 million individuals, comprising the largest existing dataset of human cognitive performance. As part of the Human Cognition Project, Lumosity’s collaborative research program to understand the human mind, Lumos Labs researchers and external research collaborators have begun to explore this dataset in order to uncover novel insights about the correlates of cognitive performance. This paper presents two preliminary demonstrations of some of the kinds of questions that can be examined with the dataset. The first example focuses on replicating known findings relating lifestyle factors to baseline cognitive performance in a demographically diverse, healthy population at a much larger scale than has previously been available. The second example examines a question that would likely be very difficult to study at scale with laboratory-based or existing online experimental research approaches: specifically, how learning ability for different types of cognitive tasks changes with age. We hope that these examples will provoke the imagination of researchers who are interested in collaborating to answer fundamental questions about human cognitive performance.
In past years, a few methods have been developed to translate human EEG into music. In 2009 (PLoS ONE 4: e5915), we developed a method to generate scale-free brainwave music in which the amplitude of the EEG is translated into musical pitch according to the power law that both obey, the period of an EEG waveform is translated directly into the duration of a note, and the logarithm of the average power change of the EEG is translated into musical intensity according to Fechner’s law. In this work, we propose to use the simultaneously recorded fMRI signal to control the intensity of the EEG music, so that an EEG-fMRI music is generated by combining two different, simultaneous brain signals. Most importantly, this approach also realizes a power law for musical intensity, since the fMRI signal follows one. The EEG-fMRI music thus takes a step forward in reflecting the physiological processes of the scale-free brain.
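The mapping rules described in the abstract (power-law amplitude-to-pitch, period-to-duration, logarithmic power-to-intensity) can be sketched as follows. This is an illustrative reconstruction, not the authors' published implementation: the function name, the pitch exponent, and the Fechner constant are assumptions, and feature extraction from raw EEG is omitted.

```python
import math

def eeg_wave_to_note(amplitude, period_s, avg_power,
                     pitch_exponent=-1.0, k_fechner=20.0):
    """Map one EEG waveform to a (pitch, duration, intensity) note.

    Hypothetical sketch of the abstract's three mapping rules:
    - pitch: amplitude -> pitch via a power-law relation (linear in
      log-log space); the exponent here is an illustrative assumption,
      not the published value.
    - duration: the waveform's period is used directly as note duration.
    - intensity: Fechner's law, intensity proportional to the logarithm
      of the average power.
    """
    # Power law: a negative exponent sends larger-amplitude (slower,
    # higher-power) waves to lower pitches; clamp to a MIDI-like range.
    pitch = 69 + 12 * pitch_exponent * math.log2(max(amplitude, 1e-12))
    pitch = max(0, min(127, round(pitch)))

    # Duration in seconds, taken directly from the EEG period.
    duration = period_s

    # Fechner's law: perceived intensity grows with the log of the
    # physical signal. In the EEG-fMRI variant described above, the
    # simultaneously recorded fMRI (BOLD) power would drive this term
    # instead of the EEG power.
    intensity = k_fechner * math.log10(max(avg_power, 1e-12))
    return pitch, duration, intensity
```

For example, a unit-amplitude waveform lasting half a second with an average power of 100 maps to the reference pitch (MIDI 69), a 0.5 s note, and an intensity of 40 under these assumed constants.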
According to the Intuitive Belief Hypothesis, supernatural belief relies heavily on intuitive thinking, and decreases when analytic thinking is engaged. After pointing out various limitations in prior attempts to support this Intuitive Belief Hypothesis, we test it across three new studies using a variety of paradigms, ranging from a pilgrimage field study to a neurostimulation experiment. In all three studies, we found no relationship between intuitive or analytical thinking and supernatural belief. We conclude that it is premature to explain belief in gods as ‘intuitive’, and that other factors, such as socio-cultural upbringing, are likely to play a greater role in the emergence and maintenance of supernatural belief than cognitive style.
- Proceedings of the National Academy of Sciences of the United States of America
The majority of individuals evaluate themselves as superior to average. This is a cognitive bias known as the “superiority illusion.” This illusion helps us to have hope for the future and is deep-rooted in the process of human evolution. In this study, we examined the default states of the neural and molecular systems that generate this illusion, using resting-state functional MRI and PET. We found that resting-state functional connectivity between the frontal cortex and the striatum, regulated by inhibitory dopaminergic neurotransmission, determines individual levels of the superiority illusion. Our findings help elucidate how this key aspect of the human mind is biologically determined, and identify potential molecular and neural targets for the treatment of depressive realism.
Those in 20th century philosophy, psychology, and neuroscience who have discussed the nature of skilled action have, for the most part, accepted the view that being skilled at an activity is independent of knowing facts about that activity, i.e., that skill is independent of knowledge of facts. In this paper we question this view of motor skill. We begin by situating the notion of skill in historical and philosophical context. We use the discussion to explain and motivate the view that motor skill depends upon knowledge of facts. This conclusion seemingly contradicts well-known results in cognitive science. It is natural, on the face of it, to take the case of H.M., the seminal case in cognitive neuroscience that led to the discovery of different memory systems, as providing powerful evidence for the independence of knowledge and skill acquisition. After all, H.M. seems to show that motor learning is retained even when previous knowledge about the activity has been lost. Improvements in skill generally require increased precision of selected actions, which we call motor acuity. Motor acuity may indeed not require propositional knowledge and has direct parallels with perceptual acuity. We argue, however, that reflection on the specifics of H.M.’s case, as well as other research on the nature of skill, indicates that learning to become skilled at a motor task, for example tennis, depends also on knowledge-based selection of the right actions. Thus skilled activity requires both acuity and knowledge, with both increasing with practice. The moral of our discussion ranges beyond debates about motor skill; we argue that it undermines any attempt to draw a distinction between practical and theoretical activities. While we will reject the independence of skill and knowledge, our discussion leaves open several different possible relations between knowledge and skill. Deciding between them is a task to be resolved by future research.
Mind wandering episodes have been construed as periods of “stimulus-independent” thought, where our minds are decoupled from the external sensory environment. In two experiments, we used behavioral and event-related potential (ERP) measures to determine whether mind wandering episodes can also be considered as periods of “response-independent” thought, with our minds disengaged from adjusting our behavioral outputs. In the first experiment, participants performed a motor tracking task and were occasionally prompted to report whether their attention was “on-task” or “mind wandering.” We found greater tracking error in periods prior to mind wandering vs. on-task reports. To ascertain whether this finding was due to attenuation in visual perception per se vs. a disruptive effect of mind wandering on performance monitoring, we conducted a second experiment in which participants completed a time-estimation task. They were given feedback on the accuracy of their estimations while we recorded their EEG, and were also occasionally asked to report their attention state. We found that the sensitivity of behavior and the P3 ERP component to feedback signals were significantly reduced just prior to mind wandering vs. on-task attentional reports. Moreover, these effects co-occurred with decreases in the error-related negativity elicited by feedback signals (fERN), a direct measure of behavioral feedback assessment in cortex. Our findings suggest that the functional consequences of mind wandering are not limited to just the processing of incoming stimulation per se, but extend as well to the control and adjustment of behavior.
Language production processes can provide insight into how language comprehension works and into language typology (why languages tend to have certain characteristics more often than others). Drawing on work in memory retrieval, motor planning, and serial order in action planning, the Production-Distribution-Comprehension (PDC) account links work in the fields of language production, typology, and comprehension: (1) faced with substantial computational burdens of planning and producing utterances, language producers implicitly follow three biases in utterance planning that promote word order choices that reduce these burdens, thereby improving production fluency. (2) These choices, repeated over many utterances and individuals, shape the distributions of utterance forms in language. The claim that language form stems to a large degree from producers' attempts to mitigate utterance planning difficulty is contrasted with alternative accounts in which form is driven by language use more broadly, by language acquisition processes, or by producers' attempts to create language forms that are easily understood by comprehenders. (3) Language perceivers implicitly learn the statistical regularities in their linguistic input, and they use this prior experience to guide comprehension of subsequent language. In particular, they learn to predict the sequential structure of linguistic signals, based on the statistics of previously encountered input. Thus, key aspects of comprehension behavior are tied to lexico-syntactic statistics in the language, which in turn derive from utterance planning biases promoting production of comparatively easy utterance forms over more difficult ones. This approach contrasts with classic theories in which comprehension behaviors are attributed to innate design features of the language comprehension system and associated working memory. The PDC instead links basic features of comprehension to a different source: production processes that shape language form.