Concept: Face perception
Two rival theories of how humans recognize faces exist: (i) recognition is innate, relying on specialized neocortical circuitry, and (ii) recognition is a learned expertise, relying on general object recognition pathways. Here, we explore whether animals without a neocortex can learn to recognize human faces. Human facial recognition has previously been demonstrated for birds, but birds are now known to possess neocortex-like structures. Also, with much of the work done in domesticated pigeons, one cannot rule out the possibility that they have developed adaptations for human face recognition. Fish do not appear to possess neocortex-like cells and, given their lack of direct exposure to humans, are unlikely to have evolved any specialized capabilities for human facial recognition. Using a two-alternative forced-choice procedure, we show that archerfish (Toxotes chatareus) can learn to discriminate a large number of human face images (Experiment 1, 44 faces), even after controlling for colour, head shape and brightness (Experiment 2, 18 faces). This study not only demonstrates that archerfish have impressive pattern discrimination abilities, but also provides evidence that a vertebrate lacking a neocortex, and without an evolutionary prerogative to discriminate human faces, can nonetheless do so to a high degree of accuracy.
Cognition is one of the most flexible tools enabling adaptation to environmental variation. Living close to humans is thought to influence both the social and the physical cognition of animals through domestication and ontogeny. Here, we investigated to what extent physical cognition and two domains of social cognition of dogs have been affected by domestication and ontogeny. To address the effects of domestication, we compared captive wolves (n = 12) and dogs (n = 14) living in packs under the same conditions. To explore developmental effects, we compared these dogs to pet dogs (n = 12) living in human families. The animals were faced with a series of object-choice tasks, in which their response to communicative, behavioural and causal cues was tested. We observed that wolves outperformed dogs in their ability to follow causal cues, suggesting that domestication altered specific skills relating to this domain, whereas, surprisingly, developmental effects had no influence. All three groups performed similarly in the communicative and behavioural conditions, suggesting higher ontogenetic flexibility in the two social domains. These differences across cognitive domains need to be further investigated by comparing domestic and non-domesticated animals living in varying conditions.
The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).
Our recognition of familiar faces is excellent and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not come at the cost of reduced rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that choosing such representations is just as important as developing the matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
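The 'face-average' representation described in this abstract, a single template built from many photographs of the same person, can be sketched in a few lines. This is a minimal illustration only, assuming the input images have already been aligned and resized to a common shape; the function name `face_average` is a hypothetical label, not the phone's or the study's actual API.

```python
import numpy as np

def face_average(aligned_images):
    """Pixel-wise mean of a list of aligned, same-shape face images.

    Each image is an array of pixel intensities; alignment (matching eye
    and mouth positions across photos) is assumed to have been done upstream.
    """
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in aligned_images])
    return stack.mean(axis=0)

# Toy demo: averaging two tiny 2x2 "images".
imgs = [np.array([[0, 100], [200, 50]]),
        np.array([[100, 100], [0, 150]])]
avg = face_average(imgs)  # [[50., 100.], [100., 100.]]
```

The idea behind averaging is that image-specific variation (lighting, pose, expression) tends to cancel out across photos, leaving a more stable identity signal than any single enrollment image.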
Age is one of the most salient aspects in faces and of fundamental cognitive and social relevance. Although face processing has been studied extensively, brain regions responsive to age have yet to be localized. Using evocative face morphs and fMRI, we segregate two areas extending beyond the previously established face-sensitive core network, centered on the inferior temporal sulci and angular gyri bilaterally, both of which process changes of facial age. By means of probabilistic tractography, we compare their patterns of functional activation and structural connectivity. The ventral portion of Wernicke’s understudied perpendicular association fasciculus is shown to interconnect the two areas, and activation within these clusters is related to the probability of fiber connectivity between them. In addition, post-hoc age-rating competence is found to be associated with high response magnitudes in the left angular gyrus. Our results provide the first evidence that facial age has a distinct representation pattern in the posterior human brain. We propose that particular face-sensitive nodes interact with additional object-unselective quantification modules to obtain individual estimates of facial age. This brain network processing the age of faces differs from the cortical areas that have previously been linked to less developmental but instantly changeable face aspects. Our probabilistic method of associating activations with connectivity patterns reveals an exemplary link that can be used to further study, assess and quantify structure-function relationships.
Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explored the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions, mainly in the occipital, temporal and frontal cortex, that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.
Cartoon characters are omnipresent in popular media. While few studies have scientifically investigated their processing, in computer graphics, efforts are made to increase realism. Yet, close approximations of reality have sometimes been suggested to evoke a feeling of eeriness, the "uncanny valley" effect. Here, we used high-density electroencephalography to investigate brain responses to professionally stylized happy, angry, and neutral character faces. We employed six face-stylization levels varying from abstract to realistic and investigated the N170, early posterior negativity (EPN), and late positive potential (LPP) event-related components. The face-specific N170 showed a u-shaped modulation, with stronger reactions towards both the most abstract and the most realistic faces compared to medium-stylized faces. For abstract faces, the N170 was generated more occipitally than for real faces, implying stronger reliance on structural processing. Although emotional faces elicited the highest amplitudes on both the N170 and EPN, realism and expression interacted on the N170. Finally, the LPP increased linearly with face realism, reflecting an activity increase in visual and parietal cortex for more realistic faces. Results reveal differential effects of face stylization on distinct face-processing stages and suggest a perceptual basis to the uncanny valley hypothesis. They are discussed in relation to face perception, media design, and computer graphics.
Sleep deprivation is a major source of morbidity with widespread health effects, including increased risk of hypertension, diabetes, obesity, heart attack, and stroke. Moreover, sleep deprivation contributes to vehicle accidents and medical errors and is therefore an urgent topic of investigation. During sleep deprivation, homeostatic and circadian processes interact to build up sleep pressure, which results in slowed behavioral performance (cognitive lapses) typically attributed to attentional thalamic and frontoparietal circuits, but the underlying mechanisms remain unclear. Recently, studies of electroencephalograms (EEGs) in humans and of local field potentials (LFPs) in nonhuman primates and rodents have found that, during sleep deprivation, regional 'sleep-like' slow and theta (slow/theta) waves co-occur with impaired behavioral performance during wakefulness. Here we used intracranial electrodes to record single-neuron activities and LFPs in human neurosurgical patients performing a face/nonface categorization psychomotor vigilance task (PVT) over multiple experimental sessions, including a session after full-night sleep deprivation. We find that, just before cognitive lapses, the selective spiking responses of individual neurons in the medial temporal lobe (MTL) are attenuated, delayed, and lengthened. These 'neuronal lapses' are evident on a trial-by-trial basis when comparing the slowest behavioral PVT reaction times to the fastest. Furthermore, during cognitive lapses, LFPs exhibit a relative local increase in slow/theta activity that is correlated with degraded single-neuron responses and with baseline theta activity. Our results show that cognitive lapses involve local state-dependent changes in neuronal activity already present in the MTL.
How does cortical tissue change as brain function and behavior improve from childhood to adulthood? By combining quantitative and functional magnetic resonance imaging in children and adults, we find differential development of high-level visual areas that are involved in face and place recognition. Development of face-selective regions, but not place-selective regions, is dominated by microstructural proliferation. This tissue development is correlated with specific increases in functional selectivity to faces, as well as improvements in face recognition, and ultimately leads to differentiated tissue properties between face- and place-selective regions in adulthood, which we validate with postmortem cytoarchitectonic measurements. These data suggest a new model by which emergent brain function and behavior result from cortical tissue proliferation rather than from pruning exclusively.
- Cortex: a journal devoted to the study of the nervous system and behavior
Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants “saw” faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image (CI) that resembled a face, whereas those during letter pareidolia produced a CI that was letter-like. Further, the extent to which such behavioral CIs resembled faces was directly related to the level of face-specific activations in the rFFA. This finding suggests that the rFFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipitotemporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face.
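The behavioral classification images (CIs) in this abstract are typically computed by reverse correlation: averaging the noise images from trials on which the participant reported seeing a face (or letter) and subtracting the average from the remaining trials. The sketch below illustrates that standard recipe only; the study's actual pipeline (e.g. any smoothing or z-scoring of the CI) may differ, and `classification_image` is a hypothetical name.

```python
import numpy as np

def classification_image(noise_images, responses):
    """Reverse-correlation classification image.

    noise_images: array of shape (n_trials, ...) holding the pure-noise
        stimulus shown on each trial.
    responses: boolean per trial, True if the participant reported seeing
        the target (face or letter). Both response classes must occur.
    Returns mean noise on 'seen' trials minus mean noise on 'not seen' trials,
    so pixels that drove illusory reports show up as positive.
    """
    noise = np.asarray(noise_images, dtype=np.float64)
    seen = np.asarray(responses, dtype=bool)
    return noise[seen].mean(axis=0) - noise[~seen].mean(axis=0)

# Toy demo: three 2-pixel "noise" trials, target reported on trials 0 and 2.
noise = np.array([[2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
ci = classification_image(noise, [True, False, True])  # [2., -1.]
```

With real data the resulting image is inspected (or rated) for how face-like or letter-like it is, which is how the abstract's link between CI face-likeness and rFFA activation can be quantified.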