Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception with high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified, we presented the reflection images as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p < .0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p < .0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye reflection image. For crimes in which victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators.
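The reported effect sizes follow directly from the test statistics: for a one-sample t-test, Cohen's d equals t divided by the square root of the sample size. A quick arithmetic check of the abstract's values (the function name is just illustrative):

```python
import math

def cohens_d_from_t(t: float, n: int) -> float:
    """Cohen's d for a one-sample t-test: d = t / sqrt(n)."""
    return t / math.sqrt(n)

# Values reported in the abstract (n = 16 per group)
d_unfamiliar = cohens_d_from_t(7.64, 16)
d_familiar = cohens_d_from_t(11.15, 16)

print(round(d_unfamiliar, 2))  # 1.91
print(round(d_familiar, 2))    # 2.79
```

Both values round to exactly the effect sizes reported, so the statistics are internally consistent.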
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of impostor faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
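A face-average of the kind described is typically computed by aligning several photographs of the same person and averaging them pixel-wise. A minimal sketch of the averaging step only (the landmark-alignment step, and all names and values here, are illustrative assumptions, not the authors' implementation):

```python
def face_average(images):
    """Pixel-wise mean of aligned, same-sized grayscale images.

    `images` is a list of 2-D lists of pixel intensities; a real system
    would first align facial landmarks (eyes, mouth) before averaging.
    """
    n = len(images)
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(cols)]
            for r in range(rows)]

# Two toy 2x2 "photographs" of the same face under different lighting
avg = face_average([[[100, 120], [90, 110]],
                    [[140, 160], [110, 130]]])
print(avg)  # [[120.0, 140.0], [100.0, 120.0]]
```

The idea is that averaging washes out image-specific variation (lighting, expression) while preserving the stable structure of the face, which is what the verification system should match against.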
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant to the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants performed an emotion classification task, a gender discrimination task, or passive viewing. To differentiate fast, potentially reflexive, eye movements from more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region of the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow further information to be extracted from the stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them.
This mechanism might crucially depend on amygdala functioning and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
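The key eye-tracking measure underlying these results is dwell time per facial region. A minimal sketch of how fixation data of this kind is typically aggregated into region proportions (the region names and durations are illustrative, not the study's data):

```python
from collections import defaultdict

def dwell_proportions(fixations):
    """Proportion of total fixation duration spent in each region.

    `fixations` is a list of (region, duration_ms) tuples from one trial.
    """
    totals = defaultdict(float)
    for region, duration in fixations:
        totals[region] += duration
    grand = sum(totals.values())
    return {region: t / grand for region, t in totals.items()}

# Toy trial on a fearful face: the eye region dominates, as reported
trial = [("eyes", 400), ("nose", 100), ("mouth", 100), ("eyes", 200)]
print(dwell_proportions(trial))  # {'eyes': 0.75, 'nose': 0.125, 'mouth': 0.125}
```

Comparing such proportions across emotion conditions (e.g., eyes for fearful faces vs. mouth for happy faces) is what supports the diagnostic-feature claim.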
Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants' orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the only sound source in a face, and the literature has shown that infants older than 6 months already have a sound-mouth association, increased looking time towards the bottom blob (the pareidolic mouth area) during sound presentation indicated that they illusorily perceived a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds did not show any difference in looking time under either the upright or the inverted condition, suggesting that the perception of pareidolic faces, through sound association, develops at around 8 to 10 months of age.
- Proceedings of the National Academy of Sciences of the United States of America
Facial expressions of emotion in humans are believed to be produced by contracting one’s facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.
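If facial colors are consistent within and differential between emotion categories, as this abstract argues, then a face's region-color feature vector should be classifiable by its nearest emotion prototype. A minimal nearest-centroid sketch of that idea (the prototypes, feature layout, and values are invented for illustration):

```python
def nearest_emotion(face_colors, prototypes):
    """Classify a face by its region-color feature vector: pick the
    emotion prototype with the smallest squared distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda emo: sqdist(face_colors, prototypes[emo]))

# Illustrative prototypes: mean color features (e.g., redness, blueness)
# over a couple of facial regions, flattened into one vector per emotion
prototypes = {
    "happy":   [0.8, 0.2, 0.6, 0.3],
    "disgust": [0.2, 0.7, 0.3, 0.8],
}
print(nearest_emotion([0.75, 0.25, 0.55, 0.35], prototypes))  # happy
```

The classification succeeds only if within-category color variation is smaller than between-category variation, which is exactly the consistency question the study poses.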
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain's code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell's firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code overturns the long-standing assumption that face cells encode specific facial identities: we confirmed this by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
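The axis model described here reduces to a linear projection: a cell's firing rate is its preferred axis dotted with the face vector (plus a baseline), so any two faces with the same projection onto that axis drive the cell identically, however different they look. A toy sketch of that property (all numbers illustrative):

```python
def firing_rate(axis, face, baseline=5.0):
    """Linear axis model: rate is the projection of the face vector
    onto the cell's preferred axis, plus a baseline rate."""
    return baseline + sum(a * f for a, f in zip(axis, face))

axis = [1.0, 0.0, 0.0]        # this cell reads out only the first dimension
face_a = [2.0, 3.0, -1.0]
face_b = [2.0, -5.0, 8.0]     # a very different face with the same projection

print(firing_rate(axis, face_a))  # 7.0
print(firing_rate(axis, face_b))  # 7.0 -> identical response
```

This is the logic behind the engineered faces in the abstract: varying a face only in directions orthogonal to a cell's axis changes its appearance without changing that cell's response, while a population of cells with different axes can still pinpoint any face in the space.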
The basis on which people make social judgments from the image of a face remains an important open problem in fields ranging from psychology to neuroscience and economics. Multiple cues from facial appearance influence the judgments that viewers make. Here we investigate the contribution of a novel cue: the change in appearance due to the perspective distortion that results from viewing distance. We found that photographs of faces taken from within personal space elicit lower investments in an economic trust game, and lower ratings of social traits (such as trustworthiness, competence, and attractiveness), compared to photographs taken from a greater distance. The effect was replicated across multiple studies that controlled for facial image size, facial expression, and lighting, and was not explained by face width-to-height ratio, explicit knowledge of the camera distance, or whether the faces were perceived as typical. These results demonstrate a novel facial cue influencing a range of social judgments as a function of interpersonal distance, an effect that may be processed implicitly.
Age is a primary social dimension. We behave differently toward people as a function of how old we perceive them to be. Age perception relies on cues that are correlated with age, such as wrinkles. Here we report that aspects of facial contrast (the contrast between facial features and the surrounding skin) decreased with age in a large sample of adult Caucasian females. These same aspects of facial contrast were also significantly correlated with the perceived age of the faces. Individual faces were perceived as younger when these aspects of facial contrast were artificially increased, but older when these aspects of facial contrast were artificially decreased. These findings show that facial contrast plays a role in age perception, and that faces with greater facial contrast look younger. Because facial contrast is increased by typical cosmetics use, we infer that cosmetics function in part by making the face appear younger.
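The artificial contrast manipulation described can be expressed as scaling the difference between feature and skin intensity around the skin value. A minimal sketch (the function name and luminance values are illustrative, not the study's stimuli):

```python
def adjust_facial_contrast(feature, skin, factor):
    """Scale the feature-vs-skin intensity difference by `factor`.

    factor > 1 exaggerates facial contrast (younger-looking face);
    factor < 1 reduces it (older-looking face).
    """
    return skin + factor * (feature - skin)

skin = 180   # surrounding skin luminance
lips = 120   # feature luminance, darker than the skin

print(adjust_facial_contrast(lips, skin, 1.5))  # 90.0  -> higher contrast
print(adjust_facial_contrast(lips, skin, 0.5))  # 150.0 -> lower contrast
```

Cosmetics that darken lips and eyes relative to the skin act like a factor greater than 1 in this scheme, which is the basis of the inference in the final sentence.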
Millions of people use online dating sites each day, scanning through streams of face images in search of an attractive mate. Face images, like most visual stimuli, undergo processes whereby the current percept is altered by exposure to previous visual input. Recent studies using rapid sequences of faces have found that perception of face identity is biased towards recently seen faces, promoting identity-invariance over time, and this has been extended to perceived face attractiveness. In this paper we adapt the rapid sequence task to ask a question about mate selection pertinent in the digital age. We designed a binary task mimicking the selection interface currently popular in online dating websites in which observers typically make binary decisions (attractive or unattractive) about each face in a sequence of unfamiliar faces. Our findings show that binary attractiveness decisions are not independent: we are more likely to rate a face as attractive when the preceding face was attractive than when it was unattractive.
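The sequential dependence reported can be quantified by comparing the rate of "attractive" responses conditioned on the previous response. A minimal sketch on an invented toy sequence (1 = attractive, 0 = unattractive):

```python
def conditional_rates(responses):
    """Rates of 'attractive' (1) responses conditioned on the previous
    response, from a binary rating sequence."""
    pairs = list(zip(responses, responses[1:]))
    def rate(prev_value):
        following = [cur for prev, cur in pairs if prev == prev_value]
        return sum(following) / len(following)
    return rate(1), rate(0)

# Toy sequence exhibiting the reported assimilation effect
seq = [1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
p_after_attr, p_after_unattr = conditional_rates(seq)
print(round(p_after_attr, 2), round(p_after_unattr, 2))  # 0.67 0.33
```

A gap between the two conditional rates, as in this toy sequence, is the signature of non-independent binary decisions; independent decisions would yield equal rates regardless of the preceding response.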