Criminal investigations often use photographic evidence to identify suspects. Here we combined robust face perception and high-resolution photography to mine face photographs for hidden information. By zooming in on high-resolution face photographs, we were able to recover images of unseen bystanders from reflections in the subjects' eyes. To establish whether these bystanders could be identified from the reflection images, we presented the recovered images as stimuli in a face matching task (Experiment 1). Accuracy in the face matching task was well above chance (50%), despite the unpromising source of the stimuli. Participants who were unfamiliar with the bystanders' faces (n = 16) performed at 71% accuracy [t(15) = 7.64, p<.0001, d = 1.91], and participants who were familiar with the faces (n = 16) performed at 84% accuracy [t(15) = 11.15, p<.0001, d = 2.79]. In a test of spontaneous recognition (Experiment 2), observers could reliably name a familiar face from an eye reflection image. For crimes in which the victims are photographed (e.g., hostage taking, child sex abuse), reflections in the eyes of the photographic subject could help to identify perpetrators.
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average', a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when it stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as the development of the matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
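In the face perception literature, a 'face-average' of this kind is typically a pixel-wise mean over several aligned photographs of the same person, which washes out image-specific variation (lighting, pose, expression) while preserving identity. A minimal sketch of that idea, assuming the images have already been aligned and cropped to the same size (the alignment step itself is not shown, and the function name is illustrative, not from the study):

```python
import numpy as np

def face_average(images):
    """Pixel-wise mean of pre-aligned, same-sized face images.

    `images`: iterable of HxWxC uint8 arrays, assumed already aligned
    (e.g. by eye position). Averaging in float avoids uint8 overflow.
    """
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0).astype(np.uint8)

# Toy demo: random arrays stand in for aligned face photographs.
rng = np.random.default_rng(0)
imgs = [rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8) for _ in range(5)]
avg = face_average(imgs)
```

The output has the same shape and dtype as each input image, so it can be enrolled in a verification system anywhere a single photograph could be.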
Faces convey primal information for our social life. This information is so primal that we sometimes find faces in non-face objects. Such illusory perception is called pareidolia. In this study, using infants' orientation behavior toward a sound source, we demonstrated that infants also perceive pareidolic faces. An image formed by four blobs and an outline was shown to infants with or without pure tones, and the time they spent looking at each blob was compared. Since the mouth is the unique sound source in a face and the literature has shown that infants older than 6 months already have a sound-mouth association, increased looking time towards the bottom blob (the pareidolic mouth area) during sound presentation indicated that they illusorily perceived a face in the image. Infants aged 10 and 12 months looked longer at the bottom blob under the upright-image condition, whereas no differences in looking time were observed for any blob under the inverted-image condition. However, 8-month-olds did not show any difference in looking time under either the upright or the inverted condition, suggesting that the perception of pareidolic faces, through sound association, develops at around 8 to 10 months of age.
The basis on which people make social judgments from the image of a face remains an important open problem in fields ranging from psychology to neuroscience and economics. Multiple cues from facial appearance influence the judgments that viewers make. Here we investigate the contribution of a novel cue: the change in appearance due to the perspective distortion that results from viewing distance. We found that photographs of faces taken from within personal space elicit lower investments in an economic trust game, and lower ratings of social traits (such as trustworthiness, competence, and attractiveness), compared to photographs taken from a greater distance. The effect was replicated across multiple studies that controlled for facial image size, facial expression and lighting, and was not explained by face width-to-height ratio, explicit knowledge of the camera distance, or whether the faces were perceived as typical. These results demonstrate a novel facial cue influencing a range of social judgments as a function of interpersonal distance, an effect that may be processed implicitly.
Millions of people use online dating sites each day, scanning through streams of face images in search of an attractive mate. Face images, like most visual stimuli, undergo processes whereby the current percept is altered by exposure to previous visual input. Recent studies using rapid sequences of faces have found that perception of face identity is biased towards recently seen faces, promoting identity-invariance over time, and this has been extended to perceived face attractiveness. In this paper we adapt the rapid sequence task to ask a question about mate selection pertinent in the digital age. We designed a binary task mimicking the selection interface currently popular on online dating websites: observers made a binary decision (attractive or unattractive) about each face in a sequence of unfamiliar faces. Our findings show that binary attractiveness decisions are not independent: we are more likely to rate a face as attractive when the preceding face was attractive than when it was unattractive.
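The dependence described here can be quantified very simply: compare the proportion of "attractive" responses that follow an "attractive" response with the proportion that follow an "unattractive" one. The sketch below is an illustrative way to compute those conditional proportions from a binary response sequence, not the authors' actual analysis:

```python
def sequential_dependence(ratings):
    """Conditional proportions of 'attractive' (1) responses given the
    previous response, from a sequence of binary ratings (1/0).

    Returns (p_after_attractive, p_after_unattractive); a gap between
    the two indicates serial dependence between consecutive decisions.
    """
    after = {0: [], 1: []}
    for prev, curr in zip(ratings, ratings[1:]):
        after[prev].append(curr)
    p1 = sum(after[1]) / len(after[1]) if after[1] else float("nan")
    p0 = sum(after[0]) / len(after[0]) if after[0] else float("nan")
    return p1, p0

# Toy response sequence: 1 = attractive, 0 = unattractive.
p_after_1, p_after_0 = sequential_dependence([1, 1, 1, 0, 0, 1, 1, 0, 0])
```

Under independence the two proportions should match (up to sampling noise); the reported finding corresponds to p_after_1 reliably exceeding p_after_0.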
Humans have a natural expertise in recognizing faces. However, the nature of the interaction between this critical visual skill and memory remains unclear. Here, we had the unique opportunity to test two individuals who have had exceptional success in the World Memory Championships, including several world records in face-name association memory. We designed a range of face processing tasks to determine whether superior/expert face memory skills are associated with distinctive perceptual strategies for processing faces. Superior memorizers excelled at tasks involving associative face-name learning. Nevertheless, they were as impaired as controls in tasks probing the efficiency of the face system: face inversion and the other-race effect. Superior memorizers did not show increased hippocampal volumes, and exhibited optimal generic eye movement strategies when they performed complex multi-item face-name associations. Our data show that the visual computations of the face system are not malleable and are robust to acquired expertise involving extensive training of associative memory.
The perception of facial expressions and of objects at a distance are well-established areas of psychological research, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies: shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter.
Atypical responses to direct gaze are one of the most characteristic hallmarks of autism spectrum disorder (ASD). The cause and mechanism underlying this phenomenon, however, have remained unknown. Here we investigated whether the atypical responses to eye gaze in autism spectrum disorder are dependent on the conscious perception of others' faces. Face stimuli with direct and averted gaze were rendered invisible by interocular suppression, and eye movements were recorded from participants with ASD and an age- and sex-matched control group of typically developing (TD) participants. Despite complete unawareness of the stimuli, the two groups differed significantly in their eye movements to the face stimuli. In contrast to the significant positive saccadic index observed in the TD group, indicating an unconscious preference for the face with direct gaze, the ASD group had no such preference towards direct gaze and instead showed a tendency to prefer the face with averted gaze, suggesting an unconscious avoidance of eye contact. These results provide the first evidence that the atypical response to eye contact in ASD is an unconscious and involuntary response. They provide a better understanding of the mechanism of gaze avoidance in autism and might lead to new diagnostic and therapeutic interventions.
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.
Although certain characteristics of human faces are broadly considered more attractive (e.g., symmetry, averageness), people also routinely disagree with each other on the relative attractiveness of faces. That is, to some significant degree, beauty is in the "eye of the beholder." Here, we investigate the origins of these individual differences in face preferences using a twin design, allowing us to estimate the relative contributions of genetic and environmental variation to individual face attractiveness judgments, or face preferences. We first show that individual face preferences can be reliably measured and are readily dissociable from other types of attractiveness judgments (e.g., judgments of scenes, objects). Next, we show that individual face preferences result primarily from environments that are unique to each individual. This is in striking contrast to individual differences in face identity recognition, which result primarily from variations in genes. We thus complete an etiological double dissociation between two core domains of social perception (judgments of identity versus attractiveness) within the same visual stimulus (the face). At the same time, we provide an example, rare in behavioral genetics, of a reliably and objectively measured behavioral characteristic where variations are shaped mostly by the environment. The large impact of experience on individual face preferences provides a novel window into the evolution and architecture of the social brain, while lending new empirical support to the long-standing claim that environments shape individual notions of what is attractive.
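The logic of a classical twin design can be illustrated with Falconer's formulas, which decompose trait variance from the correlations observed in monozygotic (MZ) and dizygotic (DZ) twin pairs. The sketch below uses made-up correlations chosen only to illustrate the pattern reported here (MZ and DZ twins correlating almost equally, which points to environment rather than genes); it is not the modeling approach or the data from the study:

```python
def falconer_ace(r_mz, r_dz):
    """Classical Falconer decomposition from twin-pair correlations.

    r_mz: trait correlation between monozygotic (identical) twin pairs
    r_dz: trait correlation between dizygotic (fraternal) twin pairs
    Returns (a2, c2, e2): additive-genetic, shared-environment, and
    unique-environment (plus measurement error) variance components.
    """
    a2 = 2 * (r_mz - r_dz)   # heritability: MZ twins share ~2x the genes
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # environment unique to each twin, + error
    return a2, c2, e2

# Illustrative (hypothetical) values: near-equal MZ and DZ correlations
# yield a small genetic component and a large environmental one.
a2, c2, e2 = falconer_ace(r_mz=0.40, r_dz=0.35)
```

In this toy case the genetic component is 0.10 while environment (shared plus unique) accounts for 0.90 of the variance, mirroring the qualitative pattern the abstract describes for face preferences; full twin studies would fit structural equation models rather than these point formulas.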