Face Processing
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm for measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG in 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not CCTV-like images, suggesting that sufficiently high image resolution is required for the effect to emerge. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including the assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
Investigating face processing impairments in Developmental Prosopagnosia: Insights from behavioural tasks and lived experience
The defining characteristic of developmental prosopagnosia is severe difficulty recognising familiar faces in everyday life. Numerous studies have reported that the condition is highly heterogeneous in terms of both presentation and severity, with many mixed findings in the literature. I will present behavioural data from a large face processing test battery (n = 24 DPs), as well as some early findings from a larger survey of the lived experience of individuals with DP, and discuss how insights from individuals' real-world experience can help to understand and interpret lab-based data.
The contribution of mental face representations to individual face processing abilities
People differ considerably in how well they can learn, memorize, and perceive faces. In this talk, I address two potential sources of this variation. One factor might be people's ability to adapt their perception to the kind of faces they are currently exposed to. For instance, some studies report that those who show larger adaptation effects are also better at face learning and memory tasks. Another factor might be people's sensitivity to fine differences between similar-looking faces. Indeed, one study shows that the brains of good performers in a face memory task exhibit larger neural differences between similar-looking faces. Capitalizing on this body of evidence, I present a behavioural study in which I explore the relationship between people's perceptual adaptability and sensitivity and their individual face processing performance.
Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data
Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and the laboratory tests selected vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, the relationships between them, and the consistency in participants' performance, and used data-driven approaches to determine the factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency in participant performance across tasks is low, and performance can be broadly split into two factors: telling faces together and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors and routinely assess reliability, and that newly developed tests of face cognition be examined for convergent validity with other commonly used measures of face cognition ability.
Spatial Integration in Normal Face Processing and Its Breakdown in Congenital Prosopagnosia
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches that aimed to characterise the brain representations behind the real-world proficiency of "super-recognizers", individuals at the top of the face recognition ability spectrum. Using decoding analysis of time-resolved EEG patterns, we predicted the trial-by-trial activity of super-recognizer participants with high precision, and showed that evidence for variations in face recognition ability is distributed across early, intermediate, and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable both in processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual Convolutional Neural Networks, the latter from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses of a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
A domain-general dynamic framework for social perception
Initial social perceptions are often thought to reflect direct "readouts" of facial features. Instead, we outline a perspective whereby initial perceptions emerge from an automatic yet gradual process of negotiation between the perceptual cues inherent to a person (e.g., facial cues) and top-down social cognitive processes harbored within perceivers. This perspective argues that perceivers' social-conceptual knowledge in particular can have a fundamental structuring role in perceptions, and thus how we think about social groups, emotions, or personality traits helps determine how we visually perceive them in other people. Integrative evidence from real-time behavioral paradigms (e.g., mouse-tracking), multivariate fMRI, and computational modeling will be discussed. Together, this work shows that the ways we use facial cues to categorize other people into social groups (e.g., gender, race), perceive their emotions (e.g., anger), or infer their personality (e.g., trustworthiness) are all fundamentally shaped by prior social-conceptual knowledge and stereotypical assumptions. We find that these top-down impacts on initial perceptions are driven by the interplay of higher-order prefrontal regions involved in top-down predictions and lower-level fusiform regions involved in face processing. We argue that the perception of social categories, emotions, and traits from faces can all be conceived as resulting from an integrated system relying on domain-general cognitive properties. In this system, visual and social cognitive processes are in close exchange, and initial social perceptions emerge in part out of the structure of social-conceptual knowledge.
Super-Recognizers: facts, fallacies, and the future
Over the past decade, the domain of face identity processing has seen a surging interest in inter-individual differences, with a focus on individuals with superior skills, so-called Super-Recognizers (SRs; Ramon et al., 2019; Russell et al., 2009). Their study can provide valuable insights into brain-behavior relationships and advance our understanding of neural functioning. Despite a decade of research, and similarly to the field of developmental prosopagnosia, a consensus on diagnostic criteria for SR identification is lacking. Consequently, SRs are currently identified inconsistently, via suboptimal individual tests, or via undocumented collections of tests. This state of the field has two major implications. Firstly, our scientific understanding of SRs will remain limited at best. Secondly, the needs of government agencies interested in deploying SRs for real-life identity verification (e.g., policing) are unlikely to be met. To counteract these issues, I suggest the following action points. Firstly, based on our and others' work proposing novel and challenging tests of face cognition (Bobak et al., 2019; Fysh et al., in press; Stacchi et al., 2019), and on my collaborations with international security agencies, I recommend novel diagnostic criteria for SR identification. These are currently being used to screen the Berlin State Police's >25K employees before identifying SRs via bespoke testing procedures we have collaboratively developed over the past years. Secondly, I introduce a cohort of SRs identified using these criteria, which is being studied in depth using behavioral methods, psychophysics, eye-tracking, and neuroimaging. Finally, I suggest that data acquired for these individuals should be curated to develop and share best practices with researchers and practitioners, and to obtain an accurate and transparent description of SR cases that exploits their informative value.