Facial Expressions
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct brain responses compared with videos and photographs, suggesting that they violate expectations (N400) and carry reduced social salience (late positive potential). Dynamic morphs therefore misrepresent facial dynamism and can yield misleading insights into the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could serve as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain, pointing to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite these differences, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion: frontal delta activity, which was elevated for videos and deepfakes but not for photographs or dynamic morphs.
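For illustration, the event-related potential contrasts described above (the N400 expectation-violation window and the late positive potential) can be approximated with a simple mean-amplitude analysis. The sketch below is not the study's pipeline: it assumes baseline-corrected EEG epochs already segmented by condition and stored as NumPy arrays, and the sampling rate, channel count, and time window are placeholder choices.

    import numpy as np
    from scipy import stats

    def window_mean(epochs, sfreq, t_start=0.300, t_end=0.500):
        # Mean amplitude per trial in a post-stimulus window (in seconds),
        # averaged over channels and samples; assumes time zero is the first sample.
        i0, i1 = int(t_start * sfreq), int(t_end * sfreq)
        return epochs[:, :, i0:i1].mean(axis=(1, 2))

    sfreq = 250.0                                 # placeholder sampling rate (Hz)
    video_epochs = np.random.randn(40, 32, 250)   # placeholder data: trials x channels x samples
    morph_epochs = np.random.randn(40, 32, 250)   # placeholder data for a second condition

    res = stats.ttest_ind(window_mean(video_epochs, sfreq), window_mean(morph_epochs, sfreq))
    print(f"N400-window mean amplitude: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")

A later window would give a rough analogue of the late positive potential; the frontal delta measure would instead require a time-frequency decomposition of frontal channels.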
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Would she be able to respond appropriately to seemingly mundane interactions, such as a peer's facial expression, if she began seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian and scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: the plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a non-face early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one's own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings for supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Encoding of dynamic facial expressions in the macaque superior temporal sulcus
The Effects of Negative Emotions on Mental Representation of Faces
Face detection, an initial step in many social interactions, involves a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expressions of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns' (2003) 'Superstitious Approach' to reconstruct observers' mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with 'colourful' noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants' level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations reflect their emotional state, we are conducting a validation study in which a group of naïve observers classify the reconstructed face images by emotion. This allows us to assess whether the faces communicate participants' emotional states to others.
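For readers unfamiliar with reverse correlation, the core of the approach described above can be sketched in a few lines: a classification image is obtained by contrasting the noise stimuli a participant accepted as faces with those they rejected. The example below is a minimal illustration under simplifying assumptions (grayscale rather than 'colourful' noise, simulated responses), not the authors' analysis code.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, size = 5000, 64
    noise = rng.standard_normal((n_trials, size, size))   # pure-noise stimuli
    said_face = rng.random(n_trials) < 0.1                # simulated 'face present' responses

    # Classification image: mean of accepted noise minus mean of rejected noise.
    ci = noise[said_face].mean(axis=0) - noise[~said_face].mean(axis=0)

    # Normalise; the spatial extent of above-threshold pixels in a smoothed version
    # of this image could then serve as a rough proxy for the size of the
    # reconstructed mental representation.
    ci = (ci - ci.mean()) / ci.std()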
Exploring emotion in the expression of ape gesture
Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally, that is, towards a recipient and with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but also what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, expression, and emotion, into our study of these signals.
Brain-body interactions that modulate fear
In most animals, including humans, emotions occur together with changes in the body, such as variations in breathing or heart rate, sweaty palms, or facial expressions. It has been suggested that this interoceptive information acts as a feedback signal to the brain, enabling an adaptive modulation of emotions that is essential for survival. Fear, one of our basic emotions, must be kept in a functional balance that minimizes risk-taking while allowing for the pursuit of essential needs. However, the neural mechanisms underlying this adaptive modulation of fear remain poorly understood. In this talk, I will present and discuss data from my PhD work, in which we uncover a crucial role for the interoceptive insular cortex in detecting changes in heart rate to maintain an equilibrium between the extinction and maintenance of fear memories in mice.
Neural and computational principles of the processing of dynamic faces and bodies
Body motion is a fundamental signal of social communication, encompassing facial as well as full-body movements. Combining advanced methods from computer animation with motion capture in humans and monkeys, we synthesized highly realistic monkey avatar models. Our face avatar is perceived by monkeys as almost equivalent to a real animal and, unlike all previously used avatar models in studies with monkeys, does not induce an 'uncanny valley' effect. Applying machine-learning methods for the control of motion style, we investigated how species-specific shape and dynamic cues influence the perception of human and monkey facial expressions. Human observers showed very fast learning of monkey expressions, and a perceptual encoding of expression dynamics that was largely independent of facial shape. This result is in line with the fact that facial shape evolved faster than neuromuscular control in primate phylogenesis. At the same time, it challenges popular neural network models of the recognition of dynamic faces that assume a joint encoding of facial shape and dynamics. We propose an alternative, physiologically inspired neural model that realizes such an orthogonal encoding of facial shape and expression from video sequences. As a second example, we investigated the perception of social interactions from abstract stimuli, similar to those of Heider and Simmel (1944), as well as from more realistic stimuli. We developed and validated a new generative model for the synthesis of such social interactions, based on a modification of a human navigation model. We demonstrate that the recognition of such stimuli, including the perception of agency, can be accounted for by a relatively elementary, physiologically inspired hierarchical neural recognition model that does not require the sophisticated inference mechanisms postulated by some cognitive theories of social recognition. In summary, this suggests that essential phenomena in social cognition might be accounted for by a small set of simple neural principles that can easily be implemented by cortical circuits. The developed technologies for stimulus control form the basis of electrophysiological studies that can verify specific neural circuits, such as those proposed by our theoretical models.