Emotional Expression
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct brain responses compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, yielding misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could serve as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion: frontal delta activity, which was elevated for videos and deepfakes but not for photos or dynamic morphs.
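The "simplified, linear representation of motion" that a dynamic morph embodies can be made concrete with a minimal sketch. The function below (a hypothetical illustration, not the database's actual pipeline) builds a morph clip by pixel-wise linear interpolation between a neutral and an apex expression photograph; real facial motion is non-linear, which is exactly what this construction fails to capture.

```python
import numpy as np

def dynamic_morph(neutral, apex, n_frames=30):
    """Linear dynamic morph: each frame is a pixel-wise blend
    between a neutral and an apex expression photograph."""
    neutral = neutral.astype(float)
    apex = apex.astype(float)
    # Interpolation weights run linearly from 0 (neutral) to 1 (apex),
    # so the implied motion trajectory is a straight line in pixel space.
    weights = np.linspace(0.0, 1.0, n_frames)
    return np.stack([(1 - w) * neutral + w * apex for w in weights])

# Hypothetical 64x64 grayscale "photographs" stand in for real stimuli.
rng = np.random.default_rng(0)
neutral = rng.integers(0, 256, (64, 64))
apex = rng.integers(0, 256, (64, 64))
clip = dynamic_morph(neutral, apex)
print(clip.shape)  # (30, 64, 64): a 30-frame morph sequence
```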
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception is, surprisingly, far less developed than the study of emotion in faces; consequently, its neural and functional correlates remain elusive. As the voice is a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. This talk will specify the dynamic neurocognitive operations that take place when we listen to vocal emotions, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for understanding psychiatric disorders such as schizophrenia will be discussed.
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations of faces used in face detection, and compared their emotional expression. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns' (2003) 'Superstitious Approach' to construct visual representations of observers' mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with 'colourful' noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants' level of depression. Our findings provide a preliminary insight into the way emotions shape expectations about the appearance of faces. To further understand whether the facial expressions of participants' mental representations reflect their emotional state, we are conducting a validation study with a group of naïve observers who are asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants' emotional states to others.
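The core of the reverse correlation logic described above can be sketched in a few lines: average the noise stimuli an observer labels 'face', and the mean approximates the observer's internal face template. The simulated observer and the blob-shaped template below are illustrative assumptions, not the study's actual stimuli or procedure.

```python
import numpy as np

def reconstruct_template(noise_images, face_responses):
    """Classification-image reconstruction: the mean of the noise
    stimuli the observer called 'face' estimates their internal template."""
    noise = np.asarray(noise_images, dtype=float)
    labels = np.asarray(face_responses, dtype=bool)
    return noise[labels].mean(axis=0)

rng = np.random.default_rng(1)
trials = rng.normal(size=(500, 32, 32))  # pure-noise stimuli, no real face present

# Simulated observer: responds "face" when the noise happens to match a
# hidden internal template (here, a bright central blob, chosen for illustration).
yy, xx = np.mgrid[:32, :32]
template = np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 50.0)
scores = (trials * template).sum(axis=(1, 2))
responses = scores > np.quantile(scores, 0.8)  # top 20% of trials called "face"

recon = reconstruct_template(trials, responses)
# The reconstruction correlates positively with the hidden template,
# even though every stimulus was pure noise.
r = np.corrcoef(recon.ravel(), template.ravel())[0, 1]
print(r > 0)
```

The same averaging step is what allows geometric properties of the reconstruction, such as its size, to be measured and related to mood scores.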
Exploring emotion in the expression of ape gesture
Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is, towards a recipient and with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but also what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, and expression and emotion, into our study of these signals.
Sensory-motor control, cognition and brain evolution: exploring the links
Drawing on recent findings from evolutionary anthropology and neuroscience, Professor Barton will lead us through the amazing story of the evolution of human cognition. Using statistical, phylogenetic analyses that tease apart the variation associated with different neural systems and different selection pressures, he will address intriguing questions such as ‘Why are there so many neurons in the cerebellum?’, ‘Is the neocortex the ‘intelligent’ bit of the brain?’, and ‘Why is human recognition of emotional expressions disrupted by transcranial magnetic stimulation of the somatosensory cortex?’ Could, as Professor Barton suggests, the cerebellum (modestly concealed beneath the volumetrically dominant neocortex and largely ignored) turn out to be the Cinderella of the study of brain evolution?
What is serially-dependent perception good for?
Perception can be strongly serially-dependent (i.e. biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence could also be a deleterious phenomenon in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward its position in the preceding trial. Taken together, these results show two opposite sides of serial dependence: it can be a beneficial mechanism that promotes perceptual stability, but at the same time a deleterious mechanism that impairs our percept when fine recognition is needed.
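The attraction toward the previous trial described above can be captured in a toy model: the percept on each trial is pulled toward the preceding stimulus by a fixed fraction. The attraction strength `alpha` and the stimulus values below are illustrative assumptions, not parameters from the reported studies.

```python
import numpy as np

def serially_dependent_percept(stimuli, alpha=0.3):
    """Toy serial dependence model: each percept is shifted a fraction
    alpha of the way toward the stimulus seen on the previous trial."""
    stimuli = np.asarray(stimuli, dtype=float)
    percepts = stimuli.copy()
    # Pull every percept (after the first) toward the previous stimulus.
    percepts[1:] += alpha * (stimuli[:-1] - stimuli[1:])
    return percepts

# Hypothetical sequence of tumor-like shape sizes across three trials.
shapes = np.array([10.0, 50.0, 20.0])
print(serially_dependent_percept(shapes))  # [10. 38. 29.]
```

Trial 2 (true size 50) is perceived as smaller because trial 1 was small, and trial 3 (true size 20) as larger because trial 2 was large: both errors drag the current judgment toward the past, which is precisely the failure mode that matters for tasks like radiological search.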