Expressions
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, yielding misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could serve as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
Modeling idiosyncratic evaluation of faces
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Would she be able to respond appropriately to seemingly mundane interactions, such as a peer's facial expression, if she began seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one's own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings for supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Encoding of dynamic facial expressions in the macaque superior temporal sulcus
Two sides of emotion expressions: Readouts and Regulators
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expression of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns' (2003) 'Superstitious Approach' to construct visual representations of observers' mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with 'colourful' noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants' level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations reflect their emotional state, we are conducting a validation study with a group of naïve observers who are asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants' emotional states to others.
Exploring emotion in the expression of ape gesture
Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is, towards a recipient with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, and expression and emotion, into our study of these signals.
Brain-body interactions that modulate fear
In most animals, including humans, emotions occur together with changes in the body, such as variations in breathing or heart rate, sweaty palms, or facial expressions. It has been suggested that this interoceptive information acts as a feedback signal to the brain, enabling adaptive modulation of emotions that is essential for survival. As such, fear, one of our basic emotions, must be kept in a functional balance to minimize risk-taking while allowing for the pursuit of essential needs. However, the neural mechanisms underlying this adaptive modulation of fear remain poorly understood. In this talk, I will present and discuss data from my PhD work, in which we uncover a crucial role for the interoceptive insular cortex in detecting changes in heart rate to maintain an equilibrium between the extinction and maintenance of fear memories in mice.
Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective
Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, these theories lack a rationale for why such constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In continuing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences in the balance between cognitive stability and flexibility.
Sensory-motor control, cognition and brain evolution: exploring the links
Drawing on recent findings from evolutionary anthropology and neuroscience, Professor Barton will lead us through the amazing story of the evolution of human cognition. Using statistical, phylogenetic analyses that tease apart the variation associated with different neural systems and different selection pressures, he will address intriguing questions such as 'Why are there so many neurons in the cerebellum?', 'Is the neocortex the "intelligent" bit of the brain?', and 'Why is human recognition of emotional expressions disrupted by transcranial magnetic stimulation of the somatosensory cortex?' Could, as Professor Barton suggests, the cerebellum – modestly concealed beneath the volumetrically dominating neocortex and largely ignored – turn out to be the Cinderella of the study of brain evolution?
Context and Comparison During Open-Ended Induction
A key component of humans' striking creativity in solving problems is our ability to construct novel descriptions to help us characterize novel categories. Bongard problems, which challenge the problem solver to come up with a rule for distinguishing visual scenes that fall into two categories, provide an elegant test of this ability. Bongard problems are challenging for both human and machine category learners because only a handful of example scenes are presented for each category, and they often require the open-ended creation of new descriptions. A new sub-type of Bongard problem called Physical Bongard Problems (PBPs) is introduced, which require solvers to perceive and predict the physical spatial dynamics implicit in the depicted scenes. The PATHS (Perceiving And Testing Hypotheses on Structures) computational model which can solve many PBPs is presented, and compared to human performance on the same problems. PATHS and humans are similarly affected by the ordering of scenes within a PBP, with spatially and temporally juxtaposed scenes promoting category learning when they are similar and belong to different categories, or dissimilar and belong to the same category. The core theoretical commitments of PATHS which we believe to also exemplify human open-ended category learning are a) the continual perception of new scene descriptions over the course of category learning; b) the context-dependent nature of that perceptual process, in which the scenes establish the context for one another; c) hypothesis construction by combining descriptions into logical expressions; and d) bi-directional interactions between perceiving new aspects of scenes and constructing hypotheses for the rule that distinguishes categories.
What is serially-dependent perception good for?
Perception can be strongly serially dependent (i.e. biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence could also be a deleterious phenomenon in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward its position on the preceding trial. Taken together, these results show two opposite sides of serial dependence: it can be a beneficial mechanism that promotes perceptual stability, but at the same time a deleterious mechanism that impairs our percept when fine recognition is needed.
Neural and computational principles of the processing of dynamic faces and bodies
Body motion is a fundamental signal of social communication. This includes facial as well as full-body movements. Combining advanced methods from computer animation with motion capture in humans and monkeys, we synthesized highly realistic monkey avatar models. Our face avatar is perceived by monkeys as almost equivalent to a real animal and does not induce an 'uncanny valley effect', unlike all previously used avatar models in studies with monkeys. Applying machine-learning methods for the control of motion style, we were able to investigate how species-specific shape and dynamic cues influence the perception of human and monkey facial expressions. Human observers showed very fast learning of monkey expressions, and a perceptual encoding of expression dynamics that was largely independent of facial shape. This result is in line with the fact that facial shape evolved faster than neuromuscular control in primate phylogenesis. At the same time, it challenges popular neural network models of the recognition of dynamic faces that assume a joint encoding of facial shape and dynamics. We propose an alternative physiologically inspired neural model that realizes such an orthogonal encoding of facial shape and expression from video sequences. As a second example, we investigated the perception of social interactions from abstract stimuli, similar to the ones used by Heider & Simmel (1944), and also from more realistic stimuli. We developed and validated a new generative model for the synthesis of such social interactions, based on a modification of a human navigation model. We demonstrate that the recognition of such stimuli, including the perception of agency, can be accounted for by a relatively elementary physiologically inspired hierarchical neural recognition model that does not require the assumption of sophisticated inference mechanisms, as postulated by some cognitive theories of social recognition.
In summary, this suggests that essential phenomena in social cognition might be accounted for by a small set of simple neural principles that can be easily implemented by cortical circuits. The developed technologies for stimulus control form the basis of electrophysiological studies that can verify specific neural circuits, such as the ones proposed by our theoretical models.
Neuroscience Investigations in the Virgin Lands of African Biodiversity
Africa is blessed with a rich diversity and abundance of rodent and avian populations. This natural endowment presents research opportunities to study unique anatomical profiles and to investigate animal models that may offer better neural architectures for studying neurodegenerative diseases, adult neurogenesis, stroke, and stem cell therapies. To this end, African researchers are beginning to pay closer attention to some of the continent's indigenous rodents and birds in an attempt to develop spontaneous laboratory models for homegrown neuroscience-based research. In this presentation, I will show studies from our lab involving the cellular neuroanatomy of two rodents, the African giant rat (AGR) and the Greater cane rat (GCR), as well as Eidolon bats (EB) and the Striped Owl (SO). Using histological stains (Cresyl violet and rapid Golgi), immunohistochemical biomarkers (GFAP, NeuN, CNPase, Iba-1, Collagen 2, Doublecortin, Ki67, Calbindin, etc.), and electron microscopy, the morphology and functional organization of neuronal and glial populations of the AGR, GCR, EB and SO brains have been described, with our work ongoing. In addition, the developmental profiles of prenatal GCR brains have been chronicled across the entire gestational period. Brains of embryos/foetuses were harvested for gross morphological descriptions and then processed using immunofluorescence biomarkers to determine the pattern, onset, duration and peak of neurogenesis (Pax6, Tbr1, Tbr2, NF, HuCD, MAP2), as well as the onset and peak of glial cell expression and myelination in the prenatal GCR. The outcome of these research efforts has revealed unique neuroanatomical expressions and networks amongst Africa's rich biodiversity. It is hoped that continued effort in this regard will provide substantial basic research data on neural development and cellular neuroanatomy, with subsequent translational consequences.
Gene expression related to hippocampal ripples
FENS Forum 2024
Impact of carnosine supplementation on cellular expressions of brain- and glial cell line-derived neurotrophic factors in lumbar and cervical enlargements after thoracic spinal cord injury