Biological Motion
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To probe the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Conventional dynamic morphs elicited distinct neural responses compared with videos and photos, suggesting that they violate expectations (N400) and carry reduced social salience (late positive potential, LPP). This indicates that dynamic morphs misrepresent facial dynamism, yielding misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could serve as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain, pointing to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we proposed a novel marker for the conscious perception of naturalistic facial motion – frontal delta activity – which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
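As an illustration of how ERP components such as the N400 and LPP are typically quantified, here is a minimal sketch in Python. It is not the authors' pipeline: the channel indices, time windows, and data are hypothetical placeholders, and the mean-amplitude approach is simply the standard way such components are measured.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples) at 500 Hz,
# with each epoch spanning -200..1000 ms relative to expression onset.
rng = np.random.default_rng(0)
sfreq = 500
times = np.arange(-0.2, 1.0, 1 / sfreq)       # seconds
eeg = rng.normal(size=(120, 32, times.size))  # placeholder data, not real recordings

def mean_amplitude(epochs, times, tmin, tmax, channels):
    """Mean amplitude over a time window and channel subset, averaged
    across trials -- the standard ERP component measure."""
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, channels, :][:, :, mask].mean()

# Assumed windows: N400 ~ 300-500 ms (expectation violation),
# LPP ~ 400-800 ms (social/affective salience).
centroparietal = [10, 11, 12]                 # hypothetical channel indices
n400 = mean_amplitude(eeg, times, 0.30, 0.50, centroparietal)
lpp = mean_amplitude(eeg, times, 0.40, 0.80, centroparietal)
print(f"N400 mean amplitude: {n400:.3f} µV, LPP: {lpp:.3f} µV")
```

In a real analysis these per-condition amplitudes (videos vs. deepfakes vs. morphs vs. photos) would then be compared statistically; the sketch only shows the windowing step.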
Heading perception in crowded environments
Self-motion through a visual world creates a pattern of expanding visual motion called optic flow. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other humans introduce independent motion into the scene. The biological motion of human walkers consists of translation through space and the associated limb articulation, a motion pattern that is regular yet complex. A world full of moving humans is nonrigid and causes heading errors. Limb articulation alone, however, does not perturb the global structure of the flow field and is thus consistent with the rigidity assumption. If heading perception relies on optic flow analysis, limb articulation alone should therefore not impair heading estimates. Yet we observed heading biases when participants encountered a group of point-light walkers. Our research investigates the interactions between optic flow perception and biological motion perception, and we further analyze the impact of environmental information.
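To make the geometry concrete, here is a toy sketch (my own illustration, not the authors' model): under pure observer translation, every static point's image velocity radiates away from the focus of expansion (FOE), so heading can be recovered by least squares. Zero-mean articulation noise leaves that estimate largely intact, whereas independent walker translation would bias it. All scene parameters below are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
f = 1.0                            # focal length (arbitrary units)
T = np.array([0.1, 0.0, 1.0])      # observer translation; true heading = f*Tx/Tz

# Random static scene points in front of the observer
n = 200
X = rng.uniform(-5, 5, n)
Y = rng.uniform(-2, 2, n)
Z = rng.uniform(4, 20, n)
x, y = f * X / Z, f * Y / Z        # pinhole image positions

# Image velocities of static points under observer translation:
# a point moves at -T relative to the camera, giving this flow field.
u = (-f * T[0] + x * T[2]) / Z
v = (-f * T[1] + y * T[2]) / Z

# Limb articulation stand-in: zero-mean oscillatory point motion adds
# flow noise that averages out; walker *translation* would instead add
# a consistent bias to a subset of vectors.
u += rng.normal(0, 0.01, n)
v += rng.normal(0, 0.01, n)

# Recover the FOE (= heading direction) by least squares: each flow
# vector must point away from the FOE, i.e. u*(y - y0) = v*(x - x0).
A = np.column_stack([v, -u])
b = v * x - u * y
foe = np.linalg.lstsq(A, b, rcond=None)[0]
print("true heading:", f * T[0] / T[2], " estimated:", foe[0])
```

This is the sense in which articulation alone "matches the rigidity assumption": it perturbs individual vectors but not the global radial structure of the flow field.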
Neurocognitive mechanisms of proactive temporal attention: challenging oscillatory and cortico-centered models
To survive in a rapidly changing world, the brain predicts the future state of the world and proactively adjusts perception, attention and action. A key to efficient interaction is to predict and prepare not only for “where” and “what” things will happen, but also for “when”. I will present studies in healthy and neurological populations that investigated the cognitive architecture and neural basis of temporal anticipation. First, influential ‘entrainment’ models suggest that anticipation in rhythmic contexts, e.g. music or biological motion, relies uniquely on the alignment of attentional oscillations to external rhythms. Using computational modeling and EEG, I will show that cortical neural patterns previously associated with entrainment in fact overlap with interval-timing mechanisms that are also used in aperiodic contexts. Second, temporal prediction and attention have commonly been attributed to cortical circuits. Studying neurological populations with subcortical degeneration, I will present data that point to a double dissociation between rhythm- and interval-based prediction in the cerebellum and basal ganglia, respectively, and will demonstrate a role for the cerebellum in the attentional control of perceptual sensitivity in time. Finally, using EEG in neurodegenerative patients, I will demonstrate that the cerebellum controls the temporal adjustment of cortico-striatal neural dynamics, and I will use computational modeling to identify cerebellar-controlled neural parameters. Altogether, these findings reveal functional and neural context-specificity, as well as subcortical contributions, to temporal anticipation, revising our understanding of dynamic cognition.
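A purely illustrative toy contrast between the two model families described above (my sketch, not the speaker's computational model; all parameters are invented): an entrainment account predicts anticipation from the phase of an oscillation aligned to the rhythm, while an interval-timing account predicts it from a memorized interval with scalar (Weber-like) variability. For a strictly periodic stream, both predictions peak at the expected onset, which is one reason their EEG signatures can overlap.

```python
import numpy as np

period = 0.6                           # inter-onset interval of the rhythm (s)
t = np.linspace(0, 2 * period, 500)    # time since the last event

# Entrainment model: anticipation tracks the phase of an internal
# oscillation aligned to the external rhythm.
entrainment = 0.5 * (1 + np.cos(2 * np.pi * t / period))

# Interval-timing model: anticipation peaks around the memorized interval,
# with scalar variability; this mechanism also works in aperiodic contexts.
weber = 0.15
interval_timing = np.exp(-0.5 * ((t - period) / (weber * period)) ** 2)

# Around the expected onset, the two predicted time courses are highly
# correlated -- periodic designs alone cannot tell the models apart.
win = (t > 0.5 * period) & (t < 1.5 * period)
r = np.corrcoef(entrainment[win], interval_timing[win])[0, 1]
print(f"model-prediction correlation around the expected onset: r = {r:.2f}")
```

The models dissociate in aperiodic contexts, where the oscillatory account has no rhythm to align to but the interval account still applies.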
Enhanced perception and cognition in deaf sign language users: EEG and behavioral evidence
In this talk, Dr. Quandt will share results from behavioral and cognitive neuroscience studies conducted over the past few years in her Action & Brain Lab, an EEG lab at Gallaudet University, the world's premier university for deaf and hard-of-hearing students. These results center on the question of how extensive knowledge of signed language changes, and in some cases enhances, people's perception and cognition. Evidence for this effect comes from studies of human biological motion using point-light displays, self-report measures, and studies of action perception. Dr. Quandt will also discuss some of the lab's efforts in designing and testing a virtual reality environment in which users can learn American Sign Language from signing avatars (virtual humans).