Latest

Seminar · Psychology

Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake

Casey Becker
University of Pittsburgh
Apr 16, 2025

Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct brain responses compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, yielding misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could serve as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion: frontal delta activity, which was elevated for videos and deepfakes but not for photos or dynamic morphs.

Seminar · Psychology

Error Consistency between Humans and Machines as a function of presentation duration

Thomas Klein
Eberhard Karls Universität Tübingen
Jul 1, 2024

Within the last decade, deep artificial neural networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks, such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: while DNNs have proven capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives the behavioral differences. We systematically vary the presentation time of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, lowpass-filtered, and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruption and time pressure, showing that even drastically time-constrained humans exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task at success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
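In this literature, error consistency is commonly computed as Cohen's kappa over the trial-by-trial correctness of two observers: observed agreement between their correct/incorrect patterns, corrected for the agreement expected by chance given each observer's accuracy. A minimal sketch under that assumption (the function name and exact formulation are illustrative, not taken from the talk):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa on trial-level correctness of two observers.

    correct_a, correct_b: boolean (or 0/1) arrays, one entry per trial,
    indicating whether each observer classified that trial correctly.
    Returns 1 for identical error patterns, ~0 for chance-level overlap.
    """
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                    # observed agreement
    p_a, p_b = a.mean(), b.mean()              # each observer's accuracy
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)  # agreement expected by chance
    return (c_obs - c_exp) / (1 - c_exp)
```

For example, two observers with identical error patterns yield a value of 1, while two observers whose errors overlap only as much as their accuracies predict yield a value near 0.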

Seminar · Psychology

How do visual abilities relate to each other?

Simona Garobbio
EPFL
Dec 7, 2022

In vision, there is, surprisingly, very little evidence of common factors. Most studies have found only weak correlations between performance in different visual tests, meaning that a participant who performs better in one test is not more likely to also perform better in another. Likewise, in ageing, cross-sectional studies have repeatedly shown that older adults perform worse than young adults in most visual tests. However, within the older population, there is no evidence for a common factor underlying visual abilities. To investigate the decline of visual abilities further, we performed a longitudinal study in which a battery of nine visual tasks was administered three times, with two re-tests after approximately 4 and 7 years. Most visual abilities remained rather stable across the 7 years, but visual acuity did not. I will discuss possible causes of these paradoxical outcomes.

Seminar · Psychology

Investigating visual recognition and the temporal lobes using electrophysiology and fast periodic visual stimulation

Angelique Volfart
University of Louvain
Jun 24, 2021

The ventral visual pathway extends from the occipital to the anterior temporal regions and is specialized in giving meaning to the objects and people we perceive through vision. Numerous functional magnetic resonance imaging studies have focused on the cerebral basis of visual recognition. However, this technique is susceptible to magnetic artefacts in ventral anterior temporal regions, which has led to an underestimation of the role of these regions within the ventral visual stream, especially with respect to face recognition and semantic representations. Moreover, there is an increasing need for implicit methods of assessing these functions, as explicit tasks lack specificity. In this talk, I will present three studies using fast periodic visual stimulation (FPVS) in combination with scalp and/or intracerebral EEG to overcome these limitations and provide a high signal-to-noise ratio (SNR) in temporal regions. I will show that, beyond face recognition, FPVS can be extended to investigate semantic representations using a face-name association paradigm and a semantic categorisation paradigm with written words. These results shed new light on the role of temporal regions and demonstrate the high potential of the FPVS approach as a powerful electrophysiological tool for assessing various cognitive functions in neurotypical and clinical populations.

Seminar · Psychology

Exploring Memories of Scenes

Nico Broers
Westfälische Wilhelms-Universität Münster
Mar 25, 2021

State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk, I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely on the basis of recollection of specific episodic details, but also on the basis of familiarity with an entire scene. I thus highlight current limitations of machine vision models in emulating human recognition memory, along with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We therefore considered the functional role of eye movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later-to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.
