Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives
Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilisation. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.
Short and Synthetically Distort: Investor Reactions to Deepfake Financial News
Recent advances in artificial intelligence have led to new forms of misinformation, including highly realistic “deepfake” synthetic media. We conduct three experiments to investigate how and why retail investors react to deepfake financial news. Results from the first two experiments provide evidence that investors use a “realism heuristic,” responding more intensely to audio and video deepfakes as their perceptual realism increases. In the third experiment, we introduce an intervention to prompt analytical thinking, varying whether participants make analytical judgments about credibility or intuitive investment judgments. When making intuitive investment judgments, investors are strongly influenced by both more and less realistic deepfakes. When making analytical credibility judgments, investors are able to discern the non-credibility of less realistic deepfakes but struggle with more realistic deepfakes. Thus, while analytical thinking can reduce the impact of less realistic deepfakes, highly realistic deepfakes are able to overcome this analytical scrutiny. Our results suggest that deepfake financial news poses novel threats to investors.
A Novel Neurophysiological Approach to Assessing Distractibility within the General Population
Vulnerability to distraction varies across the general population and significantly affects one’s capacity to stay focused on and successfully complete the task at hand, whether at school, on the road, or at work. In this talk, I will begin by discussing how distractibility is typically assessed in the literature and introduce our innovative ERP approach to measuring it. Since distractibility is a cardinal symptom of ADHD, I will introduce its most widely used paper-and-pencil screening tool for the general population as external validation. Following that, I will present the Load Theory of Attention and explain how we used perceptual load to test the reliability of our neural marker of distractibility. Finally, I will highlight potential future applications of this marker in clinical and educational settings.
Error Consistency between Humans and Machines as a function of presentation duration
Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3 ms to 266 ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6 ms, can still solve our 8-way classification task with success rates well above chance. We also find that human-to-human error consistency is already stable at 16.6 ms.
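Error consistency of this kind is commonly quantified with a Cohen's-kappa-style statistic that compares the observed trial-by-trial overlap of two observers' errors with the overlap expected by chance given each observer's accuracy. The sketch below is an illustrative re-implementation of that generic statistic, not the authors' code:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Kappa-style error consistency between two observers.

    correct_a, correct_b: trial-by-trial correctness (booleans) of the
    two observers on the same stimuli, in the same order.
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    # observed fraction of trials where both are right or both are wrong
    c_obs = np.mean(correct_a == correct_b)
    # expected overlap if errors were independent, given each accuracy
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    # kappa = 1: errors fall on exactly the same trials; 0: independent errors
    return (c_obs - c_exp) / (1 - c_exp)
```

With this measure, two observers can have identical accuracies yet near-zero consistency if their errors fall on different trials, which is what makes it informative for human-DNN comparisons.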
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted in the context of PhD work (UNIL) aimed at providing evidence to address the question of potential gender differences in the effects of trait anxiety and executive control biases on behavioral efficacy. Non-clinical samples of young adult males and females performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy individuals, through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
The Role of Cognitive Appraisal in the Relationship between Personality and Emotional Reactivity
Emotion is defined as a rapid psychological process involving experiential, expressive and physiological responses. These emerge following an appraisal process, i.e., cognitive evaluations of the environment assessing its relevance, implications, coping potential, and normative significance. It has been suggested that changes in appraisal processes lead to changes in the nature of the resulting emotion. At the same time, personality has been shown to act as a predisposition to feel certain emotions more frequently, but the personality-appraisal-emotional response chain is rarely investigated in full. The present project therefore sought to determine the extent to which personality traits influence certain appraisals, which in turn shape the subsequent emotional reactions, via a systematic analysis of the links between personality traits from different current models, specific appraisals, and emotional response patterns at the experiential, expressive, and physiological levels. Major results include the coherence of the clustering of emotion components; the centrality, in context, of the pleasantness, coping potential and consequences appraisals; and the differentiated mediating role of cognitive appraisal in the relation between personality and the intensity and duration of an emotional state and autonomic arousal, as in the Extraversion-pleasantness-experience and Neuroticism-powerlessness-arousal chains. Elucidating these relationships deepens our understanding of individual differences in emotional reactivity and identifies routes of action on appraisal processes to modify upcoming adverse emotional responses, with a broader societal impact on clinical and non-clinical populations.
Impact of personality profiles on emotion regulation efficiency: insights on experience, expressivity and physiological arousal
People are confronted every day with internal or external stimuli that can elicit emotions. To avoid negative emotions, or to pursue individual aims, emotions are often regulated. The available emotion regulation strategies have previously been described as efficient or inefficient, but many studies have highlighted that a strategy's efficiency may be influenced by different factors, such as personality. In this project, the efficiency of several strategies (e.g., reappraisal, suppression, distraction) was studied according to personality profiles, using the Big Five personality model and the Maladaptive Personality Trait Model. Moreover, the strategies' efficiency was tested according to the main emotional responses, namely experience, expressivity and physiological arousal. Results mainly highlighted the differential impact of strategies on individuals and a slight impact of personality. An important factor, however, seems to be the emotion parameter under consideration, potentially revealing a complex interplay between strategy, personality, and the considered emotion response. Based on these outcomes, further clinical aspects and recommendations will also be discussed.
Are integrative, multidisciplinary, and pragmatic models possible? The #PsychMapping experience
This presentation delves into the necessity for simplified models in the field of psychological sciences to cater to a diverse audience of practitioners. We introduce the #PsychMapping model, evaluate its merits and limitations, and discuss its place in contemporary scientific culture. The #PsychMapping model is the product of an extensive literature review, initially within the realm of sport and exercise psychology and subsequently encompassing a broader spectrum of psychological sciences. This model synthesizes the progress made in psychological sciences by categorizing variables into a framework that distinguishes between traits (e.g., body structure and personality) and states (e.g., heart rate and emotions). Furthermore, it delineates internal traits and states from the externalized self, which encompasses behaviour and performance. All three components—traits, states, and the externalized self—are in a continuous interplay with external physical, social, and circumstantial factors. Two core processes elucidate the interactions among these four primary clusters: external perception, encompassing the mechanism through which external stimuli transition into internal events, and self-regulation, which empowers individuals to become autonomous agents capable of exerting control over themselves and their actions. While the model inherently oversimplifies intricate processes, the central question remains: does its pragmatic utility outweigh its limitations, and can it serve as a valuable tool for comprehending human behaviour?
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process; the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art examples, particularly cave art where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, where participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm
The nature of emotions and their neural underpinnings remain debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account for the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activations.
10 “simple rules” for socially responsible science
Guidelines concerning the potentially harmful effects of scientific studies have historically focused on minimizing risk for participants. However, studies can also indirectly inflict harm on individuals and social groups through how they are designed, reported, and disseminated. As evidenced by recent criticisms and retractions of high-profile studies dealing with a wide variety of social issues, there is a scarcity of resources and guidance on how one can conduct research in a socially responsible manner. As such, even motivated researchers might publish work that has negative social impacts due to a lack of awareness. To address this, we proposed 10 recommendations (“simple rules”) for researchers who wish to conduct more socially responsible science. These recommendations cover major considerations throughout the life cycle of a study, from inception to dissemination. They are not intended as a prescriptive list or a deterministic code of conduct. Rather, they are meant to help motivated scientists to reflect on their social responsibility as researchers and actively engage with the potential social impact of their research.
Perceptions of responsiveness and rejection in romantic relationships. What are the implications for individuals and relationship functioning?
From birth, human beings need to be embedded into social ties to function best, because other individuals can provide us with a sense of belonging, which is a fundamental human need. One of the closest bonds we build throughout our life is with our intimate partners. When the relationship involves intimacy and when both partners accept and support each other’s needs and goals (through perceived responsiveness) individuals experience an increase in relationship satisfaction as well as physical and mental well-being. However, feeling rejected by a partner may impair the feeling of connectedness and belonging, and affect emotional and behavioural responses. When we perceive our partner to be responsive to our needs or desires, in turn we naturally strive to respond positively and adequately to our partner’s needs and desires. This implies that individuals are interdependent, and changes in one partner prompt changes in the other. Evidence suggests that partners regulate themselves and co-regulate each other in their emotional, psychological, and physiological responses. However, such processes may threaten the relationship when partners face stressful situations or interactions, like the transition to parenthood or rejection. Therefore, in this presentation, I will provide evidence for the role of perceptions of being accepted or rejected by a significant other on individual and relationship functioning, while considering the contextual settings. The three studies presented here explore romantic relationships, and how perceptions of rejection and responsiveness from the partner impact both individuals, their physiological and their emotional responses, as well as their relationship dynamics.
Enhancing Qualitative Coding with Large Language Models: Potential and Challenges
Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
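As a rough illustration of what such an integration can look like, the sketch below builds a coding prompt from a small codebook and queries the chat completions API with it; the codebook, prompt wording, and helper names are hypothetical examples, not the SWISS100 pipeline:

```python
# Illustrative codebook; the actual themes would come from the study's codebook.
CODEBOOK = ["social isolation", "resilience", "health concerns", "daily routines"]

def build_prompt(excerpt, codebook=CODEBOOK):
    """Ask the model to assign zero or more codebook themes to an excerpt."""
    themes = ", ".join(codebook)
    return (
        "You are assisting with qualitative coding of interview data.\n"
        f"Codebook themes: {themes}.\n"
        "Return only the themes that apply, comma-separated.\n\n"
        f"Excerpt: {excerpt}"
    )

def code_excerpt(client, excerpt):
    """Query the model; `client` is an openai.OpenAI instance (requires the
    openai package and an API key)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(excerpt)}],
        temperature=0,  # deterministic output eases comparison with human coders
    )
    # parse the comma-separated answer into a list of assigned themes
    return [t.strip() for t in response.choices[0].message.content.split(",")]
```

Model-assigned themes can then be compared against human codes excerpt by excerpt to quantify convergence and divergence.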
Internet interventions targeting grief symptoms
Web-based self-help interventions for coping with prolonged grief have established their efficacy. However, few programs address recent losses and investigate the effect of self-tailoring of the content. In an international project, the text-based self-help program LIVIA was adapted and complemented with an Embodied Conversational Agent, an initial risk assessment and a monitoring tool. The new program SOLENA was evaluated in three trials in Switzerland, the Netherlands and Portugal. The aim of the trials was to evaluate the clinical efficacy for reducing grief, depression and loneliness and to examine client satisfaction and technology acceptance. The talk will present the SOLENA program and report results of the Portuguese and Dutch trials as well as preliminary results of the Swiss RCT. The ongoing Swiss trial compares a standardised to a self-tailored delivery format and analyses clinical outcomes, the helpfulness of specific content and the working alliance. Finally, lessons learned in the development and evaluation of a web-based self-help intervention for older adults will be discussed.
Exploring the Potential of High-Density Data for Neuropsychological Testing with Coregraph
Coregraph is a tool under development that allows us to collect high-density data patterns during the administration of classic neuropsychological tests such as the Trail Making Test and Clock Drawing Test. These tests are widely used to evaluate cognitive function and screen for neurodegenerative disorders, but traditional methods of data collection only yield sparse information, such as test completion time or error types. By contrast, the high-density data collected with Coregraph may contribute to a better understanding of the cognitive processes involved in executing these tests. In addition, Coregraph may potentially revolutionize the field of cognitive evaluation by aiding in the prediction of cognitive deficits and in the identification of early signs of neurodegenerative disorders such as Alzheimer's dementia. By analyzing high-density graphomotor data through techniques like manual feature engineering and machine learning, we can uncover patterns and relationships that would be otherwise hidden with traditional methods of data analysis. We are currently in the process of determining the most effective methods of feature extraction and feature analysis to develop Coregraph to its full potential.
Automated generation of face stimuli: Alignment, features and face spaces
I describe a well-tested Python module that performs automated alignment and warping of face images, and some advantages over existing solutions. An additional tool I’ve developed performs automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step of many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expression of these representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns’s (2003) ‘Superstitious Approach’ to construct visual representations of observers’ mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with ‘colourful’ noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants’ level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations reflect their emotional state, we are conducting a validation study with a group of naïve observers who are asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants’ emotional states to others.
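In generic reverse-correlation terms, the pictorial representation is recovered by averaging the noise images that triggered "face" responses and contrasting them with the remaining trials. A minimal sketch of that step (illustrative of the general technique, not the authors' pipeline):

```python
import numpy as np

def classification_image(noise_stimuli, responses):
    """Reconstruct a pictorial mental representation by reverse correlation.

    noise_stimuli: array of shape (n_trials, height, width), the noise images
    responses: booleans, True where the observer reported seeing a face
    """
    noise_stimuli = np.asarray(noise_stimuli, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    # average noise on "face present" trials minus the rest: pixels that
    # systematically drive detections stand out in the difference image
    face_template = noise_stimuli[responses].mean(axis=0)
    baseline = noise_stimuli[~responses].mean(axis=0)
    return face_template - baseline
```

Geometric properties such as the size of the represented face can then be measured on the resulting difference image and related to participants' state measures.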
The role of top-down mechanisms in gaze perception
Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus averted gaze. On the basis of several recent findings, I propose that low task relevance of eye features prevents differences in processing between gaze directions because the encoding of eye direction remains superficial; differential processing of direct and averted gaze will only occur when the eyes are relevant to the task. To assess the implications of task relevance for the temporality of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of the eye features is crucial to the direct gaze effect, and propose to revisit previous studies that had questioned the existence of the effect. This claim will be illustrated with different past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, at a larger scope, on social cognition.
Heading perception in crowded environments
Self-motion through a visual world creates a pattern of expanding visual motion called optic flow. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other humans introduce independent motion into the scene. The biological motion of human walkers consists of translation through space and associated limb articulation. The characteristic motion pattern is regular, though complex. A world full of humans moving around is nonrigid, causing heading errors. Limb articulation alone, however, does not perturb the global structure of the flow field and thus matches the rigidity assumption; if heading perception relies on optic flow analysis, limb articulation alone should not impair heading estimates. Yet we observed heading biases when participants encountered a group of point-light walkers. Our research investigates the interactions between optic flow perception and biological motion perception. We further analyze the impact of environmental information.
Emotions and Partner Phubbing: The Role of Understanding and Validation in Predicting Anger and Loneliness
Interactions between romantic partners may be disturbed by problematic mobile phone use, i.e., phubbing. Research shows that phubbing reduces the ability to be responsive, but emotional aspects of phubbing, such as experiences of anger and loneliness, have not been explored. Anger has been linked to partner blame in negative social interactions, whereas loneliness has been associated with low social acceptance. Moreover, two aspects of partner responsiveness, understanding and validation, refer to the ability to recognize a partner's perspective and to convey acceptance of their point of view, respectively. High understanding and validation by the partner have been found to protect against negative affect during social interactions. The impact of understanding and validation on emotions has not been investigated in the context of phubbing; we therefore posit the following exploratory hypotheses: (1) participants will report higher levels of anger and loneliness on days with phubbing by the partner, compared to days without; (2) understanding and validation will moderate the relationship between phubbing intensity and levels of anger and loneliness. We conducted a daily diary study over seven days. Based on a sample of 133 participants in intimate relationships and living with their partners, we analyzed the nested within- and between-person data using multilevel models. Participants reported higher levels of anger and loneliness on days they experienced phubbing. Both understanding and validation buffered the relationship between phubbing intensity and negative experiences, and the interaction effects indicate certain nuances between the two constructs. Our research provides a unique insight into how specific mechanisms related to couple interactions may explain experiences of anger and loneliness.
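Daily-diary data of this kind are typically prepared for multilevel modelling by splitting each predictor into a between-person component (the person mean) and a within-person component (the daily deviation from that mean). A small sketch of that decomposition, with hypothetical column names rather than the study's actual variables:

```python
import pandas as pd

def within_between_decompose(df, pid="pid", var="phubbing"):
    """Add between-person and within-person versions of a diary predictor.

    pid: column identifying the participant; var: the daily predictor.
    """
    person_mean = df.groupby(pid)[var].transform("mean")
    out = df.copy()
    out[f"{var}_between"] = person_mean             # stable individual level
    out[f"{var}_within"] = df[var] - person_mean    # day-to-day fluctuation
    return out
```

The within-person column then carries the "days with vs. without phubbing" contrast, while the between-person column captures who experiences more phubbing overall; moderation by understanding and validation is modelled as cross-level interactions with these terms.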
Distributed and stable memory representations may lead to serial dependence
Perception and action are biased by our recent experiences. Even when a sequence of stimuli is presented in random order, responses are sometimes attracted toward the past. The mechanism of this bias, recently termed serial dependence, is still under investigation. Currently, there is mixed evidence indicating that the bias could be of sensory or perceptual origin, or could arise only at decisional stages. In this talk, I will present recent findings from our group showing that biases are reduced when the memory trace is disrupted in a premotor region during a simple visuomotor task. In addition, we have shown that this bias is stable over periods of up to 8 s. Finally, I will show ongoing analyses of a recent experiment and argue that serial dependence may rely on distributed memory representations of stimuli and task-relevant features.
Enhanced perception and cognition in deaf sign language users: EEG and behavioral evidence
In this talk, Dr. Quandt will share results from behavioral and cognitive neuroscience studies from the past few years of her work in the Action & Brain Lab, an EEG lab at Gallaudet University, the world's premier university for deaf and hard-of-hearing students. These results will center upon the question of how extensive knowledge of signed language changes, and in some cases enhances, people's perception and cognition. Evidence for this effect comes from studies of human biological motion using point light displays, self-report, and studies of action perception. Dr. Quandt will also discuss some of the lab's efforts in designing and testing a virtual reality environment in which users can learn American Sign Language from signing avatars (virtual humans).
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can hold in mind for ongoing processing. Research from our lab has demonstrated that verbally labeling or categorizing visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulating the categorical distinctiveness of the labels, the timing of their occurrence, the items to which labels are applied, and their validity modulates the benefit one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find the capacity of VWM is well described by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: it provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
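The core intuition of the sample-size model can be shown in a few lines: if a fixed pool of evidence samples is divided among the items in a display, the precision (inverse variance) of each item's representation falls in proportion to set size, and precision *ratios* across set sizes are fixed regardless of the pool size or noise level, which is the sense in which the set-size effect is parameter-free. The numbers below are hypothetical illustration values, not fitted estimates from the talk.

```python
# Sample-size model sketch: K total evidence samples split evenly
# across N items; per-item precision = (K/N) / sigma2, so it scales as 1/N.
# K and sigma2 are hypothetical values chosen for illustration.
K = 120          # total evidence samples available
sigma2 = 4.0     # per-sample noise variance
precision = {N: (K / N) / sigma2 for N in (1, 2, 4, 8)}
print(precision)  # {1: 30.0, 2: 15.0, 4: 7.5, 8: 3.75}
```

Note that doubling set size always halves precision here, whatever K and sigma2 are; the attention-weighted version mentioned in the abstract relaxes the even split to capture the empirical power law.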
Getting to know you: emerging neural representations during face familiarization
The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2), and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were robust after personal familiarization, weaker after media familiarization, and absent after brief perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
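Time-resolved representational similarity analysis, as used above, compares a neural dissimilarity matrix at each timepoint against a model dissimilarity matrix. A minimal sketch with simulated data follows; the array shapes, distance metrics, and random "model" are assumptions for illustration and do not reproduce the experiments' actual pipeline.

```python
# Minimal time-resolved RSA sketch on simulated EEG-like data.
# Shapes, metrics, and the model RDM are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_cond, n_chan, n_time = 16, 32, 50
eeg = rng.normal(size=(n_cond, n_chan, n_time))   # condition x channel x time
model_rdm = pdist(rng.normal(size=(n_cond, 4)))   # stand-in model distances

similarity = []
for t in range(n_time):
    # Neural RDM at time t: pairwise correlation distance between conditions.
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rho, _ = spearmanr(neural_rdm, model_rdm)     # rank-correlate the RDMs
    similarity.append(rho)
print(len(similarity))
```

The resulting time course of model-neural correlations is what lets one ask when (and whether) a familiarity representation becomes visible in the signal.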
The contribution of the dorsal visual pathway to perception and action
The human visual system enables us to recognize objects (e.g., this is a cup) and act upon them (e.g., grasp the cup) with astonishing ease and accuracy. For decades, it was widely accepted that these different functions rely on two separate cortical pathways: the ventral occipitotemporal pathway subserves object recognition, while the dorsal occipitoparietal pathway supports visually guided actions. In my talk, I will discuss recent evidence from a series of neuropsychological, developmental, and neuroimaging studies aimed at exploring the nature of object representations in the dorsal pathway. The results from these studies highlight a plausible role for the dorsal pathway in object perception and reveal an interplay between shape representations derived by the two pathways. Together, these findings challenge the binary distinction between the two pathways and are consistent with the view that object recognition is not the sole product of ventral pathway computations, but instead relies on a distributed network of regions.