Topic: Neuro / Perception

50 Seminars · 40 ePosters

Latest

Seminar · Neuroscience

Top-down control of neocortical threat memory

Prof. Dr. Johannes Letzkus
Universität Freiburg, Germany
Nov 12, 2025

Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for processing of experience-dependent top-down information during threat perception, one of the most fundamentally important forms of sensation.

Seminar · Neuroscience

Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism

Vasileios Zikopoulos
Boston University
Nov 3, 2025

Thalamic networks, at the core of thalamocortical and thalamosubcortical communications, underlie processes of perception, attention, memory, emotions, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation for understanding the heterogeneous disruption of these networks in sleep disorders, attention deficits, and cognitive and affective impairments in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.

Seminar · Neuroscience

Development and application of gaze control models for active perception

Prof. Bert Shi
Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Jun 12, 2025

Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of interest in the environment. Gaze can be considered a proxy for attention or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipatory. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.

Seminar · Neuroscience

Developmental and evolutionary perspectives on thalamic function

Dr. Bruno Averbeck
National Institute of Mental Health, Maryland, USA
Jun 11, 2025

Brain organization and function are a complex topic. We are good at establishing correlates of perception and behavior across forebrain circuits, as well as at manipulating activity in these circuits to affect behavior. However, we still lack good models for the large-scale organization and function of the forebrain. What are the contributions of the cortex, basal ganglia, and thalamus to behavior? In addressing these questions, we often ascribe function to each area as if it were an independent processing unit. However, we know from the anatomy that the cortex, basal ganglia, and thalamus are massively interconnected in a large network. One way to generate insight into these questions is to consider the evolution and development of forebrain systems. In this talk, I will discuss the developmental and evolutionary (comparative anatomy) data on the thalamus and how it fits within forebrain networks. I will address questions including when the thalamus appeared in evolution, how the thalamus is organized across the vertebrate lineage, and how changes in the organization of forebrain networks can affect behavioral repertoires.

Seminar · Neuroscience

The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing

Yoram Bonneh
Bar-Ilan University
Jun 10, 2025
Seminar · Neuroscience · Recording

Restoring Sight to the Blind: Effects of Structural and Functional Plasticity

Noelle Stiles
Rutgers University
May 22, 2025

Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.

Seminar · Neuroscience

Single-neuron correlates of perception and memory in the human medial temporal lobe

Prof. Dr. Dr. Florian Mormann
University of Bonn, Germany
May 14, 2025

The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.

Seminar · Neuroscience · Recording

Multisensory perception in the metaverse

Polly Dalton
Royal Holloway, University of London
May 8, 2025
Seminar · Neuroscience · Recording

The hippocampus, visual perception and visual memory

Morris Moscovitch
University of Toronto
May 6, 2025
Seminar · Neuroscience · Recording

Reading Scenes

Melissa Lê-Hoa Võ
Ludwig-Maximilians-Universität München
Apr 29, 2025
Seminar · Neuroscience · Recording

Multisensory computations underlying flavor perception and food choice

Joost Maier
Wake Forest School of Medicine
Apr 17, 2025
Seminar · Neuroscience

The representation of speech conversations in the human auditory cortex

Etienne Abassi
McGill University
Apr 3, 2025
Seminar · Neuroscience

Making Sense of Sounds: Cortical Mechanisms for Dynamic Auditory Perception

Maria Geffen
University of Pennsylvania
Mar 24, 2025
Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience, and Hertie Institute for Clinical Brain Research University of Tübingen
Feb 12, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience

Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics

Björn Jörges
York University
Feb 6, 2025
Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 28, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to make sense of a complex and recursive environment with ease and proficiency. This challenging feat depends on an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that this organization follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will put forth that this knowledge is laid out topographically on the cortical surface according to these object-related dimensions, which code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data supporting these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Neuroscience · Recording

Dynamics of braille letter perception in blind readers

Santani Teng
Smith-Kettlewell Eye Research Institute
Jan 23, 2025
Seminar · Neuroscience · Recording

Guiding Visual Attention in Dynamic Scenes

Nir Shalev
University of Haifa
Jan 21, 2025
Seminar · Neuroscience · Recording

Rethinking Attention: Dynamic Prioritization

Sarah Shomstein
George Washington University
Jan 7, 2025

Decades of research on the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory processing. These attentional units fit neatly into our understanding of how attention is allocated in a top-down, bottom-up, or history-driven fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the "attentional platypuses," so named because within biological taxonomies the platypus fits into neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of the "attentional platypuses" and then offer a new approach, Dynamically Weighted Prioritization, stipulating that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which in turn constrains and guides attention allocation. I propose that this new approach be considered a supplement to existing models of attention, especially those that emphasize categorical organizations.

Seminar · Neuroscience

Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects

Alan Kingstone
University of British Columbia
Nov 19, 2024
Seminar · Neuroscience · Recording

Perceptual illusions we understand well, and illusions which aren’t really illusions

Michael Bach
University of Freiburg
Nov 12, 2024
Seminar · Neuroscience

Imagining and seeing: two faces of prosopagnosia

Jason Barton
University of British Columbia
Nov 5, 2024
Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 16, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems at the level both of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect both of hypothesis-driven aspects—such as details of the architecture of a deep neural network—and of methodological choices in a systems-comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.

Seminar · Neuroscience · Recording

Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness

Sharon Gilad-Gutnick
MIT
May 2, 2024
Seminar · Neuroscience · Recording

There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm

Anne Danielsen
University of Oslo, Norway
Apr 29, 2024

How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.

Seminar · Neuroscience

Perception in Autism: Testing Recent Bayesian Inference Accounts

Amit Yashar
Haifa University
Apr 16, 2024
Seminar · Neuroscience · Recording

Time perception in film viewing as a function of film editing

Lydia Liapi
Panteion University
Mar 27, 2024

Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical editing, overlapping editing, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time-compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a close simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' perception of time. Here, we investigated how such time-manipulation techniques affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes' durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated compared to that of compressed and real-time scenes, as was the duration of real-time scenes compared to that of compressed scenes. Our results therefore validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found interactions between scene type and editing technique on time estimates, as a function of the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration. Our findings are discussed in light of current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.

Seminar · Neuroscience · Recording

Deepfake Detection in Super-Recognizers and Police Officers

Meike Ramon
University of Lausanne
Feb 13, 2024

Using videos from the Deepfake Detection Challenge (cf. Groh et al., 2021), we investigated human deepfake detection performance (DDP) in two unique observer groups: Super-Recognizers (SRs) and "normal" officers from among the 18K members of the Berlin Police. SRs were identified either via previously proposed lab-based procedures (Ramon, 2021) or via the only existing tool for SR identification involving increasingly challenging, authentic forensic material: beSure® (Berlin Test for Super-Recognizer Identification; Ramon & Rjosk, 2022). Across two experiments we examined DDP in participants who judged single videos and pairs of videos in a 2AFC decision setting. We explored speed-accuracy trade-offs in DDP and compared DDP between lab-identified SRs and non-SRs, as well as police officers whose face identity processing skills had been extensively tested using challenging material. In this talk I will discuss our surprising findings and argue that further work is needed to determine whether face identity processing is related to DDP or not.

Seminar · Neuroscience · Recording

The Role of Spatial and Contextual Relations of Real-World Objects in Interval Timing

Rania Tachmatzidou
Panteion University
Jan 29, 2024

In the real world, object arrangement follows a number of rules. Some of these rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violation of semantic rules influences interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments examining whether time dilation is obtained in the presence of either type of scene violation and what role attention plays in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, that their effect on timing is sensitive to attentional manipulations such as target contrast.

Seminar · Neuroscience

Using Adversarial Collaboration to Harness Collective Intelligence

Lucia Melloni
Max Planck Institute for Empirical Aesthetics
Jan 25, 2024

There are many mysteries in the universe. One of the most significant, often considered the final frontier in science, is understanding how our subjective experience, or consciousness, emerges from the collective action of neurons in biological systems. While substantial progress has been made over the past decades, a unified and widely accepted explanation of the neural mechanisms underpinning consciousness remains elusive. The field is rife with theories that frequently provide contradictory explanations of the phenomenon. To accelerate progress, we have adopted a new model of science: adversarial collaboration in team science. Our goal is to test theories of consciousness in an adversarial setting. Adversarial collaboration offers a unique way to bolster creativity and rigor in scientific research by merging the expertise of teams with diverse viewpoints. Ideally, we aim to harness collective intelligence, embracing various perspectives, to expedite the uncovering of scientific truths. In this talk, I will highlight the effectiveness (and challenges) of this approach using selected case studies, showcasing its potential to counter biases, challenge traditional viewpoints, and foster innovative thought. Through the joint design of experiments, teams incorporate a competitive aspect, ensuring comprehensive exploration of problems. This method underscores the importance of structured conflict and diversity in propelling scientific advancement and innovation.

Seminar · Neuroscience · Recording

Recognizing Faces: Insights from Group and Individual Differences

Catherine Mondloch
Brock University
Jan 23, 2024
Seminar · Neuroscience · Recording

Bayesian expectation in the perception of the timing of stimulus sequences

Max Di Luca
University of Birmingham
Dec 13, 2023

In this virtual journal club, Dr Di Luca will present findings from a series of psychophysical investigations in which he measured sensitivity and bias in the perception of the timing of stimuli. He will show how improved detection with longer sequences and biases in reporting isochrony can be accounted for by optimal statistical predictions. Among his findings was also that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular. This distortion depends on whether the context in which these sequences are presented is also regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of a priori probabilities, with incoming sensory information. These findings contribute to our understanding of how the brain processes temporal information to shape perceptual experiences.
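The kind of Bayesian account described above can be illustrated with a minimal sketch (illustrative only, not the model from the talk; the function name and all numbers are assumptions): if a Gaussian prior expectation about interval duration, built up from the preceding regular sequence, is combined with a noisy sensory measurement, the posterior estimate is pulled toward the expected interval, so a deviant interval is perceived as more regular than it physically is.

```python
def posterior_interval(measured, prior_mean, prior_var, sensory_var):
    """Conjugate Gaussian update: combine a noisy interval measurement
    with a prior expectation; returns posterior mean and variance."""
    w = prior_var / (prior_var + sensory_var)  # weight given to the measurement
    mean = w * measured + (1.0 - w) * prior_mean
    var = (prior_var * sensory_var) / (prior_var + sensory_var)
    return mean, var

# A 550 ms interval heard within a regular 500 ms sequence is perceived
# as closer to 500 ms (525 ms here, with equal prior and sensory variances).
est, _ = posterior_interval(0.550, prior_mean=0.500,
                            prior_var=0.002, sensory_var=0.002)
```

Tightening the prior variance (a more regular context) pulls the estimate further toward regularity, matching the context dependence of the distortion described above.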

Seminar · Neuroscience · Recording

Multisensory perception, learning, and memory

Ladan Shams
UCLA
Dec 7, 2023


Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 7, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us.

Title: Brain decoding: toward real-time reconstruction of visual perception

Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold. First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding, in real time, of the visual processes continuously unfolding within the human brain.

Speaker: Dr. Paul Scotti (Stability AI, MedARC)
Paper link: https://arxiv.org/abs/2310.19812
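The image-retrieval evaluation mentioned in the abstract amounts to nearest-neighbor search in a shared embedding space. The sketch below is an illustrative reconstruction, not the paper's code; `retrieve_images` and its inputs are hypothetical names.

```python
import numpy as np

def retrieve_images(meg_latents, image_embeddings, k=1):
    """For each MEG-derived latent (n_trials, d), rank candidate image
    embeddings (n_images, d) by cosine similarity; return top-k indices."""
    a = meg_latents / np.linalg.norm(meg_latents, axis=1, keepdims=True)
    b = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    sims = a @ b.T  # (n_trials, n_images) cosine-similarity matrix
    return np.argsort(-sims, axis=1)[:, :k]
```

A contrastive objective trains the MEG module so that each trial's latent lies close to the embedding of the image actually seen, which is exactly what this retrieval step rewards.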

Seminar · Neuroscience · Recording

Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment

Mattia Rosso
Ghent University, IPEM Institute for Systematic Musicology
Nov 29, 2023

Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment.
In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
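As a concrete illustration of the instantaneous-frequency measure discussed above: one standard way to compute it (an assumption here, not necessarily the exact pipeline of Rosso et al., 2023) is as the derivative of the unwrapped phase of the analytic signal of a narrow-band component.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the construction used by scipy.signal.hilbert):
    zero out negative frequencies, double positive ones."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) of a narrow-band signal sampled at fs:
    the sample-to-sample derivative of the unwrapped analytic phase."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)
```

Applied to an entrained EEG component, transient rises and dips in this trace around a metronome perturbation are what the ERFA responses above quantify; samples near the edges of the window should be discarded, since the analytic signal is unreliable there.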

Seminar · Neuroscience

Varying the Effectiveness of Scene Context

Monica Castelhano
Queen’s University
Nov 28, 2023
Seminar · Neuroscience · Recording

Multisensory integration in peripersonal space (PPS) for action, perception and consciousness

Andrea Serino
University Hospital of Lausanne
Nov 2, 2023


Seminar · Neuroscience

Predictive processing in older adults: How does it shape perception and sensorimotor control?

Jutta Billino
JLU Giessen
Oct 31, 2023
Seminar · Neuroscience · Recording

Visual-vestibular cue comparison for perception of environmental stationarity

Paul MacNeilage
University of Nevada, Reno
Oct 26, 2023


Seminar · Neuroscience

Vocal emotion perception at millisecond speed

Ana Pinheiro
University of Lisbon
Oct 17, 2023

The human voice is possibly the most important sound category in the social landscape. Compared to other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. However, the study of vocal emotion expression and perception is surprisingly far less developed than the study of emotion in faces, and its neural and functional correlates remain elusive. As the voice is a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, I will specify the dynamic neurocognitive operations that take place when we listen to vocal emotions, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is a matter not only of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.

SeminarNeuroscienceRecording

Generating parallel representations of position and identity in the olfactory system

Dana Galili
MRC Laboratory of Molecular Biology
Oct 12, 2023
SeminarNeuroscienceRecording

Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception

Ashley Schormans
BrainsCAN, Western University, Canada.
Sep 27, 2023

To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.

SeminarNeuroscience

Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?

Timo Kvamme
Aarhus University
Aug 8, 2023

Neurofeedback provides a feedback display linked to ongoing brain activity, allowing self-regulation of neural activity in brain regions associated with specific cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying the “double-blind” experimental design, in which, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a true double-blind even possible, or are subjects aware of the purposes of the neurofeedback experiment? This question relates to the broader issue of how we assess awareness, or its absence, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, what was long held as wholly unconscious phenomena, such as dreams or subliminal perception, have been overturned by more sensitive measures which show that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of experimental contingencies to alter their cognition.

SeminarNeuroscience

Vision for Real-Time Interactions with Objects and People

Maryam Vaziri Pashkam
NIMH
Jun 27, 2023
SeminarNeuroscienceRecording

Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness

Sharon Gilad-Gutnick
MIT
Jun 20, 2023

Despite her still poor visual acuity and minimal visual experience, a 2-3 month old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly-sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.

ePosterNeuroscience

Bayesian inference and arousal modulation in spatial perception to mitigate stochasticity and volatility

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

Bernstein Conference 2024

ePosterNeuroscience

Computational mechanisms of odor perception and representational drift in rodent olfactory systems

Alexander Roxin, Licheng Zou

Bernstein Conference 2024

ePosterNeuroscience

Dynamic perception in volatile environments: How relevant is the prior?

David Meijer, Roberto Barumerli, Robert Baumgartner

Bernstein Conference 2024

ePosterNeuroscience

Feature-based letter perception – A neurocognitive plausible, transparent model approach

Janos Pauli, Benjamin Gagl

Bernstein Conference 2024

ePosterNeuroscience

Modeling spatial and temporal attractive and repulsive biases in perception

Stefan Glasauer, W. Medendorp, Michel-Ange Amorim

Bernstein Conference 2024

ePosterNeuroscience

Awake perception is associated with dedicated neuronal assemblies in cerebral cortex

Anton Filipchuk, Alain Destexhe, Brice Bathellier

COSYNE 2022

ePosterNeuroscience

Causal inference can explain hierarchical motion perception and is reflected in neural responses in MT

Sabyasachi Shivkumar, Zhexin Xu, Gábor Lengyel, Gregory DeAngelis, Ralf Haefner

COSYNE 2022

ePosterNeuroscience

The interplay between prediction and integration processes in human perception

Alexandre Hyafil, Pau Blanco-Arnau

COSYNE 2022

ePosterNeuroscience

Isolated correlates of somatosensory perception in the posterior mouse cortex

Michael Sokoletsky, David Ungarish, Ilan Lampl

COSYNE 2022

ePosterNeuroscience

Structure in motion: visual motion perception as online hierarchical inference

Johannes Bill, Samuel J. Gershman, Jan Drugowitsch

COSYNE 2022

ePosterNeuroscience

Beyond perception: the sensory cortex as an associative engine during goal-directed learning

Celine Drieu, Ziyi Zhu, Kylie Fuller, Aaron Wang, Sarah Elnozahy, Kishore Kuchibhotla

COSYNE 2023

ePosterNeuroscience

Dissecting cortical and subcortical contributions to perception with white noise optogenetic inhibition

Jackson Cone, Autumn Mitchell, Rachel Parker, John Maunsell

COSYNE 2023

ePosterNeuroscience

Divisive normalization as a mechanism for hierarchical causal inference in motion perception

Boris Penaloza, Sabyasachi Shivkumar, Gabor Lengyel, Linghao Xu, Gregory DeAngelis, Ralf Haefner

COSYNE 2023

ePosterNeuroscience

Many perception tasks are highly redundant functions of their input data

Rahul Ramesh, Anthony Bisulco, Ronald DiTullio, Linran Wei, Vijay Balasubramanian, Kostas Daniilidis, Pratik Chaudhari

COSYNE 2025

ePosterNeuroscience

Mapping social perception to social behavior using artificial neural networks

Nate Dolensek, Doris Tsao, Shi Chen

COSYNE 2025

ePosterNeuroscience

Active tool-use training in near and far distances does not change time perception in peripersonal or far space

Amir Jahanian Najafabadi, Christoph Kayser

FENS Forum 2024

ePosterNeuroscience

Association of hallucinogen persisting perception disorder with trait neuroticism and mental health symptoms

Morgan Hadley, Alicia Halliday, James Stone

FENS Forum 2024

ePosterNeuroscience

Bayesian inference during implicit perceptual belief updating in dynamic auditory perception

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

FENS Forum 2024

ePosterNeuroscience

Bayesian perceptual adaptation in auditory motion perception: A multimodal approach with EEG and pupillometry

Roman Fleischmann, Burcu Bayram, David Meijer, Roberto Barumerli, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

FENS Forum 2024

ePosterNeuroscience

Brainwide transformation of neural signals underlying perception

Blake Russell, Robert Lees, Adam Packer, Armin Lak

FENS Forum 2024

ePosterNeuroscience

A circuit mechanism for hunger-state dependent shifts in perception in the pond snail Lymnaea stagnalis

Michael Crossley, György Kemenes, Kevin Staras

FENS Forum 2024

ePosterNeuroscience

Community-regulated ethics: Perception and resolution of ethical conflicts by online communities

Chinnapat Chanprom, Laddawan Kaewkitipong, Matthieu Guitton

FENS Forum 2024

ePosterNeuroscience

Distinct effects of spatial summation and lateral inhibition in cold and warm perception

Camilla Eva Krænge, Malthe B. Sørensen, Arthur S. Courtin, Jesper F. Ehmsen, Micah G. Allen, Francesca Fardo

FENS Forum 2024

ePosterNeuroscience

Does the perception of gravitational orientation, variations in the subject's position, influence binocular fusion?

Marc Janin, Noëlle Bru, Thierry Paillard

FENS Forum 2024

ePosterNeuroscience

Dynamic perception in volatile environments: How relevant is the past when predicting the future?

David Meijer, Roberto Barumerli, Robert Baumgartner

FENS Forum 2024

ePosterNeuroscience

Early cortical network deficits underlying abnormal stimulus perception in Shank3b+/- mice

Elena Montagni, Manuel Ambrosone, Alessandra Martello, Daniele M. Papetti, Daniela Besozzi, Lorenzo Curti, Laura Baroncelli, Alessio Masi, Guido Mannaioni, Francesco S. Pavone, Anna L. A. Mascaro

FENS Forum 2024

ePosterNeuroscience

The effects and interactions of top-down influences on speech perception

Reuben Chaudhuri, Ryszard Auksztulewicz, Ruofan Wu, Colin Blakemore, Jan Schnupp

FENS Forum 2024

ePosterNeuroscience

Electrophysiologic, transcriptomic, and morphologic plasticity of spinal inhibitory neurons to decipher atypical mechanosensory perception in Autism Spectrum Disorder

Anna Saint-Jean, Vanessa Rouglan, Florian Specque, Alexis Groppi, Macha Nikolski, Alexandre Favereaux, Yves Le Feuvre

FENS Forum 2024

ePosterNeuroscience

Fear-dependent brain state changes in perception and sensory representation in larval zebrafish

Conrad Lee, Leandro A Scholz, Ethan K Scott

FENS Forum 2024

ePosterNeuroscience

fMRI mapping of brain circuits during simple sound perception by awake rats

Gabriele Russo, Denise Manahan-Vaughan

FENS Forum 2024

ePosterNeuroscience

Impact of Alzheimer’s disease on non-visual light perception, suprachiasmatic nucleus connectivity, and sleep regulation

Hugo Calligaro, Michael TY Lam, Brian Khov, Keun-Young Kim, Wonkyu K Ju, Mark H Ellisman, Satchidananda Panda

FENS Forum 2024

ePosterNeuroscience

Impact of musical experience on music perception in the elderly

Alexis Whittom, Isabelle Blanchette, Pascale Tremblay, Andréanne Sharp

FENS Forum 2024

ePosterNeuroscience

The impact of virtual reality on postoperative cognitive impairment and pain perception after surgery

Sebastian Isac, Andrada-Georgiana Badea, Ana-Maria Zagrean, Elisabeta Nita, Diana Irene Mihai, Damiana Ojog, Pavel Bogdan, Teodora Isac, Gabriela Droc

FENS Forum 2024

ePosterNeuroscience

Influence of expectations on pain perception: Evidence for predictive coding

Arthur S. Courtin, Kora Montemagno, Julia Czurylo, Melina Vejlø, Francesca Fardo, Micah Allen

FENS Forum 2024

ePosterNeuroscience

An EEG investigation for individual differences in time perception: Unraveling neural dynamics through serial dependency

Zahra Shirzhiyan, Stefan Glasauer

FENS Forum 2024

ePosterNeuroscience

Modulation of neuropathic pain and tactile perception in spinal cord injury during an exoskeleton training program

Erik Leemhuis, Maria Luisa De Martino, Angelica Scuderi, Sara Tranquilli, Anna Maria Giannini, Mariella Pazzaglia

FENS Forum 2024

ePosterNeuroscience

Motor arrest by stimulation of the pedunculopontine nucleus disrupts perception of visual cue in a visuospatial cue task

Madelaine Christine Adamsson Bonfils, Silas Dalum Larsen, Jakob F. Sørensen, Rune W. Berg

FENS Forum 2024
