
Perceptual Processing

Discover seminars, jobs, and research tagged with perceptual processing.

Position

I-Chun Lin, PhD

Gatsby Computational Neuroscience Unit, UCL
Dec 5, 2025

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning in brains and machines; inference, coding, and neural dynamics; and Bayesian methods, kernel methods, and deep learning, with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data, and nonparametric hypothesis testing.

The Unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML), and related UCL departments such as Computer Science, Statistical Science, Artificial Intelligence, the ELLIS Unit at UCL, and Neuroscience, as well as the nearby Alan Turing and Francis Crick Institutes.

Our PhD programme provides rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical/computational neuroscience, with a minor emphasis in the complementary field. First-year courses provide a comprehensive introduction to both fields and to systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Seminar · Neuroscience

Decoding rapidly presented visual stimuli from prefrontal ensembles without report nor post-perceptual processing

Joachim Bellet
Mar 9, 2023
Seminar · Neuroscience · Recording

Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing

Viola Störmer
Dartmouth College
Nov 30, 2022

Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.
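As a concrete illustration of how such a cross-modal cueing benefit is typically quantified (this sketch is not taken from the talk itself), the snippet below simulates yes/no detection trials under a standard equal-variance signal detection model and recovers the sensitivity index d′ separately for a "valid" condition (sound co-localized with the visual target) and an "invalid" one (sound displaced). The effect sizes passed as `d_true` are hypothetical placeholders, not reported results.

```python
import random
from statistics import NormalDist

def simulate_dprime(n_trials=2000, d_true=1.0, criterion=0.5, seed=0):
    """Simulate yes/no detection trials under equal-variance SDT and
    recover d' from the hit and false-alarm rates."""
    rng = random.Random(seed)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hits = fas = n_signal = n_noise = 0
    for _ in range(n_trials):
        present = rng.random() < 0.5
        # Internal response: N(d_true, 1) on signal trials, N(0, 1) on noise trials.
        x = rng.gauss(d_true if present else 0.0, 1.0)
        said_yes = x > criterion
        if present:
            n_signal += 1
            hits += said_yes
        else:
            n_noise += 1
            fas += said_yes
    # Clip rates away from 0/1 so the inverse CDF stays finite.
    hr = min(max(hits / n_signal, 1e-3), 1 - 1e-3)
    far = min(max(fas / n_noise, 1e-3), 1 - 1e-3)
    return z(hr) - z(far)

# Hypothetical effect sizes: a co-localized sound raises visual sensitivity.
print("valid (co-localized sound): d' ~", round(simulate_dprime(d_true=1.3), 2))
print("invalid (displaced sound):  d' ~", round(simulate_dprime(d_true=1.0), 2))
```

Reporting the effect as a d′ difference rather than raw accuracy separates a genuine sensitivity gain from a mere shift in response criterion, which is the standard concern in cueing studies.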

Seminar · Neuroscience · Recording

Space and its computational challenges

Jennifer Groh
Duke University
Nov 17, 2021

How our senses work both separately and together involves rich computational problems. I will discuss the spatial and representational problems faced by the visual and auditory system, focusing on two issues. 1. How does the brain correct for discrepancies in the visual and auditory spatial reference frames? I will describe our recent discovery of a novel type of otoacoustic emission, the eye movement related eardrum oscillation, or EMREO (Gruters et al, PNAS 2018). 2. How does the brain encode more than one stimulus at a time? I will discuss evidence for neural time-division multiplexing, in which neural activity fluctuates across time to allow representations to encode more than one simultaneous stimulus (Caruso et al, Nat Comm 2018). These findings all emerged from experimentally testing computational models regarding spatial representations and their transformations within and across sensory pathways. Further, they speak to several general problems confronting modern neuroscience such as the hierarchical organization of brain pathways and limits on perceptual/cognitive processing.
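To make the multiplexing idea concrete, here is a toy sketch (the assumptions are mine, not the statistical analysis of Caruso et al.): a neuron firing at roughly 40 spikes/s for stimulus A alone and 10 spikes/s for stimulus B alone. Under time-division multiplexing, each time bin of a dual-stimulus trial is driven by one stimulus's rate; under a rate-averaging account, every bin sits at the mean of the two. The two accounts match in average rate but differ sharply in bin-to-bin variability, which is the kind of signature model comparison can test for.

```python
import numpy as np

rng = np.random.default_rng(1)

def trial_counts(rates, n_bins=20, bin_s=0.05, switching=True):
    """Spike counts per time bin on a simulated dual-stimulus trial.

    switching=True : time-division multiplexing -- each bin's rate is
                     drawn from ONE stimulus's rate (A or B at random).
    switching=False: rate averaging -- every bin reflects the mean rate.
    """
    rate_a, rate_b = rates
    if switching:
        bin_rates = rng.choice([rate_a, rate_b], size=n_bins)
    else:
        bin_rates = np.full(n_bins, (rate_a + rate_b) / 2)
    return rng.poisson(bin_rates * bin_s)

mux = trial_counts((40.0, 10.0), switching=True)
avg = trial_counts((40.0, 10.0), switching=False)
# Same mean rate, very different bin-to-bin variability: the multiplexing
# signature is excess across-bin variance relative to a Poisson average.
print("multiplexed bins:", mux, " var:", round(mux.var(), 2))
print("averaging bins:  ", avg, " var:", round(avg.var(), 2))
```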

Seminar · Psychology

The diachronic account of attentional selectivity

Alon Zivony
Birkbeck, University of London
Oct 20, 2021

Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate but has also led to substantial conceptual confusion (to the point where some attention researchers propose abandoning the term ‘attention’ altogether). As an alternative, we offer a “diachronic” framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes, brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as “attention”.
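The episode idea can be stated as a toy model (a hypothetical sketch, not the authors' formal account): a target opens a brief window of amplification, and any stimulus whose onset falls inside that window is encoded into working memory whether it is task-relevant or not. The 150 ms window and the RSVP-like timings below are illustrative placeholders.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    onset_ms: float
    label: str
    is_target: bool = False

def encoded_items(stream, episode_ms=150.0):
    """Toy diachronic-selection model: a target opens a brief attentional
    episode; every stimulus whose onset falls inside that window is
    encoded into working memory, relevant or not."""
    encoded, window_end = [], None
    for s in sorted(stream, key=lambda s: s.onset_ms):
        if s.is_target:
            window_end = s.onset_ms + episode_ms  # open (or extend) an episode
        if window_end is not None and s.onset_ms <= window_end:
            encoded.append(s.label)
    return encoded

# RSVP-like stream at ~100 ms/item: the distractor right after the target
# falls inside the episode and is involuntarily encoded; a later one is not.
stream = [
    Stimulus(0, "D1"), Stimulus(100, "T", is_target=True),
    Stimulus(200, "D2"), Stimulus(400, "D3"),
]
print(encoded_items(stream))  # ['T', 'D2'] -- D3 arrives after the episode closes
```

The point of the sketch is that what gets encoded depends on *when* a stimulus arrives relative to an episode, not on a fixed pre-attentive/attentive boundary applied item by item.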