
Visual Perception

Discover seminars, jobs, and research tagged with visual perception across World Wide.
45 curated items: 35 Seminars, 9 Positions, 1 ePoster
Updated about 15 hours ago
Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste
Dec 5, 2025

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. This year, one of the PhD scholarships is set aside for joint PhD projects across PhD programs within the Neuroscience department (https://www.sissa.it/research/neuroscience). The selection procedure is now open. The application deadline is 28 March 2024. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Position

Cognitive Neuroscience PhD group @ SISSA

International School for Advanced Studies (SISSA)
Trieste, Italy
Dec 5, 2025

SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience Department (https://phdcns.sissa.it/) hosts 7 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language and reading, tactile perception and learning, and neural computation. The Department is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 28 August 2023. To learn how to apply, please visit https://phdcns.sissa.it/admission-procedure. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Position · Neuroscience

Lyle Muller

Western University
Western University, London, Ontario
Dec 5, 2025

This position will involve collaboration between our laboratory and researchers with expertise in advanced methods of brain imaging (Mark Schnitzer, Stanford), neuroengineering (Duygu Kuzum, UCSD), theoretical neuroscience (Todd Coleman, Stanford), and neurophysiology of visual perception (John Reynolds, Salk Institute for Biological Studies). In collaboration with this multi-disciplinary team, this researcher will apply new signal processing techniques for multisite spatiotemporal data to understand cortical dynamics during visual perception. This project will also involve development of spiking network models to understand the mechanisms underlying observed activity patterns. The project may include intermittent travel between labs to present results and facilitate collaborative work.

Position

Mathew Diamond

SISSA
Trieste, Italy
Dec 5, 2025

Up to 2 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2024. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, visual perception, motivation and intelligence, language, tactile perception and learning, and neural computation. Our research is highly interdisciplinary; our approaches include behavioural, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline is 27 August 2024. Please apply here (https://www.sissa.it/bandi/ammissione-ai-corsi-di-philosophiae-doctor-posizioni-cofinanziate-dal-fondo-sociale-europeo), and see the admission procedure page (https://phdcns.sissa.it/admission-procedure) for more information. Note that the positions available for the current admission round are those funded by the 'Fondo Sociale Europeo Plus', accessible through the first link above.

Position

Eugenio Piasini

International School for Advanced Studies (SISSA)
Trieste
Dec 5, 2025

Up to 6 PhD positions in Cognitive Neuroscience are available at SISSA, Trieste, starting October 2025. SISSA is an elite postgraduate research institution for Maths, Physics and Neuroscience, located in Trieste, Italy. SISSA operates in English, and its faculty and student community is diverse and strongly international. The Cognitive Neuroscience group (https://phdcns.sissa.it/) hosts 6 research labs that study the neuronal bases of time and magnitude processing, neuronal foundations of perceptual experience and learning in various sensory modalities, motivation and intelligence, language, and neural computation. Our research is highly interdisciplinary; our approaches include behavioral, psychophysical, and neurophysiological experiments with humans and animals, as well as computational, statistical and mathematical models. Students from a broad range of backgrounds (physics, maths, medicine, psychology, biology) are encouraged to apply. The selection procedure is now open. The application deadline for the spring admission round is 20 March 2025 at 1pm CET. Please apply here, and see the admission procedure page for more information. Please contact the PhD Coordinator Mathew Diamond (diamond@sissa.it) and/or your prospective supervisor for more information and informal inquiries.

Position · Neuroscience

Prof. Dr. Caspar Schwiedrzik

German Primate Center (DPZ) - Leibniz Institute for Primate Research
Göttingen, Germany
Dec 5, 2025

The Perception and Plasticity Group of Caspar Schwiedrzik at the DPZ is looking for an outstanding postdoc interested in studying the neural basis of high-dimensional category learning in vision. The project investigates neural mechanisms of category learning at the level of circuits and single cells, utilizing electrophysiology, functional magnetic resonance imaging, behavioral testing in humans and non-human primates, and computational modeling. It is funded by an ERC Consolidator Grant (Acronym DimLearn; “Flexible Dimensionality of Representational Spaces in Category Learning”). The postdoc’s project will focus on investigating the neural basis of visual category learning in macaque monkeys, combining chronic multi-electrode electrophysiological recordings and electrical microstimulation. In addition, the postdoc will have the opportunity to cooperate with other lab members on parallel computational investigations using artificial neural networks, as well as comparative research exploring the same questions in humans. The postdoc will play a key role in our research efforts in this area. The lab is located at Ruhr-University Bochum and the German Primate Center in Göttingen. At both locations, the lab is embedded in interdisciplinary research centers with international faculty and students pursuing cutting-edge research in cognitive and computational neuroscience. The main site for this part of the project will be Göttingen. The postdoc will have access to state-of-the-art electrophysiology, an imaging center with a dedicated 3T research scanner, and behavioral setups. The project will be conducted in close collaboration with the labs of Fabian Sinz, Alexander Gail, and Igor Kagan.

Seminar · Neuroscience · Recording

Restoring Sight to the Blind: Effects of Structural and Functional Plasticity

Noelle Stiles
Rutgers University
May 21, 2025

Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.

Seminar · Neuroscience · Recording

The hippocampus, visual perception and visual memory

Morris Moscovitch
University of Toronto
May 5, 2025

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 6, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; no author of the paper will be joining us. Title: Brain decoding: toward real-time reconstruction of visual perception. Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold. First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC). Paper link: https://arxiv.org/abs/2310.19812
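
The retrieval step described in the abstract (matching brain-derived embeddings to pretrained image embeddings) can be sketched as a cosine-similarity lookup. This is an illustrative sketch only, not the authors' code: the function names are made up, and the embeddings below are random stand-ins for real MEG and image features.

```python
import numpy as np

def retrieve(brain_embeddings, image_embeddings):
    """Rank candidate images for each brain-derived embedding by cosine similarity."""
    # Normalize rows so the dot product equals cosine similarity.
    b = brain_embeddings / np.linalg.norm(brain_embeddings, axis=1, keepdims=True)
    v = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    sims = b @ v.T                     # (n_trials, n_images) similarity matrix
    return np.argsort(-sims, axis=1)   # best-matching image first

# Toy demo: 4 trials whose "decoded" embeddings are noisy copies of 4 image embeddings.
rng = np.random.default_rng(0)
images = rng.normal(size=(4, 16))
decoded = images + 0.1 * rng.normal(size=(4, 16))
ranks = retrieve(decoded, images)
print(ranks[:, 0])  # ideally each trial retrieves its own image
```

In retrieval-based decoding evaluations, the fraction of trials whose true image lands in the top ranks is the usual accuracy metric.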

Seminar · Psychology

A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive Procedures

Julien Audiffren
University of Fribourg
Feb 28, 2023

The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception research. This is particularly true in visual perception, where the evaluation of the perceptual threshold is a pillar of the experimental process. Choosing the correct adaptive psychometric procedure, as well as selecting the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any psychometric problem. Moreover, ZOOM's parameters are self-tuned, and thus do not need to be chosen manually using heuristics (e.g. the step size in the Staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
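
For contrast with such adaptive procedures, the kind of heuristic-dependent method the abstract mentions can be illustrated with a minimal 1-up/2-down staircase, which converges near the 70.7%-correct stimulus level. This is a generic textbook sketch (not an implementation of ZOOM), with a simulated observer standing in for a real participant.

```python
import math
import random

def staircase(prob_correct, start=1.0, step=0.1, n_trials=200, seed=1):
    """Simulate a 1-up/2-down staircase: two consecutive correct responses
    lower the stimulus level, a single error raises it."""
    rng = random.Random(seed)
    level, levels, streak = start, [], 0
    for _ in range(n_trials):
        levels.append(level)
        if rng.random() < prob_correct(level):
            streak += 1
            if streak == 2:                  # two correct in a row -> harder
                level = max(level - step, 0.0)
                streak = 0
        else:                                # one error -> easier
            level += step
            streak = 0
    return levels

# Simulated observer with a logistic psychometric function centred at 0.5.
pf = lambda x: 1 / (1 + math.exp(-10 * (x - 0.5)))
levels = staircase(pf)
threshold_estimate = sum(levels[-50:]) / 50  # average of late trials ~ threshold
```

Note how the step size and start level are exactly the hand-tuned heuristics the talk argues against: a step that is too large never settles, and one that is too small converges slowly.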

Seminar · Neuroscience · Recording

Visual Perception in Cerebral Visual Impairment (CVI)

Lotfi Merabet
Mass Eye and Ear, Harvard Medical School
Jan 18, 2023

Seminar · Neuroscience · Recording

Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing

Viola Störmer
Dartmouth College
Nov 30, 2022

Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.

Seminar · Psychology

Disentangling neural correlates of consciousness and task relevance using EEG and fMRI

Torge Dellert
Westfälische Wilhelms-Universität (WWU) Münster
Oct 11, 2022

How does our brain generate consciousness, that is, the subjective experience of what it is like to see a face or hear a sound? Do we become aware of a stimulus during early sensory processing or only later, when information is shared in a widespread fronto-parietal network? Neural correlates of consciousness are typically identified by comparing brain activity when a constant stimulus (e.g., a face) is perceived versus not perceived. However, in most previous experiments, conscious perception was systematically confounded with post-perceptual processes such as decision-making and report. In this talk, I will present recent EEG and fMRI studies dissociating neural correlates of consciousness and task-related processing in visual and auditory perception. Our results suggest that consciousness emerges during early sensory processing, while late, fronto-parietal activity is associated with post-perceptual processes rather than awareness. These findings challenge predominant theories of consciousness and highlight the importance of considering task relevance as a confound across different neuroscientific methods, experimental paradigms and sensory modalities.

Seminar · Neuroscience

Binocular combination of light

Daniel H. Baker
University of York (UK)
Jul 13, 2022

The brain combines signals across the eyes. This process is well-characterized for the perceptual anatomical pathway through V1 that primarily codes contrast, where interocular normalization ensures that responses are approximately equal for monocular and binocular stimulation. But we have much less understanding of how luminance is combined binocularly, both in the cortex and in subcortical structures that govern pupil diameter. Here I will describe the results of experiments using a novel combined EEG and pupillometry paradigm to simultaneously index binocular combination of luminance flicker in parallel pathways. The results show evidence of a more linear process than for spatial contrast, that may reflect different operational constraints in distinct anatomical pathways.

Seminar · Neuroscience

Perception during visual disruptions

Grace Edwards and Lina Teichmann
National Institute of Mental Health, Laboratory of Brain and Cognition, U.S. Department of Health and Human Services.
Jun 12, 2022

Visual perception feels continuous despite frequent disruptions in our visual environment. For example, internal events, such as saccadic eye movements, and external events, such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of the early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine whether there is a set of mechanisms that supports visual processing during visual disruptions in general.

Seminar · Neuroscience · Recording

Retinal responses to natural inputs

Fred Rieke
University of Washington
Apr 17, 2022

The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity, single photon signals in photoreceptors, which are only 0.03 – 0.1 mV, must be reliably transmitted to second-order cells in the retina, and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
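
The photon-counting limit described above follows Poisson statistics: if a rod absorbs a mean of λ photons per flash, the probability that it absorbs at least one is 1 − e^(−λ). As a hedged illustration (the example values are generic, not figures from the talk):

```python
import math

def p_at_least_one(mean_photons):
    """Poisson probability of absorbing >= 1 photon, given the mean count."""
    return 1 - math.exp(-mean_photons)

# Near absolute threshold, each rod sees far fewer than one photon on average,
# so most rods absorb nothing and single-photon fidelity becomes critical.
for lam in (0.01, 0.1, 1.0):
    print(f"mean {lam:>4}: P(>=1 photon) = {p_at_least_one(lam):.3f}")
```

At a mean of 0.01 photons per rod, roughly 1 rod in 100 absorbs a photon, which is why reliable transmission of single-photon signals matters at threshold.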

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time, and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.

Seminar · Neuroscience

The self-consistent nature of visual perception

Alan Stocker
University of Pennsylvania
Dec 7, 2021

Vision provides us with a holistic interpretation of the world that is, with very few exceptions, coherent and consistent across multiple levels of abstraction, from scene to objects to features. In this talk I will present results from past and ongoing work in my laboratory that investigates the role top-down signals play in establishing such coherent perceptual experience. Based on the results of several psychophysical experiments I will introduce a theory of “self-consistent inference” and show how it can account for human perceptual behavior. The talk will close with a discussion of how the theory can help us understand more cognitive processes.

Seminar · Neuroscience

Individual differences in visual (mis)perception: a multivariate statistical approach

Aline Cretenoud
Laboratory of Psychophysics, BMI, SV, EPFL
Dec 7, 2021

Common factors are omnipresent in everyday life, e.g., it is widely held that there is a common factor g for intelligence. In vision, however, there seems to be a multitude of specific factors rather than a strong and unique common factor. In my thesis, I first examined the multidimensionality of the structure underlying visual illusions. To this aim, the susceptibility to various visual illusions was measured. In addition, subjects were tested with variants of the same illusion, which differed in spatial features, luminance, orientation, or contextual conditions. Only weak correlations were observed between the susceptibility to different visual illusions. An individual showing a strong susceptibility to one visual illusion does not necessarily show a strong susceptibility to other visual illusions, suggesting that the structure underlying visual illusions is multifactorial. In contrast, there were strong correlations between the susceptibility to variants of the same illusion. Hence, factors seem to be illusion-specific but not feature-specific. Second, I investigated whether a strong visual factor emerges in healthy elderly and patients with schizophrenia, which may be expected from the general decline in perceptual abilities usually reported in these two populations compared to healthy young adults. Similarly, a strong visual factor may emerge in action video gamers, who often show enhanced perceptual performance compared to non-video gamers. Hence, healthy elderly, patients with schizophrenia, and action video gamers were tested with a battery of visual tasks, such as a contrast detection and orientation discrimination task. As in control groups, between-task correlations were weak in general, which argues against the emergence of a strong common factor for vision in these populations. 
While similar tasks are usually assumed to rely on similar neural mechanisms, the performances in different visual tasks were only weakly related to each other, i.e., performance does not generalize across visual tasks. These results highlight the relevance of an individual differences approach to unravel the multidimensionality of the visual structure.

Seminar · Neuroscience · Recording

NMC4 Keynote:

Yuki Kamitani
Kyoto University and ATR
Dec 1, 2021

The brain represents the external world through the bottleneck of sensory organs. The network of hierarchically organized neurons is thought to recover the causes of sensory inputs to reconstruct the reality in the brain in idiosyncratic ways depending on individuals and their internal states. How can we understand the world model represented in an individual’s brain, or the neuroverse? My lab has been working on brain decoding of visual perception and subjective experiences such as imagery and dreaming using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and present how subjective experiences are externalized as images and how they could be shared across individuals via neural code conversion. The prospects of these approaches in basic science and neurotechnology will be discussed.

Seminar · Neuroscience · Recording

Interactions between visual cortical neurons that give rise to conscious perception

Pieter Roelfsema
Netherlands Institute for Neuroscience
Oct 24, 2021

I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives at a minimal strength, it will be stored in working memory and can be reported. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay. It then acts as a cognitive blackboard, and conscious perception depends on it. Our results inspire new approaches to create a visual prosthesis for the blind by creating a direct interface with the visual brain. I will discuss how high-channel-number interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.

Seminar · Neuroscience

Demystifying the richness of visual perception

Ruth Rosenholtz
MIT
Oct 19, 2021

Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow (and the representation becomes less precise) with distance from fixation. This scheme computes sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision.
In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
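The pooling idea can be illustrated with a toy sketch. This is an illustration only, not Rosenholtz's actual model: the square patches, the mean/std summary, and the Bouma-like 0.5 scaling factor are all simplifying assumptions.

```python
import numpy as np

def local_stats(image, row, col, fix=(0, 0), bouma=0.5, min_half=1):
    """Summarize the patch around (row, col) by its mean and standard
    deviation, with the patch half-width growing linearly with the
    distance (eccentricity) from the fixation point."""
    ecc = float(np.hypot(row - fix[0], col - fix[1]))
    half = max(min_half, int(bouma * ecc))  # larger regions further out
    patch = image[max(0, row - half):row + half + 1,
                  max(0, col - half):col + half + 1]
    return patch.mean(), patch.std(), half
```

Near fixation the statistics are computed over a small patch and localize features well; far in the periphery the same two numbers summarize a much larger region, so feature content is preserved while spatial localization is lost.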

SeminarNeuroscience

What Art can tell us about the Brain

Margaret Livingstone
Harvard
Oct 4, 2021

Artists have been doing experiments on vision longer than neurobiologists. Some major works of art have provided insights as to how we see; some of these insights are so fundamental that they can be understood in terms of the underlying neurobiology. For example, artists have long realized that color and luminance can play independent roles in visual perception. Picasso said, "Colors are only symbols. Reality is to be found in luminance alone." This observation has a parallel in the functional subdivision of our visual systems, where color is processed by the evolutionarily newer, primate-specific What system, and luminance by the older, colorblind Where (or How) system. Many techniques developed over the centuries by artists can be understood in terms of the parallel organization of our visual systems. I will explore how the segregation of color and luminance processing is the basis for why some Impressionist paintings seem to shimmer, why some op art paintings seem to move, some principles of Matisse's use of color, and how the Impressionists painted "air". Central and peripheral vision are distinct, and I will show how the differences in resolution across our visual field make the Mona Lisa's smile elusive, and produce a dynamic illusion in Pointillist paintings, Chuck Close paintings, and photomosaics. I will explore how artists have figured out important features about how our brains extract relevant information about faces and objects, and I will discuss why learning disabilities may be associated with artistic talent.

SeminarNeuroscienceRecording

Seeing with technology: Exchanging the senses with sensory substitution and augmentation

Michael Proulx
University of Bath
Sep 29, 2021

What is perception? Our sensory modalities transduce information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind on some information at the expense of the rest. My research has advanced from first examining visual perception and attention to now examining how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.

SeminarNeuroscience

Age-related changes in visual perception – decline or experience?

Karin Pilz
University of Groningen
Jun 29, 2021

In Europe, the number of people aged 65 and older is increasing dramatically, and research related to ageing is more crucial than ever. The main research dedicated to age-related changes concentrates on cognitive or sensory deficits. This is also the case in vision research. However, the majority of older adults age without major cognitive or optical deficits. This is, first and foremost, good news, but even in the absence of neurodegenerative or eye diseases, changes in visual perception occur. It has been suggested that age-related changes are due to a general decline of cognitive, perceptual and sensory functions. However, more recent studies reveal large individual differences within the ageing population, and whereas some functions show age-related deterioration, others are surprisingly unaffected. Overall, it becomes increasingly apparent that perceptual changes in healthy ageing cannot be attributed to one single underlying factor. I will present studies from various areas of visual perception that challenge the view that age-related changes are primarily related to decline. Instead, our findings suggest that age-related changes are the result of visual experience, such that the brain ages optimally given the input it receives.

SeminarNeuroscience

Smart perception?: Gestalt grouping, perceptual averaging, and memory capacity

Jennifer E. Corbett
Brunel University London
May 17, 2021

It seems we see the world in full detail. However, the eye is not a camera, nor is the brain a computer. Severe metabolic constraints render us unable to encode more than a fraction of the information available in each glance. Instead, our illusion of stable and complete perception is accomplished by parsimonious representation relying on the natural order inherent in the surrounding environment. I will begin by discussing previous behavioral work from our lab demonstrating one such strategy, by which the visual system represents average properties of Gestalt-grouped sets of individual objects, warping individual object representations toward the Gestalt-defined mean. I will then discuss ongoing work using a behavioral index of averaging Gestalt-grouped information established in our previous work, in conjunction with an ERP index of VSTM capacity (the CDA), to measure whether the Gestalt-grouping and perceptual-averaging strategy acts to boost memory capacity above the classic “four-item” limit. Finally, I will outline our pre-registered study to determine whether this perceptual strategy is indeed engaged in a “smart” manner under normal circumstances, or compromises fidelity for capacity by perceptually averaging in trials with only four items that could otherwise be individually represented.
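The warping idea can be expressed as a toy sketch. This is a hypothetical illustration, not the lab's model; the fixed 0.3 weight is an assumed value.

```python
def warp_toward_mean(items, weight=0.3):
    """Pull each remembered item toward the ensemble mean of its
    Gestalt group; the group mean itself is preserved exactly."""
    mean = sum(items) / len(items)
    return [(1 - weight) * x + weight * mean for x in items]
```

Individual values lose fidelity while the shared mean is retained, which is the capacity-versus-fidelity trade-off the talk examines.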

SeminarNeuroscienceRecording

Interactions between neurons during visual perception and restoring them in blindness

Pieter Roelfsema
Netherlands Institute for Neuroscience
Mar 8, 2021

I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus reaches a minimal strength, it will be stored in working memory. However, during more complex visual perceptions, which depend, for example, on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay. It now acts as a cognitive blackboard, and conscious perception depends on it. Our results also inspire new approaches to create a visual prosthesis for the blind, by creating a direct interface with the visual cortex. I will discuss how high-channel-number interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.

SeminarNeuroscience

The properties of large receptive fields as explanation of ensemble statistical representation: A population coding model

Igor Utochkin
National Research University Higher School of Economics
Feb 1, 2021

SeminarNeuroscience

What is serially-dependent perception good for?

Mauro Manassi
University of Aberdeen, UK
Jan 13, 2021

Perception can be strongly serially dependent (i.e. biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence could also be a deleterious phenomenon in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward its position on the preceding trial. Taken together, these results show two opposite sides of serial dependence: it can be a beneficial mechanism which promotes perceptual stability, but at the same time a deleterious mechanism which impairs our percepts when fine recognition is needed.
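Serial dependence is commonly modeled as a derivative-of-Gaussian-shaped attraction toward the previous stimulus. A minimal sketch follows; the amplitude and width values are illustrative assumptions, not fitted parameters.

```python
import math

def serial_bias(prev, curr, amplitude=3.0, width=20.0):
    """Derivative-of-Gaussian-shaped pull of the current percept toward
    the previous stimulus, in the feature's units (e.g. degrees)."""
    d = (prev - curr) / width
    return amplitude * d * math.exp(-d * d / 2)

def perceived(prev, curr):
    """Reported value = physical value plus the serial-dependence bias."""
    return curr + serial_bias(prev, curr)
```

Similar previous stimuli attract the current report (stability), while very different ones have almost no effect, matching the tuned selectivity described above.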

SeminarNeuroscienceRecording

Global visual salience of competing stimuli

Alex Hernandez-Garcia
Université de Montréal
Dec 9, 2020

Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of its local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is, their global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images, that the influence of task and familiarity was rather small, and that the previously reported left-sided bias was reproduced. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing saccadic reaction time as a function of the global salience of the pair of images.
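The described setup (one coefficient per image, learned from which image of a pair drew the first saccade) can be sketched as a pairwise logistic model on synthetic data. Everything below (trial counts, learning rate, iteration count) is an illustrative assumption, not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each trial shows a pair of images; y = 1 if the
# first saccade went to the left image of the pair.
n_images, n_trials = 8, 400
true_salience = rng.normal(size=n_images)        # latent global salience
left = rng.integers(0, n_images, n_trials)
right = rng.integers(0, n_images, n_trials)
logits = true_salience[left] - true_salience[right]
y = rng.random(n_trials) < 1 / (1 + np.exp(-logits))

# Design matrix: +1 for the left image, -1 for the right image.
X = np.zeros((n_trials, n_images))
X[np.arange(n_trials), left] += 1.0
X[np.arange(n_trials), right] -= 1.0

# Fit by gradient ascent on the logistic log-likelihood; the learned
# coefficients w are the images' global salience scores.
w = np.zeros(n_images)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.05 * X.T @ (y - p) / n_trials

ranking = np.argsort(-w)  # most salient images first
```

The pairwise structure makes the coefficients comparable across all images, so a single scalar per image captures its tendency to win the competition for the first fixation.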

SeminarNeuroscienceRecording

Exploring fine detail: The interplay of attention, oculomotor behavior and visual perception in the fovea

Martina Poletti
University of Rochester
Dec 8, 2020

Outside the foveola, visual acuity and other visual functions gradually deteriorate with increasing eccentricity. Humans compensate for these limitations by relying on a tight link between perception and action; rapid gaze shifts (saccades) occur 2-3 times every second, separating brief “fixation” intervals in which visual information is acquired and processed. During fixation, however, the eye is not immobile. Small eye movements incessantly shift the image on the retina even when the attended stimulus is already foveated, suggesting a much deeper coupling between visual functions and oculomotor activity. Thanks to a combination of techniques allowing for high-resolution recordings of eye position, retinal stabilization, and accurate gaze localization, we examined how attention and eye movements are controlled at this scale. We have shown that during fixation, visual exploration of fine spatial detail unfolds following visuomotor strategies similar to those occurring at a larger scale. This behavior compensates for non-homogeneous visual capabilities within the foveola and is finely controlled by attention, which facilitates processing at selected foveal locations. Ultimately, the limits of high-acuity vision are greatly influenced by the spatiotemporal modulations introduced by fixational eye movements. These findings reveal that, contrary to common intuition, placing a stimulus within the foveola is necessary but not sufficient for high visual acuity; fine spatial vision is the outcome of an orchestrated synergy of motor, cognitive, and attentional factors.

SeminarNeuroscience

Crowding and the Architecture of the Visual System

Adrien Doerig
Laboratory of Psychophysics, BMI, EPFL
Dec 1, 2020

Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.

SeminarNeuroscienceRecording

A structuralist perspective on the neuronal basis of human visual perception

Rafi Malach
Weizmann Inst. of Science
Oct 19, 2020
SeminarNeuroscienceRecording

Visual perception and fixational eye movements: microsaccades, drift and tremor

Yasuto Tanaka
Paris Miki Inc. and Osaka University
Jul 6, 2020
SeminarNeuroscienceRecording

Motion vision in Drosophila: from single neuron computation to behaviour

Michael Reiser
Janelia Research Campus
May 19, 2020

How nervous systems control behaviour is the main question we seek to answer in neuroscience. Although visual systems have been a popular entry point into the brain, we don’t understand—in any deep sense—how visual perception guides navigation in flies (or any organism). I will present recent progress towards this goal from our lab. We are using anatomical insights from connectomics, genetic methods for labelling and manipulating identified cell types, neurophysiology, behaviour, and computational modeling to explain how the fly brain processes visual motion to regulate behaviour.

SeminarNeuroscienceRecording

Vision in dynamically changing environments

Marion Silies
Johannes Gutenberg-Universität Mainz, Germany
May 17, 2020

Many visual systems can process information in dynamically changing environments. In general, visual perception scales with changes in the visual stimulus, or contrast, irrespective of background illumination. This is achieved by adaptation. However, visual perception is challenged when adaptation is not fast enough to deal with sudden changes in overall illumination, for example when gaze follows a moving object from bright sunlight into a shaded area. We have recently shown that the visual system of the fly has found a solution: it propagates a corrective luminance-sensitive signal to higher processing stages. Using in vivo two-photon imaging and behavioural analyses, we showed that distinct OFF-pathway inputs encode contrast and luminance. The luminance-sensitive pathway is particularly required when processing visual motion in dim light, when pure contrast sensitivity underestimates the salience of a stimulus. Recent work in the lab has addressed the question of how two visual pathways obtain such fundamentally different sensitivities, given common photoreceptor input. We are furthermore currently working out the network-based strategies by which luminance- and contrast-sensitive signals are combined to guide appropriate visual behaviour. Together, I will discuss the molecular, cellular, and circuit mechanisms that ensure contrast computation, and therefore robust vision, in fast-changing visual scenes.

SeminarNeuroscienceRecording

Playing the piano with the cortex: role of neuronal ensembles and pattern completion in perception

Rafael Yuste
Columbia University
May 11, 2020

The design of neural circuits, with large numbers of neurons interconnected in vast networks, strongly suggests that they are specifically built to generate emergent functional properties (1). To explore this hypothesis, we have developed two-photon holographic methods to selectively image and manipulate the activity of neuronal populations in 3D in vivo (2). Using them, we find that groups of synchronous neurons (neuronal ensembles) dominate the evoked and spontaneous activity of mouse primary visual cortex (3). Ensembles can be optogenetically imprinted for several days, and some of their neurons trigger the entire ensemble (4). By activating these pattern-completion cells in ensembles involved in visual discrimination paradigms, we can bi-directionally alter behavioural choices (5). Our results demonstrate that ensembles are necessary and sufficient for visual perception and are consistent with the possibility that neuronal ensembles are the functional building blocks of cortical circuits. 1. R. Yuste, From the neuron doctrine to neural networks. Nat Rev Neurosci 16, 487-497 (2015). 2. L. Carrillo-Reid, W. Yang, J. E. Kang Miller, D. S. Peterka, R. Yuste, Imaging and Optically Manipulating Neuronal Ensembles. Annu Rev Biophys 46, 271-293 (2017). 3. J. E. Miller, I. Ayzenshtat, L. Carrillo-Reid, R. Yuste, Visual stimuli recruit intrinsically generated cortical ensembles. Proceedings of the National Academy of Sciences of the United States of America 111, E4053-4061 (2014). 4. L. Carrillo-Reid, W. Yang, Y. Bando, D. S. Peterka, R. Yuste, Imprinting and recalling cortical ensembles. Science 353, 691-694 (2016). 5. L. Carrillo-Reid, S. Han, W. Yang, A. Akrouh, R. Yuste, Controlling visually-guided behaviour by holographic recalling of cortical ensembles. Cell 178, 447-457 (2019). DOI: https://doi.org/10.1016/j.cell.2019.05.045.

ePoster

Sound power modulates rat visual perception in a temporal frequency classification task

Mattia Zanzi, Francesco Rinaldi, Eugenio Piasini, Davide Zoccolan

FENS Forum 2024