Topic: Neuroscience
Content Overview
16 total items
11 seminars
5 ePosters

Latest

Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience and Hertie Institute for Clinical Brain Research, University of Tübingen
Feb 12, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
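
For a concrete sense of the trial-by-trial analysis summarized above, here is a minimal sketch. The synthetic data and variable names are illustrative assumptions, not material from the study; only the form of the correlation between single-neuron visual response measures and saccadic reaction time is the point.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-trial measurements for one SC visual-motor neuron
rng = np.random.default_rng(1)
saccade_rt = rng.normal(180, 25, size=200)                      # ms
resp_latency = 45 + 0.10 * saccade_rt + rng.normal(0, 5, 200)   # ms
resp_strength = 80 - 0.15 * saccade_rt + rng.normal(0, 8, 200)  # spikes/s

# Per the talk's summary, faster saccades go with earlier (positive
# latency-RT correlation) and stronger (negative strength-RT correlation)
# visual bursts in SC, while no such systematic relation holds in V1.
r_lat, _ = pearsonr(resp_latency, saccade_rt)
r_str, _ = pearsonr(resp_strength, saccade_rt)
print(f"latency vs RT: r = {r_lat:.2f}; strength vs RT: r = {r_str:.2f}")
```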

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar featured presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations linking stimuli to cognitive functions.
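
As a rough, generic illustration of the geometric quantities mentioned above, the sketch below summarizes one object manifold by an RMS radius and a participation-ratio dimensionality. These are simple proxies chosen for illustration, not the exact capacity-linked measures of the neural-manifold framework.

```python
import numpy as np

def manifold_geometry(points):
    """Crude geometric summary of one object manifold.

    points : (n_samples, n_neurons) population responses to many
             nuisance variations (pose, position, ...) of one object.
    Returns a radius (RMS distance from the centroid) and an effective
    dimensionality (participation ratio of the covariance eigenvalues).
    """
    centered = points - points.mean(axis=0)
    radius = np.sqrt((centered ** 2).sum(axis=1).mean())
    eigvals = np.linalg.eigvalsh(np.cov(centered, rowvar=False))
    dim = eigvals.sum() ** 2 / (eigvals ** 2).sum()
    return radius, dim

# Toy usage: a flat, 3-dimensional manifold embedded in 100 "neurons"
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 100))
print(manifold_geometry(latent))   # dimensionality comes out close to 3
```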

Seminar · Neuroscience · Recording

Geometry of concept learning

Haim Sompolinsky
The Hebrew University of Jerusalem and Harvard University
Jan 4, 2023

Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype-based few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
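
The ‘prototype’ decision rule is simple enough to sketch directly. The code below is a minimal, generic implementation assuming feature vectors from IT recordings or a DNN layer; shapes, names, and the random data are illustrative, not taken from the paper.

```python
import numpy as np

def prototype_few_shot(train_reps, train_labels, test_reps):
    """Nearest-prototype classification, a minimal sketch of the rule
    described above: each concept's prototype is the mean of its few
    training representations, and a test point takes the label of the
    closest prototype (Euclidean distance).

    train_reps  : (n_train, n_neurons) firing-rate (or DNN-layer) vectors
    train_labels: (n_train,) concept labels
    test_reps   : (n_test, n_neurons)
    """
    classes = np.unique(train_labels)
    prototypes = np.stack([train_reps[train_labels == c].mean(axis=0)
                           for c in classes])            # (n_classes, n_neurons)
    # Distance from every test point to every prototype
    dists = np.linalg.norm(test_reps[:, None, :] - prototypes[None, :, :],
                           axis=-1)                       # (n_test, n_classes)
    return classes[dists.argmin(axis=1)]

# Hypothetical usage with random data standing in for IT / DNN features
rng = np.random.default_rng(0)
train = rng.normal(size=(10, 512))      # 5 examples each of 2 concepts
labels = np.repeat([0, 1], 5)
test = rng.normal(size=(4, 512))
print(prototype_few_shot(train, labels, test))
```

In the abstract's framing, the accuracy of this rule is what the geometric theory links to manifold properties; the random inputs here only exercise the mechanics.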

Seminar · Neuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 14, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through their visual systems and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focus on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), computations that are strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I am currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience through collaborations with various computational groups and by building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Seminar · Neuroscience · Recording

Vision outside of the visual system (in Drosophila)

Michael Reiser
Janelia Research Campus, HHMI
May 24, 2021

The Reiser lab seeks to understand the control of behavior – by animals, their brains, and their neurons. Reiser and his team focus on the fly visual system, using modern methods from the Drosophila toolkit to understand how visual pathways are involved in specific behaviors. With the recent connectomics explosion, they now study the brain-wide networks that organize visual information for behavior control. The team combines explorations of visually guided behaviors with functional investigations of specific cell types throughout the fly brain. The Reiser lab actively develops and disseminates new methods and instruments enabling increasingly precise quantification of animal behavior.

Seminar · Neuroscience · Recording

Neuronal variability and spatiotemporal dynamics in cortical network models

Chengcheng Huang
University of Pittsburgh
May 19, 2021

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.
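
A toy version of the "firing-rate model with spatial coupling" mentioned at the end of the abstract: a ring network with Gaussian excitation and broader Gaussian inhibition, integrated with Euler steps, followed by a participation-ratio estimate of how many dimensions the population activity occupies. Parameters and the transfer function are arbitrary illustrative choices, not the paper's model.

```python
import numpy as np

def ring_rate_network(n=200, sigma_e=0.05, sigma_i=0.15,
                      w_e=20.0, w_i=25.0, steps=2000, dt=0.1, seed=0):
    """Spatially coupled firing-rate network on a ring (toy sketch).

    Excitation and broader inhibition fall off with ring distance as
    Gaussians; rates follow tau*dr/dt = -r + tanh(W r + input).
    """
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    d = np.abs(x[:, None] - x[None, :])
    d = np.minimum(d, 1.0 - d)                   # wrap-around distance
    kernel = lambda s: np.exp(-d**2 / (2 * s**2)) / n
    W = w_e * kernel(sigma_e) - w_i * kernel(sigma_i)
    r = np.random.default_rng(seed).normal(0.0, 0.1, n)
    rates = np.empty((steps, n))
    for t in range(steps):
        r = r + dt * (-r + np.tanh(W @ r + 0.5))
        rates[t] = r
    return rates

# Dimensionality of the shared activity, via the participation ratio
rates = ring_rate_network()
lam = np.linalg.eigvalsh(np.cov(rates[500:], rowvar=False))
print(lam.sum() ** 2 / (lam ** 2).sum())
```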

Seminar · Neuroscience · Recording

Synapse-specific direction selectivity in retinal bipolar cell axon terminals

Keisuke Yonehara
Aarhus University
Nov 16, 2020

The ability to encode the direction of image motion is fundamental to our sense of vision. Direction selectivity along the four cardinal directions is thought to originate in direction-selective ganglion cells (DSGCs), due to directionally-tuned GABAergic suppression by starburst cells. Here, by utilizing two-photon glutamate imaging to measure synaptic release, we reveal that direction selectivity along all four directions arises earlier than expected, at bipolar cell outputs. Thus, DSGCs receive directionally-aligned glutamatergic inputs from bipolar cell boutons. We further show that this bouton-specific tuning relies on cholinergic excitation and GABAergic inhibition from starburst cells. In this way, starburst cells are able to refine directional tuning in the excitatory visual pathway by modulating the activity of DSGC dendrites and their axonal inputs using two different neurotransmitters.
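
Per-bouton directional tuning of this kind is usually quantified with a direction selectivity index; the vector-sum version below is a standard, generic formula and not necessarily the exact metric used in the study.

```python
import numpy as np

def direction_selectivity(responses_deg):
    """Vector-sum direction selectivity index (DSI) from mean responses
    keyed by motion direction in degrees, e.g. {0: r0, 90: r90, ...}.
    DSI = |sum_k r_k * exp(i*theta_k)| / sum_k r_k, and the preferred
    direction is the angle of the vector sum.
    """
    thetas = np.deg2rad(np.array(list(responses_deg.keys()), dtype=float))
    r = np.array(list(responses_deg.values()), dtype=float)
    vec = np.sum(r * np.exp(1j * thetas))
    dsi = np.abs(vec) / r.sum()
    pref = np.rad2deg(np.angle(vec)) % 360
    return dsi, pref

# Illustrative responses to the four cardinal directions (arbitrary units)
print(direction_selectivity({0: 2.0, 90: 8.0, 180: 1.5, 270: 2.5}))
```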

Seminar · Neuroscience

A new computational framework for understanding vision in our brain

Zhaoping Li
University of Tuebingen and Max Planck Institute
Jul 19, 2020

Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.
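
As a generic illustration of what a bottom-up saliency map is, the sketch below uses a plain center-surround (difference-of-Gaussians) contrast on image intensity. This is deliberately not Zhaoping Li's V1 saliency model, which derives saliency from contextual interactions among orientation-tuned V1 neurons; it only shows how a saliency map can nominate the next gaze target.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bottom_up_saliency(image, center_sigma=2, surround_sigma=10):
    """Toy bottom-up saliency map: absolute center-surround difference
    of local intensity (difference of Gaussians), normalized to [0, 1].
    A generic illustration, not the V1 saliency model from the talk."""
    center = gaussian_filter(image.astype(float), center_sigma)
    surround = gaussian_filter(image.astype(float), surround_sigma)
    sal = np.abs(center - surround)
    return sal / (sal.max() + 1e-12)

# The location of the maximum would be the target of the next gaze shift:
img = np.zeros((100, 100)); img[40:45, 60:65] = 1.0   # one conspicuous patch
print(np.unravel_index(bottom_up_saliency(img).argmax(), img.shape))
```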

Seminar · Neuroscience · Recording

Natural visual stimuli for mice

Thomas Euler
University of Tübingen
Jul 17, 2020

During the course of evolution, a species’ environment shapes its sensory abilities, as individuals with more optimized sensory abilities are more likely to survive and procreate. Adaptations to the statistics of the natural environment can be observed along the early visual pathway and across species. Therefore, characterising the properties of natural environments and studying the representation of natural scenes along the visual pathway is crucial for advancing our understanding of the structure and function of the visual system. In the past 20 years, mice have become an important model in vision research, but the fact that they live in a different environment than primates and have different visual needs is rarely considered. One particular challenge for characterising the mouse’s visual environment is that mice are dichromats with photoreceptors that detect UV light, which a typical camera does not record. This also has consequences for experimental visual stimulation, as the blue channel of computer screens fails to excite mouse UV cone photoreceptors. In my talk, I will describe our approach to recording “colour” footage of the habitat of mice – from the mouse’s perspective – and to studying retinal circuits in the ex vivo retina with natural movies.

Seminar · Neuroscience · Recording

Electrophysiology application for optic nerve and the central nervous system diseases

Dorota Pojda-Wilczek
Medical University of Silesia
May 25, 2020

Electrophysiology of the eye and visual pathway is a useful tool in ophthalmology and neurology. It comprises several examinations that help determine whether a defect of vision is peripheral or central. Visual evoked potentials (VEPs) are the most frequently used in neurology and neuro-ophthalmology. VEPs are evoked by flash or pattern stimulation, and combining the two gives more information about the visual pathway. It is very important to remember that VEPs originate in the retina and reflect its function as well. In many cases not only VEPs but also electroretinography (ERG) is essential for diagnosis. The seminar presents basic electrophysiological procedures used for the diagnosis and follow-up of optic neuropathies and of some central nervous system diseases that affect vision (mostly multiple sclerosis, CNS tumors, stroke, trauma, and intracranial hypertension).

Seminar · Neuroscience · Recording

Vision in dynamically changing environments

Marion Silies
Johannes Gutenberg-Universität Mainz, Germany
May 18, 2020

Many visual systems can process information in dynamically changing environments. In general, visual perception scales with changes in the visual stimulus, or contrast, irrespective of background illumination. This is achieved by adaptation. However, visual perception is challenged when adaptation is not fast enough to deal with sudden changes in overall illumination, for example when gaze follows a moving object from bright sunlight into a shaded area. We have recently shown that the fly visual system solves this problem by propagating a corrective luminance-sensitive signal to higher processing stages. Using in vivo two-photon imaging and behavioural analyses, we showed that distinct OFF-pathway inputs encode contrast and luminance. The luminance-sensitive pathway is particularly required for processing visual motion when the contextual light level is dim and pure contrast sensitivity underestimates the salience of a stimulus. Recent work in the lab has addressed the question of how two visual pathways obtain such fundamentally different sensitivities given common photoreceptor input. We are currently working out the network-based strategies by which luminance- and contrast-sensitive signals are combined to guide appropriate visual behaviour. Together, I will discuss the molecular, cellular, and circuit mechanisms that ensure contrast computation, and therefore robust vision, in fast-changing visual scenes.
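
To make the "pure contrast sensitivity underestimates salience" point concrete, here is a schematic numeric illustration with Weber contrast. The numbers are invented and this is not the fly circuit model; it only shows that a contrast-only code discards the absolute luminance change that the corrective luminance-sensitive pathway is proposed to supply.

```python
def weber_contrast(stimulus, background):
    """Weber contrast: (I - I_bg) / I_bg, in arbitrary luminance units."""
    return (stimulus - background) / background

# The same dark object followed from bright sunlight into shade:
print(weber_contrast(20.0, 100.0))   # -0.8 against a bright background
print(weber_contrast(2.0, 10.0))     # -0.8 again against a 10x dimmer one
# A purely contrast-sensitive pathway reports the same value in both
# cases, even though the absolute luminance signal is ten times weaker;
# that gap is what a separate luminance-sensitive input could correct for.
```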

ePoster · Neuroscience

Contrast-invariant orientation selectivity in a synthetic biology model of the early visual pathway

Julian Vogel, Jonas Franz, Manuel Schottdorf, Shy Shoham, Walter Stühmer, Fred Wolf

Bernstein Conference 2024

ePoster · Neuroscience

Changes in tuning curves, not neural population covariance, improve category separability in the primate ventral visual pathway

Jenelle Feather, Long Sha, Gouki Okazawa, Nga Yu Lo, SueYeon Chung, Roozbeh Kiani

COSYNE 2025

ePoster · Neuroscience

Development of an AAV-based model of tauopathy targeting retinal ganglion cells and the mouse visual pathway to study the role of microglia in Tau spreading

Pauline Léal, Charlotte Duwat, Gwennaelle Aurégan, Charlène Joséphine, Marie-Claude Gaillard, Caroline Jan, Anne-Sophie Hérard, Emmanuel Brouillet, Philippe Hantraye, Gilles Bonvento, Karine Cambon, Alexis Bemelmans

ePoster · Neuroscience

A visual pathway for vocal learning in a songbird species

Manon Rolland, Catherine Del Negro, Nicolas Giret

ePoster · Neuroscience

Contrast-invariant orientation selectivity in a synthetic biology model of the early visual pathway

Julian Vogel, Jonas Franz, Shy Shoham, Manuel Schottdorf, Fred Wolf

FENS Forum 2024

Visual pathway coverage

16 items: 11 seminars, 5 ePosters
