Visual Inputs
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains: are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation, due both to the plastic repurposing of visual cortex by audition and somatosensation during blindness, and to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, and behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still-viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Direction-selective ganglion cells in primate retina: a subcortical substrate for reflexive gaze stabilization?
To maintain a stable and clear image of the world, our eyes reflexively follow the direction in which a visual scene is moving. Such gaze-stabilization mechanisms reduce image blur as we move through the environment. In non-primate mammals, this behavior is initiated by ON-type direction-selective ganglion cells (ON-DSGCs), which detect the direction of image motion and transmit signals to brainstem nuclei that drive compensatory eye movements. However, ON-DSGCs have not yet been functionally identified in primates, raising the possibility that the visual inputs driving this behavior instead arise in the cortex. In this talk, I will present molecular, morphological, and functional evidence identifying an ON-DSGC in macaque retina. The presence of ON-DSGCs highlights the need to examine the contribution of subcortical retinal mechanisms to normal and aberrant gaze stabilization in the developing and mature visual system. More generally, our findings demonstrate the power of a multimodal approach to studying sparsely represented primate retinal ganglion cell (RGC) types.
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
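To make the sampling view concrete, here is a minimal sketch, assuming a toy linear-Gaussian generative model of image patches rather than the framework presented in the talk; the dictionary A, the noise level sigma, and all dimensions are illustrative choices. Under this reading, trial-to-trial variability in the "responses" is interpreted as samples from the posterior over latent features given the image.

```python
import numpy as np

# Toy sketch of the sampling hypothesis (illustrative assumptions only):
# image patches are generated as x = A @ y + noise, with prior y ~ N(0, I),
# and "neural responses" across repeated trials are samples from p(y | x).
rng = np.random.default_rng(0)

n_pix, n_feat = 16, 4
A = rng.normal(size=(n_pix, n_feat))     # assumed feature dictionary
sigma = 0.5                              # assumed pixel-noise std

def posterior(x):
    """Gaussian posterior p(y | x) for x = A @ y + N(0, sigma^2 I)."""
    prec = np.eye(n_feat) + A.T @ A / sigma**2
    cov = np.linalg.inv(prec)
    mean = cov @ A.T @ x / sigma**2
    return mean, cov

x = A @ rng.normal(size=n_feat) + sigma * rng.normal(size=n_pix)
mean, cov = posterior(x)

# Repeated "trials" = posterior samples: the across-trial mean plays the
# role of tuning, and shared covariability carries posterior uncertainty.
trials = rng.multivariate_normal(mean, cov, size=1000)
print(trials.mean(0))                    # ~ posterior mean
print(np.cov(trials.T))                  # ~ posterior covariance
```

The point of the toy is only the structural claim: in a sampling-based code, the full covariance of population activity, not just single-neuron response strength, carries the encoded probabilistic information.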
Keeping visual cortex in the back of your mind: From visual inputs to behavior and memory
Response of cortical networks to optogenetic stimulation: Experiment vs. theory
Optogenetics is a powerful tool that allows experimentalists to perturb neural circuits. What can we learn about a network from observing its response to perturbations? I will first describe the results of optogenetic activation of inhibitory neurons in mice cortex, and show that the results are consistent with inhibition stabilization. I will then move to experiments in which excitatory neurons are activated optogenetically, with or without visual inputs, in mice and monkeys. In some conditions, these experiments show a surprising result that the distribution of firing rates is not significantly changed by stimulation, even though firing rates of individual neurons are strongly modified. I will show in which conditions a network model of excitatory and inhibitory neurons can reproduce this feature.
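For readers unfamiliar with inhibition stabilization, the sketch below uses a generic two-population rate model, assumed here for illustration and not taken from the talk, to show the defining ISN signature: extra drive to inhibitory cells, mimicking optogenetic activation, paradoxically lowers their steady-state rate.

```python
import numpy as np

# Minimal sketch of an inhibition-stabilized network (ISN); all weights
# and inputs are illustrative assumptions, not the speaker's model.
# Dynamics: tau * dr/dt = -r + [W @ r + ext]_+, with populations (E, I).
def steady_rates(W, ext, tau=0.01, dt=0.001, T=1.0):
    r = np.zeros(2)                       # rates [r_E, r_I]
    for _ in range(int(T / dt)):
        r += (dt / tau) * (-r + np.maximum(W @ r + ext, 0.0))
    return r

# W_EE > 1: the excitatory subnetwork alone would be unstable, so the
# circuit is stabilized by feedback inhibition (the ISN regime).
W = np.array([[1.5, -1.2],
              [1.4, -0.8]])

baseline  = steady_rates(W, ext=np.array([1.0, 1.0]))
# Extra external input to I cells, mimicking optogenetic activation:
perturbed = steady_rates(W, ext=np.array([1.0, 1.2]))

# Paradoxical ISN signature: driving I cells *decreases* their
# steady-state rate (E rates drop as well).
print("baseline  (E, I):", baseline)     # ~ [0.77, 1.15]
print("perturbed (E, I):", perturbed)    # ~ [0.46, 1.03]
```

This paradoxical response to perturbing inhibition is the kind of signature that lets experiments like those described above constrain which network regimes are consistent with cortical data.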
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can hold in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, the items to which they are applied, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Visual working memory representations are distorted by their use in perceptual comparisons
Visual working memory (VWM) allows us to maintain a small amount of task-relevant information in mind so that we can use it to guide our behavior. Although past studies have successfully characterized its capacity limit and representational quality during maintenance, the consequences of its use for task-relevant behaviors have remained largely unknown. In this talk, I will demonstrate that VWM representations become distorted when they are used for perceptual comparisons with new visual inputs, especially when those inputs are subjectively similar to the VWM representations. Furthermore, I will show that this similarity-induced memory bias (SIMB) occurs for both simple (e.g., color, shape) and complex stimuli (e.g., real-world objects, faces) that are perceptually encoded and retrieved from long-term memory. Given the observed versatility of the SIMB, its implications for other memory-distortion phenomena (e.g., distractor-induced distortion, the misinformation effect) will be discussed.
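As a toy illustration (an assumed attraction model, not the authors' analysis), the sketch below formalizes one way the similarity-dependence could work: reports are pulled toward the comparison probe by an amount that peaks at intermediate memory-probe similarity and vanishes for very dissimilar probes.

```python
import numpy as np

# Toy attraction model for a similarity-induced memory bias (SIMB).
# Assumption (illustrative): the report drifts toward the comparison
# probe, with the pull scaled by a Gaussian in memory-probe distance,
# so the bias is largest for similar (but not identical) probes.
def circ_diff(a, b):
    """Signed angular difference a - b in degrees, wrapped to (-180, 180]."""
    return (a - b + 180.0) % 360.0 - 180.0

def biased_report(memory_deg, probe_deg, width=40.0, gain=0.3):
    d = circ_diff(probe_deg, memory_deg)           # probe's offset from memory
    pull = gain * d * np.exp(-(d / width) ** 2)    # attraction toward the probe
    return (memory_deg + pull) % 360.0

# Memory fixed at 90 deg; bias grows, peaks, then disappears with distance:
for probe in (100, 130, 200):
    print(probe, round(biased_report(90.0, probe), 2))
```

In this toy, a near probe (100 deg) shifts the report by a few degrees, a moderately similar probe (130 deg) shifts it most, and a dissimilar probe (200 deg) leaves the memory essentially intact.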
Networks for multi-sensory attention and working memory
Converging evidence from fMRI and EEG shows that auditory spatial attention engages the same fronto-parietal network associated with visuo-spatial attention. This network is distinct from an auditory-biased processing network that includes other frontal regions; this second network can be recruited when observers extract rhythmic information from visual inputs. We recently used a dual-task paradigm to examine whether this "division of labor" between a visuo-spatial network and an auditory-rhythmic network can be observed in a working memory paradigm. We varied the sensory modality (visual vs. auditory) and information domain (spatial vs. rhythmic) that observers had to store in working memory, while also performing an intervening task. Behavioral, pupillometry, and EEG results show a complex interaction across the working memory and intervening tasks, consistent with two cognitive control networks managing auditory and visual inputs based on the kind of information being processed.
Arousal modulates retinal output
Neural responses in the visual system are usually not purely visual but depend on behavioural and internal states such as arousal. This dependence is seen both in primary visual cortex (V1) and in subcortical brain structures receiving direct retinal input. In this talk, I will show that modulation by behavioural state arises as early as in the output of the retina. To measure retinal activity in the awake, intact brain, we imaged the synaptic boutons of retinal axons in the superficial superior colliculus (sSC) of mice. The activity of about half of the boutons depended not only on vision but also on running speed and pupil size, regardless of retinal illumination. Arousal typically reduced the boutons’ visual responses to their preferred direction and their selectivity for direction and orientation. Arousal may affect activity in retinal boutons by presynaptic neuromodulation. To test whether the effects of arousal occur already in the retina, we recorded from retinal axons in the optic tract. We found that, in darkness, more than one third of the recorded axons were significantly correlated with running speed. Arousal had similar effects postsynaptically, in sSC neurons, independent of activity in V1, the other main source of visual inputs to the colliculus. Optogenetic inactivation of V1 generally decreased activity in collicular neurons but did not diminish the effects of arousal. These results indicate that arousal modulates activity at every stage of the visual system. In the future, we will study the purpose and the underlying mechanisms of behavioural modulation in the early visual system.
Coregistration of heading and visual inputs in retrosplenial cortex
COSYNE 2023