Visual Environment
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed, irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts into visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanisms of long-term representational stability, focusing on the roles of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Perception during visual disruptions
Visual perception seems continuous despite frequent disruptions in our visual environment. For example, internal events, such as saccadic eye movements, and external events, such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of the early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine whether a shared set of mechanisms supports visual processing during visual disruptions in general.
What does time of day mean for vision?
Profound changes in the visual environment occur over the course of the day-night cycle. There is therefore strong pressure for cells and circuits within the visual system to adjust their function over time to match the prevailing visual environment. Here, I will discuss electrophysiological data collected from nocturnal and diurnal rodents that reveal how the visual code is ‘temporally optimised’ by (1) the retina’s circadian clock and (2) a change in behavioural temporal niche.
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
Visual Decisions in Natural Action
Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness against straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs, both on the time scale of the next step and over longer-range constraints.
Creating and controlling visual environments using BonVision
Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as an easy-to-use, open-source software package for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
Beyond visual search: studying visual attention with multitarget visual foraging tasks
Visual attention refers to a set of processes allowing selection of relevant and filtering out of irrelevant information in the visual environment. A large amount of research on visual attention has involved visual search paradigms, where observers are asked to report whether a single target is present or absent. However, recent studies have revealed that these classic single-target visual search tasks only provide a snapshot of how attention is allocated in the visual environment, and that multitarget visual foraging tasks may capture the dynamics of visual attention more accurately. In visual foraging, observers are asked to select multiple instances of multiple target types, as fast as they can. A critical question in foraging research concerns the factors driving the next target selection. Most likely, this requires two steps: (1) identifying a set of candidates for the next selection, and (2) selecting the best option among these candidates. After briefly describing the advantages of visual foraging over visual search, I will review recent visual foraging studies testing the influence of several manipulations (e.g., target crypticity, number of items, selection modality) on foraging behaviour. Overall, these studies revealed that the next target selection during visual foraging is determined by the competition between three factors: target value, target proximity, and priming of features. I will explain how the analysis of individual differences in foraging behaviour can provide important information, with the idea that individuals show default internal biases toward value, proximity, and priming that determine their search strategy and behaviour.
Natural visual stimuli for mice
During the course of evolution, a species’ environment shapes its sensory abilities, as individuals with more optimised sensory abilities are more likely to survive and procreate. Adaptations to the statistics of the natural environment can be observed along the early visual pathway and across species. Therefore, characterising the properties of natural environments and studying the representation of natural scenes along the visual pathway is crucial for advancing our understanding of the structure and function of the visual system. In the past 20 years, mice have become an important model in vision research, but the fact that they live in a different environment than primates and have different visual needs is rarely considered. One particular challenge for characterising the mouse’s visual environment is that mice are dichromats with photoreceptors that detect UV light, which the typical camera does not record. This also has consequences for experimental visual stimulation, as the blue channel of computer screens fails to excite mouse UV cone photoreceptors. In my talk, I will describe our approach to recording “colour” footage of the habitat of mice – from the mouse’s perspective – and to studying retinal circuits in the ex vivo retina with natural movies.
Neuronal discrimination of visual environments differentially depends on behavioural context in the hippocampus and neocortex
FENS Forum 2024