Primary Visual Cortex
Dr. Ján Antolík
The CSNG Lab at the Faculty of Mathematics and Physics at Charles University is seeking a highly motivated Postdoctoral Researcher to join our team to work on a digital twin model of the visual system. Funded by the JUNIOR Post-Doc Fund, this position offers an exciting opportunity to conduct cutting-edge research at the intersection of systems neuroscience, computational modeling, and AI. The project involves developing novel modular, multi-layer recurrent neural network (RNN) architectures that directly mirror the architecture of the primary visual cortex. Our models will establish a one-to-one mapping between individual neurons at different stages of the visual pathway and their artificial counterparts. They will explicitly incorporate functionally specific lateral recurrent interactions, excitatory and inhibitory neuronal classes, complex single-neuron transfer functions with adaptive mechanisms, synaptic depression, and other biologically grounded mechanisms. We will first train our new RNNs on synthetic data generated by a state-of-the-art biologically realistic recurrent spiking model of the primary visual cortex developed in our group. After establishing the proof-of-concept on the synthetic data, we will translate our models to publicly available mouse and macaque data, as well as additional data from our experimental collaborators.
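The kind of biologically constrained RNN described above can be illustrated with a minimal sketch: a rate-based recurrent network with separate excitatory and inhibitory units whose weight signs obey Dale's law. All sizes, time constants, and the Gaussian weight initialization below are illustrative assumptions, not details of the lab's actual models.

```python
import numpy as np

rng = np.random.default_rng(0)

class EIRateRNN:
    """Toy rate-based recurrent network with distinct excitatory and
    inhibitory populations (Dale's law enforced by a sign mask).
    Illustrative only; constants are not fitted to any data."""

    def __init__(self, n_exc=80, n_inh=20, tau=10.0, dt=1.0):
        n = n_exc + n_inh
        self.n_exc, self.tau, self.dt = n_exc, tau, dt
        # Random recurrent weights; columns of inhibitory units are negative.
        w = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
        sign = np.ones(n)
        sign[n_exc:] = -1.0
        self.w = np.abs(w) * sign[None, :]
        self.r = np.zeros(n)  # firing rates

    def step(self, inp):
        # Leaky integration of recurrent drive plus external
        # (feedforward) input, with a rectifying transfer function.
        drive = self.w @ self.r + inp
        self.r += (self.dt / self.tau) * (-self.r + np.maximum(drive, 0.0))
        return self.r
```

In a one-to-one digital-twin setting, each unit of such a network would be assigned to a recorded biological neuron and the weights fit to reproduce its responses; the sketch above shows only the architectural skeleton.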
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex
Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or read out from a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address this question, as they have early access to sensory information and perform minimal processing on it: if temporal information is found in these regions, it is likely generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task, progressing from naive to middle to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, these did not account for increased performance and were in fact correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
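The "intrinsic trajectory" idea above can be made concrete with a toy decoder: if a population traces a reproducible path through state space, elapsed time can be read out as the nearest point on a stored reference trajectory. This is only an illustration of the concept, not the authors' analysis; the trajectory here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_elapsed_time(template, state):
    """Decode elapsed time as the index of the closest point on a stored
    reference trajectory through population state space (a 'population
    clock'). Toy illustration only."""
    dists = np.linalg.norm(template - state[None, :], axis=1)
    return int(np.argmin(dists))

# A smooth, non-repeating synthetic trajectory:
# 100 time points x 50 neurons (random-walk drift).
template = np.cumsum(rng.normal(size=(100, 50)), axis=0)
```

Because consecutive points on such a trajectory are well separated relative to trial-to-trial noise, a noisy population state observed at time t decodes back to (approximately) index t.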
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of different neuron types in V1.
A specialized role for entorhinal attractor dynamics in combining path integration and landmarks during navigation
During navigation, animals estimate their position using path integration and landmarks. In a series of two studies, we used virtual reality and electrophysiology to dissect how these inputs combine to generate the brain’s spatial representations. In the first study (Campbell et al., 2018), we focused on the medial entorhinal cortex (MEC) and its set of navigationally-relevant cell types, including grid cells, border cells, and speed cells. We discovered that attractor dynamics could explain an array of initially puzzling MEC responses to virtual reality manipulations. This theoretical framework successfully predicted both MEC grid cell responses to additional virtual reality manipulations, as well as mouse behavior in a virtual path integration task. In the second study (Campbell*, Attinger* et al., 2021), we asked whether these principles generalize to other navigationally-relevant brain regions. We used Neuropixels probes to record thousands of neurons from MEC, primary visual cortex (V1), and retrosplenial cortex (RSC). In contrast to the prevailing view that “everything is everywhere all at once,” we identified a unique population of MEC neurons, overlapping with grid cells, that became active with striking spatial periodicity while head-fixed mice ran on a treadmill in darkness. These neurons exhibited unique cue-integration properties compared to other MEC, V1, or RSC neurons: they remapped more readily in response to conflicts between path integration and landmarks; they coded position prospectively as opposed to retrospectively; they upweighted path integration relative to landmarks in conditions of low visual contrast; and as a population, they exhibited a lower-dimensional activity structure. Based on these results, our current view is that MEC attractor dynamics play a privileged role in resolving conflicts between path integration and landmarks during navigation. 
Future work should include carefully designed causal manipulations to rigorously test this idea, and expand the theoretical framework to incorporate notions of uncertainty and optimality.
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. In contrast with those species, however, rodent neurons display a very different type of spatial organization. Instead of orientation maps, they are organized in a “salt and pepper” pattern, in which adjacent neurons have completely different preferred orientations. This structure has motivated both experimental and theoretical research aimed at determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the thalamic neurons that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but that their preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
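A connectivity rule of the kind described above (connection probability depending only on distance) is simple to sketch. The Gaussian falloff and its width `sigma` below are illustrative choices, not values from the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_ff_connections(thal_pos, ctx_pos, sigma=0.1):
    """Draw a random feed-forward connectivity matrix in which the
    probability of a thalamic->cortical connection depends only on the
    distance between the two cells (Gaussian falloff, illustrative).
    Positions are (n, 2) arrays of 2D coordinates."""
    # Pairwise distances, shape (n_ctx, n_thal).
    d = np.linalg.norm(ctx_pos[:, None, :] - thal_pos[None, :, :], axis=-1)
    p = np.exp(-d**2 / (2 * sigma**2))
    # Bernoulli draw per potential connection.
    return d, rng.random(d.shape) < p
```

Summing the receptive fields of a cortical cell's sampled thalamic inputs then yields the aggregate thalamic input whose statistical structure the talk analyzes.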
Restructuring cortical feedback circuits
We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The mechanistic basis for this fundamental perceptual phenomenon has, however, remained obscure. What enables neurons in the visual system to respond to context when the stimulus is not available? While feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have discovered that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving direct sensory input. These signals are likely used to predict input from the feedforward pathway. Preliminary results show that these feedback projections are strongly influenced by experience and learning.
Chandelier cells shine a light on the emergence of GABAergic circuits in the cortex
GABAergic interneurons are chiefly responsible for controlling the activity of local circuits in the cortex. Chandelier cells (ChCs) are a type of GABAergic interneuron that control the output of hundreds of neighbouring pyramidal cells through axo-axonic synapses which target the axon initial segment (AIS). Despite their importance in modulating circuit activity, the development and function of axo-axonic synapses remain poorly understood. We have investigated the emergence and plasticity of axo-axonic synapses in layer 2/3 of the somatosensory cortex (S1) and found that ChCs follow what appear to be homeostatic rules when forming synapses with pyramidal neurons. We are currently implementing in vivo techniques to image the process of axo-axonic synapse formation during development and uncover the dynamics of synaptogenesis and pruning at the AIS. In addition, we are using an all-optical approach to both activate and measure the activity of chandelier cells and their postsynaptic partners in the primary visual cortex (V1) and somatosensory cortex (S1) in mice, also during development. We aim to provide a structural and functional description of the emergence and plasticity of a GABAergic synapse type in the cortex.
Hierarchical transformation of visual event timing representations in the human brain: response dynamics in early visual cortex and timing-tuned responses in association cortices
Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. For example, this allows us to follow and interact with the precise timing of speech and sports. Here we investigate how visual event timing is represented and transformed across the brain’s hierarchy: from sensory processing areas, through multisensory integration areas, to frontal action planning areas. We hypothesized that the dynamics of neural responses to sensory events in sensory processing areas allows derivation of event timing representations. This would allow higher-level processes such as multisensory integration and action planning to use sensory timing information, without the need for specialized central pacemakers or processes. Using 7T fMRI and neural model-based analyses, we found responses that monotonically increase in amplitude with visual event duration and frequency, becoming increasingly clear from primary visual cortex to lateral occipital visual field maps. Beginning in area MT/V5, we found a gradual transition from monotonic to tuned responses, with response amplitudes peaking at different event timings in different recording sites. While monotonic response components were limited to the retinotopic location of the visual stimulus, timing-tuned response components were independent of the recording sites' preferred visual field positions. These tuned responses formed a network of topographically organized timing maps in superior parietal, postcentral and frontal areas. From anterior to posterior timing maps, multiple events were increasingly integrated, response selectivity narrowed, and responses focused increasingly on the middle of the presented timing range. 
These results suggest that responses to event timing are transformed from the human brain’s sensory areas to the association cortices, with the event’s temporal properties being increasingly abstracted from the response dynamics and locations of early sensory processing. The resulting abstracted representation of event timing is then propagated through areas implicated in multisensory integration and action planning.
On the contributions of retinal direction selectivity to cortical motion processing in mice
Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.
Re-vision: inspirations from the early attentional selection by the primary visual cortex
Feedback controls what we see
We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving direct sensory input.
Synthetic and natural images unlock the power of recurrency in primary visual cortex
During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we briefly present abstract images, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images lead to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses already at the earliest stages of cortical processing.
A transcriptomic axis predicts state modulation of cortical interneurons
Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses, suppressing cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states had less axon in layer 1, narrower spikes, lower input resistance and weaker adaptation, as determined in vitro, and expressed more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.
Probabilistic computation in natural vision
A central goal of vision science is to understand the principles underlying the perception and neural coding of the complex visual environment of our everyday experience. In the visual cortex, foundational work with artificial stimuli, and more recent work combining natural images and deep convolutional neural networks, have revealed much about the tuning of cortical neurons to specific image features. However, a major limitation of this existing work is its focus on single-neuron response strength to isolated images. First, during natural vision, the inputs to cortical neurons are not isolated but rather embedded in a rich spatial and temporal context. Second, the full structure of population activity—including the substantial trial-to-trial variability that is shared among neurons—determines encoded information and, ultimately, perception. In the first part of this talk, I will argue for a normative approach to study encoding of natural images in primary visual cortex (V1), which combines a detailed understanding of the sensory inputs with a theory of how those inputs should be represented. Specifically, we hypothesize that V1 response structure serves to approximate a probabilistic representation optimized to the statistics of natural visual inputs, and that contextual modulation is an integral aspect of achieving this goal. I will present a concrete computational framework that instantiates this hypothesis, and data recorded using multielectrode arrays in macaque V1 to test its predictions. In the second part, I will discuss how we are leveraging this framework to develop deep probabilistic algorithms for natural image and video segmentation.
Visual and cross-modal plasticity in adult humans
Neuroplasticity is a fundamental property of the nervous system that is maximal early in life, within a specific temporal window called the critical period. However, it is still unclear to what extent the plastic potential of the visual cortex is retained in adulthood. We have revealed residual ocular dominance plasticity in adult humans by showing that short-term monocular deprivation unexpectedly boosts the deprived eye (both at the perceptual and at the neural level), reflecting homeostatic plasticity. This effect is accompanied by a decrease of GABAergic inhibition in the primary visual cortex and can be modulated by non-visual factors (motor activity and motor plasticity). Finally, we have found that cross-modal plasticity is preserved in adult normal-sighted humans, as short-term monocular deprivation can alter early visuo-tactile interactions. Taken together, these results challenge the classical view of a hard-wired adult visual cortex, indicating that homeostatic plasticity can be reactivated in adult humans.
What does the primary visual cortex tell us about object recognition?
Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these representations are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how well their single neurons approximate those in macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on low-level visual representations. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1, followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
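The fixed "V1 layer" in a VOneNet-style hybrid is built from banks of Gabor filters, the classical model of a V1 simple-cell receptive field. The sketch below shows only this core ingredient; the filter parameters and the plain loop-based convolution are illustrative simplifications, not the published VOneNet implementation.

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, freq=0.2, sigma=2.0):
    """One Gabor filter: a Gaussian envelope times an oriented sinusoid,
    the canonical model of a V1 simple-cell receptive field.
    Parameter values are illustrative."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)  # coordinate along the grating
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def conv2d_valid(img, k):
    # Plain 'valid' cross-correlation; slow but dependency-free.
    kh, kw = k.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out
```

Applied to an oriented grating, the filter whose orientation matches the stimulus responds most strongly, which is the orientation selectivity such a fixed front-end hands to the trainable back-end.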
Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.
Role of primary visual cortex (V1) in visual awareness: insights from blindsight
- CANCELLED -
A recent formulation of predictive coding theory proposes that a subset of neurons in each cortical area encodes sensory prediction errors, the difference between predictions relayed from higher cortex and the sensory input. Here, we test for evidence of prediction error responses in spiking responses and local field potentials (LFP) recorded in primary visual cortex and area V4 of macaque monkeys, and in complementary electroencephalographic (EEG) scalp recordings in human participants. We presented a fixed sequence of visual stimuli on most trials, and violated the expected ordering on a small subset of trials. Under predictive coding theory, pattern-violating stimuli should trigger robust prediction errors, but we found that spiking, LFP and EEG responses to expected and pattern-violating stimuli were nearly identical. Our results challenge the assertion that a fundamental computational motif in sensory cortex is to signal prediction errors, at least those based on predictions derived from temporal patterns of visual stimulation.
Learning from unexpected events in the neocortical microcircuit
Predictive learning hypotheses posit that the neocortex learns a hierarchical model of the structure of features in the environment. Under these hypotheses, expected or predictable features are differentiated from unexpected ones by comparing bottom-up and top-down streams of data, with unexpected features then driving changes in the representation of incoming stimuli. This is supported by numerous studies in early sensory cortices showing that pyramidal neurons respond particularly strongly to unexpected stimulus events. However, it remains unknown how their responses govern subsequent changes in stimulus representations, and thus govern learning. Here, I present results from our study of layer 2/3 and layer 5 pyramidal neurons imaged in primary visual cortex of awake, behaving mice using two-photon calcium microscopy at both the somatic and distal apical planes. Our data reveal that individual neurons and distal apical dendrites show distinct, but predictable, changes in unexpected event responses when tracked over several days. Considering existing evidence that bottom-up information is primarily targeted to somata, with distal apical dendrites receiving the bulk of top-down inputs, our findings corroborate hypothesized complementary roles for these two neuronal compartments in hierarchical computing. Altogether, our work provides novel evidence that the neocortex indeed instantiates a predictive hierarchical model in which unexpected events drive learning.
Understanding the role of prediction in sensory encoding
At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding. Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.
Encoding local stimulus attributes and higher visual functions in V1 of behaving monkeys
In this lecture, I will present our recent progress on three aspects of population responses in the primary visual cortex: encoding of local stimulus attributes, electrical microstimulation, and higher visual function. In the first part I will focus on population encoding and reconstruction of contour shapes in V1 and the comparison between monkey and mouse visual responses. In the second part of the talk I will present the effects of microstimulation on neural populations in V1 and their relation to evoked saccades. In the final part of the talk I will discuss top-down influences in V1 and their relation to higher visual functions.
Visual processing of feedforward and feedback signals in mouse thalamus
Traditionally, the dorsal lateral geniculate nucleus (dLGN) of the thalamus has been considered a feedforward relay station for retinal signals to reach primary visual cortex. The local and long-range circuits of dLGN, however, suggest that this view is not correct. Indeed, besides the thalamo-cortical relay cells, dLGN contains local inhibitory interneurons, and receives not only feedforward input from the retina, but also massive direct and indirect feedback from primary visual cortex. Furthermore, it is one of the earliest processing stages in the visual system that integrates visual information with neuromodulatory signals.
A dynamical model of the visual cortex
In the past several years, I have been involved in building a biologically realistic model of the monkey visual cortex. Work on one of the input layers (4Ca) of the primary visual cortex (V1) is now nearly complete, and I would like to share some of what I have learned with the community. After a brief overview of the model and its capabilities, I would like to focus on three sets of results that represent three different aspects of the modeling. They are: (i) emergent E-I dynamics in local circuits; (ii) how visual cortical neurons acquire their ability to detect edges and directions of motion; and (iii) a view across the cortical surface: nonequilibrium steady states (in analogy with statistical mechanics) and beyond.
What does the primary visual cortex tell us about object recognition?
Understanding sensorimotor control at global and local scales
The brain is remarkably flexible, and appears to instantly reconfigure its processing depending on what’s needed to solve a task at hand: fMRI studies indicate that distal brain areas appear to fluidly couple and decouple with one another depending on behavioral context. But the structural architecture of the brain is composed of long-range axonal projections that are relatively fixed by adulthood. How does the global dynamism evident in fMRI recordings manifest at a cellular level? To bridge the gap between the activity of single neurons and cortex-wide networks, we correlated electrophysiological recordings of individual neurons in primary visual (V1) and retrosplenial (RSP) associational cortex with activity across dorsal cortex, recorded simultaneously using widefield calcium imaging. We found that individual neurons in both cortical areas independently engaged in different distributed cortical networks depending on the animal’s behavioral state, suggesting that locomotion puts cortex into a more sensory-driven mode relevant for navigation.
A Cortical Circuit for Audio-Visual Predictions
Teamwork makes sensory streams work: our senses work together, learn from each other, and stand in for one another, the result of which is perception and understanding. Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli (McGurk and MacDonald, 1976). During audio-visual associative learning, auditory cortex is thought to underlie multi-modal plasticity in visual cortex (McIntosh et al., 1998; Mishra et al., 2007; Zangenehpour and Zatorre, 2010). However, it is not well understood how processing in visual cortex is altered by an auditory stimulus that is predictive of a visual stimulus and what the mechanisms are that mediate such experience-dependent, audio-visual associations in sensory cortex. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices. We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to an experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically-matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be stored in long-range cortical connections and that with learning these cross-modal connections function to suppress the responses to predictable input.
Arousal modulates retinal output
Neural responses in the visual system are usually not purely visual but depend on behavioural and internal states such as arousal. This dependence is seen both in primary visual cortex (V1) and in subcortical brain structures receiving direct retinal input. In this talk, I will show that modulation by behavioural state arises as early as in the output of the retina. To measure retinal activity in the awake, intact brain, we imaged the synaptic boutons of retinal axons in the superficial superior colliculus (sSC) of mice. The activity of about half of the boutons depended not only on vision but also on running speed and pupil size, regardless of retinal illumination. Arousal typically reduced the boutons’ visual responses to their preferred direction and their selectivity for direction and orientation. Arousal may affect activity in retinal boutons by presynaptic neuromodulation. To test whether the effects of arousal occur already in the retina, we recorded from retinal axons in the optic tract. We found that, in darkness, more than one third of the recorded axons were significantly correlated with running speed. Arousal had similar effects postsynaptically, in sSC neurons, independent of activity in V1, the other main source of visual inputs to the colliculus. Optogenetic inactivation of V1 generally decreased activity in collicular neurons but did not diminish the effects of arousal. These results indicate that arousal modulates activity at every stage of the visual system. In the future, we will study the purpose and the underlying mechanisms of behavioural modulation in the early visual system.
High precision coding in visual cortex
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
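The population-decoding logic behind such threshold estimates can be sketched in miniature. The snippet below is an illustrative toy, not the study’s actual pipeline: the tuning model, Poisson noise, neuron count, and population-vector readout are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_response(theta_deg, n_neurons=200, gain=10.0):
    """Noisy responses of orientation-tuned neurons (circular Gaussian tuning)."""
    prefs = np.linspace(0.0, 180.0, n_neurons, endpoint=False)
    # orientation is circular with period 180 deg, hence the doubled angle
    tuning = np.exp(2.0 * np.cos(np.deg2rad(2.0 * (theta_deg - prefs))))
    rates = gain * tuning / tuning.max()
    return rng.poisson(rates).astype(float)

def decode_orientation(r, n_neurons=200):
    """Population-vector decoder operating on the doubled orientation angle."""
    prefs = np.deg2rad(2.0 * np.linspace(0.0, 180.0, n_neurons, endpoint=False))
    angle = np.arctan2(np.sum(r * np.sin(prefs)), np.sum(r * np.cos(prefs)))
    return np.rad2deg(angle) / 2.0 % 180.0

# Decode many trials at a fixed orientation; the spread of the estimates
# across trials sets the discrimination threshold of this toy population.
estimates = [decode_orientation(population_response(45.0)) for _ in range(500)]
threshold = np.std(estimates)
```

Increasing the neuron count or gain shrinks the spread of the estimates, which is the sense in which very large recorded populations can reach the sub-degree thresholds reported in the abstract.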
The emergence of contrast invariance in cortical circuits
Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations that are orthogonal to the preferred. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balance limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, or supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory to excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity.
Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in the excitatory to excitatory and in the excitatory to inhibitory neurons). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release the neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
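The notion of contrast invariance itself is easy to state computationally: contrast should rescale the gain of a tuning curve without changing its shape. The sketch below illustrates only that definition; the Gaussian tuning profile and the saturating contrast term are descriptive assumptions, not the balanced-network derivation discussed above.

```python
import numpy as np

def tuning_curve(theta_deg, contrast, pref=0.0, width=30.0):
    """Contrast-invariant tuning: contrast rescales the gain, not the shape.
    A Gaussian on the circular orientation axis stands in for the V1 curve."""
    d = (theta_deg - pref + 90.0) % 180.0 - 90.0   # wrap difference to [-90, 90)
    shape = np.exp(-0.5 * (d / width) ** 2)
    gain = contrast / (contrast + 0.1)             # saturating contrast response
    return gain * shape

thetas = np.linspace(-90.0, 90.0, 181)
low = tuning_curve(thetas, contrast=0.05)
high = tuning_curve(thetas, contrast=0.8)

# Contrast invariance: after peak-normalisation the two curves coincide.
assert np.allclose(low / low.max(), high / high.max())
```

The abstract’s point is that in balanced networks this multiplicative re-scaling is not automatic: whether the shape is preserved, narrowed, or broadened with contrast depends on the synaptic transfer function.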
Unique Molecular Regulation of Prefrontal Cortex Confers Vulnerability to Cognitive Disorders
The Arnsten lab studies molecular influences on the higher cognitive circuits of the dorsolateral prefrontal cortex (dlPFC), in order to understand mechanisms affecting working memory at the cellular and behavioral levels, with the overarching aim of identifying the actions that render the dlPFC so vulnerable in cognitive disorders. Her lab has shown that the dlPFC has unique neurotransmission and neuromodulation compared to the classic actions found in the primary visual cortex, including mechanisms to rapidly weaken PFC connections during uncontrollable stress. Reduced regulation of these stress pathways due to genetic or environmental insults contributes to dlPFC dysfunction in cognitive disorders, including calcium dysregulation and tau phosphorylation in the aging association cortex. Understanding these unique mechanisms has led to the development of a new treatment, Intuniv™, for a variety of PFC disorders.
Towards multipurpose biophysics-based mathematical models of cortical circuits
Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al, Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al, Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.
Autism-Associated Shank3 Is Essential for Homeostatic Compensation in Rodent Visual Cortex
Neocortical networks must generate and maintain stable activity patterns despite perturbations induced by learning and experience-dependent plasticity. There is abundant theoretical and experimental evidence that network stability is achieved through homeostatic plasticity mechanisms that adjust synaptic and neuronal properties to stabilize some measure of average activity, and this process has been extensively studied in primary visual cortex (V1), where chronic visual deprivation induces an initial drop in activity and ensemble average firing rates (FRs), but over time activity is restored to baseline despite continued deprivation. Here I discuss recent work from the lab in which we followed this firing rate homeostasis (FRH) in individual V1 neurons in freely behaving animals during a prolonged visual deprivation/eye-reopening paradigm. We find that - when FRs are perturbed by manipulating sensory experience - over time they return precisely to a cell-autonomous set-point. Finally, we find that homeostatic plasticity is perturbed in a mouse model of Autism spectrum disorder, and this results in a breakdown of FRH within V1. These data suggest that loss of homeostatic plasticity is one primary cause of excitation/inhibition imbalances in ASD models. Together these studies illuminate the role of stabilizing plasticity mechanisms in the ability of neocortical circuits to recover robust function following challenges to their excitability.
A new computational framework for understanding vision in our brain
Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.
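The saliency-map computation at the heart of this framework can be sketched with the max rule: the saliency of a location is the largest response among the feature-tuned V1 neurons covering it. The toy scene below is an assumed illustration (an orientation singleton among uniform bars, with hand-set response values), not data from the talk.

```python
import numpy as np

def saliency_map(responses):
    """Max rule for bottom-up saliency: at each location, take the maximum
    response over all feature channels (here, orientation channels)."""
    # responses: array of shape (n_features, height, width)
    return responses.max(axis=0)

# Toy scene: vertical bars everywhere except one horizontal bar at the centre,
# encoded as the responses of two orientation channels. Iso-orientation
# surround suppression weakens the uniform background but spares the singleton.
vertical = np.full((5, 5), 1.0)
horizontal = np.zeros((5, 5))
vertical[2, 2], horizontal[2, 2] = 0.0, 3.0
sal = saliency_map(np.stack([vertical, horizontal]))
```

Under the max rule the singleton location carries the highest saliency, making it the predicted target of the next gaze shift; the selection stage then forwards only that location's content for decoding.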
Circuit dysfunction and sensory processing in Fragile X Syndrome
To uncover the circuit-level alterations that underlie atypical sensory processing associated with autism, we have adopted a symptom-to-circuit approach in the Fmr1-/- mouse model of Fragile X syndrome (FXS). Using a go/no-go task and in vivo 2-photon calcium imaging, we find that impaired visual discrimination in Fmr1-/- mice correlates with marked deficits in orientation tuning of principal neurons in primary visual cortex, and a decrease in the activity of parvalbumin (PV) interneurons. Restoring visually evoked activity in PV cells in Fmr1-/- mice with a chemogenetic (DREADD) strategy was sufficient to rescue their behavioural performance. Strikingly, human subjects with FXS exhibit similar impairments in visual discrimination as Fmr1-/- mice. These results suggest that manipulating inhibition may help sensory processing in FXS. More recently, we find that the ability of Fmr1-/- mice to perform the visual discrimination task is also drastically impaired in the presence of visual or auditory distractors, suggesting that sensory hypersensitivity may affect perceptual learning in autism.
High precision coding in visual cortex
Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
Playing the piano with the cortex: role of neuronal ensembles and pattern completion in perception
The design of neural circuits, with large numbers of neurons interconnected in vast networks, strongly suggests that they are specifically built to generate emergent functional properties (1). To explore this hypothesis, we have developed two-photon holographic methods to selectively image and manipulate the activity of neuronal populations in 3D in vivo (2). Using them we find that groups of synchronous neurons (neuronal ensembles) dominate the evoked and spontaneous activity of mouse primary visual cortex (3). Ensembles can be optogenetically imprinted for several days and some of their neurons trigger the entire ensemble (4). By activating these pattern completion cells in ensembles involved in visual discrimination paradigms, we can bi-directionally alter behavioural choices (5). Our results demonstrate that ensembles are necessary and sufficient for visual perception and are consistent with the possibility that neuronal ensembles are the functional building blocks of cortical circuits. 1. R. Yuste, From the neuron doctrine to neural networks. Nat Rev Neurosci 16, 487-497 (2015). 2. L. Carrillo-Reid, W. Yang, J. E. Kang Miller, D. S. Peterka, R. Yuste, Imaging and Optically Manipulating Neuronal Ensembles. Annu Rev Biophys 46, 271-293 (2017). 3. J. E. Miller, I. Ayzenshtat, L. Carrillo-Reid, R. Yuste, Visual stimuli recruit intrinsically generated cortical ensembles. Proceedings of the National Academy of Sciences of the United States of America 111, E4053-4061 (2014). 4. L. Carrillo-Reid, W. Yang, Y. Bando, D. S. Peterka, R. Yuste, Imprinting and recalling cortical ensembles. Science 353, 691-694 (2016). 5. L. Carrillo-Reid, S. Han, W. Yang, A. Akrouh, R. Yuste, Controlling visually-guided behaviour by holographic recalling of cortical ensembles. Cell 178, 447-457 (2019). DOI: https://doi.org/10.1016/j.cell.2019.05.045.
Intracortical microstimulation in a spiking neural network model of the primary visual cortex
Bernstein Conference 2024
Joint coding of stimulus and behavior by flexible adjustments of sensory tuning in primary visual cortex
Bernstein Conference 2024
Environmental Statistics of Temporally Ordered Stimuli Modify Activity in the Primary Visual Cortex
COSYNE 2022
Feedforward thalamocortical inputs to primary visual cortex are OFF dominant
COSYNE 2022
Inception loops reveal novel spatially-localized phase invariance in mouse primary visual cortex
COSYNE 2022
Relating Divisive Normalization to Modulation of Correlated Variability in Primary Visual Cortex
COSYNE 2022
Representation of sensory uncertainty by neuronal populations in macaque primary visual cortex
COSYNE 2022
The logic of recurrent circuits in the primary visual cortex
COSYNE 2023
Resilience to sensory uncertainty in the primary visual cortex
COSYNE 2023
Towards understanding the microcircuit in monkey primary visual cortex in-vivo
COSYNE 2023
Visuomotor integration gives rise to three-dimensional receptive fields in the primary visual cortex
COSYNE 2023
Prefrontal cortex subregions provide distinct visual and behavioral feedback modulation to the primary visual cortex
COSYNE 2025
Deficient ocular dominance plasticity in primary visual cortex of orexin knockout mice
FENS Forum 2024
Driving effect of distal surround stimuli on primary visual cortex firing rates
FENS Forum 2024
Dual encoding of motion cues in feline primary visual cortex
FENS Forum 2024
Dynamic fading memory and expectancy effects in the monkey primary visual cortex
FENS Forum 2024
Instability of orientation coding in mouse primary visual cortex during a visual oddball task
FENS Forum 2024
Intrinsic electrophysiological features of neurons in the lateral prefrontal and primary visual cortex of the common marmoset
FENS Forum 2024
Orexin knockout mice have compromised orientation discrimination and display reduced AMPAR-mediated excitation in L4-2/3 connections in the primary visual cortex
FENS Forum 2024