Neural Populations
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
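The different kinds of covariability can be illustrated with a small numerical sketch (ours, not the sliceTCA implementation; shapes and values are arbitrary): a "slice-type" component approximates a neurons × time × trials tensor as the outer product of a loading vector along one mode and a 2-D slice spanning the other two modes, and summing components of different slice types mixes subspace-like and sequence-like structure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 20, 50, 30             # neurons, time bins, trials (illustrative)

# Neuron-slicing component: per-neuron loadings times a (time x trial) slice,
# i.e. classic subspace-like covariability shared across the population.
w_neuron = rng.random(N)
slice_tk = rng.random((T, K))
comp_neuron = np.einsum('n,tk->ntk', w_neuron, slice_tk)

# Trial-slicing component: per-trial loadings times a (neuron x time) slice,
# e.g. a condition-specific neural sequence repeated across trials.
w_trial = rng.random(K)
slice_nt = rng.random((N, T))
comp_trial = np.einsum('k,nt->ntk', w_trial, slice_nt)

X = comp_neuron + comp_trial     # data tensor with mixed covariability
print(X.shape)                   # (20, 50, 30)
```

Demixing then amounts to recovering the individual components from `X`, which the full method does by optimization.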
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (features to inference) and one-to-many (feature to choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time
Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and is often accompanied by a perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. A few empirical findings demonstrate this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles represents a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies to show how concurrent changes of visual stimulus duration, speed, visual contrast and numerosity shape and modulate brain and pupil responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.
Flexible selection of task-relevant features through population gating
Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
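The gating mechanism described above can be caricatured in a few lines (a hypothetical illustration of ours, not the recorded data or the trained RNNs): two populations each prefer one stimulus, and a context signal multiplicatively opens one population while closing the other, so only the contextually relevant stimulus reaches the downstream readout despite fixed connectivity.

```python
def readout(stim_a, stim_b, context):
    """Return the gated population output; context is 'A' or 'B'."""
    gain_a = 1.0 if context == 'A' else 0.0   # population 1 open in context A
    gain_b = 1.0 if context == 'B' else 0.0   # population 2 open in context B
    pop_a = gain_a * stim_a                   # population tuned to stimulus A
    pop_b = gain_b * stim_b                   # population tuned to stimulus B
    return pop_a + pop_b                      # summed projection downstream

print(readout(stim_a=0.8, stim_b=0.3, context='A'))  # 0.8: A passes, B is gated
print(readout(stim_a=0.8, stim_b=0.3, context='B'))  # 0.3: B passes, A is gated
```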
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
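A minimal sketch of the underlying idea (our construction, not the speaker's framework): compare the geometry of trial-to-trial noise across two simulated "subjects" by measuring the alignment of their noise-covariance matrices, a feature of variation that mean-response methods such as RSA ignore.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 10, 500

def noise_cov(mixing):
    """Simulate trial-by-trial responses and estimate the noise covariance."""
    trials = rng.standard_normal((n_trials, n_neurons)) @ mixing
    trials -= trials.mean(axis=0)          # remove the mean ("signal") response
    return trials.T @ trials / n_trials

def cov_alignment(c1, c2):
    """Normalized Frobenius inner product (1 = identical noise geometry)."""
    return np.sum(c1 * c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))

shared = rng.standard_normal((n_neurons, n_neurons))
c_same = cov_alignment(noise_cov(shared), noise_cov(shared))
c_diff = cov_alignment(noise_cov(shared),
                       noise_cov(rng.standard_normal((n_neurons, n_neurons))))
print(c_same, c_diff)   # subjects with shared noise geometry align more
```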
A multi-level account of hippocampal function in concept learning from behavior to neurons
A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units so that they collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place and grid cell-like assemblies), with dynamics that parallel recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets.
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
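The capacity notion at the core of this theory can be illustrated numerically (a rough sketch of ours, using single points rather than manifolds, and a conservative least-squares separability check rather than the exact theory): below the classic perceptron capacity (P points in N dimensions with P < 2N), almost every random labeling is linearly separable; far above it, almost none is.

```python
import numpy as np

rng = np.random.default_rng(2)

def frac_separable(P, N, reps=50):
    """Fraction of random dichotomies our conservative check can separate."""
    hits = 0
    for _ in range(reps):
        X = rng.standard_normal((P, N))           # P random points in N dims
        y = rng.choice([-1.0, 1.0], size=P)       # random dichotomy
        w = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares weights
        hits += np.all(np.sign(X @ w) == y)       # all points correctly sided?
    return hits / reps

print(frac_separable(P=10, N=20))   # 1.0: below capacity, always separable
print(frac_separable(P=60, N=20))   # near 0: far above capacity
```

The manifold theory generalizes this point-capacity calculation to whole response manifolds, whose radius and dimension determine how the capacity shrinks.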
Structure, Function, and Learning in Distributed Neuronal Networks
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.
Does human perception rely on probabilistic message passing?
The idea that perception in humans relies on some form of probabilistic computations has become very popular over the past few decades. However, it has proven extremely difficult to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some form of probabilistic message passing (e.g., neural sampling or probabilistic population coding) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that the confidence reports in a hierarchical task reveal that uncertainty is represented both at lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples in agreement with the message passing hypothesis, and against standard accounts such as accumulation of sensory evidence or predictive coding.
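What "probabilistic message passing" means can be made concrete with a toy sum-product computation (our illustration, not the experiments'): two binary latent causes coupled in a chain, each with local evidence; the message from node 1 carries its evidence to node 2, so the belief at node 2 combines both sources, exactly as hierarchical inference requires.

```python
import numpy as np

coupling = np.array([[0.8, 0.2],    # P(x2 | x1): the two states tend to agree
                     [0.2, 0.8]])
like1 = np.array([0.9, 0.1])        # local evidence at node 1 (favors state 0)
like2 = np.array([0.5, 0.5])        # uninformative local evidence at node 2

msg_1_to_2 = coupling.T @ like1     # sum-product message passed along the chain
posterior2 = like2 * msg_1_to_2     # combine message with local evidence
posterior2 /= posterior2.sum()      # normalized marginal belief at node 2

print(posterior2)                   # [0.74, 0.26]: node 2 inherits node 1's evidence
```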
Neural Population Dynamics for Skilled Motor Control
The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.
The role of the primate prefrontal cortex in inferring the state of the world and predicting change
In an ever-changing environment, uncertainty is omnipresent. To deal with this, organisms have evolved mechanisms that allow them to take advantage of environmental regularities in order to make decisions robustly and adjust their behavior efficiently, thus maximizing their chances of survival. In this talk, I will present behavioral evidence that animals perform model-based state inference to predict environmental state changes and adjust their behavior rapidly, rather than slowly updating choice values. This model-based inference process can be described using Bayesian change-point models. Furthermore, I will show that neural populations in the prefrontal cortex accurately predict behavioral switches, and that the activity of these populations is associated with Bayesian estimates. In addition, we will see that learning leads to the emergence of a high-dimensional representational subspace that can be reused when the animals re-learn a previously learned set of action-value associations. Altogether, these findings highlight the role of the PFC in representing a belief about the current state of the world.
Neural circuits that support robust and flexible navigation in dynamic naturalistic environments
Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.
Suite2p: a multipurpose functional segmentation pipeline for cellular imaging
The combination of two-photon microscopy recordings and powerful calcium-dependent fluorescent sensors enables simultaneous recording of unprecedentedly large populations of neurons. While these sensors have matured over several generations of development, computational methods to process their fluorescence are often inefficient and the results hard to interpret. Here we introduce Suite2p: a fast, accurate, parameter-free and complete pipeline that registers raw movies, detects active and/or inactive cells (using Cellpose), extracts their calcium traces and infers their spike times. Suite2p runs faster than real time on standard workstations and outperforms state-of-the-art methods on newly developed ground-truth benchmarks for motion correction and cell detection.
Low-Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to thousands, and perhaps even tens of thousands, of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
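A minimal sketch of the neural-modes idea (numbers are illustrative): simulate a population whose activity is driven by a few shared latent signals plus private noise, then verify that a principal-components analysis recovers a low-dimensional manifold capturing most of the variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_time, n_modes = 100, 1000, 3

latents = rng.standard_normal((n_modes, n_time))       # latent dynamics
loading = rng.standard_normal((n_neurons, n_modes))    # neural modes
activity = loading @ latents + 0.1 * rng.standard_normal((n_neurons, n_time))

activity -= activity.mean(axis=1, keepdims=True)       # center each neuron
_, s, _ = np.linalg.svd(activity, full_matrices=False)
var_explained = (s**2) / np.sum(s**2)
print(var_explained[:n_modes].sum())   # close to 1: three modes dominate
```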
Neural dynamics underlying temporal inference
Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inevitably transformed as it passes through the nervous system. With such uncertainty pervading our nervous systems, we might expect that much of human and animal behavior relies on inference that incorporates an important additional source of information: prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference, with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem: how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery, leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
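The behavioral signature referred to above can be captured by a simple Gaussian Bayesian observer (our simplified version, with made-up parameter values): the posterior mean pulls noisy interval measurements toward the prior mean, producing the regression-to-the-mean bias reported in interval reproduction.

```python
prior_mean, prior_var = 0.8, 0.02   # prior over intervals (seconds), assumed
noise_var = 0.01                    # measurement (timing) noise, assumed

def bayes_estimate(measurement):
    """Posterior mean under a conjugate Gaussian prior and likelihood."""
    w = prior_var / (prior_var + noise_var)   # weight on the measurement
    return w * measurement + (1 - w) * prior_mean

print(bayes_estimate(1.0))   # pulled below 1.0, toward the 0.8 s prior mean
print(bayes_estimate(0.6))   # pulled above 0.6, toward the prior mean
```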
Hypothalamic control of internal states underlying social behaviors in mice
Social interactions such as mating and fighting are driven by internal emotional states. How can we study internal states of an animal when it cannot tell us its subjective feelings? Especially when the meaning of the animal’s behavior is not clear to us, can we understand the underlying internal states of the animal? In this talk, I will introduce our recent work in which we used male mounting behavior in mice as an example to understand the underlying internal state of the animals. In many animal species, males exhibit mounting behavior toward females as part of the mating behavior repertoire. Interestingly, males also frequently show mounting behavior toward other males of the same species. It is not clear whether the underlying motivation is reproductive in nature or something distinct. Through detailed analysis of video and audio recordings during social interactions, we found that while male-directed and female-directed mounting behaviors are motorically similar, they can be distinguished by both the presence of ultrasonic vocalization during female-directed mounting (reproductive mounting) and the display of aggression following male-directed mounting (aggressive mounting). Using optogenetics, we further identified genetically defined neural populations in the medial preoptic area (MPOA) that mediate reproductive mounting and the ventrolateral ventromedial hypothalamus (VMHvl) that mediate aggressive mounting. In vivo microendoscopic imaging in MPOA and VMHvl revealed distinct neural ensembles that mainly encode either a reproductive or an aggressive state during which male- or female-directed mounting occurs. Together, these findings demonstrate that internal states are represented in the hypothalamus and that motorically similar behaviors exhibited under different contexts may reflect distinct internal states.
Reading out responses of large neural population with minimal information loss
Classic studies show that in many species – from leech and cricket to primate – responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe how signals are communicated between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses, ideally without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
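For reference, the standard population vector is the construction being modified here (a textbook sketch of ours, not the talk's corrected readout): each neuron votes for its preferred direction, weighted by its firing rate. With ideal, noise-free cosine tuning the decoded angle matches the stimulus exactly; the abstract's point is that for realistic tuning and noise this readout is lossy.

```python
import numpy as np

n_neurons = 64
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def pop_vector_decode(stimulus_angle):
    """Decode a direction as the rate-weighted vector sum of preferences."""
    rates = np.cos(preferred - stimulus_angle)   # cosine tuning, no noise
    x = np.sum(rates * np.cos(preferred))        # vector sum of votes
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

print(pop_vector_decode(1.0))   # recovers 1.0 rad exactly in this ideal case
```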
Inferring brain-wide interactions using data-constrained recurrent neural network models
Behavior arises from the coordinated activity of numerous distinct brain regions. Modern experimental tools allow access to neural populations brain-wide, yet understanding such large-scale datasets necessitates scalable computational models to extract meaningful features of inter-region communication. In this talk, I will introduce Current-Based Decomposition (CURBD), an approach for inferring multi-region interactions using data-constrained recurrent neural network models. I will first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. I will then apply CURBD to understand the brain-wide flow of information leading to behavioral state transitions in larval zebrafish. These examples will establish CURBD as a flexible, scalable framework to infer brain-wide interactions that are inaccessible from experimental measurements alone.
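The current decomposition at the heart of this approach can be sketched directly (illustrative region names, sizes, and random weights of ours, not a trained model): given a fitted recurrent weight matrix J, region labels, and firing rates r, the total current into a target region splits exactly into per-source-region contributions J[target, source] @ r[source].

```python
import numpy as np

rng = np.random.default_rng(4)
sizes = {'regionA': 30, 'regionB': 20, 'regionC': 10}   # hypothetical regions
idx, start = {}, 0
for name, n in sizes.items():
    idx[name] = slice(start, start + n)
    start += n

N = start
J = rng.standard_normal((N, N)) / np.sqrt(N)   # stand-in for fitted weights
r = np.tanh(rng.standard_normal(N))            # population firing rates

target = 'regionA'
currents = {src: J[idx[target], idx[src]] @ r[idx[src]] for src in sizes}
total = J[idx[target]] @ r
print(np.allclose(sum(currents.values()), total))   # True: exact decomposition
```

The per-source currents are what get compared across regions and time to map inter-region communication.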
Cortical networks for flexible decisions during spatial navigation
My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the nervous system's building blocks: cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large-scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.
High precision coding in visual cortex
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
Residual population dynamics as a window into neural computation
Neural activity in frontal and motor cortices can be considered to be the manifestation of a dynamical system implemented by large neural populations in recurrently connected networks. The computations emerging from such population-level dynamics reflect the interaction between external inputs into a network and its internal, recurrent dynamics. Isolating these two contributions in experimentally recorded neural activity, however, is challenging, limiting the resulting insights into neural computations. I will present an approach to addressing this challenge based on response residuals, i.e. variability in the population trajectory across repetitions of the same task condition. A complete characterization of residual dynamics is well-suited to systematically compare computations across brain areas and tasks, and leads to quantitative predictions about the consequences of small, arbitrary causal perturbations.
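A toy version of the residual-dynamics idea (our construction, not the talk's estimator): subtract the condition-averaged trajectory to remove the input-locked component, then fit a linear map to consecutive residuals; the fitted map characterizes the internal, recurrent dynamics around the mean trajectory.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials, n_time, n_dim = 200, 50, 5
A_true = 0.9 * np.eye(n_dim)                   # ground-truth residual dynamics

x = np.zeros((n_trials, n_time, n_dim))
drive = rng.standard_normal((n_time, n_dim))   # shared condition-locked input
for t in range(1, n_time):
    noise = 0.1 * rng.standard_normal((n_trials, n_dim))
    x[:, t] = x[:, t - 1] @ A_true.T + drive[t] + noise

resid = x - x.mean(axis=0)                     # remove trial-averaged trajectory
X_prev = resid[:, :-1].reshape(-1, n_dim)
X_next = resid[:, 1:].reshape(-1, n_dim)
A_hat = np.linalg.lstsq(X_prev, X_next, rcond=None)[0].T   # fitted dynamics
print(np.round(np.diag(A_hat), 2))             # close to 0.9 on the diagonal
```

The eigenvalues of the fitted map are the kind of quantitative summary that can be compared across areas and used to predict the effect of small perturbations.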
A function approximation perspective on neural representations
Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.
Low-dimensional models and electrophysiological experiments to study neural dynamics in songbirds
Birdsong emerges when a set of highly interconnected brain areas manage to generate a complex output. The similarities between birdsong production and human speech have positioned songbirds as unique animal models for studying the learning and production of this complex motor skill. In this work, we developed a low-dimensional model for a neural network in which the variables were the average activities of different neural populations within the nuclei of the song system. This neural network is active during production, perception and learning of birdsong. We performed electrophysiological experiments to record neural activity from one of these nuclei and found that the low-dimensional model could reproduce the neural dynamics observed during the experiments. This model could also reproduce the respiratory motor patterns used to generate song. We showed that sparse activity in one of the neural nuclei could drive more complex activity downstream in the neural network. This interdisciplinary work shows how low-dimensional neural models can be a valuable tool for studying the emergence of complex motor tasks.
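A minimal average-activity ("rate") model in the spirit of such low-dimensional descriptions (toy parameters of ours, not the study's fitted model): two coupled populations, one excitatory and one inhibitory, integrated with explicit Euler steps; a few such variables can already generate the structured dynamics needed for complex motor output.

```python
import numpy as np

def f(u):
    """Sigmoidal population rate function, bounded in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-u))

def simulate(steps=5000, dt=0.001, tau=0.01):
    w_ee, w_ei, w_ie, drive = 12.0, 10.0, 10.0, 2.0   # couplings and input
    e, i = 0.1, 0.0                                    # initial activities
    trace = np.empty(steps)
    for t in range(steps):
        de = (-e + f(w_ee * e - w_ei * i + drive)) / tau   # excitatory pop.
        di = (-i + f(w_ie * e - 2.0)) / tau                # inhibitory pop.
        e, i = e + dt * de, i + dt * di
        trace[t] = e
    return trace

trace = simulate()
print(trace.min(), trace.max())   # activities stay bounded in [0, 1]
```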
Towards multipurpose biophysics-based mathematical models of cortical circuits
Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al., Neuron, 2019). Further, a large-scale, biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al., Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials, or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.
Detecting Covert Cognitive States from Neural Population Recordings in Prefrontal Cortex
The neural mechanisms underlying decision-making are typically examined by statistical analysis of large numbers of trials from sequentially recorded single neurons. Averaging across sequential recordings, however, obscures important aspects of decision-making such as variations in confidence and 'changes of mind' (CoM) that occur at variable times on different trials. I will show that a covert decision variable (DV) can be tracked dynamically on single behavioral trials via simultaneous recording of large neural populations in prefrontal cortex. Vacillations of the neural DV, in turn, identify candidate CoMs in monkeys, which closely match the known properties of human CoMs. Thus, simultaneous population recordings can provide insight into transient, internal cognitive states that are otherwise undetectable.
Mean-field models for finite-size populations of spiking neurons
Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike-timing effects at the microscopic level, such as spike synchronization, and are difficult to link to spiking data from experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics at the microscopic level and coarse-grained FR models at the population level can be bridged by mean-field theory, which is formally valid for infinitely many neurons. It remains challenging, however, to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (the mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated into the mesoscopic mean-field framework. Conversely, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.
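A heavily simplified caricature can convey the flavour of finite-size fluctuations at the mesoscopic scale. The sketch below is a plain rate model with Poisson spike-count noise; it omits refractoriness and the quasi-renewal machinery of the actual framework, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# One recurrently coupled population of N neurons: the input potential h
# follows leaky dynamics driven by the population activity, and the spike
# count per bin is Poisson with mean N * f(h) * dt (the finite-size noise).
N = 500        # number of neurons in the population
dt = 1e-3      # time step (s)
T = 2000       # number of steps
tau = 0.02     # effective time constant (s)
J = 0.02       # recurrent coupling strength (hypothetical units)
h_ext = 2.0    # external drive

def f(h):
    """Population transfer function (softplus), in Hz."""
    return 10.0 * np.log1p(np.exp(h))

h = 0.0
A = np.zeros(T)  # empirical population activity (Hz)
for t in range(T):
    n_spk = rng.poisson(N * f(h) * dt)   # finite-size spike count this bin
    A[t] = n_spk / (N * dt)
    h += (dt / tau) * (-h + h_ext + J * A[t])
```

In the limit N → ∞ the Poisson count concentrates around its mean and the model collapses onto a deterministic rate equation; for finite N, the activity fluctuates around that fixed point with variance scaling as 1/N.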
High precision coding in visual cortex
Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
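As a toy illustration of how a linear decoder can read stimulus orientation out of a large, noisy population (a generic simulation with hypothetical tuning parameters, not the recordings or analysis from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated population: orientation-tuned neurons with Poisson noise.
n_neurons, n_train, n_test = 200, 2000, 300
pref = rng.uniform(0, np.pi, n_neurons)      # preferred orientations (rad)

def responses(theta):
    # von Mises-style tuning with period pi, plus Poisson spiking noise
    rates = 5.0 * np.exp(2.0 * np.cos(2 * (theta[:, None] - pref[None, :])))
    return rng.poisson(rates)

theta_train = rng.uniform(0, np.pi, n_train)
R = responses(theta_train)
target = np.column_stack([np.cos(2 * theta_train), np.sin(2 * theta_train)])
W, *_ = np.linalg.lstsq(R, target, rcond=None)   # linear decoder

# Decode held-out trials and measure the angular error.
theta_test = rng.uniform(0, np.pi, n_test)
pred = responses(theta_test) @ W
theta_hat = (0.5 * np.arctan2(pred[:, 1], pred[:, 0])) % np.pi
err = np.abs(theta_hat - theta_test)
err = np.minimum(err, np.pi - err)               # wrap-around distance
median_deg = np.degrees(np.median(err))
```

Even with highly variable single neurons, pooling a few hundred of them drives the decoding error well below the variability of any one cell, which is the qualitative point of the abstract.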
Neural manifolds for the stable control of movement
Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant covariation patterns of population activity and define a task-specific, low-dimensional neural manifold. The time-dependent activation of the neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour must remain stable, and we use an alignment method to establish this stability. Once identified, stable latent dynamics allow various behavioural features to be predicted via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
Homeostatic gain modulation drives changes in heterogeneity expressed by neural populations
Bernstein Conference 2024
An interpretable dynamic population-rate equation for adapting non-linear spiking neural populations
COSYNE 2022
Adaptive coding efficiency through joint gain control in neural populations
COSYNE 2023
Dynamical mechanisms of flexible pattern generation in spinal neural populations
COSYNE 2023
The effective number of shared dimensions between neural populations
COSYNE 2023
NetFormer: An interpretable model for recovering dynamical connectivity in neural populations
COSYNE 2025