Neural Manifold
Sensory cognition
This webinar featured presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream, and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel the neural computations that bridge stimuli and cognitive functions.
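To make the connectome-constrained modelling concrete, below is a minimal sketch of the general approach: a sparse weight matrix standing in for an electron-microscopy wiring diagram, combined with a handful of per-cell-type dynamical parameters. All sizes, parameters, and names are illustrative, not those of the actual fly optic-lobe model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the actual fly optic-lobe model):
n_neurons = 200          # neurons across a few cell types
n_types = 4              # stand-in for the 64 cell types in the talk
type_of = rng.integers(0, n_types, n_neurons)

# Sparse "connectome" weight matrix: connection strengths as if
# read off electron-microscopy reconstructions.
W = np.where(rng.random((n_neurons, n_neurons)) < 0.05,
             rng.normal(0.0, 1.0, (n_neurons, n_neurons)), 0.0)

# Simple per-cell-type dynamical parameters (time constant, bias).
tau = rng.uniform(5.0, 20.0, n_types)[type_of]    # ms
bias = rng.normal(0.0, 0.1, n_types)[type_of]

def simulate(stimulus, dt=1.0, T=300):
    """Leaky rate dynamics: tau * dv/dt = -v + W @ relu(v) + input."""
    v = np.zeros(n_neurons)
    trace = np.empty((T, n_neurons))
    for t in range(T):
        v += dt / tau * (-v + W @ np.maximum(v, 0.0) + stimulus(t) + bias)
        trace[t] = v
    return trace

# Oscillating input to a random "photoreceptor" subset.
photoreceptors = rng.choice(n_neurons, 20, replace=False)
def stim(t):
    s = np.zeros(n_neurons)
    s[photoreceptors] = np.sin(0.05 * t)
    return s

responses = simulate(stim)
print(responses.shape)   # (300, 200): time x neurons
```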
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine a light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects the geometric structures arising from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to the parietal cortex to the hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single-neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
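The analytic theory itself is not reproduced here, but the quantity it predicts can be probed numerically: generate random object manifolds, assign binary labels, and test whether a linear classifier separates them as the manifold radius grows. The sketch below is a toy illustration under those assumptions; all sizes and names are made up.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def separable_fraction(n_dim=100, n_manifolds=80, pts_per_manifold=20,
                       radius=0.5, n_trials=10):
    """Fraction of random labelings in which a linear classifier
    perfectly separates positive from negative object manifolds."""
    successes = 0
    for _ in range(n_trials):
        centers = rng.normal(size=(n_manifolds, n_dim))
        centers /= np.linalg.norm(centers, axis=1, keepdims=True)
        # Each manifold: a center plus isotropic "nuisance" scatter
        # whose typical norm is set by `radius`.
        X = np.repeat(centers, pts_per_manifold, axis=0)
        X += radius * rng.normal(size=X.shape) / np.sqrt(n_dim)
        y = np.repeat(rng.integers(0, 2, n_manifolds), pts_per_manifold)
        clf = LinearSVC(C=1e6, max_iter=20000).fit(X, y)
        successes += clf.score(X, y) == 1.0
    return successes / n_trials

# Larger manifold radius -> harder to separate at a fixed load P/N.
for r in (0.1, 0.5, 1.0, 2.0):
    print(r, separable_fraction(radius=r))
```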
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory, and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared their performance and generalization capabilities with those of networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increase the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics with a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism whereby tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
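As a minimal illustration of the modelling setup, the numpy sketch below implements a rank-one recurrent network whose latent dynamics are steered by a tonic contextual input. It is a generic low-rank RNN with assumed parameters, not the trained networks from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, tau = 300, 0.1, 1.0

# Rank-one connectivity J = m n^T / N confines the recurrent dynamics
# to a low-dimensional subspace spanned by m and the input directions.
m = rng.normal(size=N)
n = m + rng.normal(size=N)     # overlap with m gives latent self-feedback
I_ctx = rng.normal(size=N)     # direction of the tonic contextual input

def run(context_gain, T=500):
    """Integrate tau dx/dt = -x + (m n^T / N) tanh(x) + c * I_ctx."""
    x = np.zeros(N)
    kappa = np.empty(T)        # latent variable: projection of x on m
    for t in range(T):
        phi = np.tanh(x)
        x += dt / tau * (-x + m * (n @ phi) / N + context_gain * I_ctx)
        kappa[t] = (m @ x) / N
    return kappa

# Varying the tonic input reshapes the flow along the latent direction
# without leaving the low-dimensional subspace: a crude analogue of the
# parametric control discussed in the abstract.
for c in (0.0, 0.5, 1.0):
    print(c, run(c)[-1])
```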
Structure, Function, and Learning in Distributed Neuronal Networks
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine a light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.
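The abstract does not specify the plasticity rule, but one generic member of the biologically plausible family it alludes to is a reward-modulated (three-factor) Hebbian update, sketched below with hypothetical names and sizes purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 50, 10
W = 0.01 * rng.normal(size=(n_out, n_in))

def three_factor_update(W, pre, post, reward, lr=1e-3):
    """Generic reward-modulated Hebbian rule:
    dW = lr * reward * post pre^T. A stand-in example; the talk's
    actual plasticity rules are not specified in the abstract."""
    return W + lr * reward * np.outer(post, pre)

pre = rng.random(n_in)           # presynaptic activity
post = np.tanh(W @ pre)          # postsynaptic activity
W = three_factor_update(W, pre, post, reward=1.0)
print(W.shape)
```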
Advancing Brain-Computer Interfaces by adopting a neural population approach
Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons recorded with implanted electrodes into control signals to drive these devices. Despite these impressive advances, the field faces several challenges that must be overcome before BCIs can be widely used in daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view of how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.
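As a concrete, if simplified, picture of the decoding step such BCIs perform, the sketch below fits a ridge-regression map from synthetic population firing rates to 2-D cursor velocity. It is a stand-in for illustration, not the decoders used in the studies discussed.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_neurons = 2000, 100   # time bins x recorded units (illustrative)

# Synthetic stand-in data: binned firing rates and 2-D cursor velocity.
rates = rng.poisson(5.0, size=(T, n_neurons)).astype(float)
true_map = rng.normal(size=(n_neurons, 2))
velocity = rates @ true_map / n_neurons + 0.1 * rng.normal(size=(T, 2))

# Ridge-regression decoder: velocity ~ rates @ B.
lam = 1.0
B = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                    rates.T @ velocity)
decoded = rates @ B
print(np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1])
```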
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens up to tens of thousands of neurons has allowed us to analyze the computational role of population activity, as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior consistently requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features using models whose parameters remain fixed over long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
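Below is a minimal sketch of how neural modes and latent dynamics are typically extracted, here via principal component analysis on synthetic population activity; the sizes and the use of plain PCA are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_neurons, n_modes = 1000, 120, 8   # illustrative sizes

# Synthetic population activity driven by a few latent signals.
latents = np.cumsum(rng.normal(size=(T, n_modes)), axis=0)
loading = rng.normal(size=(n_modes, n_neurons))
activity = latents @ loading + rng.normal(size=(T, n_neurons))

# Neural modes = leading principal components of the activity.
X = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt[:n_modes]                  # population-wide covariation patterns
latent_dynamics = X @ modes.T         # their time-dependent activation

var_explained = (S[:n_modes] ** 2).sum() / (S ** 2).sum()
print(f"{var_explained:.2f} of variance captured by {n_modes} modes")
```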
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens up to tens of thousands of neurons has allowed us to analyze the computational role of population activity, as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
Leveraging neural manifolds to advance brain-computer interfaces
Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons recorded with implanted electrodes into control signals to drive these devices. Despite these impressive advances, the field faces several challenges that must be overcome before BCIs can be widely used in daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view of how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.
Neural manifolds for the stable control of movement
Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant covariation patterns of population activity and define a task-specific, low-dimensional neural manifold. The time-dependent activation of the neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour need to remain stable, and use an alignment method to establish this stability. Once identified, stable latent dynamics allow for the prediction of various behavioural features via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
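The abstract does not name the alignment method; one standard choice for comparing latent dynamics across recording sessions is canonical correlation analysis, sketched below on synthetic data in which two "days" share the same underlying signals seen through different bases.

```python
import numpy as np

rng = np.random.default_rng(6)
T, d = 800, 10   # time points x latent dimensions (illustrative)

# Latent dynamics from two "days": the same underlying signals,
# observed through different orthonormal bases, plus noise.
source = np.cumsum(rng.normal(size=(T, d)), axis=0)
R1, _ = np.linalg.qr(rng.normal(size=(d, d)))
R2, _ = np.linalg.qr(rng.normal(size=(d, d)))
day1 = source @ R1 + 0.1 * rng.normal(size=(T, d))
day2 = source @ R2 + 0.1 * rng.normal(size=(T, d))

def cca_correlations(X, Y):
    """Canonical correlations between two sets of latent trajectories:
    singular values of Qx^T Qy after orthonormalizing each set."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)

print(cca_correlations(day1, day2))   # values near 1: stable dynamics
```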
Recurrent network models of adaptive and maladaptive learning
During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort-expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
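A simplified sketch of the input-current analysis described above: given hypothetical raphe-to-habenula weights from a trained RNN and raphe activity over time, compute the current each Hb neuron receives and extract a low-dimensional communication subspace from those currents. Region sizes and variable names are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_hb, n_raphe, T = 60, 40, 500   # illustrative region sizes

# Stand-ins for quantities read out of a trained multi-region RNN:
# inferred raphe->Hb weights and raphe firing rates over time.
W_raphe_to_hb = rng.normal(0.0, 0.1, size=(n_hb, n_raphe))
raphe_rates = np.abs(np.cumsum(rng.normal(size=(T, n_raphe)), axis=0)) * 0.01

# Input current each Hb neuron receives from raphe at every time step.
currents = raphe_rates @ W_raphe_to_hb.T           # (T, n_hb)

# "Communication subspace": leading principal components of the currents.
C = currents - currents.mean(axis=0)
U, S, Vt = np.linalg.svd(C, full_matrices=False)
subspace = Vt[:3]                                  # top 3 input-current modes
hb_projection = C @ subspace.T                     # Hb activity along them
print(hb_projection.shape)
```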
Assessing Neural Manifold Properties With Adapted Normalizing Flows
Bernstein Conference 2024
Dynamic control of neural manifolds
Bernstein Conference 2024
Linking Neural Manifolds to Circuit Structure in Recurrent Networks
Bernstein Conference 2024
Neural manifold discovery via dynamical systems
Bernstein Conference 2024
How coding constraints affect the shape of neural manifolds
COSYNE 2022
Neural Manifolds Underlying Naturalistic Human Movements in Electrocorticography
COSYNE 2023
A probabilistic framework for task-aligned intra- and inter-area neural manifold estimation
COSYNE 2023
Reliable neural manifold decoding using low-distortion alignment of tangent spaces
COSYNE 2023
Spatiotemporally Localized Norepinephrine Modulates the Prefrontal Neural Manifold
COSYNE 2023
Embedding dimension of neural manifolds and the structure of mixed selectivity
COSYNE 2025
Neural manifold discovery via dynamical systems
COSYNE 2025