Neural Population

Discover seminars, jobs, and research tagged with neural population across World Wide.
82 curated items · 55 Seminars · 27 ePosters
Updated 10 months ago
Seminar · Neuroscience

Dimensionality reduction beyond neural subspaces

Alex Cayco Gajic
École Normale Supérieure
Jan 28, 2025

Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together, this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
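sliceTCA itself is defined in the accompanying paper; as a rough, hypothetical illustration of the underlying idea, that different slicings of a trials × neurons × time tensor expose different low-rank covariability, one can compare the effective rank of each tensor unfolding with a plain SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 30, 40, 50  # trials, neurons, time bins

# Trial-type covariability: every trial expresses one shared
# neurons-x-time pattern, scaled by a per-trial coefficient.
pattern = rng.standard_normal((N, T))
coef = rng.uniform(0.5, 1.5, size=K)
X = coef[:, None, None] * pattern[None, :, :]
X += 0.01 * rng.standard_normal(X.shape)  # small observation noise

def slice_rank(X, mode, tol=1e-2):
    """Effective rank of the tensor unfolded along the given mode."""
    Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    s = np.linalg.svd(Xm, compute_uv=False)
    return int((s / s[0] > tol).sum())

# Rank 1 in the trial unfolding, but high-rank in the other two:
# a single fixed low-dimensional neural subspace would miss this structure.
ranks = [slice_rank(X, m) for m in range(3)]
```

The same diagnostic run on data with neuron- or time-type covariability would show the low-rank structure appearing in a different unfolding, which is the kind of mixed covariability the talk addresses.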

Seminar · Neuroscience

Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging

Alan Jasanoff
Massachusetts Institute of Technology
Jan 27, 2025

Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.

Seminar · Neuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 11, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep learning-based dynamical systems models.
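The method named here is a deep sequential autoencoder; as a deliberately simplified, hypothetical stand-in for the general program (infer latent dynamics from population spiking), one can simulate rotational latent dynamics observed through Poisson spiking, then recover the latents by smoothing plus SVD and refit the dynamics matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, D = 400, 60, 2

# Ground-truth latent state follows slow rotational dynamics.
theta = 0.05
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z = np.zeros((T, D))
z[0] = [1.0, 0.0]
for t in range(T - 1):
    z[t + 1] = A @ z[t]

# Observe Poisson spike counts through a random exponential readout.
C = rng.standard_normal((N, D))
rates = np.exp(C @ z.T + 1.0)              # N x T firing rates
spikes = rng.poisson(rates * 0.1)

# "Inference" (linear stand-in for the autoencoder): smooth the spikes,
# extract two latents with an SVD, refit linear dynamics by least squares.
kernel = np.hanning(21)
kernel /= kernel.sum()
smoothed = np.array([np.convolve(row, kernel, mode="same")
                     for row in spikes.astype(float)])
smoothed -= smoothed.mean(axis=1, keepdims=True)
_, _, Vt = np.linalg.svd(smoothed, full_matrices=False)
z_hat = Vt[:D].T                           # T x D estimated latent trajectory
A_hat, *_ = np.linalg.lstsq(z_hat[:-1], z_hat[1:], rcond=None)
eigs = np.linalg.eigvals(A_hat)            # complex pair => rotation recovered
```

The complex-conjugate eigenvalues of the refitted dynamics matrix reveal the underlying rotation even though only spike counts were observed, which is the single-trial inference problem the talk tackles with far more powerful nonlinear machinery.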

Seminar · Neuroscience

Epileptic micronetworks and their clinical relevance

Michael Wenzel
Bonn University
Mar 12, 2024

A core aspect of clinical epileptology revolves around relating epileptic field potentials to underlying neural sources (e.g. an “epileptogenic focus”). Yet how neural population activity relates to epileptic field potentials, and ultimately to clinical phenomenology, remains far from understood. After a brief overview of this topic, this seminar will focus on unpublished work, with an emphasis on seizure-related focal spreading depression. The presented results will include hippocampal and neocortical chronic in vivo two-photon population imaging and local field potential recordings of epileptic micronetworks in mice, in the context of viral encephalitis or optogenetic stimulation. The findings are corroborated by invasive depth electrode recordings (macroelectrodes and BF microwires) in epilepsy patients during pre-surgical evaluation. The presented work carries general implications for clinical epileptology and basic epilepsy research.

Seminar · Neuroscience

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Mehdi Azabou
Feb 21, 2024

Lead author Mehdi Azabou will present his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046. Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Seminar · Neuroscience

Visual mechanisms for flexible behavior

Marlene Cohen
University of Chicago
Jan 25, 2024

Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.

Seminar · Neuroscience · Recording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 7, 2023

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value, beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
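The model inversion mentioned above relies on Hamiltonian Monte Carlo. A generic, self-contained HMC sketch on a toy two-dimensional Gaussian posterior (standing in for the actual epileptogenic-zone posterior, which is far more complex) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy negative log-posterior: a standard 2-D Gaussian.
def neg_log_post(q):
    return 0.5 * q @ q

def grad(q):
    return q

def hmc_step(q, eps=0.1, L=20):
    """One HMC transition: leapfrog integration plus Metropolis correction."""
    p = rng.standard_normal(q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * eps * grad(q_new)       # leapfrog half-step in momentum
    for _ in range(L - 1):
        q_new += eps * p_new
        p_new -= eps * grad(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad(q_new)
    # Accept/reject on the change in total energy keeps the exact posterior.
    dH = (neg_log_post(q_new) + 0.5 * p_new @ p_new) \
       - (neg_log_post(q) + 0.5 * p @ p)
    return (q_new, True) if np.log(rng.uniform()) < -dH else (q, False)

q = np.zeros(2)
samples, accepts = [], 0
for _ in range(2000):
    q, ok = hmc_step(q)
    accepts += ok
    samples.append(q)
samples = np.array(samples)
```

Because the gradient guides the proposals, acceptance stays high even for long trajectories, which is what makes HMC practical for high-dimensional posteriors like the EZ estimation problem.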

Seminar · Neuroscience · Recording

A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time

Domenica Bueti
SISSA, Trieste (Italy)
Apr 18, 2023

Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and it is often associated with the perception of the flow of it. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. Some empirical evidence shows the existence of this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations and subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles represents a way to gain insight on the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies to show how concurrent changes of visual stimulus duration, speed, visual contrast and numerosity, shape and modulate brain’s and pupil’s responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.

Seminar · Neuroscience · Recording

Minute-scale periodic sequences in medial entorhinal cortex

Soledad Gonzalo Cogno
Norwegian University of Science and Technology, Trondheim
Jan 31, 2023

The medial entorhinal cortex (MEC) hosts many of the brain’s circuit elements for spatial navigation and episodic memory, operations that require neural activity to be organized across long durations of experience. While location is known to be encoded by a plethora of spatially tuned cell types in this brain region, little is known about how the activity of entorhinal cells is tied together over time. Among the brain’s most powerful mechanisms for neural coordination are network oscillations, which dynamically synchronize neural activity across circuit elements. In MEC, theta and gamma oscillations provide temporal structure to the neural population activity at subsecond time scales. It remains an open question, however, whether similar coordination occurs in MEC at behavioural time scales, in the second-to-minute regime. In this talk I will show that MEC activity can be organized into a minute-scale oscillation that entrains nearly the entire cell population, with periods ranging from 10 to 100 seconds. Throughout this ultraslow oscillation, neural activity progresses in periodic and stereotyped sequences. The oscillation sometimes advances uninterruptedly for tens of minutes, transcending epochs of locomotion and immobility. Similar oscillatory sequences were not observed in neighboring parasubiculum or in visual cortex. The ultraslow periodic sequences in MEC may have the potential to couple its neurons and circuits across extended time scales and to serve as a scaffold for processes that unfold at behavioural time scales.
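The periodic sequences described above can be illustrated with a hypothetical simulation (not the lab's analysis pipeline): cells tuned to different phases of a 60-second cycle produce a stereotyped population sequence, and the period can be read off the population-vector autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1800                  # 30 minutes, one sample per second
period = 60.0             # a one-minute ultraslow oscillation
n_cells = 50
t = np.arange(T)

# Each cell fires at its own preferred phase of the slow cycle, so the
# population steps through a stereotyped periodic sequence.
phases = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
drive = np.cos(2 * np.pi * t[None, :] / period - phases[:, None])
activity = rng.poisson(3.0 * np.maximum(0.0, drive))

# Population-vector autocorrelation: similarity of the population state to
# itself at a given lag, which peaks again after one full oscillation period.
z = activity - activity.mean(axis=1, keepdims=True)
z = z / z.std(axis=1, keepdims=True)
pv_corr = np.array([(z[:, :T - lag] * z[:, lag:]).mean()
                    for lag in range(20, 100)])
lag_hat = 20 + int(np.argmax(pv_corr))   # estimated period in seconds
```

Note that summed population activity would be nearly flat here because phases tile the cycle uniformly; the periodicity lives in the population *state*, which is why a population-vector measure is used.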

Seminar · Neuroscience · Recording

Flexible selection of task-relevant features through population gating

Joao Barbosa
Ostojic lab, Ecole Normale Superieure
Dec 6, 2022

Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
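The specific gating circuit is the subject of the paper; a minimal, hypothetical sketch of the general idea, in which a top-down context signal switches which of two populations relays its preferred stimulus downstream, could look like:

```python
import numpy as np

def population_gate(stim_a, stim_b, context):
    """Two populations, each preferring one stimulus; a top-down context
    signal lifts one population above its resting inhibition so that only
    the context-relevant stimulus is relayed (a toy gating sketch)."""
    relu = lambda x: np.maximum(0.0, x)
    bias = -1.0                               # resting inhibition
    ctx_a = 1.5 if context == "A" else 0.0    # top-down drive to pop A
    ctx_b = 1.5 if context == "B" else 0.0
    pop_a = relu(stim_a + ctx_a + bias)       # responds only when gated on
    pop_b = relu(stim_b + ctx_b + bias)
    return pop_a - pop_b                      # readout along a common axis

out = population_gate(stim_a=1.0, stim_b=1.0, context="A")
```

The same bottom-up input yields opposite readouts in the two contexts even though all connection weights are fixed, mirroring the paper's point that flexible across-area communication does not require changing inter-areal connectivity.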

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
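As a toy version of the geometry described above (hypothetical, not the lab's actual model), one can place stimuli on a sphere and give each cell a spherical-cap receptive field, so each stimulus evokes a binary codeword whose overlaps mirror stimulus similarity:

```python
import numpy as np

rng = np.random.default_rng(8)
n_stim, n_cells = 200, 40

# Stimuli as points on the unit sphere; each cell's receptive field is a
# spherical cap (a convex region), so each stimulus evokes a binary codeword.
stims = rng.standard_normal((n_stim, 3))
stims /= np.linalg.norm(stims, axis=1, keepdims=True)
centers = rng.standard_normal((n_cells, 3))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
codewords = (stims @ centers.T > 0.5).astype(int)   # fires if angle < 60 deg

# Codeword overlap mirrors stimulus geometry: nearby stimuli share cells.
sim_stim = stims @ stims.T
sim_code = (codewords @ codewords.T).astype(float)
mask = ~np.eye(n_stim, dtype=bool)
r = np.corrcoef(sim_stim[mask], sim_code[mask])[0, 1]
```

The positive correlation between stimulus similarity and codeword overlap is the toy analogue of the claim that topological relationships among codewords mirror acoustic relationships among songs.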

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).

Seminar · Neuroscience · Recording

A multi-level account of hippocampal function in concept learning from behavior to neurons

Rob Mok
University of Cambridge
Nov 1, 2022

A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we don't have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule to coordinate units that collectively form higher-level mental constructs ("virtual clusters"), neural representations (concept, place and grid cell-like assemblies), and parallels recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.

Seminar · Neuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in the work of ours and others, ranging from the visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. 
Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
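The perceptron-capacity idea, whether a linear classifier can separate randomly labeled neural manifolds, can be sketched with a toy experiment (a crude numerical stand-in for the analytic theory, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)

def separable(P, N, radius, points=20, epochs=200):
    """Can a perceptron separate P randomly labeled point-cloud manifolds,
    each a cloud of `points` samples around a random center in N dims?"""
    centers = rng.standard_normal((P, N))
    labels = rng.choice([-1, 1], size=P)
    X = centers[:, None, :] + radius * rng.standard_normal((P, points, N))
    X = X.reshape(-1, N)
    y = np.repeat(labels, points)
    w = np.zeros(N)
    for _ in range(epochs):                 # classic perceptron updates
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi
                errors += 1
        if errors == 0:
            return True                     # all manifolds linearly separated
    return False                            # no separating plane found

# Compact manifolds are separable; large, heavily overlapping ones are not.
easy = separable(P=10, N=50, radius=0.1)
hard = separable(P=40, N=50, radius=2.0)
```

Manifold structural properties (here just radius and count) set the capacity of a linear readout, which is the quantity the analytic theory in the talk computes exactly.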

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.

Seminar · Neuroscience

Dissecting the neural processes supporting perceptual learning

Wu Li
Beijing Normal University, Beijing, China
Mar 27, 2022

The brain and its inherent functions can be modified by various forms of learning. Learning-induced changes are seen even in basic perceptual functions. In particular, repeated training in a perceptual task can lead to a significant improvement in the trained task—a phenomenon known as perceptual learning. There has been a long-standing debate about the mechanisms of perceptual learning. In this talk, I will present results from our series of electrophysiological studies. These studies have consistently shown that perceptual learning is mediated by concerted changes in both perceptual and cognitive processes, resulting in improved sensory representation, enhanced top-down influences, and a refined readout process.

Seminar · Neuroscience · Recording

Structure, Function, and Learning in Distributed Neuronal Networks

SueYeon Chung
Flatiron Institute/NYU
Jan 25, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.

Seminar · Neuroscience · Recording

Does human perception rely on probabilistic message passing?

Alex Hyafil
CRM, Barcelona
Dec 21, 2021

The idea that perception in humans relies on some form of probabilistic computations has become very popular over the last decades. It has been extremely difficult however to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some forms of probabilistic message passing (i.e. neural sampling, probabilistic population coding, etc.) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that the confidence reports in a hierarchical task reveal that uncertainty is represented both at lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples in agreement with the message passing hypothesis, and against standard accounts such as accumulation of sensory evidence or predictive coding.
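As a toy illustration of probabilistic message passing (not any specific model from the talk), Gaussian messages can be combined by precision weighting; here a bottom-up likelihood message and a top-down prior message meet at a single node:

```python
import numpy as np

def combine(messages):
    """Combine Gaussian (mean, variance) messages arriving at a node by
    precision weighting: the posterior precision is the sum of precisions,
    and the posterior mean is the precision-weighted mean."""
    precisions = np.array([1.0 / var for _, var in messages])
    means = np.array([mean for mean, _ in messages])
    post_precision = precisions.sum()
    post_mean = (precisions * means).sum() / post_precision
    return post_mean, 1.0 / post_precision

# Bottom-up likelihood (noisy sensory sample, mean 2.0, variance 1.0)
# and top-down prior belief (mean 0.0, variance 4.0).
posterior = combine([(2.0, 1.0), (0.0, 4.0)])
```

The posterior mean (1.6) sits between the sensory sample and the prior, pulled toward whichever message is more precise; richer message-passing schemes iterate this kind of update across a hierarchy, in both directions.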

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. 
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
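One claim above, that input variability across neurons can shape log-normal instantaneous rate distributions, can be illustrated with a deliberately simplified sketch (the exponential transfer function is assumed here for illustration; the talk's derivation uses adapting spiking neurons):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100000

# Quenched input heterogeneity across the population: Gaussian mean drive.
mu, sigma = 1.0, 0.5
h = mu + sigma * rng.standard_normal(n)

# An exponential-like transfer function maps Gaussian inputs to rates,
# giving a right-skewed (log-normal) rate distribution across neurons.
rates = np.exp(h)
log_rates = np.log(rates)

def skewness(x):
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

skew_rates = skewness(rates)      # strongly right-skewed
skew_log = skewness(log_rates)    # ~0: log of rates is Gaussian
```

The rates are heavily skewed while their logarithm is symmetric, which is the defining signature of the log-normal distributions reported in vivo.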

Seminar · Neuroscience · Recording

NMC4 Keynote: Latent variable modeling of neural population dynamics - where do we go from here?

Chethan Pandarinath
Georgia Tech & Emory University
Nov 30, 2021

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present machine learning frameworks that enable inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales, from diverse brain areas, and without regard to behavior. I will then demonstrate extensions that allow recovery of dynamics from two-photon calcium imaging data with surprising precision. Finally, I will discuss our efforts to facilitate comparisons within our field by curating datasets and standardizing model evaluation, including a currently active modeling challenge, the 2021 Neural Latents Benchmark [neurallatents.github.io].

Seminar · Neuroscience

Advancing Brain-Computer Interfaces by adopting a neural population approach

Juan Alvaro Gallego
Imperial College London
Nov 29, 2021

Brain-computer interfaces (BCIs) have afforded paralysed users “mental control” of computer cursors and robots, and even of electrical stimulators that reanimate their own limbs. Most existing BCIs map the activity of hundreds of motor cortical neurons recorded with implanted electrodes into control signals to drive these devices. Despite these impressive advances, the field is facing a number of challenges that need to be overcome in order for BCIs to become widely used during daily living. In this talk, I will focus on two such challenges: 1) having BCIs that allow performing a broad range of actions; and 2) having BCIs whose performance is robust over long time periods. I will present recent studies from our group in which we apply neuroscientific findings to address both issues. This research is based on an emerging view about how the brain works. Our proposal is that brain function is not based on the independent modulation of the activity of single neurons, but rather on specific population-wide activity patterns, which mathematically define a “neural manifold”. I will provide evidence in favour of such a neural manifold view of brain function, and illustrate how advances in systems neuroscience may be critical for the clinical success of BCIs.

SeminarNeuroscienceRecording

Neural Population Dynamics for Skilled Motor Control

Britton Sauerbrei
Case Western Reserve University School of Medicine
Nov 3, 2021

The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.

SeminarNeuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 26, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.

SeminarNeuroscienceRecording

The role of the primate prefrontal cortex in inferring the state of the world and predicting change

Ramon Bartolo
Averbeck lab, National Institute of Mental Health
Sep 7, 2021

In an ever-changing environment, uncertainty is omnipresent. To deal with this, organisms have evolved mechanisms that allow them to take advantage of environmental regularities in order to make decisions robustly and adjust their behavior efficiently, thus maximizing their chances of survival. In this talk, I will present behavioral evidence that animals perform model-based state inference to predict environmental state changes and adjust their behavior rapidly, rather than slowly updating choice values. This model-based inference process can be described using Bayesian change-point models. Furthermore, I will show that neural populations in the prefrontal cortex accurately predict behavioral switches, and that the activity of these populations is associated with Bayesian estimates. In addition, we will see that learning leads to the emergence of a high-dimensional representational subspace that can be reused when the animals re-learn a previously learned set of action-value associations. Altogether, these findings highlight the role of the PFC in representing a belief about the current state of the world.

SeminarNeuroscience

Neural circuits that support robust and flexible navigation in dynamic naturalistic environments

Hannah Haberkern
HHMI Janelia Research Campus
Aug 15, 2021

Tracking heading within an environment is a fundamental requirement for flexible, goal-directed navigation. In insects, a head-direction representation that guides the animal’s movements is maintained in a conserved brain region called the central complex. Two-photon calcium imaging of genetically targeted neural populations in the central complex of tethered fruit flies behaving in virtual reality (VR) environments has shown that the head-direction representation is updated based on self-motion cues and external sensory information, such as visual features and wind direction. Thus far, the head direction representation has mainly been studied in VR settings that only give flies control of the angular rotation of simple sensory cues. How the fly’s head direction circuitry enables the animal to navigate in dynamic, immersive and naturalistic environments is largely unexplored. I have developed a novel setup that permits imaging in complex VR environments that also accommodate flies’ translational movements. I have previously demonstrated that flies perform visually-guided navigation in such an immersive VR setting, and also that they learn to associate aversive optogenetically-generated heat stimuli with specific visual landmarks. A stable head direction representation is likely necessary to support such behaviors, but the underlying neural mechanisms are unclear. Based on a connectomic analysis of the central complex, I identified likely circuit mechanisms for prioritizing and combining different sensory cues to generate a stable head direction representation in complex, multimodal environments. I am now testing these predictions using calcium imaging in genetically targeted cell types in flies performing 2D navigation in immersive VR.

SeminarNeuroscience

Low Dimensional Manifolds for Neural Dynamics

Sara A. Solla
Northwestern University
Jun 8, 2021

The ability to simultaneously record the activity of tens to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features, using models whose parameters remain fixed throughout long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
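The basic computation behind neural modes and latent dynamics can be sketched with PCA: the modes are the dominant covariation patterns (eigenvectors of the neuron-by-neuron covariance matrix) and the latent dynamics are their time-dependent activations. This is a generic illustration with a hypothetical function name, not the authors' analysis pipeline.

```python
import numpy as np

def neural_modes(activity, n_modes=3):
    """Estimate neural modes (dominant covariation patterns) by PCA.

    activity: (time, neurons) array of firing rates. Returns the modes
    (neurons, n_modes) and the latent dynamics (time, n_modes), i.e. the
    time-dependent activation of each mode.
    """
    centered = activity - activity.mean(axis=0)
    # Eigenvectors of the neuron-by-neuron covariance are the neural modes.
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_modes]
    modes = eigvecs[:, order]
    latents = centered @ modes   # project activity onto the modes
    return modes, latents
```

If the population activity really is confined to a low dimensional manifold, a handful of modes captures most of the variance, which is the defining property described in the abstract.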

SeminarOpen SourceRecording

Suite2p: a multipurpose functional segmentation pipeline for cellular imaging

Carsen Stringer
HHMI Janelia Research Campus
May 20, 2021

The combination of two-photon microscopy recordings and powerful calcium-dependent fluorescent sensors enables simultaneous recording of unprecedentedly large populations of neurons. While these sensors have matured over several generations of development, computational methods to process their fluorescence are often inefficient and the results hard to interpret. Here we introduce Suite2p: a fast, accurate, parameter-free and complete pipeline that registers raw movies, detects active and/or inactive cells (using Cellpose), extracts their calcium traces and infers their spike times. Suite2p runs faster than real time on standard workstations and outperforms state-of-the-art methods on newly developed ground-truth benchmarks for motion correction and cell detection.

SeminarNeuroscienceRecording

Neural dynamics underlying temporal inference

Devika Narain
Erasmus Medical Centre
Apr 26, 2021

Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inadvertently transformed as it passes through the nervous system. With such uncertainty pervading through our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information, prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem, how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
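The Bayesian account of interval reproduction sketched above can be made concrete with a small numerical estimator. This is a generic Bayes least-squares estimate under a scalar (Weber-like) measurement-noise assumption; the function name, the discrete prior grid, and the Weber fraction are illustrative choices, not the model from the talk.

```python
import numpy as np

def bayes_interval_estimate(t_measured, prior_ts, prior_weights=None,
                            weber=0.1):
    """Bayes least-squares estimate of a time interval.

    Measurement noise is assumed scalar: sd = weber * t (an assumption).
    prior_ts: grid of intervals over which a discrete prior is defined.
    Returns the posterior-mean estimate, which is biased toward the
    middle of the prior range.
    """
    prior_ts = np.asarray(prior_ts, dtype=float)
    if prior_weights is None:
        prior_weights = np.ones_like(prior_ts)
    sd = weber * prior_ts
    lik = np.exp(-0.5 * ((t_measured - prior_ts) / sd) ** 2) / sd
    post = lik * prior_weights
    post /= post.sum()
    return float((post * prior_ts).sum())
```

Measurements near the edges of the prior range are pulled toward its middle, the regression-to-the-mean signature that behavioral studies of Bayesian timing report.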

SeminarNeuroscience

Hypothalamic control of internal states underlying social behaviors in mice

Tomomi Karigo
California Institute of Technology
Apr 25, 2021

Social interactions such as mating and fighting are driven by internal emotional states. How can we study internal states of an animal when it cannot tell us its subjective feelings? Especially when the meaning of the animal’s behavior is not clear to us, can we understand the underlying internal states of the animal? In this talk, I will introduce our recent work in which we used male mounting behavior in mice as an example to understand the underlying internal state of the animals. In many animal species, males exhibit mounting behavior toward females as part of the mating behavior repertoire. Interestingly, males also frequently show mounting behavior toward other males of the same species. It is not clear what the underlying motivation is - whether it is reproductive in nature or something distinct. Through detailed analysis of video and audio recordings during social interactions, we found that while male-directed and female-directed mounting behaviors are motorically similar, they can be distinguished by both the presence of ultrasonic vocalization during female-directed mounting (reproductive mounting) and the display of aggression following male-directed mounting (aggressive mounting). Using optogenetics, we further identified genetically defined neural populations in the medial preoptic area (MPOA) that mediate reproductive mounting and the ventrolateral ventromedial hypothalamus (VMHvl) that mediate aggressive mounting. In vivo microendoscopic imaging in MPOA and VMHvl revealed distinct neural ensembles that mainly encode either a reproductive or an aggressive state during which male- or female-directed mounting occurs. Together, these findings demonstrate that internal states are represented in the hypothalamus and that motorically similar behaviors exhibited under different contexts may reflect distinct internal states.

SeminarNeuroscienceRecording

Reading out responses of large neural population with minimal information loss

Tatyana Sharpee
Salk Institute for Biological Studies
Apr 8, 2021

Classic studies show that in many species – from leech and cricket to primate – responses of neural populations can be quite successfully read out using a measure of neural population activity termed the population vector. However, despite its successes, detailed analyses have shown that the standard population vector discards substantial amounts of information contained in the responses of a neural population, and so is unlikely to accurately describe signal communication between parts of the nervous system. I will describe recent theoretical results showing how to modify the population vector expression in order to read out neural responses, ideally without information loss. These results make it possible to quantify the contribution of weakly tuned neurons to perception. I will also discuss numerical methods that can be used to minimize information loss when reading out responses of large neural populations.
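For concreteness, the standard population vector that the talk takes as its starting point can be written in a few lines; the modified, information-preserving readout from the talk is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def population_vector(rates, preferred_dirs):
    """Standard population-vector readout of a direction (radians).

    Each neuron 'votes' with a unit vector along its preferred direction,
    weighted by its firing rate; the decoded direction is the angle of
    the summed vector.
    """
    x = np.sum(rates * np.cos(preferred_dirs))
    y = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(y, x)
```

For a population with uniformly spaced preferred directions and symmetric tuning this readout is unbiased, but as the abstract notes, it can still discard a substantial fraction of the information carried by the full response pattern.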

SeminarNeuroscienceRecording

Inferring brain-wide interactions using data-constrained recurrent neural network models

Matthew Perich
Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 23, 2021

Behavior arises from the coordinated activity of numerous distinct brain regions. Modern experimental tools allow access to neural populations brain-wide, yet understanding such large-scale datasets necessitates scalable computational models to extract meaningful features of inter-region communication. In this talk, I will introduce Current-Based Decomposition (CURBD), an approach for inferring multi-region interactions using data-constrained recurrent neural network models. I will first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. I will then apply CURBD to understand the brain-wide flow of information leading to behavioral state transitions in larval zebrafish. These examples will establish CURBD as a flexible, scalable framework to infer brain-wide interactions that are inaccessible from experimental measurements alone.
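The central bookkeeping step in a CURBD-style analysis, decomposing the current into each region by source region using a fitted recurrent weight matrix, can be sketched as follows. The names are hypothetical, and this is only the decomposition step: the actual CURBD method first trains the weight matrix so the RNN reproduces the recorded multi-region activity.

```python
import numpy as np

def decompose_currents(J, rates, region_idx):
    """Decompose the input current to each region by source region.

    J: (N, N) trained recurrent weight matrix; rates: (N, T) unit
    activity; region_idx: dict mapping region name -> array of unit
    indices. Returns currents[target][source]: an (n_target, T) array of
    current into 'target' units originating from 'source' units.
    """
    currents = {}
    for tgt, ti in region_idx.items():
        currents[tgt] = {}
        for src, si in region_idx.items():
            # Block of J connecting source units to target units.
            currents[tgt][src] = J[np.ix_(ti, si)] @ rates[si]
    return currents
```

By construction, the source-region currents into a target region sum to that region's total recurrent input, which is what makes the decomposition a complete account of inter-region communication in the model.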

SeminarNeuroscienceRecording

Restless engrams: the origin of continually reconfiguring neural representations

Timothy O'Leary
University of Cambridge
Mar 4, 2021

During learning, populations of neurons alter their connectivity and activity patterns, enabling the brain to construct a model of the external world. Conventional wisdom holds that the durability of such a model is reflected in the stability of neural responses and the stability of synaptic connections that form memory engrams. However, recent experimental findings have challenged this idea, revealing that neural population activity in circuits involved in sensory perception, motor planning and spatial memory continually changes over time during familiar behavioural tasks. This continual change suggests significant redundancy in neural representations, with many circuit configurations providing equivalent function. I will describe recent work that explores the consequences of such redundancy for learning and for task representation. Despite large changes in neural activity, we find cortical responses in sensorimotor tasks admit a relatively stable readout at the population level. Furthermore, we find that redundancy in circuit connectivity can make a task easier to learn and compensate for deficiencies in biological learning rules. Finally, if neuronal connections are subject to an unavoidable level of turnover, the level of plasticity required to optimally maintain a memory is generally lower than the total change due to turnover itself, predicting continual reconfiguration of an engram.

SeminarNeuroscienceRecording

Cortical networks for flexible decisions during spatial navigation

Christopher Harvey
Harvard University
Feb 18, 2021

My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the building blocks of the nervous system, cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.

SeminarNeuroscienceRecording

High precision coding in visual cortex

Carsen Stringer
Janelia
Jan 7, 2021

Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees respectively in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
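The notion of a neural discrimination threshold can be illustrated by simulation: decode the stimulus orientation on many trials and take the spread of the decoded estimates as the threshold. The cosine tuning, independent noise, and population-vector decoder below are all illustrative assumptions, not the recording or decoding setup of the study.

```python
import numpy as np

def decoding_threshold(pref, noise_sd, theta=0.0, n_trials=500, seed=0):
    """Estimate an orientation discrimination threshold by decoding.

    pref: preferred orientations (radians, in [0, pi)). Simulates trials
    of a cosine-tuned population with independent Gaussian noise, decodes
    orientation on each trial with a population vector on the doubled
    angle (orientation is defined modulo pi), and returns the standard
    deviation of the decoded estimates as the threshold (radians).
    """
    rng = np.random.default_rng(seed)
    decoded = []
    for _ in range(n_trials):
        rates = np.cos(2 * (pref - theta)) + noise_sd * rng.normal(size=pref.size)
        ang = 0.5 * np.arctan2(np.sum(rates * np.sin(2 * pref)),
                               np.sum(rates * np.cos(2 * pref)))
        decoded.append(ang)
    return float(np.std(decoded))
```

Under independent noise the threshold shrinks roughly as one over the square root of the population size, which is one reason thresholds estimated from tens of thousands of neurons can fall far below behavioral ones.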

SeminarNeuroscienceRecording

Residual population dynamics as a window into neural computation

Valerio Mante
ETH Zurich
Dec 3, 2020

Neural activity in frontal and motor cortices can be considered to be the manifestation of a dynamical system implemented by large neural populations in recurrently connected networks. The computations emerging from such population-level dynamics reflect the interaction between external inputs into a network and its internal, recurrent dynamics. Isolating these two contributions in experimentally recorded neural activity, however, is challenging, limiting the resulting insights into neural computations. I will present an approach to addressing this challenge based on response residuals, i.e. variability in the population trajectory across repetitions of the same task condition. A complete characterization of residual dynamics is well-suited to systematically compare computations across brain areas and tasks, and leads to quantitative predictions about the consequences of small, arbitrary causal perturbations.
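A minimal version of characterizing residual dynamics is a one-step linear fit to the trial-to-trial deviations from the condition-averaged trajectory. This sketch, with hypothetical names, is a stand-in for the approach in the talk, which characterizes residual dynamics far more completely.

```python
import numpy as np

def residual_dynamics(trials):
    """One-step linear fit to residual population dynamics (a sketch).

    trials: (n_trials, T, dim) population trajectories for ONE task
    condition. Residuals are deviations from the condition-averaged
    trajectory; A is the least-squares map from the residual at time t
    to the residual at t+1.
    """
    resid = trials - trials.mean(axis=0)            # (n_trials, T, dim)
    dim = trials.shape[2]
    x = resid[:, :-1].reshape(-1, dim)              # residuals at t
    y = resid[:, 1:].reshape(-1, dim)               # residuals at t+1
    A, *_ = np.linalg.lstsq(x, y, rcond=None)
    return A.T   # so that resid_{t+1} ≈ A @ resid_t
```

The eigenvalues of the fitted map then summarize how quickly, and along which directions, the network's internal dynamics damp out or amplify perturbations around the mean trajectory.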

SeminarNeuroscienceRecording

A function approximation perspective on neural representations

Cengiz Pehlevan
Harvard University
Dec 1, 2020

Activity patterns of neural populations in natural and artificial neural networks constitute representations of data. The nature of these representations and how they are learned are key questions in neuroscience and deep learning. In this talk, I will describe my group's efforts in building a theory of representations as feature maps leading to sample-efficient function approximation. Kernel methods are at the heart of these developments. I will present applications to deep learning and neuronal data.

SeminarNeuroscience

Low dimensional models and electrophysiological experiments to study neural dynamics in songbirds

Ana Amador
University of Buenos Aires
Dec 1, 2020

Birdsong emerges when a set of highly interconnected brain areas manage to generate a complex output. The similarities between birdsong production and human speech have positioned songbirds as unique animal models for studying learning and production of this complex motor skill. In this work, we developed a low dimensional model for a neural network in which the variables were the average activities of different neural populations within the nuclei of the song system. This neural network is active during production, perception and learning of birdsong. We performed electrophysiological experiments to record neural activity from one of these nuclei and found that the low dimensional model could reproduce the neural dynamics observed during the experiments. This model could also reproduce the respiratory motor patterns used to generate song. We showed that sparse activity in one of the neural nuclei could drive a more complex activity downstream in the neural network. This interdisciplinary work shows how low dimensional neural models can be a valuable tool for studying the emergence of complex motor tasks.

SeminarNeuroscience

Towards generalized inference of single-trial neural population dynamics

Chethan Pandarinath
Emory University, Department of Biomedical Engineering
Oct 20, 2020

SeminarNeuroscience

Towards multipurpose biophysics-based mathematical models of cortical circuits

Gaute Einevoll
Norwegian University of Life Sciences
Oct 13, 2020

Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al, Neuron, 2019). Further, a large-scale, biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al, Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.

SeminarNeuroscienceRecording

Neural Population Perspectives on Learning and Motor Control

Aaron Batista
University of Pittsburgh
Oct 8, 2020

Learning is a population phenomenon. Since it is the organized activity of populations of neurons that causes movement, learning a new skill must involve reshaping those population activity patterns. Seeing how the brain does this has been elusive, but a brain-computer interface approach can yield new insight. We presented monkeys with novel BCI mappings that we knew would be difficult for them to learn how to control. Over several days, we observed the emergence of new patterns of neural activity that endowed the animals with the ability to perform better at the BCI task. We speculate that there also exists a direct relationship between new patterns of neural activity and new abilities during natural movements, but it is much harder to see in that setting.

SeminarNeuroscienceRecording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
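The abstract specifies only that the rule uses the synaptic weight and the noise-driven covariance of the two neurons a synapse connects. Below is a heavily hedged sketch of one possible scoring-and-pruning step: the score (|w_ij|·|cov_ij|), the function name, and the pruned fraction are guesses for illustration, not the rule from the talk.

```python
import numpy as np

def prune_by_noise_covariance(W, noise_cov, frac=0.2):
    """Sketch of a local, unsupervised pruning step (assumed form).

    W: (N, N) synaptic weight matrix; noise_cov: (N, N) covariance of
    noise-driven activity across neurons. Each synapse is scored with
    only locally available quantities - its weight and the covariance of
    the two neurons it connects - and the lowest-scoring fraction of
    existing synapses is pruned (set to zero).
    """
    score = np.abs(W) * np.abs(noise_cov)
    mask = W != 0
    thresh = np.quantile(score[mask], frac)
    W_pruned = W.copy()
    W_pruned[mask & (score <= thresh)] = 0.0
    return W_pruned
```

The appeal of such a rule is that both ingredients are locally measurable at the synapse, so no neuron needs global knowledge of the pathways of information flow that actually determine a synapse's importance.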

SeminarNeuroscienceRecording

Detecting Covert Cognitive States from Neural Population Recordings in Prefrontal Cortex

William Newsome
Stanford University
Jun 30, 2020

The neural mechanisms underlying decision-making are typically examined by statistical analysis of large numbers of trials from sequentially recorded single neurons. Averaging across sequential recordings, however, obscures important aspects of decision-making such as variations in confidence and 'changes of mind' (CoM) that occur at variable times on different trials. I will show that a covert decision variable (DV) can be tracked dynamically on single behavioral trials via simultaneous recording of large neural populations in prefrontal cortex. Vacillations of the neural DV, in turn, identify candidate CoMs in monkeys, which closely match the known properties of human CoMs. Thus, simultaneous population recordings can provide insight into transient, internal cognitive states that are otherwise undetectable.

SeminarNeuroscienceRecording

Mean-field models for finite-size populations of spiking neurons

Tilo Schwalger
TU Berlin
Jun 7, 2020

Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike timing effects on the microscopic level such as spike synchronization and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics on the microscopic level and coarse-grained FR models on the population level can be bridged by mean-field theory formally valid for infinitely many neurons. It remains challenging, however, to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated in the mesoscopic mean-field framework. On the other hand, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.

SeminarNeuroscience

High precision coding in visual cortex

Carsen Stringer
HHMI Janelia Research Campus
Jun 3, 2020

Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.

SeminarNeuroscience

Cortical circuits for olfactory navigation

Cindy Poo
Champalimaud
May 13, 2020

Olfactory navigation is essential for the survival of living beings, from unicellular organisms to mammals. In the wild, rodents combine odor information with an internal spatial representation of the environment for foraging and navigation. What are the neural circuits in the brain that implement these behaviours? My research addresses this question by examining the synaptic circuits and neural population activity of the olfactory cortex to understand the integration of olfactory and spatial information. Primary olfactory (piriform) cortex (PCx) has long been recognized as a highly associative brain structure. What is the behavioural and functional role of these associative synapses in PCx? We designed an odor-cued navigation task in which rats must use both olfactory and spatial information to obtain water rewards. We recorded from populations of posterior piriform cortex (pPCx) neurons during behaviour and found that individual neurons were not only odor-selective but also fired differentially to the same odor sampled at different locations, forming an “olfactory place map”. Spatial locations can be decoded from simultaneously recorded pPCx populations, and spatial selectivity is maintained in the absence of odors and across behavioural contexts. This novel olfactory place map is consistent with our finding of a dominant role for associative excitatory synapses in shaping PCx representations, and it suggests a role for PCx spatial representations in supporting olfactory navigation. This work not only provides insight into the neural basis of how odors can be used for navigation, but also reveals PCx as a prime site for addressing the general question of how sensory information is anchored within memory systems and combined with cognitive maps to guide flexible behaviour.
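Decoding spatial location from a recorded population, as described above, can be illustrated with a minimal sketch (entirely synthetic data and a simple nearest-class-mean decoder, not the authors' analysis pipeline):

```python
import numpy as np

# Toy version of the population-decoding claim: classify which of
# several locations a trial came from, using a nearest-class-mean
# decoder on simulated population activity.
rng = np.random.default_rng(4)
n_loc, trials_per, n_neurons = 4, 50, 120
means = rng.standard_normal((n_loc, n_neurons))      # location-specific patterns
X = (np.repeat(means, trials_per, axis=0)
     + 0.5 * rng.standard_normal((n_loc * trials_per, n_neurons)))
y = np.repeat(np.arange(n_loc), trials_per)

train = np.arange(len(y)) % 2 == 0                   # even trials: train set
centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in range(n_loc)])
dists = ((X[~train][:, None, :] - centroids[None]) ** 2).sum(-1)
acc = (dists.argmin(1) == y[~train]).mean()
print(acc)  # well above the 25% chance level
```

Held-out accuracy far above chance is the signature that location information is carried at the population level, even when individual neurons are noisy.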

SeminarNeuroscienceRecording

Decoding of Chemical Information from Populations of Olfactory Neurons

Pedro Herrero-Vidal
New York University
May 5, 2020

Information is represented in the brain by the coordinated activity of populations of neurons. Recent large-scale neural recording methods, in combination with machine learning algorithms, are helping us understand how sensory processing and cognition emerge from neural population activity. This talk will explore the most popular machine learning methods used to extract meaningful low-dimensional representations from high-dimensional neural recordings. To illustrate the potential of these approaches, Pedro will present his research in which chemical information is decoded from the olfactory system of the mouse for technological applications. Pedro and co-researchers have successfully extracted odor identity and concentration from low-dimensional activity trajectories of olfactory receptor neurons. They have further developed a novel method to identify a shared latent space that allows odor information to be decoded across animals.
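The workhorse of the dimensionality-reduction methods the talk surveys is PCA. A minimal sketch, on made-up data with a known low-dimensional structure (the latent dimensions, neuron counts, and noise level are all assumptions for illustration):

```python
import numpy as np

# Illustrative sketch: high-dimensional "neural" responses generated
# from a few latent dimensions are projected onto their top principal
# components, recovering a low-dimensional representation.
rng = np.random.default_rng(2)
latent = rng.standard_normal((300, 3))             # 3 true latent dimensions
mixing = rng.standard_normal((3, 100))             # embed into 100 "neurons"
data = latent @ mixing + 0.1 * rng.standard_normal((300, 100))

centered = data - data.mean(axis=0)
# PCA via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
scores = centered @ Vt[:3].T                       # 3-D activity trajectory
print(var_explained[:4])                           # first 3 PCs dominate
```

When most of the variance concentrates in a few components, as here, downstream decoding (e.g. of odor identity or concentration) can operate on the low-dimensional trajectories rather than on the raw high-dimensional recordings.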

SeminarNeuroscienceRecording

Neural manifolds for the stable control of movement

Sara Solla
Northwestern University
Apr 28, 2020

Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant covariation patterns of population activity and define a task-specific, low-dimensional neural manifold. The time-dependent activation of these neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour must remain stable, and we use an alignment method to establish this stability. Once identified, stable latent dynamics allow various behavioural features to be predicted via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
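The alignment idea can be sketched with a toy example, assuming canonical correlation analysis (CCA) as the alignment method (a common choice for comparing latent dynamics, though the abstract does not specify the authors' exact procedure): two sessions record different "neurons" driven by the same latent dynamics, and high canonical correlations between their manifold projections indicate stability.

```python
import numpy as np

# Hedged toy sketch: shared latent dynamics embedded into two
# different "recording sessions"; PCA extracts each session's
# manifold and CCA measures how well the manifolds align.
rng = np.random.default_rng(3)
latent = rng.standard_normal((500, 4))     # shared latent dynamics
day1 = latent @ rng.standard_normal((4, 60)) + 0.1 * rng.standard_normal((500, 60))
day2 = latent @ rng.standard_normal((4, 80)) + 0.1 * rng.standard_normal((500, 80))

def top_pcs(x, k):
    """Project centered data onto its top-k principal components."""
    x = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:k].T

def cca_corrs(x, y):
    """Canonical correlations via QR decomposition and SVD."""
    qx, _ = np.linalg.qr(x - x.mean(axis=0))
    qy, _ = np.linalg.qr(y - y.mean(axis=0))
    return np.linalg.svd(qx.T @ qy, compute_uv=False)

corrs = cca_corrs(top_pcs(day1, 4), top_pcs(day2, 4))
print(corrs)  # near 1 when the latent dynamics are shared
```

Canonical correlations near 1 mean a fixed linear transformation maps one session's latent dynamics onto the other's, which is what makes fixed decoder models viable across sessions.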

ePoster

How connection probability shapes fluctuations of neural population dynamics

Nils Greven, Jonas Ranft, Tilo Schwalger

Bernstein Conference 2024

ePoster

Homeostatic gain modulation drives changes in heterogeneity expressed by neural populations

Daniel Trotter, Taufik Valiante, Jeremie Lefebvre

Bernstein Conference 2024

ePoster

Optimal control of oscillations and synchrony in nonlinear models of neural population dynamics

Lena Salfenmoser, Klaus Obermayer

Bernstein Conference 2024

ePoster

An interpretable dynamic population-rate equation for adapting non-linear spiking neural populations

COSYNE 2022

ePoster

Mechanistic modeling of Drosophila neural population codes in natural social communication

COSYNE 2022

ePoster

Nonlinear manifolds underlie neural population activity during behaviour

COSYNE 2022

ePoster

Adaptive coding efficiency through joint gain control in neural populations

Lyndon Duong, David Lipshutz, David Heeger, Dmitri Chklovskii, Eero Simoncelli

COSYNE 2023

ePoster

Dynamical mechanisms of flexible pattern generation in spinal neural populations

Lahiru Wimalasena, Chethan Pandarinath, Nicholas Au Yong

COSYNE 2023

ePoster

The effective number of shared dimensions between neural populations

Hamza Giaffar, Camille Rullan, Mikio Aoi

COSYNE 2023

ePoster

Homeostatic synaptic scaling optimizes learning in network models of neural population codes

Jonathan Mayzel & Elad Schneidman

COSYNE 2023

ePoster

Invertible readouts to improve the dynamical accuracy of neural population models

Christopher Versteeg, Andrew Sedler, Chethan Pandarinath

COSYNE 2023

ePoster

Neural population dynamics of computing with synaptic modulations

Stefan Mihalas & Kyle Aitken

COSYNE 2023

ePoster

Neural Population Geometry across model scale: A tool for cross-species functional comparison of visual brain regions

Arna Ghosh, Kumar Krishna Agrawal, Zahraa Chorghay, Arnab Kumar Mondal, Blake A. Richards

COSYNE 2023

ePoster

Changes in tuning curves, not neural population covariance, improve category separability in the primate ventral visual pathway

Jenelle Feather, Long Sha, Gouki Okazawa, Nga Yu Lo, SueYeon Chung, Roozbeh Kiani

COSYNE 2025

ePoster

Closed-loop electrical microstimulation to create neural population activity states

Yuki Minai, Joana Soldado-Magraner, Matthew Smith, Byron Yu

COSYNE 2025

ePoster

Comparing noisy neural population dynamics using optimal transport distances

Amin Nejatbakhsh, Victor Geadah, Alex Williams, David Lipshutz

COSYNE 2025

ePoster

How connection probability shapes fluctuations of neural population dynamics

Tilo Schwalger, Nils Greven, Jonas Ranft

COSYNE 2025

ePoster

Expectation-modulated temporal dynamics in a sensory neural population during behavior

Julia Gorman, Tim Gentner, Timothy Sainburg, Trevor McPherson

COSYNE 2025

ePoster

A model linking neural population activity to flexibility in sensorimotor control

Hari Teja Kalidindi, Frederic Crevecoeur

COSYNE 2025

ePoster

NetFormer: An interpretable model for recovering dynamical connectivity in neural populations

Wuwei Zhang, Ziyu Lu, Trung Le, Hao Wang, Uygar Sumbul, Eric Shea-Brown, Lu Mi

COSYNE 2025

ePoster

Sensory expectations shape neural population dynamics during reaching

Jonathan A Michaels, Mehrdad Kashefi, Jack Zheng, Olivier Codol, Jeffrey Weiler, Rhonda Kersten, Paul L. Gribble, Jorn Diedrichsen, Andrew Pruszynski

COSYNE 2025

ePoster

Mutual information manifold inference for studying neural population dynamics

Michael Kareithi, Pier Luigi Dragotti, Simon R. Schultz

FENS Forum 2024

ePoster

Stability of hypothalamic neural population activity during sleep-wake states

Yudong Yan, Nicolò Calcini, Thomas Rusterholz, Antoine Adamantidis

FENS Forum 2024

ePoster

Modeling neural population responses to intracortical microstimulation

Joel Ye

Neuromatch 5