Neural Dynamics
Prof Mario Dipoppa
We are looking for candidates who are eager to solve fundamental questions with a creative mindset. Candidates should have a strong publication track record in Computational Neuroscience or a related quantitative field, including but not limited to Computer Science, Machine Learning, Engineering, Bioinformatics, Physics, Mathematics, and Statistics. Candidates holding a Ph.D. degree interested in joining the laboratory as postdoctoral researchers should submit a CV including a publication list, a copy of a first-authored publication, a research statement describing past research and career goals (max. two pages), and contact information for two academic referees. The selected candidates will work on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by the computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, including but not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early, as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Prof Mario Dipoppa
We are looking for candidates with a keen interest in gaining research experience in Computational Neuroscience, pursuing their own projects, and supporting those of other team members. Candidates should have a bachelor's or master's degree in a quantitative discipline and strong programming skills, ideally in Python. Candidates interested in joining the laboratory as research associates should send a CV, a research statement describing past research and career goals (max. one page), and contact information for two academic referees. The selected candidates will work on questions addressing how brain computations emerge from the dynamics of the underlying neural circuits and how the neural code is shaped by the computational needs and biological constraints of the brain. To tackle these questions, we employ a multidisciplinary approach that combines state-of-the-art modeling techniques and theoretical frameworks, including but not limited to data-driven circuit models, biologically realistic deep learning models, abstract neural network models, machine learning methods, and analysis of the neural code. Our research team, the Theoretical and Computational Neuroscience Laboratory, is on the main UCLA campus and enjoys close collaborations with the world-class neuroscience community there. The lab, led by Mario Dipoppa, is a cooperative and vibrant environment where all members are offered excellent scientific training and career mentoring. We strongly encourage candidates to apply early, as applications will be reviewed until the positions are filled. The positions are available immediately with a flexible starting date. Please submit the application material as a single PDF file with your full name in the file name to mdipoppa@g.ucla.edu. Informal inquiries are welcome. For more details visit www.dipoppalab.com.
Flavia Mancini
1 Postdoc: Simulating & modelling neural dynamics involved in statistical/aversive learning and homeostatic/pain regulation, with the aim of developing new projects. 1 Research Assistant: Conducting behavioral and neuroimaging experiments.
Boris Gutkin
A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop and validate a computational model of cardiac and somatosensory cortical circuit dynamics in order to determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal) best explain experimental data. The postdoctoral fellow will be based at the Group for Neural Theory at LNC2, in Boris Gutkin’s team, with strong interactions with Catherine Tallon-Baudry’s team. LNC2 is located in the center of Paris within the Cognitive Science Department at Ecole Normale Supérieure, with numerous opportunities to interact with the Paris scientific community at large, in a stimulating and supportive work environment. The Group for Neural Theory provides a rich environment and local community for theoretical neuroscience. Lab life is in English; speaking French is not a requirement. Salary is set according to experience and French rules. The starting date is in the first semester of 2024.
N/A
A post-doctoral position in theoretical neuroscience is open to explore the impact of cardiac inputs on cortical dynamics. Understanding the role of internal states in human cognition has become a hot topic, with a wealth of experimental results but limited attempts at analyzing the computations that underlie the link between bodily organs and the brain. Our particular focus is on elucidating how different mechanisms for heart-to-cortex coupling (e.g., phase-resetting, gating, phasic arousal) can account for human behavioral and neural data, from somatosensory detection to higher-level concepts such as self-relevance, using data-based dynamical models.
Jorge Jaramillo
We are looking for an outstanding applicant to develop large-scale circuit models for decision making within a collaborative consortium that includes the Allen Institute for Neural Dynamics, New York University, and the University of Chicago. This ambitious NIH-funded project requires the creativity and expertise to integrate multimodal data sets (e.g., connectivity, large-scale neural recordings, behavior) into a comprehensive modeling framework. The successful applicant will join Jorge Jaramillo’s Distributed Neural Dynamics and Control Lab at the Grossman Center at the University of Chicago. Throughout the course of the postdoctoral training, there will be opportunities to visit the other sites in Seattle (Karel Svoboda) and New York (Adam Carter, Xiao-Jing Wang) for additional training and collaboration. Appointees will join as Grossman Center Postdoctoral Fellows at the University of Chicago and will have access to state-of-the-art facilities and additional opportunities for collaboration with exceptional experimental labs within the Department of Neurobiology, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and the Data Science Institute. Postdoctoral fellows will also have the opportunity to work on additional projects with other Grossman Center faculty members.
N/A
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuit, systems, or cognitive level. The current Grossman Center faculty members to work with are the following. Brent Doiron’s lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations that are routinely observed in the brain. Jorge Jaramillo’s lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making. Ramon Nogueira’s lab investigates the geometry of representations as the computational support of cognitive processes such as abstraction in noisy artificial and biological neural networks. Marcella Noorman’s lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation. Samuel Muscinelli’s lab studies how the anatomy of brain circuits both governs learning and adapts to it, combining analytical theory, machine learning, and data analysis in close collaboration with experimentalists. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as other labs from the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and UChicago’s Data Science Institute. The Neuroscience Institute is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.
I-Chun Lin, PhD
The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning in brains and machines; inference, coding and neural dynamics; Bayesian and kernel methods, and deep learning; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data and nonparametric hypothesis testing. The Unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments such as Computer Science; Statistical Science; Artificial Intelligence; the ELLIS Unit at UCL; Neuroscience; and the nearby Alan Turing and Francis Crick Institutes. Our PhD programme provides a rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical/computational neuroscience, with minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.
Lorenzo Fontolan
We are pleased to announce the opening of a PhD position at INMED (Aix-Marseille University) through the SCHADOC program, focused on the neural coding of social interactions and memory in the cortex of behaving mice. The project will investigate how social behaviors essential for cooperation, mating, and group dynamics are encoded in the brain, and how these processes are disrupted in neurodevelopmental disorders such as autism. This project uses longitudinal calcium imaging and population-level data analysis to study how cortical circuits encode social interactions in mice. Recordings from mPFC and S1 in wild-type and Neurod2 KO mice will be used to extract neural representations of social memory. The candidate will develop and apply computational models of neural dynamics and representational geometry to uncover how these codes evolve over time and are disrupted in social amnesia.
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions carry nearly none, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will discuss recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
Mapping the neural dynamics of dominance and defeat
Social experiences can produce lasting changes in behavior and affective state. In particular, repeated wins and losses during fighting can facilitate or suppress future aggressive behavior, leading to persistent high- or low-aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To preserve these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of the learning of WM orthogonalization, and of its underlying mechanisms, by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity onto the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through the photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines the network dynamics underlying the process of learning to shield WM representations from distracting tasks.
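The subspace-overlap and angular-distance analyses described above can be made concrete with a small example. The sketch below is illustrative only (variable names and dimensions are hypothetical, not those of the study); it computes principal angles between two neural subspaces, a standard way to quantify how orthogonal a sample subspace is to a distractor subspace:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spans of A and B.
    A: (n_neurons, k1) basis, e.g. a sample subspace.
    B: (n_neurons, k2) basis, e.g. a distractor subspace."""
    Qa, _ = np.linalg.qr(A)   # orthonormalize each basis
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Toy example: two random 3-D subspaces in a 100-neuron state space
rng = np.random.default_rng(0)
sample_subspace = rng.standard_normal((100, 3))
distractor_subspace = rng.standard_normal((100, 3))
print(np.degrees(principal_angles(sample_subspace, distractor_subspace)))
# Angles near 90 degrees indicate nearly orthogonal (well-shielded) codes.
```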
State-of-the-Art Spike Sorting with SpikeInterface
This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
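For orientation before the webinar, a minimal pipeline of the kind covered in the tutorial might look as follows. This is a rough sketch, not webinar material: the file path and the choice of sorter are placeholders, and SpikeInterface parameter names have shifted somewhat across releases.

```python
import spikeinterface.full as si

# 1. Load the raw data (here a SpikeGLX recording; many formats are supported)
recording = si.read_spikeglx("/path/to/recording")

# 2. Preprocess: band-pass filter, then common median reference
recording = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording = si.common_reference(recording, operator="median")

# 3. Run a spike sorter (the chosen sorter must be installed separately)
sorting = si.run_sorter("kilosort4", recording, folder="sorting_output")

# 4. Extract waveforms and compute quality metrics for curation
analyzer = si.create_sorting_analyzer(sorting, recording)
analyzer.compute(["random_spikes", "waveforms", "templates", "noise_levels"])
metrics = analyzer.compute("quality_metrics").get_data()
print(metrics.head())
```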
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between the remaining options (e.g., a very small difference in neuronal activity in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
The role of sub-population structure in computations through neural dynamics
Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
Motor contribution to auditory temporal predictions
Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Accordingly, in the auditory domain, auditory-motor interactions are observed during the perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account of this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.
Neural Dynamics of Cognitive Control
Cognitive control guides behavior by controlling what, where, and how information is represented in the brain. Perhaps the most well-studied form of cognitive control has been ‘attention’, which controls how external sensory stimuli are represented in the brain. In contrast, the neural mechanisms controlling the selection of representations held ‘in mind’, in working memory, are unknown. In this talk, I will present evidence that the prefrontal cortex controls working memory by selectively enhancing and transforming the contents of working memory. In particular, I will show how the neural representation of the content of working memory changes over time, moving between different ‘subspaces’ of the neural population. These dynamics may play a critical role in controlling what and how neural representations are acted upon.
Neural networks in the replica-mean-field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
Bridging the gap between artificial models and cortical circuits
Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach; by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct neuron populations, one excitatory and one inhibitory. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that give access to individual neurons as well as the broader population dynamics. We compare these dynamics to those of two-population artificial networks of single-compartment neurons with random sparse connections and show that the two produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to apply the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare the neural dynamics of disease and control. We found that cultures from epilepsy patients, for example, tended to develop increasingly more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care. This work begins to bridge an important gap between theoretical and experimental neuroscience, advancing our understanding of mammalian neuron dynamics.
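The two-population artificial networks referred to above are, in spirit, of the following kind. This is a generic sketch (all parameters are arbitrary placeholders, not fitted to the cultures): sparse, randomly connected excitatory and inhibitory leaky integrate-and-fire populations simulated with forward Euler.

```python
import numpy as np

rng = np.random.default_rng(1)

# 400 excitatory + 100 inhibitory LIF neurons with random sparse synapses
NE, NI = 400, 100
N = NE + NI
p = 0.1                                  # connection probability
J = np.zeros((N, N))
J[rng.random((N, N)) < p] = 1.0
J[:, NE:] *= -4.0                        # inhibitory columns, stronger weight
J *= 0.12                                # synaptic scale (mV per spike)

dt, tau, v_th, v_reset = 0.1, 20.0, 20.0, 0.0   # ms, ms, mV, mV
mu_ext = 24.0                            # constant external drive (mV)
v = rng.uniform(0.0, v_th, N)
spike_counts = []

for step in range(20000):                # 2 s of simulated time
    fired = v >= v_th
    v[fired] = v_reset
    spike_counts.append(fired.sum())
    # Leaky integration plus recurrent input from neurons that just fired
    v += dt / tau * (-v + mu_ext) + J @ fired

rate = np.mean(spike_counts) / (N * dt * 1e-3)   # Hz per neuron
print(f"mean population rate ~ {rate:.1f} Hz")
```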
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
A multi-level account of hippocampal function in concept learning from behavior to neurons
A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we do not have cognitive models of behavior whose components can be decomposed into the neural dynamics that give rise to behavior, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units so that they collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place and grid cell-like assemblies), paralleling recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
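As a schematic of the low-rank constraint mentioned above, the toy network below (purely illustrative, not the trained networks from the study) integrates rank-one recurrent dynamics while a tonic contextual input shifts the operating point of the same low-dimensional dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dt, tau = 512, 0.1, 1.0

# Rank-one connectivity J = m n^T / N confines recurrence to one dimension
m = rng.standard_normal(N)
n = rng.standard_normal(N)
I_ctx = rng.standard_normal(N)   # input direction for the tonic context cue

def simulate(c, T=400):
    """Integrate tau dx/dt = -x + (m n^T / N) tanh(x) + c * I_ctx and return
    the trajectory projected onto m (the recurrent output direction)."""
    x = 0.1 * rng.standard_normal(N)
    traj = []
    for _ in range(T):
        phi = np.tanh(x)
        x += dt / tau * (-x + m * (n @ phi) / N + c * I_ctx)
        traj.append(m @ x / N)
    return np.array(traj)

# Varying the tonic contextual input moves the network along the same
# low-dimensional dynamics rather than adding new dimensions
for c in (0.0, 0.5, 1.0):
    print(f"context {c}: final projection {simulate(c)[-1]:.3f}")
```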
Does human perception rely on probabilistic message passing?
The idea that perception in humans relies on some form of probabilistic computation has become very popular over the last decades. However, it has been extremely difficult to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implement some form of probabilistic message passing (e.g., neural sampling or probabilistic population coding) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that confidence reports in a hierarchical task reveal that uncertainty is represented at both lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples, in agreement with the message-passing hypothesis and against standard accounts such as accumulation of sensory evidence or predictive coding.
A precise and adaptive neural mechanism for predictive temporal processing in the frontal cortex
The theory of predictive processing posits that the brain computes expectations to process information predictively. Empirical evidence in support of this theory, however, is scarce and largely limited to sensory areas. Here, we report a precise and adaptive mechanism in the frontal cortex of non-human primates consistent with predictive processing of temporal events. We found that the speed of neural dynamics is precisely adjusted according to the average time of an expected stimulus. This speed adjustment, in turn, enables neurons to encode stimuli in terms of deviations from expectation. This lawful relationship was evident across multiple experiments and held true during learning: when temporal statistics underwent covert changes, neural responses underwent predictable changes that reflected the new mean. Together, these results highlight a precise mathematical relationship between temporal statistics in the environment and neural activity in the frontal cortex that may serve as a mechanism for predictive temporal processing.
Neurocognitive mechanisms of proactive temporal attention: challenging oscillatory and cortico-centered models
To survive in a rapidly changing world, the brain predicts the future state of the world and proactively adjusts perception, attention and action. A key to efficient interaction is to predict and prepare for not only “where” and “what” things will happen, but also “when”. I will present studies in healthy and neurological populations that investigated the cognitive architecture and neural basis of temporal anticipation. First, influential ‘entrainment’ models suggest that anticipation in rhythmic contexts, e.g. music or biological motion, uniquely relies on the alignment of attentional oscillations to external rhythms. Using computational modeling and EEG, I will show that cortical neural patterns previously associated with entrainment in fact overlap with interval-timing mechanisms that are used in aperiodic contexts. Second, temporal prediction and attention have commonly been associated with cortical circuits. Studying neurological populations with subcortical degeneration, I will present data that point to a double dissociation between rhythm- and interval-based prediction in the cerebellum and basal ganglia, respectively, and will demonstrate a role for the cerebellum in attentional control of perceptual sensitivity in time. Finally, using EEG in neurodegenerative patients, I will demonstrate that the cerebellum controls temporal adjustment of cortico-striatal neural dynamics, and use computational modeling to identify cerebellar-controlled neural parameters. Altogether, these findings reveal functional and neural context-specificity, as well as subcortical contributions to temporal anticipation, revising our understanding of dynamic cognition.
Design principles of adaptable neural codes
Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.
Deriving local synaptic learning rules for efficient representations in networks of spiking neurons
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity works only under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. In this scheme, inhibition learns to locally balance excitatory input in individual dendritic compartments, and can thereby modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
Cognition is Rhythm
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (>35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
The Geometry of Decision-Making
Choosing among spatially distributed options is a central challenge for animals, from deciding among alternative potential food sources or refuges, to choosing with whom to associate. Here, using an integrated theoretical and experimental approach (employing immersive Virtual Reality), with both invertebrate and vertebrate models—the fruit fly, desert locust and zebrafish—we consider the recursive interplay between movement and collective vectorial integration in the brain during decision-making regarding options (potential ‘targets’) in space. We reveal that the brain repeatedly breaks multi-choice decisions into a series of abrupt (critical) binary decisions in space-time where organisms switch, spontaneously, from averaging vectorial information among, to suddenly excluding one of, the remaining options. This bifurcation process repeats until only one option—the one ultimately selected—remains. Close to each bifurcation the ‘susceptibility’ of the system exhibits a sharp increase, inevitably causing small differences among the remaining options to become amplified; a property that both comes ‘for free’ and is highly desirable for decision-making. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Through a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, the performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
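The expectation effect described here, stronger reliance on the prior when sensory evidence is weak, falls directly out of Bayesian evidence combination. A small illustrative calculation (hypothetical numbers, not the study's task parameters):

```python
import numpy as np

def p_choose_A(noise_sd, p_prior_A, a=1.0, n=100_000):
    """Probability an ideal observer chooses stimulus A when A (mean +a) is
    shown, given Gaussian sensory noise and a prior P(A) = p_prior_A."""
    rng = np.random.default_rng(3)
    x = a + noise_sd * rng.standard_normal(n)   # noisy measurements
    llr = 2 * a * x / noise_sd**2               # log likelihood ratio, A vs B
    log_odds = np.log(p_prior_A / (1 - p_prior_A)) + llr
    return np.mean(log_odds > 0)

for noise in (0.5, 2.0):                        # strong vs weak evidence
    flat, biased = p_choose_A(noise, 0.5), p_choose_A(noise, 0.8)
    print(f"noise={noise}: flat prior {flat:.2f} -> biased prior {biased:.2f}")
# With strong evidence the prior barely matters; with weak evidence the
# biased prior shifts choices substantially, mirroring the reported effect.
```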
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is built around workflows for studying some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented biophysically principled tool to link mechanism to meaning in human E/MEG signals.
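HNN also ships with a Python interface, hnn-core. A minimal sketch of the kind of simulation it supports is shown below; the drive parameters are placeholders rather than values from the talk or its tutorials:

```python
from hnn_core import jones_2009_model, simulate_dipole

net = jones_2009_model()   # the canonical HNN neocortical column model

# Add a brief proximal evoked drive (all parameters here are illustrative)
weights_ampa = {'L2_basket': 0.004, 'L2_pyramidal': 0.002,
                'L5_basket': 0.004, 'L5_pyramidal': 0.002}
net.add_evoked_drive('evprox1', mu=40.0, sigma=5.0, numspikes=1,
                     weights_ampa=weights_ampa, location='proximal')

# Simulate the primary current dipole, directly comparable to
# source-localized M/EEG data
dipoles = simulate_dipole(net, tstop=170.0, n_trials=1)
dipoles[0].plot()
```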
Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; (2) how to trade off very different experimental resources, like the number of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
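Point (1), tensor component analysis, can be illustrated in a few lines. This sketch uses synthetic data and assumes the tensorly package: it builds a neurons x time x trials tensor with a planted across-trial learning trend and recovers it with a CP/PARAFAC decomposition:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Synthetic data tensor: neurons x time x trials, one planted component
rng = np.random.default_rng(4)
neurons, time, trials = 50, 100, 80
u = rng.random(neurons)                                  # neuron loadings
v = np.exp(-0.5 * ((np.arange(time) - 40) / 10) ** 2)    # within-trial dynamic
w = np.linspace(0.2, 1.0, trials)                        # across-trial trend
data = np.einsum('i,j,k->ijk', u, v, w)
data += 0.05 * rng.random((neurons, time, trials))       # noise

# TCA returns, per component, a neuron factor, a within-trial time course,
# and a cross-trial (e.g. learning-related) trend
weights, factors = non_negative_parafac(tl.tensor(data), rank=1)
neuron_f, time_f, trial_f = factors
print(np.corrcoef(trial_f[:, 0], w)[0, 1])   # ~1: learning trend recovered
```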
Measuring behavior to measure the brain
Animals produce behavior by responding to a mixture of cues that arise both externally (sensory) and internally (neural dynamics and states). These cues are continuously produced and can be combined in different ways depending on the needs of the animal. However, the integration of these external and internal cues remains difficult to understand in natural behaviors. To address this gap, we have developed an unsupervised method to identify internal states from behavioral data, and have applied it to the study of a dynamic social interaction. During courtship, Drosophila melanogaster males pattern their songs using cues from their partner. This sensory-driven behavior dynamically modulates courtship directed at their partner. We use our unsupervised method to identify how the animal integrates sensory information into distinct underlying states. We then use this to identify the role of courtship neurons in either integrating incoming information or directing the production of the song, roles that were previously hidden. Our results reveal how animals compose behavior from previously unidentified internal states, a necessary step for quantitative descriptions of animal behavior that link environmental cues, internal needs, neuronal activity, and motor outputs.
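As a generic illustration of unsupervised internal-state discovery from behavioral data (not the authors' specific method), one can fit a hidden Markov model to a time series of behavioral features; the sketch below uses the ssm state-space modeling package with placeholder data:

```python
import numpy as np
import ssm   # state-space models package (assumed installed)

rng = np.random.default_rng(7)
T, D, K = 5000, 4, 3                        # time bins, features, states
features = rng.standard_normal((T, D))      # placeholder behavioral features

# Fit a K-state HMM with Gaussian emissions to the feature time series
hmm = ssm.HMM(K, D, observations="gaussian")
hmm.fit(features, method="em", num_iters=50)

states = hmm.most_likely_states(features)   # inferred internal-state sequence
print(np.bincount(states) / T)              # fraction of time in each state
```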
Capacitance clamp - artificial capacitance in biological neurons via dynamic clamp
A basic time scale in neural dynamics, from single cells to the network level, is the membrane time constant - set by a neuron’s input resistance and its capacitance. Interestingly, the membrane capacitance appears to be more dynamic than previously assumed, with implications for neural function and pathology. Indeed, altered membrane capacitance has been observed in reaction to physiological changes like neural swelling, but also in ageing and Alzheimer's disease. Importantly, according to theory, even small changes of the capacitance can affect neuronal signal processing, e.g. increase network synchronization or facilitate transmission of high frequencies. Experimentally, robust methods to modify the capacitance of a neuron have been missing. Here, we present the capacitance clamp - an electrophysiological method for capacitance control based on an unconventional application of the dynamic clamp. In its original form, dynamic clamp mimics additional synaptic or ionic conductances by injecting their respective currents. Whereas a conductance directly governs a current, the membrane capacitance determines how fast the voltage responds to a current. Accordingly, capacitance clamp mimics an altered capacitance by injecting a dynamic current that slows down or speeds up the voltage response. For the required dynamic current, the experimenter only has to specify the original cell capacitance and the desired target capacitance. In particular, capacitance clamp requires no detailed model of the present conductances and thus can be applied in any excitable cell. To validate the capacitance clamp, we performed numerical simulations of the protocol and applied it to modify the capacitance of cultured neurons. First, we simulated capacitance clamp in conductance-based neuron models and analysed impedance and firing frequency to verify the altered capacitance. Second, in dentate gyrus granule cells from rats, we could reliably control the capacitance in a range of 75 to 200% of the original capacitance and observed pronounced changes in the shape of the action potentials: increasing the capacitance reduced after-hyperpolarization amplitudes and slowed down repolarization. To conclude, we present a novel tool for electrophysiology: the capacitance clamp provides reliable control over the capacitance of a neuron and thereby opens a new way to study the temporal dynamics of excitable cells.
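The principle can be demonstrated on a toy passive membrane. Unlike the real protocol, which estimates the required current from the recorded voltage alone, this sketch cheats by using the known model currents; it only shows why an injected current proportional to the membrane current makes the cell behave as if it had a different capacitance.

```python
import numpy as np

# Passive membrane: C dV/dt = -gL (V - EL) + I_ext + I_clamp.
# Choosing I_clamp = (C / C_target - 1) * (-gL (V - EL) + I_ext) yields
# C_target dV/dt = -gL (V - EL) + I_ext, i.e. an effective capacitance
# C_target (here twice the real one).
C, C_target = 100.0, 200.0        # pF
gL, EL, dt = 10.0, -70.0, 0.01    # nS, mV, ms

def step_response(clamped):
    v, trace = EL, []
    for t in np.arange(0.0, 60.0, dt):
        I_ext = 200.0 if t >= 10.0 else 0.0   # pA current step at 10 ms
        I_mem = -gL * (v - EL) + I_ext        # known model currents (a cheat)
        I_clamp = (C / C_target - 1.0) * I_mem if clamped else 0.0
        v += dt * (I_mem + I_clamp) / C
        trace.append(v)
    return np.array(trace)

control, clamped = step_response(False), step_response(True)
i = int(15.0 / dt)   # 5 ms after step onset
print(f"control: {control[i]:.1f} mV, clamped: {clamped[i]:.1f} mV")
# The clamped trace rises more slowly: effective time constant
# C_target / gL = 20 ms instead of C / gL = 10 ms, same steady state.
```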
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to thousands, and even tens of thousands, of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior consistently requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features, using models whose parameters remain fixed over long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
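As a rough illustration of these definitions (not the lab's actual pipeline), the sketch below extracts neural modes and their latent dynamics from a synthetic population recording via PCA/SVD:

```python
import numpy as np

# Synthetic population recording: neurons x time points, driven by 3 latents
rng = np.random.default_rng(5)
n_neurons, n_times = 120, 2000
latents = np.cumsum(rng.standard_normal((3, n_times)), axis=1)
mixing = rng.standard_normal((n_neurons, 3))
rates = mixing @ latents + 0.5 * rng.standard_normal((n_neurons, n_times))

# Neural modes = dominant covariation patterns (PCA via SVD)
centered = rates - rates.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
n_modes = np.searchsorted(np.cumsum(var_explained), 0.9) + 1
print(f"{n_modes} modes capture 90% of the variance")   # ~3: manifold dim.

# Latent dynamics = time-dependent activation of the modes
latent_dynamics = U[:, :n_modes].T @ centered
```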
The neural dynamics of causal inference across the cortical hierarchy
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens to thousands, and maybe even tens of thousands, of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low-dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
Learning and the Origins of Consciousness: An Evolutionary Approach
Over the last fifteen years, Simona Ginsburg and I developed an evolutionary approach to studying basic consciousness, suggesting that the evolution of learning drove the evolutionary transition from non-conscious to conscious animals. I present the rationale underlying this thesis, which has led to the identification of a capacity that we call the evolutionary transition marker: when we find evidence of it, we have evidence that the major evolutionary transition in which we are interested has gone to completion. I then put forward our proposal that the evolutionary marker of basic consciousness is a complex form of associative learning that we call unlimited associative learning (UAL), and that the evolution of this capacity drove the transition to consciousness. I discuss the implications of this thesis for questions pertaining to the neural dynamics that constitute consciousness, to its taxonomic distribution, and to the ecological context in which it first emerged. I end by pointing to some of the ways in which the relationship between UAL and consciousness can be experimentally tested in humans and in non-human animals.
Neural dynamics underlying temporal inference
Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inadvertently transformed as it passes through the nervous system. With such uncertainty pervading our nervous systems, we should expect much of human and animal behavior to rely on inference that incorporates an important additional source of information: prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference, with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem: how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery, leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
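The Bayesian computation underlying interval reproduction of this kind is compact enough to write out. The sketch below implements a Bayes least-squares estimator with scalar (Weber-like) measurement noise on a uniform prior support; the numbers are illustrative, not the exact task parameters:

```python
import numpy as np

ts = np.linspace(600, 1000, 401)   # ms, uniform prior support over intervals
wm = 0.1                           # Weber fraction of the noisy measurement

def bls_estimate(tm):
    """Posterior-mean (Bayes least-squares) estimate of the interval given a
    measurement tm with Gaussian noise whose SD scales with the interval."""
    likelihood = np.exp(-0.5 * ((tm - ts) / (wm * ts)) ** 2) / (wm * ts)
    posterior = likelihood / likelihood.sum()   # uniform prior
    return np.sum(posterior * ts)               # minimizes squared error

for tm in (600.0, 800.0, 1000.0):
    print(f"measured {tm:.0f} ms -> estimate {bls_estimate(tm):.0f} ms")
# Estimates regress toward the middle of the prior, the classic Bayesian
# signature observed in interval-reproduction behavior.
```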
Mapping the neural dynamics of social dominance and defeat
Predictive processing in the macaque frontal cortex during time estimation
According to the theory of predictive processing, expectations modulate neural activity so as to optimize the processing of sensory inputs expected in the current environment. While there is accumulating evidence that the brain indeed operates under this principle, most of the attention has been placed on mechanisms that rely on static coding properties of neurons. The potential contribution of dynamical features, such as those reflected in the evolution of neural population dynamics, has thus far been overlooked. In this talk, I will present evidence for a novel mechanism for predictive processing in the temporal domain that relies on neural population dynamics. I will use recordings from the frontal cortex of macaques trained on a time interval reproduction task and show how neural dynamics can be directly related to animals’ temporal expectations, both in a stationary environment and during learning.
Linking dimensionality to computation in neural networks
The link between behavior, learning, and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how it is possible to develop a theory that bridges these three levels (animal behavior, learning, and network connectivity) based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link complex animal behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in such geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). I will build these results on analyses of neural recordings, using theoretical and computational tools that blend dynamical systems, artificial intelligence, and statistical physics.
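A common operational measure of the dimensionality referred to here is the participation ratio of the covariance spectrum, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2. The sketch below (an illustration of the measure, not the speaker's specific pipeline) computes it from a samples-by-neurons activity matrix:

```python
import numpy as np

def participation_ratio(activity):
    """Dimensionality of neural activity via the participation ratio.

    activity: (n_samples, n_neurons) array of firing rates.
    PR = (sum_i lam_i)^2 / sum_i lam_i^2, where lam_i are eigenvalues of
    the neural covariance matrix. PR is 1 for perfectly correlated
    activity and n_neurons for isotropic (full-dimensional) activity.
    """
    cov = np.cov(activity, rowvar=False)
    lam = np.linalg.eigvalsh(cov)
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(1)
n, k = 100, 5   # 100 neurons, activity confined to a k-dimensional subspace
latent = rng.standard_normal((5000, k))
mixing = rng.standard_normal((k, n))
low_d = latent @ mixing + 0.1 * rng.standard_normal((5000, n))

print("low-dimensional activity :", participation_ratio(low_d))    # near k
print("unstructured activity    :",
      participation_ratio(rng.standard_normal((5000, n))))          # near n
```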
Simons-Emory Workshop on Neural Dynamics: What could neural dynamics have to say about neural computation, and do we know how to listen?
Speakers will deliver focused 10-minute talks, with periods reserved for broader discussion on topics at the intersection of neural dynamics and computation. Organizer & Moderator: Chethan Pandarinath - Emory University and Georgia Tech. Speakers & Discussants: Adrienne Fairhall - U Washington; Mehrdad Jazayeri - MIT; John Krakauer - Johns Hopkins; Francesca Mastrogiuseppe - Gatsby / UCL; Abigail Person - U Colorado; Abigail Russo - Princeton; Krishna Shenoy - Stanford; Saurabh Vyas - Columbia.
Low dimensional models and electrophysiological experiments to study neural dynamics in songbirds
Birdsong emerges when a set of highly interconnected brain areas manages to generate a complex output. The similarities between birdsong production and human speech have positioned songbirds as unique animal models for studying learning and production of this complex motor skill. In this work, we developed a low-dimensional model for a neural network in which the variables were the average activities of different neural populations within the nuclei of the song system. This neural network is active during production, perception, and learning of birdsong. We performed electrophysiological experiments to record neural activity from one of these nuclei and found that the low-dimensional model could reproduce the neural dynamics observed during the experiments. The model also reproduced the respiratory motor patterns used to generate song. We showed that sparse activity in one of the neural nuclei could drive more complex activity downstream in the neural network. This interdisciplinary work shows how low-dimensional neural models can be a valuable tool for studying the emergence of complex motor tasks.
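For readers unfamiliar with this class of models, here is a generic sketch of low-dimensional average-rate dynamics for two coupled populations, one excitatory and one inhibitory, in the Wilson-Cowan style; the weights and thresholds are assumptions chosen to place the model in an oscillatory regime, not the parameters fitted to the song system:

```python
import numpy as np

def f(x):
    """Sigmoidal rate function mapping input current to population activity."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters (assumed), chosen to put the model in an
# oscillatory regime rather than fitted to songbird data.
w_ee, w_ei, w_ie, w_ii = 10.0, 10.0, 12.0, 2.0   # coupling weights
theta_e, theta_i = 2.5, 6.0                      # activation thresholds
tau, dt, steps = 10.0, 0.1, 5000                 # ms

e, i = 0.1, 0.1
e_trace = np.empty(steps)
for t in range(steps):
    # Average-rate dynamics of the excitatory (e) and inhibitory (i) populations.
    de = (-e + f(w_ee * e - w_ei * i - theta_e)) / tau
    di = (-i + f(w_ie * e - w_ii * i - theta_i)) / tau
    e, i = e + dt * de, i + dt * di
    e_trace[t] = e

# With an unstable fixed point, the late-time range reveals a limit cycle
# whose excitatory output can serve as a rhythmic (respiratory-like) drive.
print("late-time excitatory range:",
      e_trace[steps // 2:].min(), e_trace[steps // 2:].max())
```

In the oscillatory regime, the excitatory output provides the kind of rhythmic drive such low-dimensional models use to reproduce respiratory motor gestures.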
From oscillations to laminar responses - characterising the neural circuitry of autobiographical memories
Autobiographical memories are the ghosts of our past. Through them we visit places long departed, see faces once familiar, and hear voices now silent. These often decades-old personal experiences can be recalled on a whim or come unbidden into our everyday consciousness. Autobiographical memories are crucial to cognition because they facilitate almost everything we do, endow us with a sense of self and underwrite our capacity for autonomy. They are often compromised by common neurological and psychiatric pathologies with devastating effects. Despite autobiographical memories being central to everyday mental life, there is no agreed model of autobiographical memory retrieval, and we lack an understanding of the neural mechanisms involved. This precludes principled interventions to manage or alleviate memory deficits, and to test the efficacy of treatment regimens. This knowledge gap exists because autobiographical memories are challenging to study – they are immersive, multi-faceted, multi-modal, can stretch over long timescales and are grounded in the real world. One missing piece of the puzzle concerns the millisecond neural dynamics of autobiographical memory retrieval. Surprisingly, there are very few magnetoencephalography (MEG) studies examining such recall, despite the important insights this could offer into the activity and interactions of key brain regions such as the hippocampus and ventromedial prefrontal cortex. In this talk I will describe a series of MEG studies aimed at uncovering the neural circuitry underpinning the recollection of autobiographical memories, and how this changes as memories age. I will end by describing our progress on leveraging an exciting new technology – optically pumped MEG (OP-MEG) – which, when combined with virtual reality, offers the opportunity to examine millisecond neural responses from the whole brain, including deep structures, while participants move within a virtual environment, with the attendant head motion and vestibular inputs.
Dynamically relevant motifs in inhibition-dominated networks
Many networks in the nervous system possess an abundance of inhibition, which serves to shape and stabilize neural dynamics. The neurons in such networks exhibit intricate patterns of connectivity whose structure controls the allowed patterns of neural activity. In this work, we examine inhibitory threshold-linear networks whose dynamics are constrained by an underlying directed graph. We develop a set of parameter-independent graph rules that enable us to predict features of the dynamics, such as emergent sequences and dynamic attractors, from properties of the graph. These rules provide a direct link between the structure and function of these networks, and may provide new insights into how connectivity shapes dynamics in real neural circuits.
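Networks of this kind are often formalized as combinatorial threshold-linear networks (CTLNs), with dynamics dx/dt = -x + [Wx + theta]_+ and W determined by the directed graph. The sketch below uses a commonly cited parameterization (epsilon = 0.25, delta = 0.5, theta = 1; treat these as assumptions) on a 3-cycle, the textbook example where the graph rules predict a sequential attractor:

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Standard CTLN weight matrix from a directed graph.

    adj[i, j] = 1 encodes an edge j -> i: inhibition is weaker (-1 + eps)
    along edges and stronger (-1 - delta) otherwise; the diagonal is zero.
    """
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

# 3-cycle graph 0 -> 1 -> 2 -> 0: graph rules predict a sequential attractor.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
W, theta, dt = ctln_weights(adj), 1.0, 0.01

x = np.array([0.2, 0.0, 0.0])                           # small kick to neuron 0
trace = np.empty((6000, 3))
for t in range(6000):
    x = x + dt * (-x + np.maximum(0.0, W @ x + theta))  # threshold-linear dynamics
    trace[t] = x

# Sampling the most-active neuron over time exposes the 0 -> 1 -> 2 sequence.
print(np.argmax(trace[3000::300], axis=1))
```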
Motor Cortical Control of Vocal Interactions in a Neotropical Singing Mouse
Using sounds for social interactions is common across many taxa. Humans engaged in conversation, for example, take rapid turns to go back and forth. This ability to act upon sensory information to generate a desired motor output is a fundamental feature of animal behavior. How the brain enables such flexible sensorimotor transformations, for example during vocal interactions, is a central question in neuroscience. Seeking a rodent model to fill this niche, we are investigating neural mechanisms of vocal interaction in Alston’s singing mouse (Scotinomys teguina) – a neotropical rodent native to the cloud forests of Central America. We discovered sub-second temporal coordination of advertisement songs (counter-singing) between males of this species – a behavior that requires the rapid modification of motor outputs in response to auditory cues. We leveraged this natural behavior to probe the neural mechanisms that generate and allow fast and flexible vocal communication. Using causal manipulations, we recently showed that an orofacial motor cortical area (OMC) in this rodent is required for vocal interactions (Okobi*, Banerjee* et al., 2019). Subsequently, in electrophysiological recordings, I find neurons in OMC that track the initiation, termination, and relative timing of songs. Interestingly, persistent neural dynamics during song progression stretch or compress on every trial to match the total song duration (Banerjee et al., in preparation). These results demonstrate robust cortical control of vocal timing in a rodent and upend the current dogma that motor cortical control of vocal output is evolutionarily restricted to the primate lineage.
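A simple way to illustrate the temporal-scaling observation (an analysis sketch on synthetic data, not the authors' pipeline) is to linearly rescale each trial's rate trace onto a common phase axis and check whether trials collapse onto a single trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)

def profile(p):
    """Underlying rate profile as a function of song phase in [0, 1]."""
    return np.exp(-0.5 * ((p - 0.4) / 0.15) ** 2)

# Simulated trials: the same profile stretched to each trial's song
# duration (the scaling hypothesis), plus noise. Durations are illustrative.
durations = rng.uniform(0.8, 1.6, size=20)   # seconds
trials = [profile(np.linspace(0, 1, int(d * 1000)))
          + 0.05 * rng.standard_normal(int(d * 1000))
          for d in durations]

# Time-warping analysis: resample every trial onto a common phase axis.
phase = np.linspace(0, 1, 500)
warped = np.stack([np.interp(phase, np.linspace(0, 1, len(tr)), tr)
                   for tr in trials])

# If dynamics scale with duration, across-trial variance after warping is small.
print("across-trial std after warping:", warped.std(axis=0).mean())
```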
Rational thoughts in neural codes
First, we describe a new method for inferring the mental model of an animal performing a natural task. We use probabilistic methods to compute the most likely mental model based on an animal’s sensory observations and actions. This also reveals dynamic beliefs that would be optimal according to the animal’s internal model, and thus provides a practical notion of “rational thoughts.” Second, we construct a neural coding framework by which these rational thoughts, their computational dynamics, and actions can be identified within the manifold of neural activity. We illustrate the value of this approach by training an artificial neural network to perform a generalization of a widely used foraging task. We analyze the network’s behavior to find rational thoughts, and successfully recover the neural properties that implemented those thoughts, providing a way of interpreting the complex neural dynamics of the artificial brain. Joint work with Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, and Paul Schrater.
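A toy version of this program (all task and model details below are assumptions for illustration): simulate an agent whose internal model posits a hazard rate for a switching world, then recover that internal hazard by maximizing the likelihood of the agent's actions under each candidate model's optimal beliefs:

```python
import numpy as np

rng = np.random.default_rng(3)
T, h_true, h_int, p_obs = 5000, 0.05, 0.15, 0.8  # world vs. agent's assumed hazard

# World: hidden binary state flips with hazard h_true; observations 80% correct.
state = np.empty(T, dtype=int)
state[0] = 0
for t in range(1, T):
    state[t] = state[t - 1] ^ (rng.random() < h_true)
obs = np.where(rng.random(T) < p_obs, state, 1 - state)

def beliefs(h):
    """Posterior P(state = 1) over time for an agent assuming hazard h."""
    b, out = 0.5, np.empty(T)
    for t in range(T):
        b = b * (1 - h) + (1 - b) * h                 # predict: state may flip
        l1, l0 = (p_obs, 1 - p_obs) if obs[t] == 1 else (1 - p_obs, p_obs)
        b = b * l1 / (b * l1 + (1 - b) * l0)          # update on observation
        out[t] = b
    return out

# Agent acts via a softmax on its (h_int-based) beliefs; we see only actions.
b_int = beliefs(h_int)
actions = (rng.random(T) < 1 / (1 + np.exp(-8 * (b_int - 0.5)))).astype(int)

def log_lik(h):
    """Likelihood of the observed actions under a candidate internal model."""
    p1 = 1 / (1 + np.exp(-8 * (beliefs(h) - 0.5)))
    return np.sum(np.log(np.where(actions == 1, p1, 1 - p1)))

grid = np.linspace(0.01, 0.4, 40)
print("recovered internal hazard:", grid[np.argmax([log_lik(h) for h in grid])])
```

The recovered hazard characterizes the agent's internal model rather than the true world, and the trajectory `beliefs(h_int)` plays the role of the "rational thoughts" one would then seek in neural activity.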
Working Memory 2.0
Working memory is the sketchpad of consciousness, the fundamental mechanism the brain uses to gain volitional control over its thoughts and actions. For the past 50 years, working memory has been thought to rely on cortical neurons that fire continuous impulses that keep thoughts “online”. However, new work from our lab has revealed more complex dynamics. The impulses fire sparsely and interact with brain rhythms of different frequencies. Higher frequency gamma (> 35 Hz) rhythms help carry the contents of working memory while lower frequency alpha/beta (~8-30 Hz) rhythms act as control signals that gate access to and clear out working memory. In other words, a rhythmic dance between brain rhythms may underlie your ability to control your own thoughts.
Recurrent network models of adaptive and maladaptive learning
During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10,000-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, the RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
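To sketch the logic (with stand-in choices throughout: a one-step rate model fit by ridge regression replaces the trained multi-region RNN, and the data are simulated), one can fit interaction weights to multi-region activity and then decompose each neuron's input current by source region, the quantity used above to define communication subspaces:

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 2000, 60
region = np.repeat([0, 1, 2], n // 3)   # e.g. habenula, raphe, other (assumed)

# Ground-truth dynamics standing in for recorded brain-wide activity.
W_true = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)
R = np.empty((T, n))
R[0] = rng.standard_normal(n)
for t in range(1, T):
    R[t] = np.tanh(R[t - 1] @ W_true.T) + 0.05 * rng.standard_normal(n)

# Fit interaction weights: one-step ridge regression, r_{t+1} ~ tanh(W r_t).
X = R[:-1]
Y = np.arctanh(np.clip(R[1:], -0.999, 0.999))
lam = 1.0
W_fit = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y).T

# Input current to every neuron from each source region, at each time step.
currents = np.stack([R[:-1][:, region == g] @ W_fit[:, region == g].T
                     for g in range(3)])        # shape: (region, T-1, n)

# A 'communication subspace' for a region: top PCs of the current it sends.
u, s, vt = np.linalg.svd(currents[1] - currents[1].mean(0), full_matrices=False)
print("variance of region-1 input captured by top 3 dims:",
      (s[:3] ** 2).sum() / (s ** 2).sum())
```

Tracking how these per-region currents and the fitted weights change across behavioural states is the analysis that separates dynamics reshaping from connectivity change in the abstract above.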
Effects of global inhibition on models of neural dynamics
Bernstein Conference 2024
Neural Dynamics of Memory Formation in the Primate Hippocampus
Bernstein Conference 2024
Coordinated cortico-cerebellar neural dynamics underlying neuroprosthetic learning
COSYNE 2022
Disentangling neural dynamics with fluctuating hidden Markov models
COSYNE 2022
Emergent behavior and neural dynamics in artificial agents tracking turbulent plumes
COSYNE 2022
Linking neural dynamics across macaque V4, IT, and PFC to trial-by-trial object recognition behavior
COSYNE 2022
Disentangling input dynamics from intrinsic neural dynamics in modeling of neural-behavioral data
COSYNE 2023
Distinct neural dynamics in prefrontal and premotor cortex during decision-making
COSYNE 2023
Emergent compositional reasoning from recurrent neural dynamics
COSYNE 2023
The Exponential Family Variational Kalman Filter for Real-time Neural Dynamics
COSYNE 2023
Human Neural Dynamics of Elements in Natural Conversation – A Deep Learning Approach
COSYNE 2023
Large scale neural dynamics that govern normal and disrupted breathing
COSYNE 2023
Parsing neural dynamics with infinite recurrent switching linear dynamical systems
COSYNE 2023
Propofol anesthesia destabilizes neural dynamics across cortex
COSYNE 2023
Switching autoregressive low-rank tensor (SALT) models for neural dynamics
COSYNE 2023
Brain-wide neural dynamics accompanying fast goal-directed sensorimotor learning
COSYNE 2025
Brain-like neural dynamics for behavioral control develop through reinforcement learning
COSYNE 2025
Capturing condition dependence in neural dynamics with Gaussian process linear dynamical systems
COSYNE 2025
Composition of neural dynamics underlies distinct policy for navigation
COSYNE 2025
Conditional Diffusion Framework for Analyzing Neural Dynamics Across Multiple Contexts
COSYNE 2025
Enhancing the causal predictive power in recurrent network models of neural dynamics
COSYNE 2025
From Chaos to Coherence: Impact of High-Order Correlations on Neural Dynamics
COSYNE 2025
Linking genotypic variation to neural dynamics during dexterous reaching
COSYNE 2025
Mechanistic biases in data-constrained models of neural dynamics
COSYNE 2025
Discovering the individual differences in shared representations of neural dynamics and ethological behaviors
FENS Forum 2024
Hold your breath: A causal investigation of respiratory influence on neural dynamics
FENS Forum 2024
Inter-regional neural dynamics underlying self-paced action decisions
FENS Forum 2024
Interpretable representations of neural dynamics using geometric deep learning
FENS Forum 2024
An EEG investigation for individual differences in time perception: Unraveling neural dynamics through serial dependency
FENS Forum 2024
Neural bursts and firing regularity as convergent neural dynamics in globus pallidus for genetic dystonia syndromes
FENS Forum 2024
Neural dynamics of choice behavior: Influence of prior choices on basal ganglia-anterolateral motor cortex (ALM) circuitry and optogenetic modulation of the indirect pathway
FENS Forum 2024
The neural dynamics of complex maternal behavior
FENS Forum 2024
Neural dynamics during a visual attention and perception task in the superior colliculus
FENS Forum 2024
Neural dynamics of mood-influenced driving using fMRI: Connectivity patterns and speed variations
FENS Forum 2024
Neural dynamics of processing natural and digital emotional vocalizations
FENS Forum 2024
Neural dynamics and representational drift of inhibitory neurons in mouse auditory cortex
FENS Forum 2024
Neural dynamics underlying human vocalization
FENS Forum 2024
Preconfigured cortico-thalamic neural dynamics constrain movement-associated thalamic activity
FENS Forum 2024
Prefrontal-hippocampal neural dynamics underlying impulsive-like behavior in food addiction
FENS Forum 2024