Network Dynamics

Discover seminars, jobs, and research tagged with network dynamics across World Wide.
52 curated items · 31 Seminars · 17 ePosters · 4 Positions
Updated 3 days ago
Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
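
To make the core mechanic concrete, here is a minimal linear sketch of prediction-error-driven updating: an internal representation is adjusted until it explains the input. The dimensions, weights, and learning rate are invented for illustration; this is not Greco's model or PredNet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictive coding: a linear generative model x ~ W r.
# The latent representation r is updated to reduce the prediction
# error e = x - W r (illustrative sketch only, not PredNet).
n_obs, n_latent = 20, 5
W = rng.normal(size=(n_obs, n_latent))          # generative weights
r_true = rng.normal(size=n_latent)
x = W @ r_true + 0.05 * rng.normal(size=n_obs)  # noisy sensory input

r = np.zeros(n_latent)                          # initial belief
lr = 0.01
for _ in range(500):
    e = x - W @ r        # prediction error
    r += lr * W.T @ e    # error-driven update (gradient descent on ||e||^2)

print("remaining prediction error:", np.linalg.norm(x - W @ r))
```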

Position

Dr Pedro Goncalves

Neuro-Electronics Flanders (NERF), Belgium
Leuven, Belgium
Dec 5, 2025

Our lab was recently founded at Neuro-Electronics Flanders (NERF), Belgium. We are currently exploring a range of exciting topics at the intersection of computational neuroscience and probabilistic machine learning. In this context, we are looking for a PhD student and a postdoc interested in using computational tools to investigate how the complex biophysics of single neurons gives rise to the dynamics of neural populations. These projects will make use of our previously developed machine learning methods and, depending on the selected candidates' interests, can involve further method development. More details about the positions and the lab can be found at http://jobs.vib.be/j/57438 and http://jobs.vib.be/j/57439
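
As a flavor of how computational tools can link single-neuron models to data, here is a deliberately minimal rejection-sampling sketch of simulation-based inference. The toy simulator, prior, and tolerance are all invented for illustration and do not represent the lab's actual methods.

```python
import numpy as np

rng = np.random.default_rng(1)

# Cartoon of simulation-based inference: recover the gain of a toy
# rate model from an observed summary statistic by rejection sampling.
def simulate(gain, n_steps=1000):
    drive = rng.normal(1.0, 0.2, size=n_steps)
    rates = np.maximum(gain * drive, 0.0)   # rectified-linear rate model
    return rates.mean()                     # summary statistic: mean rate

observed = 2.0                              # "experimental" mean rate
prior = rng.uniform(0.0, 5.0, size=5000)    # prior over the unknown gain
accepted = [g for g in prior if abs(simulate(g) - observed) < 0.05]

print(f"posterior mean gain ~ {np.mean(accepted):.2f} "
      f"from {len(accepted)} accepted samples")
```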

Position · Computational Neuroscience

Prof Georges Debrégeas

Sorbonne Université
Paris, France
Dec 5, 2025

The zebrafish larva possesses a combination of assets – small dimensions, brain transparency, genetic tractability – that makes it a unique vertebrate model system for probing brain-scale neuronal dynamics. Using light-sheet microscopy, it is currently possible to monitor the activity of the entire brain at cellular resolution with functional calcium imaging, at about one full brain per second. The student will harness this unique opportunity to dissect the neural computation at play during sensory-driven navigation. Larvae aged 5-7 days will be partially restrained in agarose, with the tail left free. Real-time video monitoring of the tail beats will be used to infer virtual navigational parameters (displacement, reorientation); visual or thermal stimuli will be delivered to the larvae in a manner that simulates realistic navigation along light or thermal gradients. During this virtual sensory-driven navigation, brain activity will be monitored using two-photon light-sheet functional imaging. These experiments will provide rich datasets of whole-brain activity during a complex sensorimotor task. The network dynamics will be analysed to extract a finite number of brain states associated with various motor programs. Starting from spontaneous navigation phases (i.e. in the absence of varying sensory cues), the student will analyse how different sensory cues interfere with the network's endogenous dynamics to bias the probability of these brain states and ultimately favor movements along sensory gradients. For more information see: https://www.smartnets-etn.eu/whole-brain-network-dynamics-in-zebrafish-larvae-during-spontaneous-and-sensory-driven-virtual-navigation/

Position

Axel Hutt

INRIA
Strasbourg, France
Dec 5, 2025

The new research team NECTARINE at INRIA in Strasbourg, France, aims to create a synergy between clinicians and mathematical researchers to develop new healthcare technologies. The team works on stochastic microscopic network models to describe macroscopic experimental data, such as behavioral and/or encephalographic recordings. They collaborate closely with clinicians and align their research focus with clinical applications. Major scientific objectives are stochastic multi-scale simulations and mean-field descriptions of neural activity on the macroscopic scale. An additional objective is merging experimental data and numerical models using machine learning techniques. The team's clinical research focuses on neuromodulation of patients suffering from deficits in attention and temporal prediction. The team offers the possibility to apply for a permanent position as Chargé de Recherche (CR) or Directeur de Recherche (DR) in the field of mathematical neuroscience, with a strong focus on stochastic dynamics linking brain network modelling with experimental data.

Seminar · Neuroscience

Dopaminergic Network Dynamics

Veronica Alvarez & Anders Borgkvist
National Institute of Mental Health and Karolinska Institutet, respectively
Apr 24, 2025
Seminar · Neuroscience · Recording

Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex

Samuel Post
University of California, Riverside
Nov 28, 2023

Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or whether it is a readout of a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address these questions, as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern sensory discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task from naive to middle to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, we did not find that they accounted for increased performance; they were in fact correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.

Seminar · Neuroscience

Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task

Albert Compte
IDIBAPS
Nov 16, 2023

Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go-NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines the network dynamics underlying the process of learning to shield WM representations from distracting tasks.
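
The orthogonalization analyses described above reduce to a standard computation: principal angles between two population-activity subspaces. A minimal sketch on synthetic data follows; the dimensions and the top-3-PC subspace choice are arbitrary, and this is not the authors' analysis pipeline.

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(2)

# Estimate sample- and distractor-coding subspaces from (synthetic)
# trial-averaged activity, then measure how orthogonal they are.
n_neurons, n_conditions = 100, 30
sample_act = rng.normal(size=(n_neurons, n_conditions))
distractor_act = rng.normal(size=(n_neurons, n_conditions))

def top_pcs(X, k=3):
    # Columns spanning the top-k principal subspace of X.
    U, _, _ = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                            full_matrices=False)
    return U[:, :k]

angles = subspace_angles(top_pcs(sample_act), top_pcs(distractor_act))
print("principal angles (deg):", np.degrees(angles).round(1))
```

Angles near 90 degrees indicate orthogonal, distraction-resistant codes; small angles indicate overlapping subspaces, as seen early in training.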

Seminar · Neuroscience

Why “pauses” matter: breaks in respiratory behavior orchestrate piriform network dynamics

Lisa Roux
University of Bordeaux
Jun 18, 2023
Seminar · Neuroscience · Recording

My evolution in invasive human neurophysiology: From basal ganglia single units to chronic electrocorticography; Therapies orchestrated by patients' own rhythms

Philip A. Starr, MD, PhD & Prof. Hayriye Cagnan, PhD
University of California, San Francisco, USA / University of Oxford, UK
Apr 26, 2023

On Thursday, April 27th, we will host Hayriye Cagnan and Philip A. Starr. Hayriye Cagnan, PhD, is an associate professor at the MRC Brain Network Dynamics Unit, University of Oxford. She will tell us about “Therapies orchestrated by patients’ own rhythms”. Philip A. Starr, MD, PhD, is a neurosurgeon and professor of Neurological Surgery at the University of California, San Francisco. Besides his scientific presentation on “My evolution in invasive human neurophysiology: from basal ganglia single units to chronic electrocorticography”, he will give us a glimpse of the person behind the science. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!

Seminar · Neuroscience · Recording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 28, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
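
In the linearized regime the structure-dynamics relation is explicit: for dx/dt = (-I + W)x + noise, the stationary covariance solves a Lyapunov equation, so covariance statistics reflect how close the bulk of the connectivity spectrum comes to instability. Below is a minimal numpy/scipy illustration with arbitrary parameters, not the speaker's field-theoretic treatment.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)

# Linearized network dx/dt = A x + noise, with A = -I + W.
# The stationary covariance C solves A C + C A^T + D = 0.
N, g = 200, 0.8                       # network size, coupling strength
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
A = -np.eye(N) + W
D = np.eye(N)                         # independent unit-variance noise

C = solve_continuous_lyapunov(A, -D)  # solves A C + C A^T = -D

radius = np.abs(np.linalg.eigvals(W)).max()
offdiag = C[~np.eye(N, dtype=bool)]
print(f"bulk spectral radius of W ~ {radius:.2f}")
print(f"pairwise covariances: mean {offdiag.mean():.4f}, "
      f"std {offdiag.std():.4f}")
```

As g approaches 1, the dispersion of pairwise covariances grows sharply, which is the kind of signature that lets subsampled covariance statistics constrain the underlying connectivity.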

Seminar · Neuroscience · Recording

Hippocampal network dynamics during impaired working memory in epileptic mice

Maryam Pasdarnavab
Ewell lab, University of Bonn
Jan 31, 2023

Memory impairment is a common cognitive deficit in temporal lobe epilepsy (TLE). The hippocampus is severely altered in TLE, exhibiting multiple anatomical changes that lead to a hyperexcitable network capable of generating frequent epileptic discharges and seizures. In this study we investigated whether hippocampal involvement in epileptic activity drives working memory deficits, using bilateral LFP recordings from CA1 during task performance. We discovered that epileptic mice experienced focal rhythmic discharges (FRDs) while they performed the spatial working memory task. Spatial correlation analysis revealed that FRDs were often spatially stable on the maze and were most common around reward zones (25%) and delay zones (50%). Memory performance was correlated with the stability of FRDs, suggesting that spatially unstable FRDs interfere with working memory codes in real time.

Seminar · Neuroscience · Recording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 20, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
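
For readers who want the gist of the modeling framework, here is a minimal two-population attractor model of a binary decision, with the end-of-trial activity difference read out as a confidence proxy. All parameters are invented; this is a generic winner-take-all sketch, not the fitted network from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two mutually inhibiting populations; noise plus input bias decides
# which attractor wins. |r0 - r1| serves as a crude confidence proxy.
def trial(coherence, n_steps=2000, dt=1e-3, tau=0.02):
    r = np.zeros(2)
    inp = np.array([0.5 + coherence, 0.5 - coherence])
    for _ in range(n_steps):
        rec = 1.5 * r - 1.2 * r[::-1]   # self-excitation, cross-inhibition
        drive = inp + rec + 0.3 * rng.normal(size=2)
        r += dt / tau * (-r + np.tanh(np.maximum(drive, 0.0)))
    return int(r[0] > r[1]), abs(r[0] - r[1])

choices, confs = zip(*[trial(coherence=0.1) for _ in range(200)])
print(f"accuracy {np.mean(choices):.2f}, "
      f"mean confidence {np.mean(confs):.2f}")
```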

Seminar · Neuroscience · Recording

The Secret Bayesian Life of Ring Attractor Networks

Anna Kutschireiter
Spiden AG, Pfäffikon, Switzerland
Sep 6, 2022

Efficient navigation requires animals to track their position, velocity and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring-attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implement dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring-attractor networks can track both a HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identified the network motifs required to integrate angular velocity observations, e.g., through self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to sufficiently deviate from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as implementing dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
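
As background, the sketch below implements a plain rate-based ring attractor with cosine connectivity: a localized activity bump forms, and its angular position can be decoded as a heading estimate. Parameters are arbitrary, and the sketch does not implement the circular Kalman filter or the amplitude-based uncertainty code developed in the talk.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ring of rate neurons: local excitation plus broad inhibition
# (cosine connectivity) sustains a self-organized bump of activity.
N = 64
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
W = 8 / N * (np.cos(theta[:, None] - theta[None, :]) - 0.5)

r = np.maximum(rng.normal(0.1, 0.05, size=N), 0.0)  # noisy initial state
dt, tau = 1e-3, 0.02
for _ in range(3000):
    drive = W @ r + 1.0                              # recurrent + tonic input
    r += dt / tau * (-r + np.maximum(drive, 0.0))

# Population-vector decoding of the bump position (heading estimate).
heading = np.angle(np.sum(r * np.exp(1j * theta)))
print(f"bump amplitude {r.max():.2f}, decoded heading "
      f"{np.degrees(heading):.1f} deg")
```

In the Bayesian reading proposed in the talk, the bump amplitude would additionally track the reliability of the heading estimate, which requires the network to operate away from the attractor state.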

Seminar · Neuroscience · Recording

Online Training of Spiking Recurrent Neural Networks​ With Memristive Synapses

Yigit Demirag
Institute of Neuroinformatics
Jul 5, 2022

Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge, due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of spiking RNNs. In this talk, I will present our recent work introducing a PyTorch simulation framework for memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, the updates are difficult to implement on the memristive substrate due to substantial device non-idealities. I will discuss several widely adopted weight update schemes that primarily aim to cope with these device non-idealities and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
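
One widely used family of update schemes for low-resolution devices accumulates high-precision gradients in software and emits a coarse, noisy device pulse only when the accumulator crosses a threshold. The sketch below illustrates that idea with stand-in gradients and an invented noise model; it is not the talk's PyTorch framework or a faithful memristor model.

```python
import numpy as np

rng = np.random.default_rng(6)

n_weights = 1000
w = rng.normal(0.0, 0.1, size=n_weights)   # device conductances
acc = np.zeros(n_weights)                  # hidden gradient accumulator
pulse_size, threshold = 0.01, 0.05

for _ in range(100):                       # fake training iterations
    grad = rng.normal(0.0, 0.02, size=n_weights)   # stand-in gradients
    acc += grad
    pulse = np.where(np.abs(acc) >= threshold, np.sign(acc), 0.0)
    write_noise = 1.0 + 0.3 * rng.normal(size=n_weights)
    w -= pulse_size * pulse * write_noise  # coarse, non-ideal device update
    acc -= threshold * pulse               # discharge the emitted portion

print(f"{int((pulse != 0).sum())} weights pulsed on the final iteration")
```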

Seminar · Neuroscience · Recording

Heterogeneity and non-random connectivity in reservoir computing

Abigail Morrison
Jülich Research Centre & RWTH Aachen University, Germany
May 31, 2022

Reservoir computing is a promising framework to study cortical computation, as it is based on continuous, online processing and the requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of what would be biologically plausible mechanisms for transmitting the information represented by a reservoir and incorporating it in downstream computations. Another major issue is that we have as yet only modest insight into how the structural and dynamical features of a network influence its computational capacity, which is necessary not only for gaining an understanding of those features in biological brains, but also for exploiting reservoir computing as a neuromorphic application. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
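
A common task-free way to probe a reservoir's representational capacity is the classic linear memory-capacity measure: how well past inputs can be linearly reconstructed from the current reservoir state. The echo-state sketch below uses arbitrary parameters and is only one such measure, not necessarily the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(7)

# Drive a random tanh reservoir with white-noise input, then sum the
# squared correlations between delayed inputs and their best linear
# reconstructions from the reservoir state.
N, T, rho = 200, 5000, 0.9
W = rng.normal(size=(N, N))
W *= rho / np.abs(np.linalg.eigvals(W)).max()   # set spectral radius
w_in = rng.normal(size=N)

u = rng.uniform(-1, 1, size=T)
x, states = np.zeros(N), np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    states[t] = x

capacity = 0.0
for delay in range(1, 31):
    X, y = states[delay:], u[:-delay]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear readout
    capacity += np.corrcoef(X @ coef, y)[0, 1] ** 2
print(f"memory capacity summed over delays 1-30: {capacity:.1f}")
```

Because the measure needs only a linear regression per delay, it allows systematic comparison of reservoirs with different heterogeneity or connectivity without committing to a specific downstream task.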

Seminar · Neuroscience

Brain circuit dynamics in Action and Sleep

Gilles Laurent
Max Planck Institute for Brain Research, Frankfurt, Germany
Dec 2, 2021

Our group focuses on brain computation, physiology and evolution, with a particular focus on network dynamics, sleep (evolution and mechanistic underpinnings), cortical computation (through the study of ancestral cortices), and sensorimotor processing. This talk will describe our recent results on the remarkable camouflage behavior of cuttlefish (action) and on brain activity in REM and NonREM in lizards (sleep). Both topics will focus on aspects of circuit dynamics.

Seminar · Neuroscience

Brain circuit dynamics in Action and Sleep

Gilles Laurent
Max Planck Institute for Brain Research, Frankfurt, Germany
Dec 1, 2021

Our group focuses on brain computation, physiology and evolution, with a particular focus on network dynamics, sleep (evolution and mechanistic underpinnings), cortical computation (through the study of ancestral cortices), and sensorimotor processing. This talk will describe our recent results on the remarkable camouflage behavior of cuttlefish (action) and on brain activity in REM and NonREM in lizards (sleep). Both topics will focus on aspects of circuit dynamics.

Seminar · Neuroscience

Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation

Han Lu
Vlachos lab, University of Freiburg, Germany
Nov 24, 2021

Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in the rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—has gained popularity as an explanation for learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted as stabilizing network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, although one based entirely on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.
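
A toy version of the negative-feedback idea is sketched below, loosely in the spirit of homeostatic structural plasticity models: neurons grow synaptic elements when their activity is below a set point and retract them when above, and free elements are paired randomly into synapses. All parameters and the pairing rule are invented for illustration; this is not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(8)

N, setpoint = 50, 5.0
elements = np.zeros(N)       # free synaptic elements per neuron
conn = np.zeros((N, N))      # conn[i, j]: synapses from neuron i to j

for step in range(200):
    ext = rng.normal(2.0, 0.5, size=N)             # external drive
    activity = np.maximum(ext + 0.1 * conn.sum(axis=0), 0.0)
    elements += 0.1 * (setpoint - activity)        # negative feedback growth
    elements = np.maximum(elements, 0.0)
    donors = np.flatnonzero(elements >= 1.0)       # neurons with spare elements
    rng.shuffle(donors)
    for i, j in zip(donors[::2], donors[1::2]):    # random pairing
        conn[i, j] += 1
        elements[[i, j]] -= 1.0

print(f"total synapses formed: {int(conn.sum())}, "
      f"mean activity {activity.mean():.2f}")
```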

Seminar · Neuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 26, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
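
A quick diagnostic behind the motivation here: spike counts are often overdispersed relative to Poisson, which simple parametric models miss. The sketch below checks the Fano factor of synthetic counts and fits a negative binomial by the method of moments; it is a toy illustration, not the universal probabilistic model presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(9)

counts = rng.negative_binomial(n=4, p=0.4, size=2000)  # fake spike counts

mean, var = counts.mean(), counts.var()
print(f"mean {mean:.2f}, variance {var:.2f}, Fano factor {var / mean:.2f}")

# Method-of-moments negative binomial: var = mean + mean^2 / r.
r_hat = mean ** 2 / (var - mean)
print(f"fitted NB shape r ~ {r_hat:.2f} (a Poisson model forces Fano = 1)")
```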

Seminar · Neuroscience · Recording

Network dynamics in the basal ganglia and possible implications for Parkinson’s disease

Jonathan Rubin
University of Pittsburgh
Oct 13, 2021

The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed. I will also present some recent experimental results and theory that call for a re-evaluation of certain aspects of this framework. Next, I will turn to the changes in basal ganglia activity observed to occur with the dopamine depletion associated with Parkinson’s disease. I will discuss some of the potential functional implications of some of these changes and, if time permits, will conclude with some new results that focus on delta oscillations under dopamine depletion.

Seminar · Neuroscience

Multi-scale synaptic analysis for psychiatric/emotional disorders

Akiko Hayashi-Takagi
RIKEN CBS
Jun 30, 2021

Dysregulation of emotional processing and its integration with cognitive functions are central features of many mental/emotional disorders, associated both with externalizing problems (aggressive, antisocial behaviors) and internalizing problems (anxiety, depression). As Dr. Joseph LeDoux, the invited speaker of this program, wrote in his famous book “Synaptic Self: How Our Brains Become Who We Are”, the brain’s synapses are the channels through which we think, act, imagine, feel, and remember. Synapses encode the essence of personality, enabling each of us to function as a distinctive, integrated individual from moment to moment. Thus, exploring the functioning of synapses leads to an understanding of the (patho)physiological function of our brain. In this context, we have investigated the pathophysiology of psychiatric disorders, with particular emphasis on synaptic function in mouse models of various psychiatric disorders such as schizophrenia, autism, depression, and PTSD. Our current interest is how synaptic inputs are integrated to generate the action potential. The spatiotemporal organization of neuronal firing is crucial for information processing, but how thousands of inputs to dendritic spines drive firing remains a central question in neuroscience. We identified a distinct pattern of synaptic integration in the disease-related models, in which extra-large (XL) spines generate NMDA spikes within these spines, sufficient to drive neuronal firing. We observed, experimentally and theoretically, that XL spines correlated negatively with working memory. Our work offers a whole new concept for dendritic computation and network dynamics, and invites a substantial reconsideration of psychiatric research. The second half of my talk concerns the development of a novel synaptic tool. No matter how beautifully we can visualize spine morphology and how accurately we can quantify synaptic integration, the links between synapse and brain function remain correlational. To probe the causal relationship between synapse and brain function, we established AS-PaRac1, which is unique in that it can specifically label and manipulate recently potentiated dendritic spines (Hayashi-Takagi et al., 2015, Nature). Using AS-PaRac1, we developed activity-dependent simultaneous labeling of presynaptic boutons and potentiated spines to establish “functional connectomics” at synaptic resolution. Applying this new imaging method to PTSD model mice, we identified a completely new functional neural circuit, brain region A→B→C, with a very strong S/N in the PTSD model mice. This novel tool of “functional connectomics” and its photo-manipulation could open up new areas of emotional/psychiatric research and, by extension, shed light on the neural networks that determine who we are.

Seminar · Neuroscience

Mapping of brain network dynamics at rest with EEG microstates

Christoph Michel
Department of Basic Neurosciences, Faculté de Médecine, Université de Genève, Campus Biotech
Jun 23, 2021
Seminar · Neuroscience · Recording

Structures in space and time - Hierarchical network dynamics in the amygdala

Yael Bitterman
Luethi lab, FMI for Biomedical Research
Jun 15, 2021

In addition to its role in the learning and expression of conditioned behavior, the amygdala has long been implicated in the regulation of persistent states, such as anxiety and drive. Yet it is not evident which projections of the neuronal activity capture the functional role of the network across such different timescales, specifically when behavior and neuronal space are complex and high-dimensional. We applied a data-driven dynamical approach to the analysis of calcium imaging data from the basolateral amygdala, collected while mice performed complex, self-paced behaviors, including spatial exploration, free social interaction, and goal-directed actions. The seemingly complex network dynamics were effectively described by a hierarchical, modular structure that corresponded to behavior on multiple timescales. Our results describe the response of the network activity to perturbations along different dimensions and the interplay between slow, state-like representations and the fast processing of specific events and action schemes. We suggest that hierarchical dynamical models offer a unified framework to capture the involvement of the amygdala in transitions between persistent states underlying functions as different as sensory associative learning, action selection and emotional processing. * Work done in collaboration with Jan Gründemann, Sol Fustinana, Alejandro Tsai and Julien Courtin (@theLüthiLab)

Seminar · Neuroscience

Joining-the-dots in Memory

David Dupret
Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
Jun 13, 2021
Seminar · Neuroscience

Handling multiple memories in the hippocampus network

David Dupret
Medical Research Council Brain Network Dynamics Unit, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
Jun 9, 2021
Seminar · Open Source · Recording

A macaque connectome for simulating large-scale network dynamics in The VirtualBrain

Kelly Shen
University of Toronto
Apr 29, 2021

TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
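
To illustrate the modeling idea, here is a generic whole-brain-style simulation: phase oscillators coupled through a weighted structural connectome, with global synchrony as the readout. This is a minimal numpy sketch with a random stand-in connectome; it is not TheVirtualBrain's API or its neural mass models.

```python
import numpy as np

rng = np.random.default_rng(10)

n_regions = 30
C = rng.uniform(0.0, 1.0, size=(n_regions, n_regions))  # stand-in connectome
np.fill_diagonal(C, 0.0)

omega = rng.normal(2 * np.pi * 10, 2 * np.pi, size=n_regions)  # ~10 Hz nodes
theta = rng.uniform(0, 2 * np.pi, size=n_regions)
dt, K = 1e-3, 0.5

sync = []
for _ in range(5000):  # 5 s of Kuramoto dynamics on the connectome
    coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)
    sync.append(np.abs(np.exp(1j * theta).mean()))   # order parameter

print(f"mean synchrony over the last second: {np.mean(sync[-1000:]):.2f}")
```

Swapping the random matrix for an empirical macaque connectome, and the phase oscillators for biophysical or neural mass models, is the kind of refinement the TVB platform is designed to support.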

Seminar · Neuroscience · Recording

Recurrent network dynamics lead to interference in sequential learning

Friedrich Schuessler
Barak lab, Technion, Haifa, Israel
Apr 28, 2021

Learning in real life is often sequential: a learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: for task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix. In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: the random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. New formation thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.

Seminar · Neuroscience

All optical interrogation of developing GABAergic circuits in vivo

Rosa Cossart
Mediterranean Neurobiology Institute, Faculté de Médecine, Aix-Marseille Université, Marseille, France
Mar 16, 2021

The developmental journey of cortical interneurons encounters several activity-dependent milestones. During the early postnatal period in developing mice, GABAergic neurons are transient preferential recipients of thalamic inputs and undergo activity-dependent migration arrest, wiring and programmed cell death. But cortical GABAergic neurons are also specified by very early developmental programs. For example, the earliest-born GABAergic neurons develop into hub cells coordinating spontaneous activity in hippocampal slices. Despite their importance for the emergence of sensory experience, their role in coordinating network dynamics, and the role of activity in their integration into cortical networks, the collective in vivo dynamics of GABAergic neurons during the early postnatal period remain unknown. Here, I will present data on the coordinated activity of GABAergic cells in the mouse barrel cortex and hippocampus in non-anesthetized pups, using recently developed all-optical methods to record and manipulate neuronal activity in vivo. I will show that the functional structure of developing GABAergic circuits is remarkably patterned, with segregated assemblies of prospective parvalbumin neurons and highly connected hub cells, both shaped by sensory-dependent processes.

Seminar · Neuroscience

Slow global population dynamics propagating through the medial entorhinal cortex

Soledad Gonzalo Cogno
Moser lab, NTNU
Jan 26, 2021

The medial entorhinal cortex (MEC) supports the brain’s representation of space with distinct cell types whose firing is tuned to features of the environment (grid, border, and object-vector cells) or navigation (head-direction and speed cells). While the firing properties of these functionally distinct cell types are well characterized, how they interact with one another remains unknown. To determine how activity self-organizes in the MEC network, we tested mice in a spontaneous locomotion task under sensory-deprived conditions. Using 2-photon calcium imaging, we monitored the activity of large populations of MEC neurons in head-fixed mice running on a wheel in darkness, in the absence of external sensory feedback tuned to navigation. We unveiled the presence of motifs involving the sequential activation of cells in layer II of MEC (MEC-L2). We call these motifs waves. Waves lasted tens of seconds to minutes, were robust, swept through the entire network of active cells, and did not exhibit any anatomical organization. Furthermore, waves did not map the position of the mouse on the wheel and were not restricted to running epochs. The majority of MEC-L2 neurons participate in these global sequential dynamics, which tie all functional cell types together. We found the waves in the most lateral region of MEC, but not in adjacent areas such as PaS, nor in a sensory cortex such as V1.

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
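
A naive stand-in for the idea (not the speaker's provably spectrum-preserving rule) is sketched below: simulate noise-driven linear dynamics, estimate pairwise covariances, score each synapse by its weight times the pre/post covariance, and prune the lowest-scoring fraction. Everything here, from the score to the threshold, is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

N = 100
W = rng.normal(0.0, 0.7 / np.sqrt(N), size=(N, N))  # stable linear network

T, x = 5000, np.zeros(N)
samples = np.empty((T, N))
for t in range(T):
    x = W @ x + rng.normal(size=N)   # noise-driven dynamics x(t+1) = W x(t) + noise
    samples[t] = x
C = np.cov(samples[500:].T)          # covariance after discarding transient

score = np.abs(W) * np.abs(C)        # local score: weight times covariance
W_pruned = np.where(score < np.quantile(score, 0.3), 0.0, W)  # prune 30%

r0 = np.abs(np.linalg.eigvals(W)).max()
r1 = np.abs(np.linalg.eigvals(W_pruned)).max()
print(f"spectral radius before/after pruning: {r0:.2f} / {r1:.2f}")
```

Unlike this naive version, the rule presented in the talk also strengthens retained synapses so that the spectrum, and hence the network dynamics, is provably preserved as the pruned fraction grows.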

ePoster

Identifying the impact of local connectivity features on network dynamics

Yuxiu Shao, David Dahmen, Stefano Recanatesi, Eric Shea-Brown, Srdjan Ostojic

Bernstein Conference 2024

ePoster

Influence of Collective Network Dynamics on Stimulus Separation

Lars Schutzeichel, Jan Bauer, Peter Bouss, David Dahmen, Simon Musall, Moritz Helias

Bernstein Conference 2024

ePoster

Short-Distance Connections Enhance Neural Network Dynamics

Mohmmad Sharif Hussainyar, Dong Li, Claus Hilgetag

Bernstein Conference 2024

ePoster

The shared geometry of biological and recurrent neural network dynamics

Arthur Pellegrino & Angus Chadwick

COSYNE 2023

ePoster

Hidden synaptic structures control collective network dynamics

Lorenzo Tiberi, David Dahmen, Moritz Helias

COSYNE 2023

ePoster

Network dynamics implement optimal inference in a flexible timing task

John Schwarcz, Eran Lottem, Jonathan Kadmon

COSYNE 2023

ePoster

Neural network dynamics underlying context-dependent perceptual decision making

Yuxiu Shao, Srdjan Ostojic, Manuel Molano-Mazon, Ainhoa Hermoso-Mendizabal, Lejla Bektic, Jaime de la Rocha

COSYNE 2023

ePoster

Understanding network dynamics of compact assemblies of neurons in zebrafish larvae optic tectum during spontaneous activation

Nicole Sanderson, Carina Curto, Enrique Hansen, Germán Sumbre

COSYNE 2023

ePoster

Rapid learning of nonlinear network dynamics via dopamine-gated non-Hebbian plasticity

Rich Pang, Jonathan Pillow

COSYNE 2025

ePoster

Unraveling the effects of different pruning rules on network dynamics

Vidit Tripathi, Alex Wang, Hannah Choi

COSYNE 2025

ePoster

Adult cortical and hippocampal network dynamics in p.A263V Scn2a mouse model of developmental and epileptic encephalopathy

Yana Reva, Katharina Ulrich, Hanna Oelßner, Birgit Engeland, Ricardo Melo Neves, Stephan Marguet, Dirk Isbrandt

FENS Forum 2024

ePoster

Effects of progressive dopamine loss on movement cessation and initiation: Insights into basal ganglia network dynamics from a genetic model of Parkinson’s disease

Selena Gonzalez, Ziyang Xiao, Scott Kooiman, Li Yuan, Li Yao, Byung Kook Lim, Jill Leutgeb, Stefan Leutgeb

FENS Forum 2024

ePoster

Emergence of NMDA-spikes: Unraveling network dynamics in pyramidal neurons

Michael Dick, Joshua Böttcher, David Dahmen, Willem Wybo, Abigail Morrison

FENS Forum 2024

ePoster

The importance of high-density microelectrode arrays for recording multi-scale extracellular potential and label-free characterization of network dynamics in iPSC-derived neurons

Zhuoliang (Ed) Li, Francesco Modena, Elvira Guella, Anastasiia Oryshchuk, Laura D’Ignazio, Praveena Manogaran, Marie Obien

FENS Forum 2024

ePoster

Investigating the impact of OPHN1 on olfactory behaviour and neuronal network dynamics

Antonio di Soccio, Marco Brondi, Marica Albanesi, Pierre Billuart, Claudia Lodovichi

FENS Forum 2024

ePoster

Longitudinal assessment of ALS patient-derived motor neurons reveals altered network dynamics and synaptic impairment

Anna Mikalsen Kollstroem, Nicholas Christiansen, Axel Sandvig, Ioanna Sandvig

FENS Forum 2024

ePoster

Modulation of prefrontal cortex network dynamics: A pivotal role of serotonin?

Juliana Gross, Olivia A. Masseck

FENS Forum 2024