
Spiking Network

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with spiking network, worldwide.
42 curated items · 27 Seminars · 15 ePosters
Updated 7 months ago

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
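
For readers who want a concrete picture, a minimal sketch of a clustered excitatory/inhibitory connectivity matrix of the kind described above; all sizes, probabilities, and weights here are hypothetical placeholders, not the speaker's actual parameters:

    import numpy as np

    rng = np.random.default_rng(0)

    n_exc, n_inh, n_clusters = 400, 100, 8    # hypothetical population sizes
    p, j_base, j_plus = 0.2, 0.05, 1.9        # connection prob., base weight, within-cluster factor

    # Background E->E connectivity (convention: W[post, pre])
    W_ee = (rng.random((n_exc, n_exc)) < p) * j_base

    # Potentiate connections within each excitatory cluster
    labels = np.repeat(np.arange(n_clusters), n_exc // n_clusters)
    same_cluster = labels[:, None] == labels[None, :]
    W_ee[same_cluster] *= j_plus

    # Unstructured inhibitory blocks keep the network balanced (signs applied here)
    W_ei = (rng.random((n_exc, n_inh)) < p) * -4 * j_base   # I -> E
    W_ie = (rng.random((n_inh, n_exc)) < p) * j_base        # E -> I
    W_ii = (rng.random((n_inh, n_inh)) < p) * -4 * j_base   # I -> I

    W = np.block([[W_ee, W_ei],
                  [W_ie, W_ii]])
    np.fill_diagonal(W, 0.0)                  # no self-connections
    print(W.shape)                            # (500, 500)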

Seminar · Neuroscience

Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task

Albert Compte
IDIBAPS
Nov 16, 2023

Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To preserve these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in the training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in the training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
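
The subspace orthogonalization described here is often quantified via principal angles between coding subspaces; a hedged sketch with made-up activity shapes (not the authors' data or analysis pipeline):

    import numpy as np
    from scipy.linalg import subspace_angles

    rng = np.random.default_rng(1)

    # Hypothetical trial-averaged activity: (neurons x conditions) for each task epoch
    sample_activity = rng.standard_normal((200, 20))
    distractor_activity = rng.standard_normal((200, 20))

    def top_pcs(X, k=3):
        """Return the top-k principal components (neurons x k) of mean-centered activity."""
        Xc = X - X.mean(axis=1, keepdims=True)
        U, _, _ = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :k]

    S = top_pcs(sample_activity)
    D = top_pcs(distractor_activity)

    # Angles near 90 degrees for every component indicate fully orthogonal subspaces
    angles_deg = np.degrees(subspace_angles(S, D))
    print(angles_deg)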

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.

Seminar · Neuroscience

Meta-learning functional plasticity rules in neural networks

Tim Vogels
Institute of Science and Technology (IST), Klosterneuburg, Austria
Jan 17, 2023

Synaptic plasticity is known to be a key player in the brain’s life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
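
As a rough illustration of "recovering a known rule from data", one can regress observed weight updates onto a parameterized family of local terms; the ground-truth rule and feature set below are invented for the example and are not the speaker's method:

    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic "ground truth": Oja-like rule  dw = eta * (pre*post - post**2 * w)
    eta = 0.05
    pre, post, w = rng.random(5000), rng.random(5000), rng.random(5000)
    dw = eta * (pre * post - post**2 * w)

    # Candidate rule: linear combination of simple local terms (a hypothesis space)
    features = np.column_stack([pre, post, w, pre * post, post**2 * w, pre * w])
    names = ["pre", "post", "w", "pre*post", "post^2*w", "pre*w"]

    coef, *_ = np.linalg.lstsq(features, dw, rcond=None)
    for n, c in zip(names, coef):
        print(f"{n:10s} {c:+.4f}")   # recovers +0.05 on pre*post and -0.05 on post^2*w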

Seminar · Neuroscience · Recording

Neural networks in the replica-mean field limits

Thibaud Taillefumier
The University of Texas at Austin
Nov 29, 2022

In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 9, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay of sequences observed during online exploration in the hippocampus. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world, rather than verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously been studied mainly at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potential, conceptually distinct role for inhibition in computing with SNNs.
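
A generic symmetric Hebbian update at an inhibitory synapse, sketched with hypothetical time constants and spike times (illustrative only, not necessarily the authors' exact rule):

    import numpy as np

    def istdp_update(pre_spikes, post_spikes, eta=0.01, tau=20.0, alpha=0.2):
        """Symmetric Hebbian plasticity at an inhibitory synapse (toy version).

        Near-coincident pre/post spikes potentiate the inhibitory weight regardless of
        their order; each presynaptic spike alone depresses it slightly (alpha), so
        inhibition grows onto targets that are frequently co-active with this input.
        """
        pre = np.asarray(pre_spikes, dtype=float)
        post = np.asarray(post_spikes, dtype=float)
        coincidence = np.exp(-np.abs(post[None, :] - pre[:, None]) / tau).sum()
        return eta * (coincidence - alpha * len(pre))

    # Spike times in ms: strongly co-active pair vs. weakly co-active pair
    print(istdp_update([10, 50, 90], [12, 52, 88]))   # potentiation dominates
    print(istdp_update([10, 50, 90], [300, 400]))     # net depression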

Seminar · Neuroscience · Recording

Universal function approximation in balanced spiking networks through convex-concave boundary composition

W. F. Podlaski
Champalimaud
Nov 9, 2022

The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellant boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.

Seminar · Neuroscience · Recording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 8, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.

Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as they require additional non-linearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.

Seminar · Neuroscience · Recording

Trading Off Performance and Energy in Spiking Networks

Sander Keemink
Donders Institute for Brain, Cognition and Behaviour
May 31, 2022

Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how SNNs follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different trade-off points. We next formalize this trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks which coordinates its spiking to minimize both the activity and coding losses, and as a result can dynamically adjust its coding precision and energy use. The network utilizes several known activity control mechanisms for this (threshold adaptation and feedback inhibition) and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off which is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
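
The joint loss described here can be written down compactly; a sketch under the assumption of a quadratic coding term and a linear activity (energy) term, with mu setting the trade-off point (all names and values illustrative):

    import numpy as np

    def joint_loss(x, rates, decoder, mu=0.1):
        """Coding loss (reconstruction error) plus an activity penalty weighted by mu.

        mu -> 0   : performance-dominated regime (precise but expensive coding)
        mu large  : energy-dominated regime (cheap but coarse coding)
        """
        x_hat = decoder @ rates
        coding_loss = np.sum((x - x_hat) ** 2)
        activity_loss = np.sum(rates)          # spiking activity as a proxy for energy use
        return coding_loss + mu * activity_loss

    rng = np.random.default_rng(3)
    D = rng.standard_normal((2, 50)) / np.sqrt(50)   # hypothetical decoding weights
    x = np.array([1.0, -0.5])
    r = rng.random(50)
    print(joint_loss(x, r, D, mu=0.0), joint_loss(x, r, D, mu=1.0))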

Seminar · Neuroscience

Multiscale modeling of brain states, from spiking networks to the whole brain

Alain Destexhe
Centre National de la Recherche Scientifique and Paris-Saclay University
Apr 5, 2022

Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
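
The full Lyapunov spectrum mentioned here is commonly estimated by evolving a set of tangent vectors alongside the network and re-orthonormalizing them with QR steps; a minimal sketch for an autonomous random tanh rate network with illustrative parameters:

    import numpy as np

    rng = np.random.default_rng(4)
    N, g, dt, steps, k = 100, 1.5, 0.1, 5000, 10      # k leading exponents
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # random recurrent coupling

    x = rng.standard_normal(N)
    Q = np.linalg.qr(rng.standard_normal((N, k)))[0]  # orthonormal tangent vectors
    lyap_sum = np.zeros(k)

    for step in range(steps):
        r = np.tanh(x)
        x = x + dt * (-x + J @ r)                     # rate dynamics (Euler step)
        D = np.diag(1 - r**2)                         # derivative of tanh at the previous state
        jac = np.eye(N) + dt * (-np.eye(N) + J @ D)   # one-step Jacobian of the map
        Q, R = np.linalg.qr(jac @ Q)
        lyap_sum += np.log(np.abs(np.diag(R)))

    lyapunov_spectrum = lyap_sum / (steps * dt)
    print(lyapunov_spectrum)   # a positive leading exponent indicates chaos (expected for g > 1)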

Seminar · Neuroscience · Recording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
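
A toy version of the greedy spike-coding simulation behind the bounding-box picture: each neuron's threshold defines one face of a box around the readout error, and a spike fires whenever the error reaches that face. All parameters are invented for illustration; the actual framework also treats delays, noise, and other perturbations:

    import numpy as np

    rng = np.random.default_rng(5)
    N, K, dt, lam = 40, 2, 1e-3, 10.0            # neurons, signal dimensions, time step, leak
    D = rng.standard_normal((K, N))
    D /= np.linalg.norm(D, axis=0)               # unit-norm decoding vectors
    D *= 0.1                                     # small readout weights -> tight bounding box
    T = np.sum(D ** 2, axis=0) / 2               # thresholds: one face of the box per neuron

    x_hat = np.zeros(K)
    errors = []
    for step in range(5000):
        t = step * dt
        x = np.array([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])   # signal to represent
        x_hat += -dt * lam * x_hat               # readout decays between spikes
        V = D.T @ (x - x_hat)                    # subthreshold "voltages" = projected error
        i = np.argmax(V - T)
        if V[i] > T[i]:                          # error reached neuron i's face of the box
            x_hat += D[:, i]                     # the spike pulls the readout back inside
        errors.append(np.linalg.norm(x - x_hat))

    print("mean tracking error:", np.mean(errors[1000:]))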

Seminar · Neuroscience · Recording

Network mechanisms underlying representational drift in area CA1 of hippocampus

Alex Roxin
CRM, Barcelona
Feb 1, 2022

Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
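
A toy instantiation of the statistical picture described above: a stable, spatially tuned input plus a slowly drifting non-spatial input, with a cell counted as active in a session when the summed input crosses threshold. All numbers are hypothetical and only meant to show the turnover pattern:

    import numpy as np

    rng = np.random.default_rng(6)
    n_cells, n_sessions, theta = 2000, 12, 1.0

    spatial = rng.standard_normal(n_cells)        # stable, spatially tuned input (fixed)
    nonspatial = rng.standard_normal(n_cells)     # plastic, non-spatial input (drifts)

    active = []
    for s in range(n_sessions):
        # random-walk drift of the non-spatial component between sessions
        nonspatial = 0.9 * nonspatial + np.sqrt(1 - 0.9**2) * rng.standard_normal(n_cells)
        total_input = spatial + nonspatial
        active.append(total_input > theta)        # cell participates in this session's map

    active = np.array(active)
    overlap = [np.mean(active[0] & active[s]) / np.mean(active[0]) for s in range(n_sessions)]
    print(np.round(overlap, 2))   # fraction of session-1 cells still active: decays, then plateaus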

Seminar · Neuroscience · Recording

Theory of recurrent neural networks – from parameter inference to intrinsic timescales in spiking networks

Alexander van Meegen
Forschungszentrum Jülich
Jan 12, 2022

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. This joint experimental, computational, and mathematical study shows that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure, and offer a new lens through which to view epilepsy, the most common serious neurological disorder in the world, one that could reveal new targets for clinical treatment.

Seminar · Neuroscience

Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation

Han Lu
Vlachos lab, University of Freiburg, Germany
Nov 24, 2021

Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—gained popularity to explain learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted to stabilize network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, although entirely based on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.

Seminar · Neuroscience

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

Tatjana Tchumatchenko
University of Bonn
Nov 9, 2021

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.

Seminar · Neuroscience

The generation of cortical novelty responses through inhibitory plasticity

Nicholas Gale
University of Cambridge, DAMTP
Nov 2, 2021

Animals depend on fast and reliable detection of novel stimuli in their environment. Neurons in multiple sensory areas respond more strongly to novel in comparison to familiar stimuli. Yet, it remains unclear which circuit, cellular, and synaptic mechanisms underlie those responses. Here, we show that spike-timing-dependent plasticity of inhibitory-to-excitatory synapses generates novelty responses in a recurrent spiking network model. Inhibitory plasticity increases the inhibition onto excitatory neurons tuned to familiar stimuli, while inhibition for novel stimuli remains low, leading to a network novelty response. The generation of novelty responses does not depend on the periodicity but rather on the distribution of presented stimuli. By including tuning of inhibitory neurons, the network further captures stimulus-specific adaptation. Finally, we suggest that disinhibition can control the amplification of novelty responses. Therefore, inhibitory plasticity provides a flexible, biologically plausible mechanism to detect the novelty of bottom-up stimuli, enabling us to make experimentally testable predictions.
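
The inhibitory-to-excitatory STDP ingredient can be sketched with a target-rate rule in the spirit of Vogels et al. (2011); the parameters and spike-train encoding below are placeholders, not necessarily those used in the talk:

    import numpy as np

    def inhibitory_stdp_step(w, pre_spike, post_spike, x_pre, x_post,
                             eta=1e-3, tau=20.0, rho0=0.005, dt=1.0):
        """One time step of a target-rate inhibitory STDP rule (toy version).

        pre_spike/post_spike are 0/1 for this time step; x_pre/x_post are exponentially
        filtered spike traces with time constant tau (ms); rho0 is the target rate (kHz).
        Inhibition onto neurons that often fire together with this input grows, so familiar
        stimuli become balanced while novel stimuli still evoke a strong response.
        """
        x_pre += dt * (-x_pre / tau) + pre_spike
        x_post += dt * (-x_post / tau) + post_spike
        alpha = 2.0 * rho0 * tau                      # depression bias that sets the target rate
        dw = eta * (pre_spike * (x_post - alpha) + post_spike * x_pre)
        return max(w + dw, 0.0), x_pre, x_post        # inhibitory weight kept non-negative

    # Example: a closely spaced pre/post spike pair potentiates the inhibitory weight
    w, xp, xq = 0.5, 0.0, 0.0
    w, xp, xq = inhibitory_stdp_step(w, 1, 0, xp, xq)
    w, xp, xq = inhibitory_stdp_step(w, 0, 1, xp, xq)
    print(w)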

Seminar · Neuroscience · Recording

Event-based Backpropagation for Exact Gradients in Spiking Neural Networks

Christian Pehle
Heidelberg University
Nov 2, 2021

Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise for efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp and the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.

Seminar · Neuroscience · Recording

Optimising spiking interneuron circuits for compartment-specific feedback

Henning Sprekeler
Technische Universität Berlin
Nov 1, 2021

Cortical circuits process information by rich recurrent interactions between excitatory neurons and inhibitory interneurons. One of the prime functions of interneurons is to stabilize the circuit by feedback inhibition, but the level of specificity on which inhibitory feedback operates is not fully resolved. We hypothesized that inhibitory circuits could enable separate feedback control loops for different synaptic input streams, by means of specific feedback inhibition to different neuronal compartments. To investigate this hypothesis, we adopted an optimization approach. Leveraging recent advances in training spiking network models, we optimized the connectivity and short-term plasticity of interneuron circuits for compartment-specific feedback inhibition onto pyramidal neurons. Over the course of the optimization, the interneurons diversified into two classes that resembled parvalbumin (PV) and somatostatin (SST) expressing interneurons. The resulting circuit can be understood as a neural decoder that inverts the nonlinear biophysical computations performed within the pyramidal cells. Our model provides a proof of concept for studying structure-function relations in cortical circuits by a combination of gradient-based optimization and biologically plausible phenomenological models.

Seminar · Neuroscience · Recording

Neural heterogeneity promotes robust learning

Dan Goodman
Imperial College London
Jan 21, 2021

The brain has a hugely diverse, heterogeneous structure. By contrast, many functional neural models are homogeneous. We compared the performance of spiking neural networks trained to carry out difficult tasks, with varying degrees of heterogeneity. Introducing heterogeneity in membrane and synapse time constants substantially improved task performance, and made learning more stable and robust across multiple training methods, particularly for tasks with a rich temporal structure. In addition, the distribution of time constants in the trained networks closely matches that observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just a byproduct of noisy processes; rather, it may serve an active and important role in allowing animals to learn in changing environments.
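
The core manipulation, per-neuron (heterogeneous) time constants, is easy to sketch in a discrete-time LIF layer; the sizes, input statistics, and time-constant distribution below are assumptions for illustration, not the paper's setup:

    import numpy as np

    rng = np.random.default_rng(7)
    n_in, n_hidden, T, dt = 40, 100, 200, 1.0   # hypothetical sizes; time in ms

    W = rng.standard_normal((n_hidden, n_in)) / np.sqrt(n_in)
    tau_m = rng.gamma(shape=3.0, scale=10.0, size=n_hidden)   # heterogeneous membrane taus (ms)
    # tau_m = np.full(n_hidden, 30.0)                         # homogeneous control

    alpha = np.exp(-dt / tau_m)                 # per-neuron decay factor
    v = np.zeros(n_hidden)
    thresh = 1.0
    spike_counts = np.zeros(n_hidden)

    inputs = (rng.random((T, n_in)) < 0.05).astype(float)     # Poisson-like input spikes
    for t in range(T):
        v = alpha * v + W @ inputs[t]           # leaky integration with neuron-specific leak
        spikes = v >= thresh
        v[spikes] = 0.0                         # reset after spike
        spike_counts += spikes

    print("fraction of neurons that spiked:", np.mean(spike_counts > 0))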

Seminar · Neuroscience · Recording

Dimensions of variability in circuit models of cortex

Brent Doiron
The University of Chicago
Nov 15, 2020

Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations compared to those of its downstream visual area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as "time-sharing" between distinct low-dimensional, tuning-specific circuit dynamics.
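
The "number of latent variables needed to describe shared variability" is often summarized by the participation ratio of the covariance spectrum (the study itself may use factor-analysis-based estimates of shared variance); a hedged sketch on synthetic data:

    import numpy as np

    def participation_ratio(activity):
        """Effective dimensionality of population activity (trials x neurons).

        PR = (sum of covariance eigenvalues)^2 / sum of squared eigenvalues; it is 1 when
        a single latent dominates and approaches the neuron count when variance is spread evenly.
        """
        C = np.cov(activity, rowvar=False)
        eig = np.linalg.eigvalsh(C)
        return eig.sum() ** 2 / np.sum(eig ** 2)

    rng = np.random.default_rng(8)
    latents = rng.standard_normal((500, 3))                      # 3 shared latent variables
    loadings = rng.standard_normal((3, 120))
    v4_like = latents @ loadings + 0.5 * rng.standard_normal((500, 120))
    print(participation_ratio(v4_like))                          # low-dimensional shared structure
    print(participation_ratio(rng.standard_normal((500, 120))))  # unstructured: much higher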

Seminar · Neuroscience · Recording

The emergence of contrast invariance in cortical circuits

Tatjana Tchumatchenko
Max Planck Institute for Brain Research
Nov 12, 2020

Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations that are orthogonal to the preferred. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balance limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory to excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity. Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (at excitatory-to-excitatory and excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
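
The mathematical core of the power-law claim, namely that a pointwise power-law transform preserves the separation of contrast gain and tuning shape, can be checked in a few lines (toy tuning curve and hypothetical exponents, not the paper's network model):

    import numpy as np

    theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
    tuning = np.exp(-theta**2 / (2 * 0.3**2))          # population tuning-curve shape f(theta)

    def normalized_drive(contrast_gain, alpha):
        """Synaptic drive when release is a power law (rate**alpha) of the presynaptic rate."""
        rate = contrast_gain * tuning                  # contrast rescales rates multiplicatively
        drive = rate ** alpha
        return drive / drive.max()                     # compare shapes, not amplitudes

    for alpha in (1.0, 2.0):
        low = normalized_drive(1.0, alpha)
        high = normalized_drive(4.0, alpha)
        print(f"alpha={alpha}: max shape difference across contrasts = {np.abs(low - high).max():.2e}")
    # (c*f)^alpha = c^alpha * f^alpha, so the normalized shape is contrast-independent,
    # although alpha != 1 does sharpen or broaden the tuning curve itself.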

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
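
The alpha synaptic kernel and a first-threshold-crossing readout can be sketched directly; the normalization and parameters below follow one common convention and are not necessarily the paper's exact choices:

    import numpy as np

    def alpha_kernel(t, tau=1.0):
        """Alpha synaptic kernel, normalized to peak at 1 at t = tau (one common convention)."""
        return np.where(t > 0, (t / tau) * np.exp(1 - t / tau), 0.0)

    def membrane_potential(t_grid, input_times, weights, tau=1.0):
        """Potential of a non-leaky unit driven by alpha-shaped PSPs at the given spike times."""
        v = np.zeros_like(t_grid)
        for t_i, w_i in zip(input_times, weights):
            v += w_i * alpha_kernel(t_grid - t_i, tau)
        return v

    t = np.linspace(0, 6, 2001)
    v = membrane_potential(t, input_times=[0.5, 1.0, 1.4], weights=[0.6, 0.5, 0.4])
    threshold = 1.0
    crossing = np.argmax(v >= threshold)               # index of the first threshold crossing
    print("first output spike at t =", t[crossing] if v[crossing] >= threshold else None)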

Seminar · Neuroscience · Recording

Inferring Brain Rhythm Circuitry and Burstiness

Andre Longtin
University of Ottawa
Apr 14, 2020

Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory or communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not known, specifically how single cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (EI) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions, and to highlight the supportive role that network finite-size effects and noisy inputs to the EI network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters. In practice, the problem above is preceded by the challenge of fitting such E-I spiking networks to single-neuron or population data. Thus, the second part of the talk will discuss a novel method to fit mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to either optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.

ePoster

DelGrad: Exact gradients in spiking networks for learning transmission delays and weights

Julian Göltz, Jimmy Weber, Laura Kriener, Peter Lake, Melika Payvand, Mihai Petrovici

Bernstein Conference 2024

ePoster

Fitting recurrent spiking network models to study the interaction between cortical areas

COSYNE 2022

ePoster

Exploration of learning by dopamine D1 and D2 receptors by a spiking network model of the basal ganglia

COSYNE 2022

ePoster

Neuromodulation as a path along the model manifold for spiking networks

COSYNE 2022

ePoster

Unifying mechanistic and functional models of cortical circuits with low-rank, E/I-balanced spiking networks

William Podlaski & Christian Machens

COSYNE 2023

ePoster

Dynamics of clustered spiking networks via the CTLN model

Caitlin Lienkaemper, Gabriel Ocker

COSYNE 2025

ePoster

An Efficient Multilayer Spiking Network as a Model of Ascending Pathways

Veronika Koren, Alan Emanuel, Stefano Panzeri

COSYNE 2025

ePoster

Homeostatic inhibitory plasticity enhances memory capacity and replay in spiking networks

Tomas Barta, Tomoki Fukai

COSYNE 2025

ePoster

Neural sampling in a balanced spiking network with internally generated variability

Xinruo Yang, Wenhao Zhang, Brent Doiron

COSYNE 2025

ePoster

Response functions disambiguate intrinsic vs. inherited criticality in spiking networks

Jacob Crosser, Braden A W Brinkman

COSYNE 2025

ePoster

A spike-by-spike account of dynamical computations on the latent manifolds of excitatory-inhibitory spiking networks

William Podlaski, Christian Machens

COSYNE 2025

ePoster

A game of memory: Learning in spiking networks with preserved weight distributions

Maayan Levy, Tim P. Vogels

FENS Forum 2024

ePoster

Memories by a thousand rules: Meta-learning plasticity rules for memory formation and recall in large spiking networks

Basile Confavreux, Poornima Ramesh, Pedro J. Gonçalves, Jakob H. Macke, Tim P. Vogels

FENS Forum 2024