
Spiking Networks

Topic spotlight
World Wide

spiking networks

Discover seminars, jobs, and research tagged with spiking networks across World Wide.
28 curated items: 16 seminars, 10 ePosters, 2 positions
Position

Tim Vogels

IST Austria
Vienna, Austria
Dec 5, 2025

TL;DR: If you liked our last NeurIPS paper https://www.biorxiv.org/content/10.1101/2020.10.24.353409v1 and you think you can contribute and imagine, oh, the places we’ll go from here... from one of the most livable cities in the world... DO apply.

We are looking for at least two scientists to join vogelslab.org as postdocs at @ISTAustria near Vienna. The successful candidates will join an ongoing ERC-funded project to discover families of spike-induced synaptic plasticity rules by way of numerical derivation. Together, we will define and explore search spaces of biologically plausible plasticity rules expressed, e.g., as polynomial expansions or multi-layer perceptrons. We aim to compare the results to various experimental and theoretical data sets, including functional spiking network models, human stem-cell-derived neural network cultures, and in vitro and in vivo experimental data.

We are looking for someone who can expand and develop our multipurpose modular library dedicated to the optimization of non-differentiable systems. Due to the modularity of the library, the candidate will have extensive freedom regarding which optimization techniques to use and what to learn in which systems, but being a team player will be a crucial skill.

Depending on your (flexible, possibly immediate) starting date, we can offer up to 4-year contracts with competitive salaries, benefits, vacation time, and an ample budget for materials and travel in a tranquil and inspiring environment. The Vogels lab and IST Austria are located in Klosterneuburg, a historic city northwest of Vienna. The campus is located in the middle of the beautiful landscape of the Vienna Woods, 30 minutes from downtown Vienna, the capital of Austria, which consistently scores among the top cities of the world for its high standard of living.

If you are interested, send an email with your application to Jessica . deAntonio [at] ist.ac.at. Your application should include your CV, your most relevant publication, contact information for 2 or more references, and a cover letter with:
- a short description of you and your career
- a brief discussion of what you think is the greatest weakness of the above-mentioned NeurIPS paper (and maybe how you would go about fixing it)

We are looking to build a diverse and interesting environment, so if you bring any qualities that make our lab (and computational neuroscience at large) more diverse than it is right now, please consider applying. We will begin to evaluate applications on the 31st of April and aim to get back to you with a decision within 5 weeks of your application.
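
As a toy illustration of what such a search space can look like (a hypothetical sketch, not the lab's library), a plasticity rule can be parameterized as a low-order polynomial in pre- and postsynaptic activity traces and the current weight; an outer optimizer then searches over the coefficient vector theta. All names and values below are illustrative assumptions.

import numpy as np

def polynomial_rule(theta, x_pre, x_post, w):
    # Basis of monomials up to second order in (x_pre, x_post, w).
    basis = np.array([
        np.ones_like(w),   # constant drift
        x_pre,             # presynaptic term
        x_post,            # postsynaptic term
        x_pre * x_post,    # Hebbian product
        x_pre * w,         # weight-dependent presynaptic term
        x_post * w,        # weight-dependent postsynaptic term
        w,                 # weight decay / homeostasis
    ])
    return theta @ basis   # weight change for every synapse

rng = np.random.default_rng(0)
theta = rng.normal(scale=0.01, size=7)     # one candidate rule in the search space
x_pre = rng.random(100)                    # filtered presynaptic activity (assumed)
x_post = rng.random(100)                   # filtered postsynaptic activity (assumed)
w = rng.normal(scale=0.1, size=100)        # current synaptic weights
w_new = w + polynomial_rule(theta, x_pre, x_post, w)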

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 9, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay of sequences observed during online exploration in the hippocampus. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world, rather than verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously been studied mainly at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potential, conceptually distinct role for inhibition in computing with SNNs.
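
For intuition, here is a minimal trace-based sketch of Hebbian plasticity at an inhibitory synapse: potentiation for near-coincident pre/post spikes plus a constant depression bias on presynaptic spikes. This is a generic illustration, not the specific rule presented in the talk, and all parameters are made-up assumptions.

import numpy as np

dt, tau = 1.0, 20.0            # time step and trace time constant (ms)
T = 1000                       # number of steps
eta, alpha = 0.01, 0.1         # learning rate and depression bias

rng = np.random.default_rng(1)
pre = rng.random(T) < 0.02     # Bernoulli presynaptic spike train (assumed input)
post = rng.random(T) < 0.02    # Bernoulli postsynaptic spike train (assumed input)

w = 0.5                        # inhibitory weight (magnitude)
x_pre, x_post = 0.0, 0.0       # exponential pre/post traces
for t in range(T):
    x_pre += -dt / tau * x_pre + pre[t]
    x_post += -dt / tau * x_post + post[t]
    # Potentiate on near-coincident spikes; depress by a constant amount on pre spikes.
    w += eta * (pre[t] * (x_post - alpha) + post[t] * x_pre)
    w = max(w, 0.0)            # keep the inhibitory weight magnitude non-negative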

Seminar · Neuroscience · Recording

Universal function approximation in balanced spiking networks through convex-concave boundary composition

W. F. Podlaski
Champalimaud
Nov 9, 2022

The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellant boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.

Seminar · Neuroscience · Recording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 8, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
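
A schematic sketch of the event-based idea follows: a standard GRU cell whose hidden state is communicated only when it crosses a threshold, so most units stay silent at any step. This is a simplified illustration, not the published EGRU implementation; all names and values are assumptions.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class EventGRUCell:
    def __init__(self, n_in, n_hid, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(n_in + n_hid)
        self.Wz = rng.normal(scale=s, size=(n_hid, n_in + n_hid))  # update gate weights
        self.Wr = rng.normal(scale=s, size=(n_hid, n_in + n_hid))  # reset gate weights
        self.Wn = rng.normal(scale=s, size=(n_hid, n_in + n_hid))  # candidate-state weights
        self.threshold = threshold

    def step(self, x, c, h_out):
        # h_out is the sparse, event-gated output from the previous step.
        xh = np.concatenate([x, h_out])
        z = sigmoid(self.Wz @ xh)                               # update gate
        r = sigmoid(self.Wr @ xh)                               # reset gate
        n = np.tanh(self.Wn @ np.concatenate([x, r * h_out]))  # candidate state
        c = (1 - z) * c + z * n                                 # dense internal state
        events = c > self.threshold                             # which units emit an event
        h_out = np.where(events, c, 0.0)                        # only events are communicated
        return c, h_out

cell = EventGRUCell(n_in=10, n_hid=64)
c, h = np.zeros(64), np.zeros(64)
x = np.random.default_rng(7).normal(size=10)
c, h = cell.step(x, c, h)
print("fraction of units emitting an event:", float((h != 0).mean()))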

Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as it requires additional nonlinearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how higher-order polynomials can be implemented precisely with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs, such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.

Seminar · Neuroscience · Recording

Trading Off Performance and Energy in Spiking Networks

Sander Keemink
Donders Institute for Brain, Cognition and Behaviour
May 31, 2022

Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how SNNs follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different trade-off points. We next formalize this trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinate their spiking to minimize both the activity and coding losses, and as a result can dynamically adjust their coding precision and energy use. These networks utilize several known activity-control mechanisms (threshold adaptation and feedback inhibition) and elucidate their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off that is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
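
The joint loss can be sketched compactly. The snippet below assumes a linear readout and a quadratic activity penalty purely for illustration; the exact loss function and the derived spiking network are what the talk presents.

import numpy as np

def joint_loss(x, r, D, mu):
    x_hat = D @ r                              # linear readout from neural activity
    coding_loss = np.sum((x - x_hat) ** 2)     # performance term
    activity_loss = np.sum(r ** 2)             # energy term (activity cost)
    return coding_loss + mu * activity_loss

rng = np.random.default_rng(2)
D = rng.normal(size=(3, 50))                   # decoding weights (3 signals, 50 neurons)
x = rng.normal(size=3)                         # target signal
r = rng.random(50)                             # neural activity (e.g., filtered spike counts)
print(joint_loss(x, r, D, mu=0.1))             # larger mu trades coding precision for less activity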

Seminar · Neuroscience

Multiscale modeling of brain states, from spiking networks to the whole brain

Alain Destexhe
Centre National de la Recherche Scientifique and Paris-Saclay University
Apr 5, 2022

Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, both computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on the biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, i.e., a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
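
As a minimal illustration of the kind of quantity involved, the largest Lyapunov exponent of a random rate network can be estimated by tracking the divergence of two nearby trajectories and renormalizing (Benettin's method). The sketch below uses assumed parameters and standard rate dynamics dx/dt = -x + J tanh(x); the talk computes full Lyapunov spectra of spiking and rate networks, not just the leading exponent.

import numpy as np

rng = np.random.default_rng(3)
N, g, dt, steps = 200, 2.0, 0.05, 20000
J = rng.normal(scale=g / np.sqrt(N), size=(N, N))   # random coupling; g > 1 gives chaos

def step(x):
    # Euler step of the rate dynamics dx/dt = -x + J tanh(x).
    return x + dt * (-x + J @ np.tanh(x))

x = rng.normal(size=N)
v = rng.normal(size=N)
v *= 1e-8 / np.linalg.norm(v)                       # tiny initial perturbation
y = x + v
log_growth = 0.0
for _ in range(steps):
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    log_growth += np.log(d / 1e-8)
    y = x + (y - x) * (1e-8 / d)                    # renormalize the perturbation
print("largest Lyapunov exponent estimate:", log_growth / (steps * dt))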

Seminar · Neuroscience · Recording

Robustness in spiking networks: a geometric perspective

Christian Machens
Champalimaud Center, Lisboa
Feb 15, 2022

Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
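
A minimal sketch of such a network helps picture the bounding box: each neuron's threshold defines one face of a box around the readout error in a low-dimensional latent space, and a spike pushes the error back inside. Parameter values below are illustrative assumptions, not those used in the paper.

import numpy as np

rng = np.random.default_rng(4)
N, dt, lam = 20, 1e-3, 10.0                    # neurons, time step (s), readout leak (1/s)
D = rng.normal(size=(2, N))
D *= 0.1 / np.linalg.norm(D, axis=0)           # decoding vectors of norm 0.1
T = np.sum(D**2, axis=0) / 2                   # thresholds: one face of the box per neuron

r = np.zeros(N)                                # filtered spike trains
x = np.array([1.0, 0.5])                       # constant 2D target signal
for _ in range(5000):
    r -= dt * lam * r                          # leak pulls the readout away from x
    V = D.T @ (x - D @ r)                      # voltages: projections of the readout error
    i = int(np.argmax(V - T))
    if V[i] > T[i]:                            # the error hits a face of the bounding box...
        r[i] += 1.0                            # ...neuron i spikes and pushes it back inside
print("readout error:", np.linalg.norm(x - D @ r))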

Seminar · Neuroscience · Recording

Theory of recurrent neural networks – from parameter inference to intrinsic timescales in spiking networks

Alexander van Meegen
Forschungszentrum Jülich
Jan 12, 2022

Seminar · Neuroscience · Recording

Event-based Backpropagation for Exact Gradients in Spiking Neural Networks

Christian Pehle
Heidelberg University
Nov 2, 2021

Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise for efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp and the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.
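
For orientation, the sketch below shows only the forward dynamics of a recurrent leaky integrate-and-fire layer with hard thresholds, the hybrid system that EventProp differentiates, using a simple Euler discretization with assumed parameters. The exact event-based backward pass is what the paper derives and is not reproduced here.

import numpy as np

rng = np.random.default_rng(5)
N, T, dt = 50, 500, 1e-3                       # neurons, steps, step size (s)
tau_mem, tau_syn, v_th = 20e-3, 5e-3, 1.0      # membrane/synaptic time constants, threshold
W_rec = rng.normal(scale=0.3 / np.sqrt(N), size=(N, N))   # recurrent weights (assumed)

v = np.zeros(N)                                # membrane potentials
i_syn = np.zeros(N)                            # synaptic currents
ext = rng.random((T, N)) < 0.05                # external input spikes (assumed)
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    v += dt / tau_mem * (-v + i_syn)           # leaky integration
    i_syn += dt / tau_syn * (-i_syn)           # synaptic current decay
    fired = v >= v_th                          # hard threshold: the state discontinuity
    v[fired] = 0.0                             # reset after a spike
    i_syn += W_rec @ fired + ext[t]            # recurrent and external input jumps
    spikes[t] = fired

print("mean firing rate (Hz):", spikes.mean() / dt)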

Seminar · Neuroscience · Recording

Neural heterogeneity promotes robust learning

Dan Goodman
Imperial College London
Jan 21, 2021

The brain has a hugely diverse, heterogeneous structure. By contrast, many functional neural models are homogeneous. We compared the performance of spiking neural networks trained to carry out difficult tasks, with varying degrees of heterogeneity. Introducing heterogeneity in membrane and synapse time constants substantially improved task performance, and made learning more stable and robust across multiple training methods, particularly for tasks with a rich temporal structure. In addition, the distribution of time constants in the trained networks closely matches those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, but rather may serve an active and important role in allowing animals to learn in changing environments.
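
A minimal sketch of what introducing heterogeneity in time constants can mean in practice: sample per-neuron membrane and synaptic time constants from a skewed distribution instead of sharing one value across the network. The gamma distribution and its parameters below are assumptions for illustration, not the fitted distributions from the paper.

import numpy as np

rng = np.random.default_rng(6)
N, dt = 128, 1e-3

tau_mem_homog = np.full(N, 20e-3)                                  # homogeneous baseline (s)
tau_mem_hetero = rng.gamma(shape=3.0, scale=20e-3 / 3.0, size=N)   # heterogeneous membrane taus
tau_syn_hetero = rng.gamma(shape=3.0, scale=5e-3 / 3.0, size=N)    # heterogeneous synaptic taus

# Per-neuron decay factors for a discrete-time LIF update with step dt.
alpha = np.exp(-dt / tau_mem_hetero)   # membrane decay per step
beta = np.exp(-dt / tau_syn_hetero)    # synaptic decay per step
# During training, log(tau) can instead be treated as a learnable per-neuron parameter.
print("membrane tau range (ms):", 1e3 * tau_mem_hetero.min(), "-", 1e3 * tau_mem_hetero.max())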

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
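
The two ingredients, an alpha-shaped postsynaptic potential and a first-spike readout, can be sketched as follows. This is a schematic illustration with made-up weights and spike times, not the trained model from the paper.

import numpy as np

def alpha_psp(t, t_spike, tau=1.0):
    # Alpha synaptic kernel: rises and then decays, peaking tau after the spike.
    s = np.maximum(t - t_spike, 0.0)
    return (s / tau) * np.exp(1.0 - s / tau)

def first_spike_times(in_times, weights, threshold=1.0, t_max=10.0, dt=1e-3):
    # For each output neuron, find when the summed alpha PSPs first cross threshold.
    t = np.arange(0.0, t_max, dt)
    out_times = np.full(weights.shape[0], np.inf)
    for j in range(weights.shape[0]):
        v = sum(w * alpha_psp(t, ts) for w, ts in zip(weights[j], in_times))
        crossed = np.nonzero(v >= threshold)[0]
        if crossed.size:
            out_times[j] = t[crossed[0]]
    return out_times

in_times = np.array([0.5, 1.0, 2.0])          # input spike times (earlier = stronger evidence)
weights = np.array([[1.2, 0.8, 0.1],          # output neuron 0
                    [0.1, 0.3, 1.5]])         # output neuron 1
times = first_spike_times(in_times, weights)
print("predicted class:", int(np.argmin(times)))   # the first neuron to spike wins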

Seminar · Neuroscience · Recording

Inferring Brain Rhythm Circuitry and Burstiness

Andre Longtin
University of Ottawa
Apr 14, 2020

Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory or communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not known, specifically how single-cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (E-I) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions, and to highlight the supportive role that network finite-size effects and noisy inputs to the E-I network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters. In practice, the problem above is preceded by the challenge of fitting such E-I spiking networks to single-neuron or population data. Thus, the second part of the talk will discuss a novel method to fit mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.

ePoster

DelGrad: Exact gradients in spiking networks for learning transmission delays and weights

Julian Göltz, Jimmy Weber, Laura Kriener, Peter Lake, Melika Payvand, Mihai Petrovici

Bernstein Conference 2024

ePoster

Neuromodulation as a path along the model manifold for spiking networks

COSYNE 2022

ePoster

Unifying mechanistic and functional models of cortical circuits with low-rank, E/I-balanced spiking networks

William Podlaski & Christian Machens

COSYNE 2023

ePoster

Dynamics of clustered spiking networks via the CTLN model

Caitlin Lienkaemper, Gabriel Ocker

COSYNE 2025

ePoster

Homeostatic inhibitory plasticity enhances memory capacity and replay in spiking networks

Tomas Barta, Tomoki Fukai

COSYNE 2025

ePoster

Response functions disambiguate intrinsic vs. inherited criticality in spiking networks

Jacob Crosser, Braden A W Brinkman

COSYNE 2025

ePoster

A spike-by-spike account of dynamical computations on the latent manifolds of excitatory-inhibitory spiking networks

William Podlaski, Christian Machens

COSYNE 2025

ePoster

A game of memory: Learning in spiking networks with preserved weight distributions

Maayan Levy, Tim P. Vogels

FENS Forum 2024

ePoster

Memories by a thousand rules: Meta-learning plasticity rules for memory formation and recall in large spiking networks

Basile Confavreux, Poornima Ramesh, Pedro J. Gonçalves, Jakob H. Macke, Tim P. Vogels

FENS Forum 2024