Spiking Neurons

Discover seminars, jobs, and research tagged with “spiking neurons” worldwide.
24 curated items · 15 Seminars · 7 ePosters · 2 Positions
Updated 1 day ago
Position · Neuroscience

Prof. Jean-Pascal Pfister

University of Bern
University of Bern, Bühlplatz 5, CH-3012 Bern, Switzerland
Dec 5, 2025

The project aims to answer an almost 100-year-old question in Neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are, when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels. The PhD students and post-doc will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern).

Position · Neuroscience

Jean-Pascal Pfister

Theoretical Neuroscience Group, Department of Physiology, University of Bern
University of Bern, Bühlplatz 5, 3012 Bern, Switzerland
Dec 5, 2025

The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in Neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are, when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with respect to analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
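
As a minimal illustration of the spikes-to-factors relationship discussed above (my sketch, not the speaker's training method): continuous low-dimensional factors drive Poisson spiking in a population, and a ridge-regression readout of exponentially filtered spikes recovers the factors despite the spiking variability. All sizes and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 2000, 1e-3                      # time steps, step size (s)
t = np.arange(T) * dt

# Two latent factors: a slow sine/cosine pair.
factors = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # (2, T)

# Map factors to positive firing rates of N neurons (softplus).
N = 200
W = rng.normal(size=(N, 2))
rates = 10.0 * np.log1p(np.exp(W @ factors + 2.0))   # Hz

# Stochastic spikes, then causal exponential filtering (tau = 50 ms).
spikes = rng.random((N, T)) < rates * dt
tau, r = 0.05, np.zeros((N, T))
for k in range(1, T):
    r[:, k] = r[:, k - 1] * (1 - dt / tau) + spikes[:, k] / tau

# Ridge-regression readout from filtered spikes back to the factors.
lam = 1e-3
D = factors @ r.T @ np.linalg.inv(r @ r.T + lam * np.eye(N))
x_hat = D @ r
print("reconstruction R^2:",
      round(1 - np.var(factors - x_hat) / np.var(factors), 3))
```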

Seminar · Neuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based descriptions have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), whose functionality resembles the gating operation in LSTMs. We integrate these neurons into SNNs that are trained with FPTT (Forward Propagation Through Time) and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences, while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, like the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and also show that large feedforward SNNs can be trained successfully in an online manner to performance near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) the state of the art obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale, complex spiking neural network architectures online and on long temporal sequences.
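
For orientation, here is a much-simplified sketch of the FPTT idea (after Kag & Saligrama, 2021), not the talk's exact algorithm: at each time step, minimize the instantaneous loss plus a proximal pull toward a running average of the weights, and detach the hidden state so that no gradient ever flows backwards through time. A plain PyTorch RNN cell stands in for the LTC spiking neuron; the task and all constants are illustrative.

```python
import torch

torch.manual_seed(0)
cell = torch.nn.RNNCell(1, 32)
readout = torch.nn.Linear(32, 1)
params = list(cell.parameters()) + list(readout.parameters())
opt = torch.optim.SGD(params, lr=1e-2)
running = [p.detach().clone() for p in params]   # running weight average
alpha = 0.5

# Toy online task: predict the next value of a sine wave.
xs = torch.sin(torch.linspace(0, 20, 400)).unsqueeze(1)
h = torch.zeros(1, 32)
for k in range(len(xs) - 1):
    h = cell(xs[k].unsqueeze(0), h)
    loss = (readout(h) - xs[k + 1]).pow(2).mean()
    # Proximal penalty toward the running average (the FPTT regularizer).
    prox = sum(((p - r) ** 2).sum() for p, r in zip(params, running))
    opt.zero_grad()
    (loss + 0.5 * alpha * prox).backward()
    opt.step()
    # Update the running average and cut the graph through time.
    running = [0.5 * (r + p.detach()) for r, p in zip(params, running)]
    h = h.detach()
print("final step loss:", float(loss))
```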

Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as they require additional non-linearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely carry out higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
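
The linear baseline that the mSCN extends can be written down directly. Below is a minimal sketch of a standard spike coding network in the spirit of the SCN framework the abstract cites: the connectivity is derived, not trained, so that filtered spike trains track a linear system x' = Ax. The multiplicative-synapse extension is omitted, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N, lam, dt = 2, 40, 10.0, 1e-4
A = np.array([[-0.2, -6.0], [6.0, -0.2]])   # damped rotation
D = rng.normal(size=(d, N)) * 0.1           # decoding weights
T_thr = np.sum(D**2, axis=0) / 2            # thresholds ||D_i||^2 / 2
W_slow = D.T @ (A + lam * np.eye(d)) @ D    # slow recurrent connectivity
W_fast = -D.T @ D                           # fast (reset) connectivity

steps = 20000
x = np.array([1.0, 0.0])                    # state of the target system
V = D.T @ x                                 # V = D^T (x - x_hat), x_hat = 0
r = np.zeros(N)
x_log, xhat_log = [], []
for _ in range(steps):
    x = x + dt * (A @ x)                    # true dynamics
    s = np.zeros(N)
    i = np.argmax(V - T_thr)
    if V[i] > T_thr[i]:                     # at most one spike per step
        s[i] = 1.0
    V = V + dt * (-lam * V + W_slow @ r) + W_fast @ s
    r = r + dt * (-lam * r) + s             # filtered spike trains
    x_log.append(x.copy())
    xhat_log.append(D @ r)

mse = np.mean((np.array(x_log) - np.array(xhat_log))**2)
print("mean squared tracking error:", round(float(mse), 5))
```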

Seminar · Neuroscience · Recording

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Robert Guetig
Charité – Universitätsmedizin Berlin & BIH
Mar 8, 2022

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
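
For context, here is a minimal static-synapse tempotron (after Gütig & Sompolinsky, 2006), the baseline that the talk generalizes to dynamic synapses: a leaky integrate-and-fire unit learns to fire on "+" spike patterns and stay silent on "-" patterns by nudging weights with the PSP kernel evaluated at the time of maximal voltage. Patterns and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, tau, tau_s = 50, 0.5, 0.015, 0.004
ts = np.arange(0, T, 1e-3)
V0 = 1.0 / np.max(np.exp(-ts / tau) - np.exp(-ts / tau_s))  # kernel norm

def voltage(pattern, w):
    """Summed PSPs; pattern[i] is a list of spike times of afferent i."""
    V = np.zeros_like(ts)
    for i, times in enumerate(pattern):
        for t_sp in times:
            m = ts >= t_sp
            V[m] += w[i] * V0 * (np.exp(-(ts[m] - t_sp) / tau)
                                 - np.exp(-(ts[m] - t_sp) / tau_s))
    return V

# Random latency patterns: one spike per afferent, random binary label.
patterns = [[[rng.uniform(0, T)] for _ in range(N)] for _ in range(40)]
labels = rng.integers(0, 2, size=40)

w, theta, lr = rng.normal(0, 0.01, N), 1.0, 0.02
for epoch in range(200):
    errs = 0
    for pat, y in zip(patterns, labels):
        V = voltage(pat, w)
        k = np.argmax(V)
        fired = V[k] > theta
        if fired != bool(y):                 # error: nudge weights
            errs += 1
            sign = 1.0 if y else -1.0
            for i, times in enumerate(pat):
                for t_sp in times:
                    if t_sp <= ts[k]:        # PSP kernel at t_max
                        w[i] += sign * lr * V0 * (
                            np.exp(-(ts[k] - t_sp) / tau)
                            - np.exp(-(ts[k] - t_sp) / tau_s))
    if errs == 0:
        break
print("epochs used:", epoch + 1, "errors in last epoch:", errs)
```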

Seminar · Neuroscience · Recording

Network mechanisms underlying representational drift in area CA1 of hippocampus

Alex Roxin
CRM, Barcelona
Feb 1, 2022

Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
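
A toy version of the statistical picture described above (my construction, with assumed parameters): each cell receives a stable spatially tuned input plus a non-spatial input that reorganizes slowly between sessions, and is counted as active in a session when the summed input exceeds a threshold. This reproduces a drifting active subset with graded stability across cells.

```python
import numpy as np

rng = np.random.default_rng(3)
N, sessions, rho = 2000, 20, 0.9        # rho: session-to-session stability
spatial = rng.normal(0.0, 1.0, N)       # stable, spatially tuned drive
nonspatial = rng.normal(0.0, 1.0, N)    # plastic, non-spatial drive
theta = 1.5                             # activation threshold

active = np.zeros((sessions, N), dtype=bool)
for s in range(sessions):
    active[s] = spatial + nonspatial > theta
    # AR(1) turnover of the plastic, non-spatial component.
    nonspatial = rho * nonspatial + np.sqrt(1 - rho**2) * rng.normal(0, 1, N)

overlap = [np.mean(active[0] & active[s]) / np.mean(active[0])
           for s in range(sessions)]
print("P(active in session s | active in session 0):",
      np.round(overlap, 2))
```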

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events (spikes) that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties (which have been shown to accurately model the response of biological neurons) while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
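
The log-normal observation has a compact intuition, sketched below under my own simplifying assumptions (not the talk's derivation): if inputs are Gaussian across the population and the transfer function is approximately exponential near threshold, instantaneous rates come out log-normal.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = rng.normal(-1.0, 0.5, 100_000)     # Gaussian inputs across neurons
rates = 5.0 * np.exp(2.0 * mu)          # roughly exponential transfer (toy)
log_r = np.log(rates)
print("mean and std of log-rates:",
      round(log_r.mean(), 2), round(log_r.std(), 2))
# log_r is exactly Gaussian here, i.e. rates are log-normally distributed.
```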

Seminar · Neuroscience · Recording

NMC4 Short Talk: An optogenetic theory of stimulation near criticality

Brandon Benson
Stanford University
Dec 1, 2021

Recent advances in optogenetics allow for stimulation of neurons with sub-millisecond spike jitter and single neuron selectivity. Already this precision has revealed new levels of cortical sensitivity: stimulating tens of neurons can yield changes in the mean firing rate of thousands of similarly tuned neurons. This extreme sensitivity suggests that cortical dynamics are near criticality. Criticality is often studied in neural systems as a non-equilibrium thermodynamic process in which scale-free patterns of activity, called avalanches, emerge between distinct states of spontaneous activity. While criticality is well studied, it is still unclear what these distinct states of spontaneous activity are and what responses we expect from stimulation of this activity. By answering these questions, optogenetic stimulation will become a new avenue for approaching criticality and understanding cortical dynamics. Here, for the first time, we study the effects of optogenetic-like stimulation on a model near criticality. We study a model of Inhibitory/Excitatory (I/E) Leaky Integrate and Fire (LIF) spiking neurons which display a region of high sensitivity as seen in experiments. We find that this region of sensitivity is, indeed, near criticality. We derive the Dynamic Mean Field Theory of this model and find that the distinct states of activity are asynchrony and synchrony. We use our theory to characterize response to various types and strengths of optogenetic stimulation. Our model and theory predict that asynchronous, near-critical dynamics can have two qualitatively different responses to stimulation: one characterized by high sensitivity, discrete event responses, and high trial-to-trial variability, and another characterized by low sensitivity, continuous responses with characteristic frequencies, and low trial-to-trial variability. While both response types may be considered near-critical in model space, networks which are closest to criticality show a hybrid of these response effects.

Seminar · Neuroscience

Representation transfer and signal denoising through topographic modularity

Barna Zajzon
Morrison lab, Forschungszentrum Jülich, Germany
Nov 3, 2021

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.

Seminar · Neuroscience · Recording

Deriving local synaptic learning rules for efficient representations in networks of spiking neurons

Viola Priesemann
Max Planck Institute for Dynamics and Self-Organization
Nov 1, 2021

How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
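
A heavily simplified sketch of one ingredient named above (my toy, not the derived learning scheme): inhibitory weights onto a dendritic compartment follow a voltage-dependent rule that drives the local potential toward a target, i.e. they learn a local E/I balance. Excitatory plasticity and the full multi-compartment model are omitted; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
T, dt, tau = 50000, 1e-3, 0.01
N_e, N_i = 80, 20
w_e = rng.uniform(0.5, 1.5, N_e)         # fixed excitatory weights
w_i = np.full(N_i, 0.1)                  # plastic inhibitory weights
g_e, g_i = np.zeros(N_e), np.zeros(N_i)  # presynaptic traces
eta, v_target = 1e-2, 0.0

v_early, v_late = [], []
for k in range(T):
    g_e += -dt * g_e / tau + (rng.random(N_e) < 5.0 * dt)   # Poisson input
    g_i += -dt * g_i / tau + (rng.random(N_i) < 10.0 * dt)
    v = w_e @ g_e - w_i @ g_i            # dendritic potential (toy units)
    # Voltage-dependent inhibitory rule: presynaptic trace times the
    # deviation of the local potential from its target.
    w_i += eta * g_i * (v - v_target)
    w_i = np.clip(w_i, 0.0, None)
    (v_early if k < 5000 else v_late).append(v)

print("mean dendritic potential, early vs late:",
      round(np.mean(v_early), 2), round(np.mean(v_late[-5000:]), 2))
```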

Seminar · Neuroscience · Recording

Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits

Richard Naud
University of Ottawa
Aug 31, 2020

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well-established that it depends on pre and postsynaptic activity. However, models that rely solely on pre and postsynaptic activity for synaptic changes have, to date, not been able to account for learning complex tasks that demand hierarchical networks. Here, we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then neurons higher in the hierarchy can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites, and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses, and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
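
A toy of the burst-dependent idea (my simplification, not the paper's full rule): weight changes are gated by whether a postsynaptic event is a burst, relative to a running estimate of the burst probability, times a presynaptic eligibility trace. The spike and event trains here are random placeholders, not the output of the full dendritic model.

```python
import numpy as np

rng = np.random.default_rng(6)
T, dt = 20000, 1e-3
eta, tau_pre = 0.05, 0.02
w, pre_trace, p_bar = 0.5, 0.0, 0.5

for _ in range(T):
    pre = rng.random() < 10.0 * dt          # presynaptic spike (placeholder)
    pre_trace += -dt * pre_trace / tau_pre + pre
    if rng.random() < 5.0 * dt:             # postsynaptic event (placeholder)
        burst = rng.random() < 0.6          # event classified as a burst
        # Bursts potentiate, isolated spikes depress, centered on the
        # running burst probability p_bar so the rule is self-normalizing.
        w += eta * pre_trace * ((1.0 if burst else 0.0) - p_bar)
        p_bar += 0.05 * ((1.0 if burst else 0.0) - p_bar)

print("final weight:", round(w, 3),
      "estimated burst probability:", round(p_bar, 3))
```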

Seminar · Neuroscience · Recording

E-prop: A biologically inspired paradigm for learning in recurrent networks of spiking neurons

Franz Scherr
Technische Universität Graz
Aug 30, 2020

Transformative advances in deep learning, such as deep reinforcement learning, usually rely on gradient-based learning methods such as backpropagation through time (BPTT) as a core learning algorithm. However, BPTT is not considered biologically plausible, since it requires propagating gradients backwards in time and across neurons. Here, we propose e-prop, a novel gradient-based learning method with local and online weight update rules for recurrent neural networks, and in particular recurrent spiking neural networks (RSNNs). As a result, e-prop has the potential to provide a substantial fraction of the power of deep learning to RSNNs. In this presentation, we will motivate e-prop from the perspective of recent insights in neuroscience and show how these have to be combined to form an algorithm for online gradient descent. The mathematical results will be supported by empirical evidence in supervised and reinforcement learning tasks. We will also discuss how limitations that are inherited from gradient-based learning methods, such as sample-efficiency, can be addressed by considering an evolution-like optimization that enhances learning on particular task families. The emerging learning architecture can be used to learn tasks from a single demonstration, hence enabling one-shot learning.
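
A sketch of the flavor of e-prop's update for the input weights of a recurrent network of LIF neurons (after Bellec et al., 2020), on a throwaway regression task: eligibility traces are accumulated forward in time and combined with a random-feedback learning signal, so no backpropagation through time is needed. The task, feedback weights, and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_rec, T = 20, 50, 400
alpha, v_th, gamma, eta = 0.9, 1.0, 0.3, 5e-3
W_in = rng.normal(0, 0.5, (n_rec, n_in))
W_out = rng.normal(0, 0.5, n_rec) / np.sqrt(n_rec)
B = rng.normal(0, 1.0, n_rec) / np.sqrt(n_rec)     # random feedback weights

x = (rng.random((T, n_in)) < 0.05).astype(float)   # frozen input spikes
y_star = np.sin(np.linspace(0, 4 * np.pi, T))      # target signal

for epoch in range(200):
    v = np.zeros(n_rec); z = np.zeros(n_rec)
    eps = np.zeros(n_in)                  # filtered presynaptic traces
    dW = np.zeros_like(W_in); loss = 0.0
    for t in range(T):
        v = alpha * v + W_in @ x[t] - v_th * z     # LIF with soft reset
        z = (v > v_th).astype(float)               # spikes
        eps = alpha * eps + x[t]
        # Pseudo-derivative of the spike nonlinearity.
        psi = gamma * np.maximum(0.0, 1 - np.abs(v - v_th) / v_th)
        err = W_out @ z - y_star[t]
        # e-prop: learning signal (random feedback) times eligibility trace.
        dW += (err * B * psi)[:, None] * eps[None, :]
        loss += 0.5 * err**2
    W_in -= eta * dW

print("loss per step after training:", round(loss / T, 4))
```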

Seminar · Neuroscience · Recording

Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks

Sander Bohte
Centrum Wiskunde & Informatica, Amsterdam
Aug 30, 2020

The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance and efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks lack competitive performance in compelling areas. Here, for sequential and streaming tasks, we demonstrate how spiking recurrent neural networks (SRNNs) using adaptive spiking neurons are able to achieve state-of-the-art performance compared to other spiking neural networks and almost reach or exceed the performance of classical recurrent neural networks (RNNs) while exhibiting sparse activity. From this, we calculate a 100x energy improvement for our SRNNs over classical RNNs on the harder tasks. We find in particular that adapting the timescales of spiking neurons is crucial for achieving such performance, and we demonstrate this performance for SRNNs with different spiking neuron models.
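
The adaptive spiking neuron at the heart of such SRNNs can be sketched compactly. Below is a common adaptive LIF (ALIF) formulation, in which the firing threshold rises after each spike and decays on a slower timescale; in the talk's setting these timescales are trained, whereas here they are fixed, illustrative constants.

```python
import numpy as np

class ALIF:
    def __init__(self, n, tau_m=20e-3, tau_adp=200e-3,
                 beta=1.8, v_th0=1.0, dt=1e-3):
        self.alpha = np.exp(-dt / tau_m)      # membrane decay per step
        self.rho = np.exp(-dt / tau_adp)      # threshold adaptation decay
        self.beta, self.v_th0 = beta, v_th0
        self.v = np.zeros(n)                  # membrane potential
        self.a = np.zeros(n)                  # adaptation variable

    def step(self, current):
        self.v = self.alpha * self.v + current
        v_th = self.v_th0 + self.beta * self.a   # adaptive threshold
        z = (self.v >= v_th).astype(float)       # spikes
        self.v -= v_th * z                       # soft reset
        self.a = self.rho * self.a + z           # threshold adaptation
        return z

layer = ALIF(10)
rng = np.random.default_rng(8)
spikes = sum(layer.step(rng.normal(0.1, 0.2, 10)) for _ in range(1000))
print("spike counts over 1 s:", spikes)
```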

Seminar · Neuroscience · Recording

Mean-field models for finite-size populations of spiking neurons

Tilo Schwalger
TU Berlin
Jun 7, 2020

Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike timing effects on the microscopic level such as spike synchronization and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics on the microscopic level and coarse-grained FR models on the population level can be bridged by mean-field theory formally valid for infinitely many neurons. It remains however challenging to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated in the mesoscopic mean-field framework. On the other hand, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.
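
A toy illustration of why finite size matters (my construction, not the refractory-density framework itself): for N neurons sharing a common hazard rate and an absolute refractory period, the population activity is a stochastic count process whose fluctuations shrink only as N grows, which is exactly what infinite-N mean-field models discard.

```python
import numpy as np

rng = np.random.default_rng(9)
dt, T, t_ref = 1e-3, 2000, 0.005
ref_steps = int(t_ref / dt)

def simulate(N):
    recover = np.zeros(N, dtype=int)          # remaining refractory steps
    rate_in = lambda k: 20.0 + 15.0 * np.sin(2 * np.pi * 3.0 * k * dt)  # Hz
    A = np.empty(T)
    for k in range(T):
        free = recover == 0                   # non-refractory neurons
        p = 1 - np.exp(-rate_in(k) * dt)      # spike prob per free neuron
        sp = free & (rng.random(N) < p)
        A[k] = sp.sum() / (N * dt)            # population activity (Hz)
        recover[recover > 0] -= 1
        recover[sp] = ref_steps
    return A

for N in (50, 5000):
    A = simulate(N)
    print(f"N={N}: mean activity {A.mean():.1f} Hz, std {A.std():.1f} Hz")
```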

ePoster

Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons

Ferdinand Tixidre, Gianluigi Mongillo, Alessandro Torcini

Bernstein Conference 2024

ePoster

Migraine mutation of a Na+ channel induces a switch in excitability type and energetically expensive spikes in an experimentally-constrained model of fast-spiking neurons

Leonardo Preuss, Jan-Hendrik Schleimer, Louisiane Lemaire, Susanne Schreiber

Bernstein Conference 2024

ePoster

A new framework for modeling innate capabilities in network with diverse types of spiking neurons: Probabilistic Skeleton

Christoph Stöckl, Dominik Lang, Alice Dauphin, Wolfgang Maass

Bernstein Conference 2024

ePoster

Local dendritic balance enables the learning of efficient representations in networks of spiking neurons

COSYNE 2022

ePoster

Towards hierarchical predictive coding with spiking neurons and dendritic errors

COSYNE 2022