Topic: Neuroscience

continuous time

Content Overview
3 total items
2 seminars
1 ePoster

Latest

Seminar · Neuroscience · Recording

Event-based Backpropagation for Exact Gradients in Spiking Neural Networks

Christian Pehle
Heidelberg University
Nov 3, 2021

Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise of efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp on the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.
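As a rough illustration of the hybrid dynamics the abstract refers to (smooth membrane evolution punctuated by discrete spike events), here is a minimal forward simulation of a leaky integrate-and-fire neuron in plain numpy. This is an illustrative sketch only, not the authors' EventProp implementation; all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

def simulate_lif(i_ext, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler simulation of a leaky integrate-and-fire neuron.

    Continuous dynamics: tau * dv/dt = -v + i_ext(t).
    When v crosses the hard threshold v_th, a discrete spike event is
    emitted and the state jumps to v_reset (the discontinuity at which
    EventProp applies jump conditions for the partial derivatives).
    Parameter values are illustrative, not taken from the talk.
    """
    v = 0.0
    spike_times = []
    for k, i in enumerate(i_ext):
        v += dt / tau * (-v + i)       # smooth subthreshold evolution
        if v >= v_th:                  # discrete event: hard threshold
            spike_times.append(k * dt)
            v = v_reset                # state jump (discontinuity)
    return spike_times

# Constant suprathreshold drive produces a regular spike train.
spikes = simulate_lif(np.full(5000, 2.0))
```

In the forward pass events are the only nonsmooth points; EventProp's backward pass likewise only communicates at these spike times, which is the temporal sparsity the abstract emphasizes.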

Seminar · Neuroscience

Using Nengo and the Neural Engineering Framework to Represent Time and Space

Terry Stewart
University of Waterloo and National Research Council Canada
Jul 15, 2020

The Neural Engineering Framework (and the associated software tool Nengo) provide a general method for converting algorithms into neural networks with an adjustable level of biological plausibility. I will give an introduction to this approach, and then focus on recent developments that have shown new insights into how brains represent time and space. This will start with the underlying mathematical formulation of ideal methods for representing continuous time and continuous space, then show how implementing these in neural networks can improve Machine Learning tasks, and finally show how the resulting systems compare to temporal and spatial representations in biological brains.
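The core NEF idea mentioned above — converting an algorithm into a neural network by encoding values in heterogeneous tuning curves and recovering them with linear decoders — can be sketched in a few lines of numpy. This is a toy illustration of the encode/decode principle, not Nengo's actual API; the tuning-curve parameters below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
x = np.linspace(-1, 1, 200)            # scalar values to represent

# Encoding: each neuron gets a random preferred direction (+/-1 for a
# scalar), gain, and bias; firing rates pass through a rectified-linear
# nonlinearity, giving a heterogeneous population of tuning curves.
encoders = rng.choice([-1.0, 1.0], n_neurons)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
rates = np.maximum(
    0.0,
    gains[:, None] * encoders[:, None] * x[None, :] + biases[:, None],
)

# Decoding: least-squares linear decoders recover x from the population
# activity; decoding a function f(x) instead just swaps the target.
decoders, *_ = np.linalg.lstsq(rates.T, x, rcond=None)
x_hat = rates.T @ decoders
error = np.sqrt(np.mean((x - x_hat) ** 2))
```

The same encode/decode machinery, extended with recurrent connections and synaptic dynamics, is what underlies the NEF representations of continuous time and space discussed in the seminar.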

ePoster · Neuroscience

Learning Hebbian/Anti-Hebbian networks in continuous time

Henrique Reis Aguiar, Matthias Hennig

Bernstein Conference 2024

