Latest

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 10, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay, in the hippocampus, of sequences observed during online exploration. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world rather than replaying previous experiences verbatim. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously been studied mainly at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that, in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potential, conceptually distinct role for inhibition in computing with SNNs.
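
As a rough illustration of what a Hebbian inhibitory STDP rule can look like, the sketch below implements the symmetric inhibitory plasticity rule of Vogels et al. (2011) in NumPy; the exact rule used in this work may differ, and all parameter values are placeholders.

# Illustrative inhibitory STDP step (Vogels et al., 2011 form; not necessarily the talk's rule).
import numpy as np

def inhibitory_stdp_step(w, pre_trace, post_trace, pre_spikes, post_spikes,
                         eta=1e-3, rho0=5.0, tau=20e-3, dt=1e-3):
    """One update of inhibitory weights w[post, pre] from binary spike vectors."""
    # Exponential spike traces (low-pass filtered spike trains).
    pre_trace += -dt / tau * pre_trace + pre_spikes
    post_trace += -dt / tau * post_trace + post_spikes
    alpha = 2.0 * rho0 * tau  # depression bias that sets a target postsynaptic rate
    dw = eta * (np.outer(post_trace, pre_spikes)      # potentiation on presynaptic spikes
                + np.outer(post_spikes, pre_trace)    # potentiation on postsynaptic spikes
                - alpha * np.outer(np.ones_like(post_spikes), pre_spikes))
    w = np.clip(w + dw, 0.0, None)  # keep inhibitory weights non-negative (sign handled elsewhere)
    return w, pre_trace, post_trace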

Seminar · Neuroscience · Recording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 10, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constant networks (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), whose functionality resembles the gating operation in LSTMs. We integrate these neurons into SNNs that are trained with FPTT and demonstrate that LTC-SNNs trained in this way outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, such as the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate new state-of-the-art results for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and show that large feedforward SNNs can be trained successfully in an online manner to performance near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) the state of the art obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, end to end, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale, complex spiking neural network architectures online and on long temporal sequences.
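
The gating idea can be sketched as a leaky integrate-and-fire neuron whose time constant is itself a learned function of the input, loosely analogous to an LSTM gate; the parameterization below is an illustrative assumption, not necessarily the exact LTC-SN definition from the talk.

# Illustrative liquid time-constant spiking neuron (forward dynamics only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltc_sn_step(v, x, w_in, w_tau, b_tau, v_th=1.0, dt=1e-3,
                tau_min=5e-3, tau_max=50e-3):
    """One Euler step for a layer of LTC-like spiking neurons."""
    i_in = w_in @ x                                                    # synaptic input current
    tau = tau_min + (tau_max - tau_min) * sigmoid(w_tau @ x + b_tau)   # input-dependent ("liquid") time constant
    v = v + dt / tau * (i_in - v)                                      # leaky integration
    spikes = (v >= v_th).astype(v.dtype)                               # threshold crossing
    v = v * (1.0 - spikes)                                             # reset after a spike
    return v, spikes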

Seminar · Neuroscience · Recording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 9, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm for designing deep learning architectures with good task performance on real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features: event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures on challenging tasks such as language modelling, gesture recognition and sequential MNIST.
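
The core mechanism can be sketched as a GRU-style cell whose output is thresholded, so only units that cross the threshold emit events downstream; this is a schematic illustration, and the published EGRU equations differ in detail.

# Schematic event-based GRU-like step (not the exact EGRU formulation).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def egru_like_step(c, x, params, threshold=0.5):
    """One step: dense internal state c, sparse event output h."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ c)          # update gate
    r = sigmoid(Wr @ x + Ur @ c)          # reset gate
    c_tilde = np.tanh(Wh @ x + Uh @ (r * c))
    c = (1.0 - z) * c + z * c_tilde       # internal state update
    events = c > threshold                # units that emit an event this step
    h = np.where(events, c, 0.0)          # activity-sparse output: silent units send nothing
    return h, c, events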

Seminar · Neuroscience · Recording

General purpose event-based architectures for deep learning

Anand Subramoney
Institute for Neural Computation
Oct 5, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm for designing deep learning architectures with good task performance on real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features: event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures on challenging tasks such as language modelling, gesture recognition and sequential MNIST.

Seminar · Neuroscience · Recording

Introducing dendritic computations to SNNs with Dendrify

Michalis Pagkalos
IMBB FORTH
Sep 7, 2022

Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties, while biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Through simple commands, Dendrify can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
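
As a plain-Python illustration (not Dendrify's actual interface) of what a reduced compartmental model adds over a point neuron, the sketch below couples a passive soma to a dendrite with a crude regenerative nonlinearity; all parameter values are placeholders.

# Illustrative two-compartment (soma + dendrite) leaky integrate-and-fire neuron.
def two_compartment_step(v_soma, v_dend, i_dend, dt=1e-4, tau=20e-3,
                         g_c=0.1, v_th=1.0, dend_th=0.8, dend_boost=0.5):
    """One Euler step; i_dend is the synaptic current arriving at the dendrite."""
    dend_spike = dend_boost * float(v_dend > dend_th)  # crude dendritic-spike nonlinearity
    v_dend += dt / tau * (-v_dend + i_dend + g_c * (v_soma - v_dend) + dend_spike)
    v_soma += dt / tau * (-v_soma + g_c * (v_dend - v_soma))  # soma driven via the coupling conductance
    spike = v_soma >= v_th
    v_soma = 0.0 if spike else v_soma                   # somatic reset
    return v_soma, v_dend, spike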

Seminar · Neuroscience · Recording

Trading Off Performance and Energy in Spiking Networks

Sander Keemink
Donders Institute for Brain, Cognition and Behaviour
Jun 1, 2022

Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories of how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show that SNNs follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to reach different points on that curve. We next formalize the trade-off with a joint loss function incorporating a coding loss (performance) and an activity loss (energy use). From this loss we derive a class of spiking networks that coordinate their spiking to minimize both the activity and coding losses, and as a result can dynamically adjust their coding precision and energy use. These networks use several known activity-control mechanisms to do so (threshold adaptation and feedback inhibition), elucidating the potential function of these mechanisms within neural circuits. Using geometric intuition, we demonstrate how they regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off that is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
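
For an auto-encoding network, the joint objective can be written as a reconstruction (coding) loss plus a weighted activity cost; the sketch below is a generic version of that trade-off, and the decoder and exact loss used in the talk may differ.

# Illustrative joint performance-energy loss for an auto-encoding spiking network.
import numpy as np

def joint_loss(x, r, D, mu=0.1):
    """x: input signal, r: spike counts/filtered spike trains, D: linear decoder."""
    x_hat = D @ r                              # decoded estimate of the input
    coding_loss = np.sum((x - x_hat) ** 2)     # performance term
    activity_loss = np.sum(np.abs(r))          # energy term (total activity)
    return coding_loss + mu * activity_loss    # mu sets the trade-off point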

Seminar · Neuroscience · Recording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 5, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback passes do not fully exploit the asynchronous, event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that is particularly suitable for event-based processing (a generic sketch of the idea appears after the reference list below). In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems such as low response latency and continual learning.

References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
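
The predictive-coding part of the talk can be illustrated with a single generic layer in which top-down predictions are compared with the input, the error updates the latent state, and the weights follow a Hebbian-like error-times-activity rule; this is a textbook-style sketch, not the gated Hebbian model of Dora et al. (2021).

# Illustrative predictive-coding layer update.
import numpy as np

def predictive_coding_step(x, z, W, lr_z=0.1, lr_w=1e-3):
    """x: input, z: latent representation, W: generative (top-down) weights."""
    x_hat = W @ z                       # top-down prediction of the input
    error = x - x_hat                   # prediction error, the signal passed between layers
    z = z + lr_z * (W.T @ error)        # inference: move the latent state to reduce the error
    W = W + lr_w * np.outer(error, z)   # learning: Hebbian-like update on error x activity
    return z, W, error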

Seminar · Neuroscience · Recording

Efficient GPU training of SNNs using approximate RTRL

James Knight
University of Sussex
Nov 3, 2021

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries, where techniques such as parallel batch training minimise latency and maximise GPU occupancy, as well as from our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4], which approximates RTRL, to train SNN classifiers on the Spiking Heidelberg Digits and Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to that of classifiers trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measured execution times and found that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after fewer than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference with a recurrent SNN in GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
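
The adaptation dynamics referred to above can be sketched as an adaptive LIF (ALIF) neuron whose threshold is raised by every spike and then decays back slowly; parameter values below are placeholders rather than the settings used in GeNN.

# Illustrative adaptive LIF (ALIF) neuron layer.
import numpy as np

def alif_step(v, a, i_in, dt=1e-3, tau_m=20e-3, tau_a=200e-3, v_th=1.0, beta=0.5):
    """One Euler step; a is the per-neuron adaptation variable."""
    v = v + dt / tau_m * (-v + i_in)                  # leaky membrane integration
    a = a - dt / tau_a * a                            # adaptation decays back to zero
    spikes = (v >= v_th + beta * a).astype(v.dtype)   # spike against the adaptive threshold
    v = v * (1.0 - spikes)                            # membrane reset
    a = a + spikes                                    # each spike raises the effective threshold
    return v, a, spikes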

Seminar · Neuroscience · Recording

Optimal initialization strategies for Deep Spiking Neural Networks

Julia Gygax
Friedrich Miescher Institute for Biomedical Research (FMI)
Nov 3, 2021

Recent advances in neuromorphic hardware and Surrogate Gradient (SG) learning highlight the potential of Spiking Neural Networks (SNNs) for energy-efficient signal processing and learning. As in Artificial Neural Networks (ANNs), training performance in SNNs strongly depends on the initialization of synaptic and neuronal parameters. While there are established methods for initializing deep ANNs for high performance, effective strategies for optimal SNN initialization are lacking. Here, we address this gap and propose flexible, data-dependent initialization strategies for SNNs.
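
One possible form of data-dependent initialization is to rescale each layer's weights so that the summed synaptic input, measured on a batch of real input spike trains, has a target fluctuation scale; this is an illustrative assumption about the approach, not the specific strategies proposed in the talk.

# Illustrative data-dependent rescaling of an SNN layer's weights.
import numpy as np

def data_dependent_init(W, spike_batch, target_std=1.0, eps=1e-8):
    """W: weights [post, pre]; spike_batch: input spikes of shape (batch, time, pre)."""
    drive = np.einsum('btj,ij->bti', spike_batch, W)   # summed synaptic input per neuron and time step
    scale = target_std / (drive.std() + eps)           # match the observed input fluctuations to the target
    return W * scale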

Seminar · Neuroscience · Recording

Machine Learning with SNNs for low-power inference on neuromorphic hardware

Dylan Muir
SynSense
Nov 3, 2021

Seminar · Neuroscience · Recording

E-prop: A biologically inspired paradigm for learning in recurrent networks of spiking neurons

Franz Scherr
Technische Universität Graz
Aug 31, 2020

Transformative advances in deep learning, such as deep reinforcement learning, usually rely on gradient-based learning methods such as backpropagation through time (BPTT) as a core learning algorithm. However, BPTT is not considered biologically plausible, since it requires propagating gradients backwards in time and across neurons. Here, we propose e-prop, a novel gradient-based learning method with local and online weight-update rules for recurrent neural networks, and in particular recurrent spiking neural networks (RSNNs). As a result, e-prop has the potential to bring a substantial fraction of the power of deep learning to RSNNs. In this presentation, we will motivate e-prop from the perspective of recent insights in neuroscience and show how these can be combined to form an algorithm for online gradient descent. The mathematical results will be supported by empirical evidence from supervised and reinforcement learning tasks. We will also discuss how limitations inherited from gradient-based learning methods, such as sample efficiency, can be addressed by considering an evolution-like optimization that enhances learning on particular task families. The emerging learning architecture can be used to learn tasks from a single demonstration, hence enabling one-shot learning.
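
The central idea of e-prop is that the BPTT gradient can be approximated online as a per-neuron learning signal multiplied by a per-synapse eligibility trace, accumulated over time; the sketch below shows this structure for plain LIF neurons, with illustrative symbols and constants (see Bellec et al., 2020 for the exact derivation).

# Illustrative e-prop pieces for a recurrent LIF layer.
import numpy as np

def eprop_traces_lif(pre_trace, pre_spikes, v, v_th=1.0, alpha=0.9, gamma=0.3):
    """Filtered presynaptic trace and surrogate derivative give e_ji(t) = psi_j(t) * z_bar_i(t)."""
    pre_trace = alpha * pre_trace + pre_spikes                        # low-pass filtered presynaptic spikes
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))    # surrogate derivative of the spike function
    elig_trace = psi[:, None] * pre_trace[None, :]                    # eligibility trace per synapse
    return pre_trace, elig_trace

def eprop_accumulate(dW, learning_signal, elig_trace):
    """Accumulate the online gradient: dW[j, i] += L_j(t) * e_ji(t); apply W -= eta * dW afterwards."""
    return dW + learning_signal[:, None] * elig_trace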
