Spiking Recurrent Neural Networks
Dr. Fleur Zeldenrust
We are looking for a postdoctoral researcher to study the effects of neuromodulators in biologically realistic networks and learning tasks in the Vidi project 'Top-down neuromodulation and bottom-up network computation, a computational study'. You will use cellular and behavioural data gathered by our department over the previous five years on dopamine, acetylcholine and serotonin in mouse barrel cortex to bridge the gap between single-cell, network and behavioural effects. The aim of this project is to explain the effects of neuromodulation on task performance in biologically realistic spiking recurrent neural networks (SRNNs). You will use biologically realistic learning frameworks, such as FORCE learning, to study how network structure influences task performance. You will use existing open-source data to train an SRNN on a pole-detection task (for rodents using their whiskers) and incorporate realistic network properties of the (barrel) cortex based on our lab's measurements. Next, you will incorporate into the network the cellular effects of dopamine, acetylcholine and serotonin that we have measured, and investigate their effects on task performance. In particular, you will research the effects of biologically realistic network properties (the balance between excitation and inhibition and the resulting chaotic activity, non-linear neuronal input-output relations, patterns in connectivity, Dale's law) and incorporate known neuron- and network-level effects. You will build on the single-cell data, network models and analysis methods available in our group, and your results will be incorporated into our group's further research to develop and validate efficient coding models of (somatosensory) perception. We are therefore looking for a team player who can collaborate well with the other group members and is willing both to learn from them and to share their knowledge.
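For context, FORCE learning trains the readout of a recurrent network with a recursive-least-squares (RLS) update while the readout is fed back into the network. Below is a minimal rate-based sketch of that update (after Sussillo & Abbott, 2009), not the spiking, barrel-cortex-constrained implementation the project calls for; the network size, gain and target signal are illustrative placeholders.

```python
import numpy as np

# Minimal rate-based FORCE sketch; all constants are illustrative placeholders.
N, dt, tau = 500, 1e-3, 10e-3
rng = np.random.default_rng(0)
J = rng.normal(0, 1.5 / np.sqrt(N), (N, N))   # strong random recurrence (chaotic regime)
wf = rng.uniform(-1, 1, N)                     # fixed feedback weights
w = np.zeros(N)                                # trained readout weights
P = np.eye(N)                                  # running inverse correlation matrix (RLS)
x = rng.normal(0, 0.5, N)                      # network state

T = 5000
target = np.sin(2 * np.pi * 5 * dt * np.arange(T))  # toy 5 Hz target signal

for t in range(T):
    r = np.tanh(x)                             # firing rates
    z = w @ r                                  # readout
    x += dt / tau * (-x + J @ r + wf * z)      # dynamics with readout feedback
    if t % 2 == 0:                             # RLS update every other step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w -= (z - target[t]) * k               # shrink the instantaneous error
```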
Online Training of Spiking Recurrent Neural Networks With Memristive Synapses
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge, mainly due to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to memristive devices for in-memory computing to resolve the von Neumann bottleneck, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work introducing a PyTorch simulation framework of memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, implementing these updates on the memristive substrate is difficult due to substantial device non-idealities. I will discuss several widely adopted weight-update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients enables online and efficient training of spiking RNNs on memristive substrates.
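The "accumulating gradients" idea can be sketched in a few lines: per-step gradients (e.g. from e-prop) are summed in a high-precision buffer and committed to the low-resolution memristive weights only once they exceed one programmable conductance step. The class below is a hypothetical illustration, not the authors' framework; the step size, write-noise level and conductance bounds are assumptions.

```python
import torch

class AccumulatedMemristiveWeights:
    """Hypothetical sketch: gradient accumulation onto noisy, quantized weights."""

    def __init__(self, shape, step=0.01, write_noise=0.3, w_max=1.0):
        self.w = torch.zeros(shape)        # low-resolution "conductance" weights
        self.acc = torch.zeros(shape)      # high-precision gradient accumulator
        self.step, self.write_noise, self.w_max = step, write_noise, w_max

    def update(self, grad, lr=1e-3):
        self.acc += lr * grad                           # accumulate small updates
        n_steps = torch.trunc(self.acc / self.step)     # whole device steps to program
        self.acc -= n_steps * self.step                 # keep the sub-step remainder
        pulse = n_steps * self.step
        noise = self.write_noise * torch.randn_like(pulse) * (n_steps != 0).float()
        self.w = torch.clamp(self.w - pulse * (1 + noise), -self.w_max, self.w_max)
        return self.w
```

The design intuition is that infrequent, larger programming events suffer relatively less from write noise and limited resolution than committing every small, noisy update directly to the device.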
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in these experiments, providing another valuable tool for investigating the network mechanisms responsible for the biases and for modeling behavior. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained on a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of both the current and previous trials, one of the mechanisms giving rise to the contraction bias. The population activity traced trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled as an ideal observer that, during the delay period, observes a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
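As a toy illustration of the ideal-observer account, the snippet below shrinks a noisy duration estimate toward the prior mean and mixes in the previous trial's stimulus; the stimulus set, noise level and mixing weights are illustrative assumptions, not values fitted to the trained networks.

```python
import numpy as np

rng = np.random.default_rng(1)
durations = rng.choice([200, 320, 440, 560, 680, 800], size=10_000)  # ms, uncorrelated
sigma, w_prior, w_prev = 60.0, 0.3, 0.1    # assumed sensory noise and mixing weights
prior_mean = durations.mean()

measured = durations + sigma * rng.normal(size=durations.size)  # noisy percepts
previous = np.roll(durations, 1)                                # last trial's stimulus
estimate = ((1 - w_prior - w_prev) * measured
            + w_prior * prior_mean + w_prev * previous)

for d in (200, 800):   # short intervals are overestimated, long ones underestimated
    print(d, round(float(estimate[durations == d].mean()), 1))
```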
Effective and Efficient Computation with Multiple-timescale Spiking Recurrent Neural Networks
The emergence of brain-inspired neuromorphic computing as a paradigm for edge AI is motivating the search for high-performance and efficient spiking neural networks to run on this hardware. However, compared to classical neural networks in deep learning, current spiking neural networks still lack competitive performance in many compelling areas. Here, for sequential and streaming tasks, we demonstrate how spiking recurrent neural networks (SRNNs) using adaptive spiking neurons achieve state-of-the-art performance among spiking neural networks and approach, or even exceed, the performance of classical recurrent neural networks (RNNs), while exhibiting sparse activity. From this, we calculate a 100x energy improvement for our SRNNs over classical RNNs on the harder tasks. In particular, we find that adapting the timescales of spiking neurons is crucial for achieving such performance, and we demonstrate this performance for SRNNs with different spiking neuron models.
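Adaptive spiking neurons of this kind are commonly modeled as adaptive leaky integrate-and-fire (ALIF) units, where a slow threshold-adaptation variable adds a second, longer timescale to the fast membrane dynamics. A minimal sketch, with illustrative constants:

```python
import numpy as np

def alif_step(v, b, z, I, dt=1e-3, tau_m=20e-3, tau_adapt=200e-3,
              theta0=1.0, beta=1.8):
    """One update of a population of adaptive LIF neurons (arrays v, b, z, I)."""
    alpha = np.exp(-dt / tau_m)                   # fast membrane decay
    rho = np.exp(-dt / tau_adapt)                 # slow adaptation decay
    v = alpha * v + (1 - alpha) * I - z * theta0  # leaky integration, soft reset
    b = rho * b + (1 - rho) * z                   # each spike raises the threshold
    z = (v >= theta0 + beta * b).astype(float)    # spike where v crosses threshold
    return v, b, z

# Usage: v = b = z = np.zeros(128), then v, b, z = alif_step(v, b, z, I) per step.
```

Because tau_adapt is an order of magnitude longer than tau_m, the effective threshold integrates spiking history over hundreds of milliseconds, giving the network the multiple timescales the talk highlights.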
Unraveling perceptual biases: Insights from spiking recurrent neural networks
Bernstein Conference 2024