ePoster

Evolutionary algorithms support recurrent plasticity in spiking neural network models of neocortical task learning

Ivyer Qu, Huaze Liu, Jiayue Li, Yuqing Zhu
Bernstein Conference 2024 (2024)
Goethe University, Frankfurt, Germany

Abstract

Task-trained recurrent spiking neural networks (RSNNs) can provide insights into how the brain performs spike-based computations, especially those involved in temporal tasks. Training RSNNs with backpropagation through time (BPTT) faces the challenge of non-differentiable spiking functions, requiring an approximate gradient to be propagated through each time step of the network. Evolutionary algorithms (EAs) offer an alternative to BPTT: by generating random populations of models and selecting the best performers, they provide a broader initial search space and can optimize non-differentiable functions (1). When training RSNNs with BPTT, we observe reservoir-like behavior, in which changes in the output-layer weights support learning while the main recurrent weights remain largely unchanged (2). It has been unclear whether this behavior reflects poor gradient propagation through the recurrent layer or whether reservoirs are simply the optimal solution for learning from the high-dimensional dynamics of the recurrent layer. By comparing RSNNs trained using BPTT and EAs, we investigate changes in the different layers of the models throughout training on temporal tasks (3). Our RSNN models have three layers: a linear input layer, a hidden recurrent layer composed of leaky integrate-and-fire neurons with three different inhibitory neuron types, and a linear output layer (4,5). The recurrent layer has biologically realistic connectivity found in mouse neocortex (6). From the initial to the final population, EA-trained RSNNs show the greatest weight changes in the recurrent-layer connections, whereas BPTT-trained RSNNs show the greatest weight changes in the input and output layers (see Figure). Our results demonstrate that reservoirs are not always the optimal solution for temporal tasks in RSNNs, as EAs discover alternative network solutions involving genuine changes in recurrent connectivity. Furthermore, training RSNNs with EAs may better capture the recurrent plasticity of the brain compared to training RSNNs with BPTT. This makes EAs highly valuable for future investigations into how recurrent neocortical circuits can change their structure to support spike-based computations.
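To make the BPTT challenge concrete: spiking is a hard threshold whose derivative is zero almost everywhere, so gradient-based training of RSNNs typically substitutes a smooth surrogate derivative in the backward pass. The sketch below is a minimal PyTorch illustration of this general technique, assuming a fast-sigmoid surrogate; the abstract does not specify which approximation the authors used, so treat the details as hypothetical.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth surrogate derivative in the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()  # hard threshold: non-differentiable

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate gradient (one common choice; assumed here).
        surrogate = 1.0 / (1.0 + 10.0 * (v - 1.0).abs()) ** 2
        return grad_output * surrogate

# Usage: gradients flow through the spike despite the hard threshold.
v = torch.linspace(0.0, 2.0, 5, requires_grad=True)
spikes = SpikeSurrogate.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero surrogate gradients at every membrane potential
```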
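As a rough sketch of the EA alternative, the loop below evolves the weights of a small three-layer RSNN (linear input, leaky integrate-and-fire recurrent layer, linear output) by mutation and selection on a toy temporal task. All sizes, hyperparameters, and the task itself are hypothetical choices for illustration, not the authors' actual setup, which additionally includes multiple inhibitory neuron types and mouse-neocortical connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_REC, N_OUT, T = 5, 50, 2, 100  # hypothetical layer sizes and trial length
TAU, V_TH = 20.0, 1.0                  # LIF time constant (in steps) and spike threshold

def run_rsnn(params, inputs):
    """Simulate the three-layer RSNN: linear input -> LIF recurrent -> linear output."""
    w_in, w_rec, w_out = params
    v = np.zeros(N_REC)                      # membrane potentials
    spikes = np.zeros(N_REC)
    outputs = []
    for t in range(T):
        drive = inputs[t] @ w_in + spikes @ w_rec
        v = v * (1.0 - 1.0 / TAU) + drive    # leaky integration
        spikes = (v >= V_TH).astype(float)   # hard, non-differentiable spike function
        v = np.where(spikes > 0, 0.0, v)     # reset membrane after a spike
        outputs.append(spikes @ w_out)
    return np.array(outputs)

def fitness(params, inputs, target):
    """Negative mean squared error on the toy temporal task."""
    return -np.mean((run_rsnn(params, inputs) - target) ** 2)

def random_params():
    return (0.5 * rng.standard_normal((N_IN, N_REC)),
            0.1 * rng.standard_normal((N_REC, N_REC)),
            0.1 * rng.standard_normal((N_REC, N_OUT)))

def mutate(params, sigma=0.02):
    """Gaussian perturbation of every layer's weights (all layers free to change)."""
    return tuple(w + sigma * rng.standard_normal(w.shape) for w in params)

# Hypothetical task: map a random input spike train to smooth target traces.
inputs = (rng.random((T, N_IN)) < 0.2).astype(float)
phase = np.linspace(0.0, 4.0 * np.pi, T)
target = np.stack([np.sin(phase), np.cos(phase)], axis=1)

# Evolve: keep the best performers of each generation, refill by mutation.
population = [random_params() for _ in range(32)]
for gen in range(51):
    population.sort(key=lambda p: fitness(p, inputs, target), reverse=True)
    elite = population[:8]
    population = elite + [mutate(elite[rng.integers(len(elite))]) for _ in range(24)]
    if gen % 10 == 0:
        print(f"generation {gen}: best fitness {fitness(elite[0], inputs, target):.4f}")
```

Note that mutation here perturbs all three weight matrices equally; the abstract's finding is that selection nevertheless concentrates the useful changes in the recurrent weights, which this toy sketch does not guarantee.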

Unique ID: bernstein-24/evolutionary-algorithms-support-b40f386a