ePoster

Co-evolved structural and temporal network heterogeneity

Stefan Iacob, Nishant Joshi, Joni Dambre, Fleur Zeldenrust
Bernstein Conference 2024 (2024)
Goethe University, Frankfurt, Germany


Abstract

Contrary to typical artificial neural network (ANN) design, biological neurons are not identical: they differ substantially in their physiological properties. Heterogeneity has been hypothesized to increase the dimensionality of neural dynamics, which improves the encoding properties of a network [1], promotes robustness and stability [2], and maximizes information flow in large networks [3]. We aim to show the functional effect of heterogeneity in rate-based recurrent neural networks. To vary the degree of heterogeneity, we introduce neuron types, each with its own intrinsic parameter distribution (e.g., over weights, biases, leak rates, and connectivity), resulting in a multimodal network-wide parameter distribution. We hypothesize that ANNs with more neuron types achieve better task performance, memory capacity that matches task requirements, and better noise robustness.

We compare single-neuron-type networks with heterogeneous networks in the echo state network (ESN) framework [4]. ESNs rely on a randomly initialized, usually fixed RNN that is driven by a task input signal; a linear decoder (read-out layer) is trained to estimate the correct task labels from the network activity. Although training a single read-out layer is fast, optimizing the RNN parameter distribution requires many network evaluations, so we optimize these task-specific hyperparameters with an evolutionary algorithm (CMA-ES [5]). We validate our networks on NARMA-30 [6] and Mackey-Glass [7], two benchmark tasks commonly used for ESNs. Rate-based neurons greatly simplify network simulation and read-out training, but lack the temporal complexity and diversity of spiking models. Recent work highlights the importance of heterogeneity in temporal parameters for task performance [8, 9].
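To make the ESN setup concrete, the following minimal sketch (not the authors' code; the network size, leak rate, input signal, and toy delay-recall target are all illustrative assumptions) simulates a leaky rate-based reservoir driven by an input signal and trains a linear read-out by ridge regression, reporting the NRMSE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the poster.
n_in, n_res, T = 1, 100, 1000

# Random, fixed reservoir; spectral radius scaled below 1
# (a common echo-state-property heuristic).
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))

leak = 0.3  # single global leak rate; in the poster this is a per-type parameter

u = rng.uniform(0, 0.5, size=(T, n_in))   # placeholder input signal
x = np.zeros(n_res)
states = np.empty((T, n_res))
for t in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Toy task standing in for NARMA-30 / Mackey-Glass: recall the input 5 steps back.
y_target = np.roll(u[:, 0], 5)

# Linear read-out trained by ridge regression on the collected states,
# discarding an initial washout period.
washout = 100
X, Y = states[washout:], y_target[washout:]
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ Y)
y_pred = X @ w_out

nrmse = np.sqrt(np.mean((y_pred - Y) ** 2)) / np.std(Y)
print(nrmse)
```

Only `w_out` is trained; in the heterogeneous setting described above, CMA-ES would instead search over the parameters of the per-type distributions from which `W`, `W_in`, and `leak` are drawn, re-running this fast read-out training for each candidate.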
We therefore use a recent variation of ESNs, distance-based delay networks (DDNs) [10], which add distance-dependent axonal delays and thereby further increase network heterogeneity. Figure 1 shows the DDN training procedure: the read-out layer (linear regression) is trained by minimizing the normalized root mean squared error (NRMSE) between the task labels and the predictions. The inset shows preliminary validation results of evolving DDN variants with increasing heterogeneity. We observe improved task performance and faster convergence during hyperparameter optimization when using DDNs with multiple neuron types.
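One way to picture the distance-dependent delays is the sketch below (an illustrative approximation, not the implementation from [10]; the neuron positions, conduction velocity, and weight scale are invented for the example). Each synapse reads the presynaptic activity from a ring buffer of past states, offset by an integer delay proportional to the Euclidean distance between the two neurons:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
pos = rng.uniform(0, 1, size=(n, 2))            # hypothetical 2D neuron positions

# Integer axonal delays proportional to Euclidean distance (assumed scaling),
# with a minimum of 1 step to keep the update causal.
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
delays = np.maximum(np.ceil(dist / 0.1).astype(int), 1)  # "velocity" 0.1 units/step
max_d = delays.max()

W = rng.normal(scale=0.1, size=(n, n))
w_in = rng.uniform(-1, 1, size=n)
buf = np.zeros((max_d + 1, n))                   # ring buffer of past activations

T = 200
u = rng.uniform(size=T)
states = np.empty((T, n))
for t in range(T):
    # Synapse i<-j reads neuron j's activity delays[i, j] steps in the past.
    delayed = buf[(t - delays) % (max_d + 1), np.arange(n)[None, :]]
    x = np.tanh((W * delayed).sum(axis=1) + w_in * u[t])
    buf[t % (max_d + 1)] = x
    states[t] = x
```

The read-out layer would then be trained on `states` exactly as in a plain ESN; the delays only change how activity propagates inside the recurrent network, adding a temporal dimension to the heterogeneity.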
