Authors & Affiliations
Aaradhya Vaze, Basile Confavreux, Poornima Ramesh, Pedro J. Gonçalves, Jakob H. Macke, Tim P. Vogels
Abstract
Various models of spiking neural networks have been proposed in the literature. These models are governed by their parameters, such as time constants, input currents, and synaptic weights. While we have a good understanding of how these parameters affect the behavior of isolated neurons, a similar intuition for connected groups of neurons is lacking. Here, we simulated millions of neuronal network models across a range of parameters, building a library that catalogs network parameters and their resulting dynamics. To this end, we collected spike trains by simulating network models of conductance-based (COBA) and current-based (CUBA) leaky integrate-and-fire (LIF), Izhikevich, AdEx, and COBA+NMDA LIF neurons with varying parameters (e.g., network size, connectivity, and weights). We then calculated summary metrics (e.g., neuron- and population-based firing rates, auto-correlations, and functions of interspike-interval distributions). Each library entry catalogs a network's spike trains together with its associated metrics. Comparing all entries, we discovered scaling properties and invariant relationships between parameters and metrics in different neuron models. For example, the scaling of synaptic weights does not simply obey the 1/√N relationship predicted analytically, but instead follows idiosyncratic relationships that depend on model choices. Our library complements the current analytical understanding of network models and allows us to study model degeneracy, robustness, and sensitivity to initialization. It provides qualitative and quantitative insight into whether and how a given model component is necessary for a network behavior, putting knowledge that was previously passed on anecdotally on a solid, openly accessible footing.
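
To make the simulation pipeline concrete, below is a minimal sketch, in plain NumPy, of one such network model: a current-based (CUBA) LIF network with sparse random connectivity that returns its spike trains. This is not the authors' actual simulation code, and all parameter names and default values are illustrative placeholders rather than the values scanned for the library.

```python
# Minimal sketch (not the authors' code) of the kind of simulation used to
# populate the library: a current-based (CUBA) LIF network with sparse random
# connectivity. All parameter values are illustrative placeholders.
import numpy as np

def simulate_cuba_lif(n=1000, frac_exc=0.8, p_conn=0.1, w_exc=0.1, g=5.0,
                      tau_m=20.0, v_rest=-60.0, v_thresh=-50.0, v_reset=-60.0,
                      i_ext=1.05, dt=0.1, t_sim=1000.0, seed=0):
    """Euler-integrate a CUBA LIF network; return (time, neuron id) spikes."""
    rng = np.random.default_rng(seed)
    n_exc = int(frac_exc * n)
    # Sparse random weight matrix: excitatory columns positive, inhibitory negative.
    w = (rng.random((n, n)) < p_conn).astype(float) * w_exc
    w[:, n_exc:] *= -g                      # inhibition g times stronger than excitation
    v = v_rest + rng.random(n) * (v_thresh - v_rest)   # random initial voltages
    spikes = []
    for step in range(int(t_sim / dt)):
        # Leaky membrane dynamics with a constant external drive (voltage units).
        v += dt / tau_m * (v_rest - v + i_ext * (v_thresh - v_rest))
        fired = v >= v_thresh
        if fired.any():
            v[fired] = v_reset
            v += w[:, fired].sum(axis=1)    # delta-synapse voltage jumps
            t = step * dt
            spikes.extend((t, int(i)) for i in np.flatnonzero(fired))
    return spikes
```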
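
A complementary sketch of the metric-extraction step: computing per-neuron firing rates and interspike-interval statistics from the recorded spike trains. The function name and the exact set of returned metrics are assumptions for illustration; the library catalogs a broader set, including auto-correlations.

```python
# Sketch of the kind of summary metrics cataloged per library entry
# (firing rates and interspike-interval statistics); names are illustrative.
import numpy as np

def summary_metrics(spikes, n, t_sim):
    """Per-neuron rates and ISI coefficient of variation from (time, id) spikes."""
    if not spikes:
        return {"mean_rate": 0.0, "rate_sd": 0.0, "mean_isi_cv": np.nan}
    times = np.array([t for t, _ in spikes])
    ids = np.array([i for _, i in spikes])
    rates = np.bincount(ids, minlength=n) / (t_sim / 1000.0)   # Hz, t_sim in ms
    cvs = []
    for i in range(n):
        isi = np.diff(np.sort(times[ids == i]))
        if len(isi) > 1 and isi.mean() > 0:
            cvs.append(isi.std() / isi.mean())
    return {
        "mean_rate": rates.mean(),   # population-averaged firing rate (Hz)
        "rate_sd": rates.std(),      # rate heterogeneity across neurons
        "mean_isi_cv": float(np.mean(cvs)) if cvs else np.nan,  # spiking irregularity
    }
```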
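
Finally, a sketch of how a scaling relationship like the weight-size scaling mentioned above could be read off such a library: scan the network size N, find for each N the weight that best holds the mean firing rate near a target, and fit the exponent alpha in w ∝ N^alpha. The classical balanced-network prediction is alpha ≈ -0.5 (i.e., 1/√N scaling); deviations from it would indicate model-dependent scaling. The procedure and all values are illustrative and assume the two sketches above are in scope.

```python
# Sketch: for each network size, grid-search the excitatory weight that keeps
# the mean rate closest to a target, then fit log w against log N.
# Target rate, sizes, and grid are illustrative; uses the sketches above.
import numpy as np

target_rate, sizes = 5.0, [250, 500, 1000, 2000]
best_w = []
for n in sizes:
    candidates = np.geomspace(0.02, 0.5, 10)
    rates = []
    for w in candidates:
        spikes = simulate_cuba_lif(n=n, w_exc=w, t_sim=500.0)
        rates.append(summary_metrics(spikes, n, 500.0)["mean_rate"])
    best_w.append(candidates[np.argmin(np.abs(np.array(rates) - target_rate))])

alpha = np.polyfit(np.log(sizes), np.log(best_w), 1)[0]   # slope of log w vs log N
print(f"fitted scaling exponent alpha = {alpha:.2f}; 1/sqrt(N) predicts -0.5")
```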