Resources
Authors & Affiliations
Michael Deistler, Kyra Kadhim, Jonas Beck, Matthijs Pals, Janne Lappalainen, Manuel Gloeckler, Ziwei Huang, Cornelius Schroeder, Philipp Berens, Pedro Gonçalves, Jakob Macke
Abstract
Biophysical neuron models provide mechanistic insight into empirically observed phenomena. However, such models are expensive to simulate, which limits their scale, and their parameters are notoriously difficult to optimize, which has prevented fitting them to physiologically meaningful tasks or datasets (panel a). We built a new toolbox for simulation and inference in neuroscience, Jaxley, which makes it possible to parallelize simulations across (multiple) GPUs and to compute gradients of simulations with respect to their parameters (panels b, c). We used Jaxley to build biophysically detailed models of neural computation and to optimize (thousands of) biophysical parameters on large datasets. Across a range of datasets and models, we demonstrated that Jaxley can optimize thousands of parameters at the cell level (e.g., channel conductances) and at the network level (e.g., synaptic conductances). First, we built a "detail-on-demand" model of the retina with simplified photoreceptors and bipolar cells and a morphologically detailed retinal ganglion cell. We optimized the respective synaptic and channel conductances (400 parameters) on dendritic calcium recordings and found that the model exhibits compartmentalized responses, despite not having been explicitly trained to do so (panel d, bottom). Second, we built a recurrent neural network (RNN) model whose single neurons are biophysically detailed, with a variety of ion channels. We trained this network on an evidence-integration task (panel e) and a delayed-match-to-sample task (not shown); despite the vastly different timescales of biophysical mechanisms and behaviour, we found mechanisms that allow the network to perform both working-memory tasks. Finally, we trained a network of morphologically detailed spiking neurons with 100k parameters to solve MNIST (panel f).
Although all of its nonlinearities stem from biophysical mechanisms rather than from artificial activation functions (e.g., ReLU), the network achieved a high classification accuracy of 95.6%. Overall, Jaxley is a flexible and easy-to-use toolbox that will help bridge systems neuroscience and biophysics, enabling new insights and opportunities for multiscale neuroscience.
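The core idea behind gradient-based fitting of biophysical parameters can be illustrated with a minimal sketch. The code below is not Jaxley's actual API: it simulates a passive membrane with forward Euler and recovers its leak conductance from a synthetic "recorded" voltage trace by gradient descent, with a finite-difference gradient standing in for the automatic differentiation that Jaxley inherits from JAX. All parameter values here are illustrative assumptions.

```python
# Conceptual sketch (not Jaxley's API) of fitting a biophysical parameter
# by gradient descent on a simulated voltage trace.

def simulate(g_leak, n_steps=200, dt=0.025, c_m=1.0, e_leak=-70.0, i_stim=0.5):
    """Forward-Euler integration of c_m * dV/dt = -g_leak * (V - e_leak) + i_stim."""
    v, trace = e_leak, []
    for _ in range(n_steps):
        v += dt * (-g_leak * (v - e_leak) + i_stim) / c_m
        trace.append(v)
    return trace

def loss(g_leak, target):
    """Mean squared error between the simulated and the target voltage trace."""
    sim = simulate(g_leak)
    return sum((a - b) ** 2 for a, b in zip(sim, target)) / len(target)

target = simulate(0.3)   # synthetic "data" generated with the true conductance
g, lr, eps = 0.05, 0.05, 1e-5
for _ in range(500):
    # central finite difference approximates d(loss)/d(g_leak)
    grad = (loss(g + eps, target) - loss(g - eps, target)) / (2 * eps)
    g -= lr * grad       # each step moves g toward the true value of 0.3
```

In Jaxley itself, the same loop would differentiate through the full compartmental simulation with JAX's autodiff, so the identical recipe scales from one conductance to the thousands of channel and synaptic parameters optimized above.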