Authors & Affiliations
Kaiwen Sheng, Brendan Bicknell, Beverly Clark, Michael Hausser
Abstract
Cortical circuits are built from diverse collections of neurons. Recent large-scale surveys have characterized this striking variability in terms of molecular, morphological, and electrophysiological features, identifying hundreds to thousands of distinct neuronal cell types. However, in computational neuroscience, we often stick our heads in the sand and ignore these details, instead basing theory on homogeneous populations of simple 'point neurons'. The problem with this approach is that it is currently unknown what is lost through such simplifying assumptions. The simplification may be entirely benign: perhaps biologically distinct neurons can fulfill identical functions as long as synaptic strengths and ionic conductances are set appropriately. On the other hand, if particular cell types are intrinsically well suited to some roles and not others, then revealing these specializations will be crucial for understanding circuit-level logic.
Here, by integrating classical biophysical modeling with machine learning, we investigate whether the biological diversity of neurons translates into a corresponding computational diversity. By training a database of experimentally validated neuron models to perform a set of canonical tasks (i.e., input-output transformations), we identify qualitative differences between major cortical cell types. Whereas pyramidal neurons and 5-HT3A receptor-expressing neurons excel at discriminating temporal sequences of inputs and performing context-dependent computations, parvalbumin-expressing neurons are relatively poor at these tasks but superior at others. By analyzing the database of trained models, we attribute these differences in performance to specific features of neuronal morphology and electrophysiology, yielding mechanistic interpretations of our results.
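As a rough illustration of this general approach (and not the actual model database or training pipeline used in the study), the sketch below trains the synaptic weights of a toy leaky-integrator neuron by gradient descent to discriminate the temporal order of two input pulses, one of the task families described above. All model choices, parameter values, and names here are hypothetical placeholders for illustration only.

```python
# Illustrative sketch, NOT the authors' method: a toy leaky-integrator "neuron"
# whose synaptic weights are trained by gradient descent to report whether
# synapse A was activated before synapse B. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
dt, T, tau = 1.0, 100, 20.0      # time step (ms), trial length (steps), membrane time constant (ms)
n_syn = 2                        # two synapses, "A" and "B"

def make_trial(a_first: bool) -> np.ndarray:
    """Return (n_syn, T) input currents: a pulse at synapse A then B, or B then A."""
    x = np.zeros((n_syn, T))
    t_first, t_second = 20, 60
    amp = 1.0 + 0.1 * rng.standard_normal(2)   # small trial-to-trial amplitude jitter
    if a_first:
        x[0, t_first], x[1, t_second] = amp
    else:
        x[1, t_first], x[0, t_second] = amp
    return x

def integrate(x: np.ndarray) -> np.ndarray:
    """Euler integration of dv/dt = -v/tau + input for each synapse; return v at readout time."""
    v = np.zeros(n_syn)
    for t in range(T):
        v += dt / tau * (-v) + x[:, t]
    return v

# Dataset: label 1 = A before B, label 0 = B before A.
y = np.array([0] * 50 + [1] * 50, dtype=float)
X = np.array([integrate(make_trial(bool(lbl))) for lbl in y])

# Train synaptic weights and a bias by gradient descent on a logistic readout loss.
w, b, lr = np.zeros(n_syn), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # "firing probability" of the readout
    w -= lr * X.T @ (p - y) / len(y)
    b -= lr * np.mean(p - y)

accuracy = np.mean(((X @ w + b) > 0) == (y > 0.5))
print(f"learned weights: {w}, order-discrimination accuracy: {accuracy:.2f}")
```

In the study itself, the trained models are detailed, experimentally validated biophysical models of specific cortical cell types rather than a point leaky integrator; the toy example only illustrates the framing of casting a cell's input-output transformation as a trainable task.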
Thus, from both biological and computational viewpoints, not all neurons are created equal. Incorporating this reality into models of the brain will be essential for connecting theory with increasingly rich experimental datasets and for understanding and abstracting biological complexity, and it may yield radically new ideas about the algorithmic principles of neural function.