ePoster

Bounds on the computational complexity of neurons due to dendritic morphology

Anamika Agrawal, Michael Buice
COSYNE 2025 (2025)
Montreal, Canada

Abstract

Model neurons are often treated as simple linear threshold units. However, the diversity and cell-type specificity of the morphoelectric properties of dendritic trees, as well as observations of active integration of inputs to dendrites, suggest that individual cell types may perform non-linear processing, leading to more complex computational properties. The limited number and distribution of such non-linear processes place neurons in an intermediate category between a simple linear threshold unit and a large deep network with millions of units. In this work, we examine bounds on the complexity of single-neuron computation imposed by dendritic morphology. We investigate this by constructing two idealized model circuits representing architectures similar to basal and apical dendritic tufts, differing in their degree of branching and depth. While the addition of non-linear intermediate units allows these models to learn non-linearly separable problems, here we ask about the upper bounds on the functions they can learn. We focus specifically on Boolean functions, as their complexity is mathematically well characterized. As expected, these model circuits are capable of learning non-linearly separable functions in multiple dimensions. Importantly, we show that there is an upper critical dimension (roughly six for each type) beyond which random Boolean functions have a low probability of being learnable. This makes even relatively simple dendritic integrators substantially more computationally powerful than linear units. Moreover, we show that the spaces of learnable functions for the two types are disjoint. We also examine the learnability of functions according to metrics of complexity, namely entropy and sensitivity. We demonstrate that the shallow, broad basal model has a higher probability of learning low-sensitivity functions, whereas the deeper apical model shows faster retraining of those same functions. Overall, our work demonstrates that cell-type-dependent properties of dendritic architecture can impact the computational properties of neurons.
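
The poster itself includes no code, but the learnability experiment described in the abstract can be illustrated with a short, self-contained sketch. The stand-in architectures below are assumptions, not the authors' models: generic multilayer perceptrons with one wide hidden layer (basal-like, broad and shallow) versus several narrow ones (apical-like, narrow and deep), with a random Boolean function counted as learned if the network reproduces its entire truth table.

```python
# Minimal sketch (assumed stand-ins, not the poster's models): estimate the
# fraction of random Boolean functions on n inputs that a small network with
# non-linear intermediate units can learn perfectly.
from itertools import product

import numpy as np
from sklearn.neural_network import MLPClassifier

def fraction_learnable(hidden_layers, n_inputs, n_funcs=50, seed=0):
    rng = np.random.default_rng(seed)
    X = np.array(list(product([0, 1], repeat=n_inputs)))  # full truth-table inputs
    learned = 0
    for _ in range(n_funcs):
        y = rng.integers(0, 2, size=len(X))  # a uniformly random Boolean function
        if y.min() == y.max():
            learned += 1  # constant functions are trivially learnable
            continue
        clf = MLPClassifier(hidden_layer_sizes=hidden_layers,
                            activation="logistic", solver="lbfgs",
                            max_iter=5000, random_state=0)
        clf.fit(X, y)
        learned += clf.score(X, y) == 1.0  # did it memorize the whole table?
    return learned / n_funcs

basal = (16,)       # broad and shallow (assumed width)
apical = (4, 4, 4)  # narrow and deep (assumed widths and depth)
for n in range(2, 8):
    print(n, fraction_learnable(basal, n), fraction_learnable(apical, n))
```

As the input dimension grows, the fraction of learnable random functions should fall, qualitatively mirroring the upper critical dimension reported in the abstract; the exact crossover will depend on the assumed widths and depths.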

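The two complexity metrics named in the abstract can likewise be made concrete. The sketch below assumes the standard definitions, which may differ in detail from the poster's: average sensitivity as the mean number of single-bit flips that change the output, and entropy as the Shannon entropy of the output under uniformly random inputs.

```python
# Sketch of the two complexity metrics, assuming their standard definitions.
from itertools import product

import numpy as np

def avg_sensitivity(truth_table, n_inputs):
    """Mean number of single-bit flips that change the function's output."""
    inputs = list(product([0, 1], repeat=n_inputs))
    f = dict(zip(inputs, truth_table))
    total = 0
    for x in inputs:
        for i in range(n_inputs):
            x_flipped = x[:i] + (1 - x[i],) + x[i + 1:]
            total += f[x] != f[x_flipped]
    return total / len(inputs)

def output_entropy(truth_table):
    """Entropy (in bits) of the output over a uniform input distribution."""
    p = float(np.mean(truth_table))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# 3-bit parity is maximally sensitive: every bit flip changes the output.
parity = [sum(x) % 2 for x in product([0, 1], repeat=3)]
print(avg_sensitivity(parity, 3))  # -> 3.0
print(output_entropy(parity))      # -> 1.0
```
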
Unique ID: cosyne-25/bounds-computational-complexity-cdea998f