Computational capabilities
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit function. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Through simple commands, Dendrify can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
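To make the modeling target concrete, the sketch below hand-codes a two-compartment (somatodendritic) leaky integrate-and-fire neuron directly in Brian2. All parameter values are illustrative, and this is deliberately not Dendrify's own API, which wraps this kind of construction behind simpler commands; it is only a minimal example of the reduced compartmental models such a framework produces.

```python
from brian2 import *

# Illustrative parameters (not taken from Dendrify)
E_L = -70*mV          # leak reversal potential
tau = 20*ms           # membrane time constant
g_L = 10*nS           # leak conductance per compartment
g_c = 5*nS            # axial conductance coupling the compartments
v_th, v_reset = -50*mV, -65*mV

# Two coupled leaky compartments; spikes are detected at the soma only.
eqs = '''
dvs/dt = (E_L - vs + (g_c/g_L)*(vd - vs)) / tau : volt
dvd/dt = (E_L - vd + (g_c/g_L)*(vs - vd) + I_d/g_L) / tau : volt
I_d : amp  # current injected into the dendritic compartment
'''
neuron = NeuronGroup(1, eqs, threshold='vs > v_th',
                     reset='vs = v_reset', method='euler')
neuron.vs = neuron.vd = E_L

mon = StateMonitor(neuron, ['vs', 'vd'], record=0)
neuron.I_d = 0.3*nA   # dendritic drive
run(100*ms)
```

Under this drive the dendritic compartment depolarizes first and the soma follows with attenuation set by the coupling conductance, the basic somatodendritic behavior that reduced compartmental models are meant to capture.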
Computation in neuronal systems close to the critical point
It has long been hypothesized that natural systems might exploit the extended temporal and spatial correlations close to the critical point to improve their computational capabilities. On the other hand, recordings of nervous systems have yielded varying inferred distances to criticality. In my talk, I discuss how including additional constraints on processing time can shift the optimal operating point of recurrent networks. Moreover, data from the visual cortex of monkeys performing an attentional task indicate that local activity flexibly changes its closeness to the critical point. Overall, this suggests that, as common sense would lead us to expect, the optimal operating state depends on the task at hand, and that the brain adapts to it in a local and fast manner.
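As a deliberately minimal illustration of this trade-off (my own sketch, not the talk's actual network model), the code below simulates a driven branching process, a standard toy model of near-critical activity. The branching ratio m sets the distance to criticality, and the intrinsic timescale grows as -1/ln(m) when m approaches 1, so longer correlations come at the cost of slower processing; the values of m and h are hypothetical.

```python
import numpy as np

# Driven branching process: A_{t+1} ~ Poisson(m * A_t + h).
# m < 1 is subcritical; m -> 1 approaches the critical point.
rng = np.random.default_rng(0)

def simulate(m, h=1.0, steps=20000):
    a = np.zeros(steps)
    for t in range(steps - 1):
        a[t + 1] = rng.poisson(m * a[t] + h)
    return a

for m in (0.8, 0.95, 0.99):
    a = simulate(m)
    # Stationary mean is h/(1-m); autocorrelation time is -1/ln(m) steps.
    print(f"m={m}: mean activity {a.mean():.1f} "
          f"(predicted {1/(1-m):.1f}), "
          f"timescale ~ {-1/np.log(m):.1f} steps")
```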
Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics
Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Over the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static, characterizing synaptic efficacies as scalar quantities that change only on the slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics produce substantial short-term plasticity at most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
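For orientation, here is a minimal classic tempotron with static synapses (following Gütig and Sompolinsky, 2006) in plain NumPy; the work summarized above extends this baseline by replacing the fixed scalar efficacies with synapses whose own dynamics are learned. All numeric values are illustrative.

```python
import numpy as np

tau_m, tau_s, T, dt = 15.0, 3.75, 500.0, 1.0   # ms (illustrative values)
t = np.arange(0.0, T, dt)

# Peak-normalized double-exponential PSP kernel
s_peak = tau_m * tau_s / (tau_m - tau_s) * np.log(tau_m / tau_s)
v0 = 1.0 / (np.exp(-s_peak / tau_m) - np.exp(-s_peak / tau_s))

def kernel(s):
    return np.where(s > 0, v0 * (np.exp(-s / tau_m) - np.exp(-s / tau_s)), 0.0)

def voltage(spike_times, w):
    """Subthreshold voltage of a current-based LIF unit (no reset, as in
    the original tempotron); spike_times is one array per afferent."""
    v = np.zeros_like(t)
    for w_i, times_i in zip(w, spike_times):
        for s in times_i:
            v += w_i * kernel(t - s)
    return v

def train_step(spike_times, w, label, lr=1e-3, theta=1.0):
    """On a classification error, nudge each weight by the PSP it
    contributed at the time of the voltage maximum."""
    v = voltage(spike_times, w)
    i_max = int(np.argmax(v))
    if (v[i_max] >= theta) != bool(label):
        sign = 1.0 if label else -1.0
        for i, times_i in enumerate(spike_times):
            w[i] += sign * lr * kernel(t[i_max] - times_i).sum()
    return w
```

The update is the gradient of the voltage maximum with respect to the weights; in the static case this per-afferent contribution is all a synapse can learn, which is exactly the restriction the dynamic-synapse rules described above lift.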
Local and global organization of synaptic inputs on cortical dendrites
Synaptic inputs on cortical dendrites are organized with remarkable subcellular precision at the micron level. This organization emerges during early postnatal development through patterned spontaneous activity and manifests both locally, where synapses with similar functional properties are clustered, and globally, along the axis from dendrite to soma. Recent experiments reveal species-specific differences in local and global synaptic organization in mouse, ferret, and macaque visual cortex. I will present a computational framework that implements functional and structural plasticity driven by spontaneous activity patterns to generate these different types of organization across species and scales. Within this framework, a single anatomical factor - the size of the visual cortex and the resulting magnification of visual space - can explain the observed differences. This allows us to make predictions about synaptic organization in other species as well, and indicates that the proximal-distal axis of a dendrite might be central in endowing a neuron with powerful computational capabilities.
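The toy simulation below (my own illustration under simplified assumptions, not the presented framework) shows how a bare structural-turnover rule, in which synapses that functionally mismatch their local neighborhood are replaced, suffices to produce local clustering of like inputs along a dendrite; all sizes and rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_syn, n_groups, steps = 200, 4, 20000
pos = rng.uniform(0, 1, n_syn)            # location along the dendrite
group = rng.integers(0, n_groups, n_syn)  # functional identity of the input

for _ in range(steps):
    i = rng.integers(n_syn)
    # Neighborhood within 0.05 dendritic length units, excluding synapse i
    nbrs = (np.abs(pos - pos[i]) < 0.05) & (np.arange(n_syn) != i)
    if nbrs.any():
        match = (group[nbrs] == group[i]).mean()
        if rng.random() > match:          # poorly matched -> turnover
            pos[i] = rng.uniform(0, 1)
            group[i] = rng.integers(n_groups)

def clustering_index(pos, group, radius=0.05):
    """Average fraction of same-group synapses among each synapse's neighbors."""
    same = []
    for i in range(n_syn):
        nbrs = (np.abs(pos - pos[i]) < radius) & (np.arange(n_syn) != i)
        if nbrs.any():
            same.append((group[nbrs] == group[i]).mean())
    return np.mean(same)

print(f"clustering index: {clustering_index(pos, group):.2f} "
      f"(chance ~ {1/n_groups:.2f})")
```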
Synaptic-type-specific clustering optimizes the computational capabilities of balanced recurrent networks (COSYNE 2023)