Authors & Affiliations
Maryada Maryada, Chiara De Luca, Arianna Rubino, Chenxi Wen, Melika Payvand, Giacomo Indiveri
Abstract
Analog neuromorphic circuits emulate the dynamic properties of biological neural systems through their physics [1], sharing similarities and constraints with them [2,3]. This alignment can be leveraged for fundamental research to validate or challenge theories of neural computation as well as for applied research to develop compact, power-efficient brain-inspired computing systems.
We present a novel spike-based dendritic learning mechanism embedded in a balanced cortical network, co-designed with neuromorphic circuits, that aligns theoretical neuroscience with analog circuit design techniques for optimising network robustness and performance. By embracing the variability present in both biological and silicon neural circuits, the model achieves robust and reliable computation.
We implement an always-on learning paradigm using multi-compartmental pyramidal neurons with passive apical (top-down) and basal (sensory) dendrites (fig b), together with dis-inhibition provided by a VIP-SST circuit motif [5].
The plasticity rule is an error-based 'Δ-rule' gated by the postsynaptic Ca2+ concentration (θ, a proxy for the firing rate) [4] (fig c). It is applied to the sensory basal input synapses of the pyramidal neuron. Neurons receiving both the sensory and the teacher input have Ca2+ levels that tend to reach the LTP region, while those receiving only sensory input tend to fall into the LTD region. A global contextual signal enables VIP cells to inhibit SST cells, allowing teacher signals to pass through without attenuation. In the absence of this signal, SST cells remain active and suppress the pyramidal neurons, keeping them in the inference region.
The learning circuits exploit dis-inhibition and learning thresholds to seamlessly switch between training and inference modes (fig d).
$$ \Delta w(t) = \begin{cases} \eta (I_a - I_b) \sigma_{LTP} & \text{if } I_a \ge I_b \\ \eta (I_a - I_b) \sigma_{LTD} & \text{if } I_a < I_b \end{cases} $$
$$ \sigma_{LTP} = \begin{cases} 1 & \text{if } \theta_{LTP_-} < \theta(t) < \theta_{LTP_+} \\ 0 & \text{otherwise} \end{cases} $$
$$ \sigma_{LTD} = \begin{cases} 1 & \text{if } \theta_{LTD_-} < \theta(t) < \theta_{LTD_+} \\ 0 & \text{otherwise} \end{cases} $$
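The gated Δ-rule above can be sketched in a few lines of Python. The learning rate, the threshold windows, and the reading of $I_a$ and $I_b$ as teacher-driven and sensory-driven dendritic currents are illustrative assumptions, not values from the hardware:

```python
# Illustrative parameters; the abstract does not specify numerical values.
ETA = 0.1                # learning rate (eta)
THETA_LTP = (0.6, 1.0)   # (theta_LTP-, theta_LTP+): Ca2+ window enabling potentiation
THETA_LTD = (0.2, 0.5)   # (theta_LTD-, theta_LTD+): Ca2+ window enabling depression

def delta_w(i_a, i_b, theta):
    """Ca2+-gated, error-based 'Δ-rule'.

    i_a, i_b : dendritic input currents compared by the rule
               (read here as teacher-driven and sensory-driven, an assumption)
    theta    : postsynaptic Ca2+ concentration, proxy for the firing rate
    """
    err = i_a - i_b
    if err >= 0:
        # Potentiation branch: active only inside the LTP window (sigma_LTP = 1)
        gate = THETA_LTP[0] < theta < THETA_LTP[1]
    else:
        # Depression branch: active only inside the LTD window (sigma_LTD = 1)
        gate = THETA_LTD[0] < theta < THETA_LTD[1]
    return ETA * err * gate
```

Outside both Ca2+ windows the gate evaluates to zero, so the synapse is frozen; this is what lets the same circuit sit in an inference region when the contextual signal is absent.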
To validate the model, we trained the network (fig a) to distinguish two patterns with varying degrees of overlap (fig c,e). All plastic synapses are implemented in hardware with 4-bit weight resolution. We also demonstrated the model's performance across various degrees of parameter heterogeneity (fig e-f).
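A 4-bit synapse can store only 16 distinct weight levels, so a continuous update must be clipped and snapped to the nearest level. A minimal sketch of such a mapping, assuming a hypothetical normalized weight range (the abstract does not specify the range or rounding scheme):

```python
N_LEVELS = 2 ** 4  # 4-bit synaptic resolution, as stated in the abstract

def quantize(w, w_min=0.0, w_max=1.0):
    """Clip a continuous weight to [w_min, w_max] and snap it to one of the
    16 levels a 4-bit synapse can store. The range [0, 1] is an assumption."""
    w = min(max(w, w_min), w_max)
    step = (w_max - w_min) / (N_LEVELS - 1)
    return w_min + round((w - w_min) / step) * step
```

In this reading, robustness to the coarse resolution comes from the gated Δ-rule itself rather than from high-precision weights, consistent with the reported tolerance to parameter heterogeneity.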