Authors & Affiliations
Aishwarya Balwani, Alex Wang, Farzaneh Najafi, Hannah Choi
Abstract
Recurrent neural networks (RNNs) have increasingly been employed to model cortical function, but conventional architectures lack biological realism. Notably, they often ignore two key biological principles: (i) Dale’s Law, which constrains each neuron to be strictly excitatory or strictly inhibitory, and (ii) the sparse, structured connectivity motifs observed in the brain. Incorporating these constraints, however, poses challenges, as they often degrade model performance. Our work presents a novel approach that integrates Dale’s backpropagation, a sign-constrained training method, with topologically-informed pruning to build sparsely structured RNNs that respect these constraints while maintaining high performance. We apply our methods to reconstruct response time series from a two-photon calcium imaging dataset, modelling multi-regional interactions in the visual cortex of mice performing a change-detection task. The results align with experimental findings and support the predictive coding hypothesis, offering a modelling framework that preserves anatomical structure while maintaining functional capacity, leading to more neuroscientifically reliable insights.
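To illustrate the Dale’s Law constraint mentioned in the abstract, the sketch below shows one common way to enforce it: projecting a recurrent weight matrix so each neuron’s outgoing weights share a single sign. This is a minimal NumPy illustration under assumed conventions (columns as outgoing weights, projection by zeroing sign-violating entries), not the authors’ Dale’s backpropagation implementation.

```python
import numpy as np

def dale_project(W, cell_sign):
    """Project a recurrent weight matrix onto the Dale's-Law-feasible set.

    cell_sign[j] = +1 if presynaptic neuron j is excitatory, -1 if inhibitory.
    Column j of W holds neuron j's outgoing weights, so every entry in that
    column must match cell_sign[j]; violating entries are zeroed.
    (Conventions here are assumptions for illustration.)
    """
    violates = np.sign(W) * cell_sign[None, :] < 0
    return np.where(violates, 0.0, W)

rng = np.random.default_rng(0)
n = 6
cell_sign = np.array([1, 1, 1, 1, -1, -1])  # 4 excitatory, 2 inhibitory neurons
W = rng.standard_normal((n, n))             # unconstrained weights, e.g. after a gradient step
W_dale = dale_project(W, cell_sign)
```

In a sign-constrained training loop, such a projection would typically be applied after each gradient update so the recurrent weights always respect each neuron’s excitatory or inhibitory identity.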