Authors & Affiliations
Basile Confavreux, Poornima Ramesh, Pedro J. Gonçalves, Jakob H. Macke, Tim P. Vogels
Abstract
Synaptic plasticity is the cornerstone of learning and memory. However, the mechanistic link between individual synaptic changes and emerging network functions remains elusive. Here, we apply filter Simulation-Based Inference (fSBI), a numerical method for meta-learning plasticity rules, to in vivo data. We identify thousands of co-active excitatory (E)-to-E, E-to-inhibitory (I), I-to-E, and I-to-I rules that can drive learning and memory functions in large recurrent spiking networks. We begin with large search spaces of flexibly parameterized co-active plasticity rules. We then use fSBI to filter for rules that robustly establish cortical-like dynamics, i.e., asynchronous irregular activity with low firing rates. Interestingly, many successful rules support additional functions such as engram formation and graceful forgetting, even though these functions were not part of the filtering criteria. With such sets of rules in hand, we turn to published experimental data in which learning and engram formation have been observed (e.g., Lim et al., Nat. Neurosci., 2015), but for which the underlying plasticity rules are unknown (Figs. A–C). We find entire subspaces of plasticity rules that reproduce qualitative and quantitative aspects of the data, in particular discernible population responses to familiar vs. novel stimuli (Fig. D). We analyze these plasticity manifolds to understand the link between plasticity parameters and emerging network functions. Our results show that thousands of plasticity rules can establish neural function. They need not be finely tuned or “orchestrated” beyond the constraints of basic function, yet small changes within the space of feasible solutions determine specific aspects of a memory.
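
The abstract does not spell out how the rule search spaces are parameterized; in related work, plasticity rules are often written as low-order polynomials of pre- and postsynaptic activity traces, with the coefficient vector as the searchable parameter. The sketch below illustrates that idea; every name in it (PolynomialRule, theta, eta, the choice of basis terms) is an illustrative assumption, not the authors' code.

```python
# Hypothetical sketch of one "flexibly parameterized" plasticity rule:
# dw is a low-order polynomial in the pre- and postsynaptic activity
# traces (x_pre, x_post) and the current weight w. The coefficient
# vector theta is what a method like fSBI would search over.
import numpy as np

class PolynomialRule:
    """One co-active rule (e.g., E-to-E); theta holds its coefficients."""

    def __init__(self, theta, eta=1e-3):
        self.theta = np.asarray(theta, dtype=float)  # polynomial coefficients
        self.eta = eta                               # learning rate

    def dw(self, x_pre, x_post, w):
        # Monomial basis up to low order in the two traces, plus a
        # weight-dependent term -- one plausible "search space" of rules.
        basis = np.stack([
            np.ones_like(x_pre),      # constant drift
            x_pre,                    # presynaptic trace alone
            x_post,                   # postsynaptic trace alone
            x_pre * x_post,           # Hebbian-like product term
            x_pre * x_post**2,        # triplet-like term
            w * np.ones_like(x_pre),  # weight decay / homeostasis
        ])
        return self.eta * self.theta @ basis

rng = np.random.default_rng(0)
rule_ee = PolynomialRule(theta=rng.normal(size=6))   # one candidate E-to-E rule
delta_w = rule_ee.dw(x_pre=rng.random(200), x_post=rng.random(200),
                     w=rng.random(200))
```

Four such rules (E-to-E, E-to-I, I-to-E, and I-to-I), each with its own theta, would then act co-actively on the corresponding synapse types of the same recurrent network.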
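The filtering step of fSBI can be pictured as a reject-and-refine loop over network summary statistics. Below is a minimal sketch of that logic under stated assumptions: the simulator is a stub standing in for a full recurrent spiking-network simulation, and the "cortical-like" bounds (low rate, CV of interspike intervals near 1, weak pairwise correlations) are placeholder thresholds, not the paper's actual criteria.

```python
# Minimal sketch of the filtering idea: draw rule parameters, simulate
# a network, and keep only parameter sets whose activity statistics
# look cortical (asynchronous, irregular, low rate).
import numpy as np

rng = np.random.default_rng(1)

def simulate_network(theta):
    """Stub: run a recurrent spiking net under rule `theta` and return
    summary statistics (mean rate in Hz, CV of interspike intervals,
    mean pairwise spike-count correlation). A real implementation
    would run a full spiking simulation here."""
    rate = 5.0 + theta[0] + rng.normal(scale=2.0)
    cv_isi = 1.0 + 0.1 * theta[1] + rng.normal(scale=0.2)
    corr = 0.05 + 0.02 * theta[2] + rng.normal(scale=0.02)
    return np.array([rate, cv_isi, corr])

def plausible(stats):
    """Cortex-like: low rate, irregular (CV near 1), weakly correlated.
    These thresholds are illustrative placeholders."""
    rate, cv_isi, corr = stats
    return (1.0 < rate < 10.0) and (0.7 < cv_isi < 1.5) and (abs(corr) < 0.1)

# Round 0: sample candidate rule coefficients from a broad prior ...
thetas = rng.normal(size=(10_000, 6))
kept = np.array([t for t in thetas if plausible(simulate_network(t))])
print(f"{len(kept)} of {len(thetas)} candidate rules pass the filter")
```

In the actual method, the surviving parameter sets would train a conditional density estimator (e.g., with a simulation-based-inference toolbox such as the sbi Python package) whose proposals seed the next, tighter round; iterating this yields the large sets of viable rules described above.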