Authors & Affiliations
Yu Feng, Nicolas Brunel
Abstract
It is widely believed that storing and maintaining memories over long time scales depends on modifying synapses in the brain in an activity-dependent way. Classical studies of learning and memory in neural networks model synaptic efficacy as either a continuous or a discrete scalar value [1–3]. Theoretical work has shown that such models have a reasonably large capacity, especially in the biologically relevant sparse-coding limit [4]. However, multiple recent results suggest an intermediate scenario in which synaptic efficacy can be described by a continuous variable whose distribution is peaked around a small set of discrete values [5,6]. Motivated by these results, we explored a model in which each synapse is described by a continuous variable that evolves in a potential with multiple minima. External inputs to the network can switch synapses between different potential wells. This model interpolates between models with discrete synapses, in which the potential wells are deep, and models with continuous synapses, in which the potential is flat. Our results show that the model with metastable synapses is more robust to noise than models with continuous synapses, and has an enhanced capacity compared with models with discrete synapses. These results indicate that metastable synapses are critical for a neural network to maintain a large and robust storage capacity.
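To make the interpolation idea concrete, here is a minimal sketch, not taken from the paper, of a single synaptic variable evolving under noisy (Langevin) dynamics in an assumed periodic multi-well potential U(w) = -A cos(2πw/d). The well depth A, well spacing d, noise amplitude sigma, and the pulse-shaped external input are all illustrative choices: large A mimics the discrete-synapse limit, A = 0 the continuous (flat-potential) limit, and a sufficiently strong transient input can switch the synapse between wells.

```python
import numpy as np

# Illustrative sketch (not the paper's actual model): a synaptic efficacy w(t)
# follows overdamped Langevin dynamics in a multi-well potential
#   U(w) = -A * cos(2*pi*w / d)
# Deep wells (large A)  -> discrete-like, metastable synapse.
# Flat potential (A=0)  -> continuous synapse that diffuses under noise.
# All parameter names and values here are assumptions made for illustration.

def dU_dw(w, A=1.0, d=1.0):
    """Derivative of the assumed potential U(w) = -A cos(2*pi*w/d)."""
    return A * (2 * np.pi / d) * np.sin(2 * np.pi * w / d)

def simulate_synapse(T=100.0, dt=0.01, A=1.0, sigma=0.3, I_pulse=0.0, seed=0):
    """Euler-Maruyama integration of dw = [-U'(w) + I(t)] dt + sigma dW.

    I(t) is a single brief pulse of amplitude I_pulse applied around t = T/2,
    standing in for an activity-dependent plasticity event.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    w = np.zeros(n_steps)
    pulse_start, pulse_len = n_steps // 2, int(0.5 / dt)  # 0.5 time units long
    for t in range(1, n_steps):
        I_ext = I_pulse if pulse_start <= t < pulse_start + pulse_len else 0.0
        drift = -dU_dw(w[t - 1], A=A) + I_ext
        w[t] = w[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return w

if __name__ == "__main__":
    deep_no_input = simulate_synapse(A=1.0, I_pulse=0.0)  # stays pinned near a well
    deep_pulsed   = simulate_synapse(A=1.0, I_pulse=8.0)  # pulse exceeds the maximal
                                                          # restoring force 2*pi*A/d,
                                                          # so the synapse changes well
    flat_no_input = simulate_synapse(A=0.0, I_pulse=0.0)  # diffuses freely under noise
    print("final w, deep potential, no input:", round(deep_no_input[-1], 2))
    print("final w, deep potential, pulsed  :", round(deep_pulsed[-1], 2))
    print("final w, flat potential, no input:", round(flat_no_input[-1], 2))
```

Running the sketch shows the qualitative point made in the abstract: with deep wells the synaptic variable resists noise and only moves when driven by a strong external input, whereas with a flat potential it drifts away from its initial value under noise alone.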