Authors & Affiliations
Pietro Verzelli, Maximilian Eggl, Tatjana Tchumatchenko
Abstract
Recent studies have shown that synaptic spines undergo spontaneous size fluctuations, a phenomenon hypothesized to be critical for understanding synaptic plasticity and neural network stability [1,2]. In this work, we develop a model of spine size dynamics as a simple stochastic process built on minimal biological assumptions. This model captures the key characteristics of spontaneous spine fluctuations; in particular, it reproduces the experimentally observed lognormal distribution of spine sizes.
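As a minimal illustration of how such a stochastic process can produce a lognormal size distribution (a sketch only, not necessarily the model developed here), consider mean-reverting multiplicative dynamics: an Ornstein-Uhlenbeck process on the log spine size, whose stationary distribution is Gaussian in log space and hence lognormal in size. All parameter values below are hypothetical.

```python
import numpy as np

# Sketch: Ornstein-Uhlenbeck dynamics on x = log(spine size).
# Mean reversion in log space => lognormal stationary size distribution.
rng = np.random.default_rng(0)

n_spines = 10_000   # independent spines (hypothetical)
theta = 0.1         # mean-reversion rate (hypothetical)
mu = 0.0            # stationary mean of log size (hypothetical)
sigma = 0.3         # fluctuation amplitude (hypothetical)
dt, n_steps = 0.1, 5_000

x = np.full(n_spines, mu)  # start at the stationary mean
for _ in range(n_steps):
    # Euler-Maruyama step for dx = -theta (x - mu) dt + sigma dW
    x += -theta * (x - mu) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_spines)

sizes = np.exp(x)  # lognormally distributed at stationarity

# Sanity check against the exact stationary law N(mu, sigma^2 / (2 theta))
print(f"mean(log s) = {x.mean():.3f} (theory {mu:.3f})")
print(f"var(log s)  = {x.var():.3f} (theory {sigma**2 / (2 * theta):.3f})")
```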
We then investigate how spontaneous spine dynamics interact with learning by training an artificial neural network whose synaptic strengths fluctuate according to our model. Our simulations reveal that the experimentally observed lognormal distribution of spine sizes is compatible with effective learning: the network can encode and retain information even in the presence of spontaneous synaptic changes. Furthermore, we explore the implications of these dynamics for representational drift and continual learning [3], demonstrating that networks can adapt and reorganize their representations over time without compromising their learning capabilities. Our findings provide insight into the resilience and adaptability of neural networks, contributing to our understanding of how neuronal systems maintain functionality in the face of inherent biological variability.
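The training setup below is a hypothetical sketch of this idea, not the network architecture or task used in this work: a single logistic unit on toy data, with mean-preserving multiplicative weight fluctuations applied between gradient steps to mimic spontaneous spine size changes on top of activity-dependent plasticity.

```python
import numpy as np

# Sketch (hypothetical task and model): learning under spontaneous
# multiplicative weight fluctuations.
rng = np.random.default_rng(1)

# Toy linearly separable binary classification data (hypothetical)
n, d = 500, 20
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)

w = np.zeros(d)
lr = 0.5        # learning rate (hypothetical)
sigma = 0.02    # per-step fluctuation amplitude (hypothetical)

for step in range(2_000):
    # Spontaneous fluctuation: each "synapse" is scaled by a lognormal
    # factor with unit mean, independent of the learning signal.
    w *= np.exp(sigma * rng.standard_normal(d) - 0.5 * sigma**2)
    # Activity-dependent plasticity: gradient step on the logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - y) / n

acc = ((X @ w > 0) == (y > 0.5)).mean()
print(f"training accuracy with fluctuating weights: {acc:.3f}")
```

In this sketch the network still learns the task despite the noise, illustrating the qualitative claim that spontaneous synaptic fluctuations need not prevent effective learning.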