Authors & Affiliations
Jacopo Fadanni, Giacomo Gasparotto, Rosalba Pacelli, Marco Dal Maschio, Marco Salamanca, Marica Albanesi, Pietro Rotondo, Michele Allegra
Abstract
Artificial recurrent neural networks (RNNs) have become a customary research tool in theoretical neuroscience, being used as models of biological neural networks performing cognitive tasks. While RNNs are not generally intended as faithful, biologically realistic models of neuronal activity, there is a broad expectation that they provide good models at the level of neural population dynamics. According to several authors, in both artificial and biological networks population activity should organize into low-dimensional manifolds encoding the relevant task variables (1). However, strong tests of this hypothesis, based on a geometrically rigorous characterization of activity manifolds in RNNs and biological networks, are lacking.
Here, we used a recently developed geometric intrinsic dimension estimator (2) to directly test the low-dimensionality hypothesis in RNNs trained on several tasks. To this end, we focused on the task battery from (3), which involves several simple stimulus-response associations. Consistent with expectations, the intrinsic dimension of RNN activity was generally low (<5), despite the presence of noise (Fig. 1A).
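As an illustration of the kind of geometric estimator involved, a minimal sketch of the two-nearest-neighbour (TwoNN) estimator is given below; the specific estimator used in (2), the layout of the RNN states, and the function names are assumptions made for illustration, and the published TwoNN method additionally fits the empirical distribution of neighbour-distance ratios (discarding the largest ones) rather than using the bare maximum-likelihood estimate shown here.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def twonn_id(X):
        # X: (n_points, n_features) array, e.g. RNN hidden states sampled over
        # trials and time (layout assumed for illustration).
        # For each point, take the distances r1, r2 to its first and second
        # nearest neighbours; the ratios mu = r2 / r1 follow a Pareto law whose
        # shape parameter equals the intrinsic dimension.
        dists, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
        r1, r2 = dists[:, 1], dists[:, 2]    # dists[:, 0] is the point itself
        keep = r1 > 0                        # discard duplicated points
        mu = r2[keep] / r1[keep]
        return len(mu) / np.sum(np.log(mu))  # simplified maximum-likelihood estimate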
In addition, we collected whole-brain neuronal recordings (obtained via calcium imaging with the setup described in (4)) of a zebrafish larva performing a common visuomotor response (5). Leveraging an atlas of the zebrafish brain, we then investigated the intrinsic dimension of neural population activity in different brain areas. Contrary to what we observed in artificial RNNs, the activity manifolds obtained from the animal data were high-dimensional (ID > 20), despite the apparent simplicity of the task (Fig. 1B). This result is in line with recent large-scale recordings in rodents (6), which report a high dimensionality of activity even in the absence of overt stimuli.
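Under the same assumptions, the per-region analysis described above could reduce to grouping neurons by their atlas label and re-applying the estimator to each group; the array layout and names below are hypothetical.

    def region_ids(activity, region_labels):
        # activity: (n_timepoints, n_neurons) calcium traces (hypothetical layout);
        # region_labels: (n_neurons,) atlas region assigned to each neuron.
        ids = {}
        for region in np.unique(region_labels):
            X = activity[:, region_labels == region]  # population vectors of one region
            if X.shape[1] >= 2 and X.shape[0] > 3:    # need enough neurons and timepoints
                ids[region] = twonn_id(X)             # reuse the sketch above
        return ids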
The observed discrepancy in the manifold organization of artificial and biological neural networks suggests that neural data do not simply reflect task stimuli, but are affected by additional sources of variability that are not easily accounted for by simple RNN models. Our results thus warn against a direct, naive comparison of RNN results with biological recordings, and call for a more extensive investigation of what determines the structure of the relatively high-dimensional activity manifolds observed in vivo.