Authors & Affiliations
Tobias Kühn, Rémi Monasson
Abstract
Since their introduction as a theoretical concept by Amari in the 1970s, ring attractor networks have been a popular tool to explain the orientation and space dependence of neural activity. Only recently has experimental evidence shown that they are actually implemented in the fly brain (Kim et al. 2017).
While this kind of network is useful to model systems representing only one map, such as head-direction systems, the hippocampus stores a multitude of maps (cf. panel a of the figure). This extension is taken into account by continuous attractor neural networks (CANNs). Due to the presence of these other maps in the connectivity, CANNs are unavoidably plagued by noise, which acts as quenched disorder. It is therefore a matter of ongoing debate whether they can serve as a useful paradigm to explain the coding of spatial information in neural systems, or whether this noise prevents the reliable storage of positional information.
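To illustrate this setting, the following minimal sketch (our own toy construction in Python, not the model analyzed here; the neuron number, kernel shape and number of maps are arbitrary illustrative choices) embeds several spatial maps in a single recurrent coupling matrix: each map assigns random place-field positions on a ring, the couplings sum a distance kernel over all maps, and the maps other than the retrieved one act as quenched disorder on the retrieved map.

    # Toy sketch: several spatial maps embedded in one coupling matrix.
    # The maps beyond the first act as quenched disorder on the retrieved map.
    import numpy as np

    rng = np.random.default_rng(0)

    N = 500          # number of neurons (illustrative value)
    L = 3            # stored maps: 1 retrieved + 2 acting as disorder
    width = 0.1      # interaction range, in units of the ring length


    def kernel(d, width):
        """Excitatory coupling decaying with periodic distance on [0, 1)."""
        d = np.minimum(d, 1.0 - d)
        return np.exp(-d**2 / (2 * width**2))


    # Each map is a random assignment of place-field centers to the N neurons.
    positions = rng.random((L, N))

    # Couplings: sum of the distance kernel over all stored maps, scaled by 1/N.
    J = np.zeros((N, N))
    for x in positions:
        d = np.abs(x[:, None] - x[None, :])
        J += kernel(d, width) / N
    np.fill_diagonal(J, 0.0)

    # Contribution of the retrieved map alone vs. the "disorder" from other maps.
    d0 = np.abs(positions[0][:, None] - positions[0][None, :])
    J_signal = kernel(d0, width) / N
    np.fill_diagonal(J_signal, 0.0)
    J_noise = J - J_signal
    print("signal coupling std:", J_signal.std(), "disorder std:", J_noise.std())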
In our work, we address this question by computing the Fisher information in a CANN-like network (panel b), using methods from the theory of disordered systems. We observe that the Fisher information decays slowly as long as the disorder strength is not too large (panel c). In particular, increasing the disorder strength from zero to a small finite value leaves the Fisher information unchanged to first order (its derivative with respect to the disorder strength vanishes). This indicates that, within a certain parameter range, information is preserved in CANNs despite the detrimental influence of disorder. We furthermore show that in this regime, a considerable part of this information can be extracted by a linear readout (panel d).
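The following numerical sketch (again a toy illustration, not the disordered-systems calculation described above; the tuning curves, noise model and readout are assumptions chosen for concreteness) compares the Fisher information of a noisy population code on a ring with the inverse variance of a simple population-vector readout, which is linear in the activity up to the final phase extraction. It shows one way to quantify the fraction of the Fisher information recovered by such a readout.

    # Toy sketch: Fisher information vs. a simple population-vector readout.
    # Assumptions: Gaussian tuning curves on a ring of unit length and
    # independent Gaussian noise; none of this is taken from the abstract.
    import numpy as np

    rng = np.random.default_rng(1)

    N = 200                       # number of neurons
    sigma = 0.2                   # noise standard deviation per neuron
    width = 0.1                   # tuning-curve width
    centers = np.arange(N) / N    # preferred positions on the ring [0, 1)


    def tuning(x):
        d = np.abs(x - centers)
        d = np.minimum(d, 1.0 - d)                 # periodic distance
        return np.exp(-d**2 / (2 * width**2))


    x0 = 0.37                                      # encoded position
    eps = 1e-5
    fprime = (tuning(x0 + eps) - tuning(x0 - eps)) / (2 * eps)

    # Fisher information for independent Gaussian noise: I(x) = |f'(x)|^2 / sigma^2.
    FI = np.sum(fprime**2) / sigma**2

    # Population-vector readout: a linear functional of the activity,
    # decoded as the phase of the summed complex "preferred-position" vector.
    n_trials = 20000
    r = tuning(x0) + sigma * rng.standard_normal((n_trials, N))
    z = r @ np.exp(2j * np.pi * centers)
    x_hat = np.angle(z) / (2 * np.pi) % 1.0

    err = (x_hat - x0 + 0.5) % 1.0 - 0.5           # signed error on the ring
    readout_info = 1.0 / np.var(err)               # inverse readout variance

    print(f"Fisher information:          {FI:.0f}")
    print(f"inverse readout variance:    {readout_info:.0f}")
    print(f"fraction extracted linearly: {readout_info / FI:.2f}")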