Authors & Affiliations
Raymond Wang, Louis Kang
Abstract
A central function of continuous attractor networks is encoding coordinates and accurately updating their values through path integration. To do so, these networks produce localized bumps of activity that move coherently in response to velocity inputs. In the brain, continuous attractors are believed to underlie grid cells and head direction cells, which maintain periodic representations of position and orientation, respectively. However, these representations can be achieved with any number of activity bumps, and the consequences of having more or fewer bumps are unclear. We construct 1D ring attractor networks with different bump numbers and characterize their responses to three types of noise: fluctuating inputs at each timestep, spiking noise, and deviations in connectivity away from ideal attractor configurations. Across all three types, networks with more bumps exhibit smaller noise-driven deviations in bump motion. This translates to more robust encoding of linear coordinates such as position, assuming that each neuron represents a fixed length regardless of bump number. Alternatively, the network may encode a circular coordinate such as orientation, in which case the network distance between adjacent bumps always maps onto 360 degrees. Under this mapping, the coordinate readout generally exhibits less noise-driven error in networks with fewer attractor bumps. We develop a mathematical theory that quantitatively explains these results. Thus, to suppress the effects of biologically relevant noise, continuous attractor networks should employ more bumps when encoding linear coordinates and fewer bumps when encoding circular coordinates. Our findings provide motivation for the presence of multiple bumps in the mammalian grid network and a single bump in the Drosophila head direction network.
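To make the setup concrete, the following is a minimal sketch of a 1D ring attractor whose connectivity sets the bump number. It is not the paper's model: the cosine connectivity, rectified-linear rate dynamics, and all parameter values (`n_neurons`, connectivity amplitude, drive, time constants) are illustrative assumptions. A connectivity kernel with `n_bumps` periods around the ring makes the spatial mode of that frequency unstable, so small random initial activity self-organizes into `n_bumps` bumps.

```python
import numpy as np

def simulate_ring_attractor(n_neurons=256, n_bumps=3, steps=500, dt=0.1,
                            tau=1.0, seed=0):
    """Illustrative rate-based ring attractor (not the paper's model).

    Connectivity is cosine-tuned with spatial frequency `n_bumps`, plus
    uniform inhibition, so the steady state carries `n_bumps` activity
    bumps around the ring. Parameter values are assumptions chosen only
    so that bumps form and stabilize.
    """
    rng = np.random.default_rng(seed)
    theta = 2 * np.pi * np.arange(n_neurons) / n_neurons
    # Excitation at matching bump phase, global inhibition elsewhere.
    W = (3.0 * np.cos(n_bumps * (theta[:, None] - theta[None, :])) - 1.0) / n_neurons
    r = 0.1 * rng.random(n_neurons)  # small random initial rates
    for _ in range(steps):
        drive = W @ r + 1.0                            # recurrent + constant input
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))  # rectified rate dynamics
    return r

rates = simulate_ring_attractor(n_bumps=3)
# Count bumps as the dominant nonzero spatial frequency of the rate profile.
spectrum = np.abs(np.fft.rfft(rates - rates.mean()))
dominant = int(np.argmax(spectrum[1:]) + 1)
print(dominant)
```

In this sketch, changing `n_bumps` while keeping `n_neurons` fixed mirrors the paper's manipulation: the same ring supports different bump numbers depending on the connectivity's spatial frequency. Noise could then be probed by, for example, adding random input fluctuations to `drive` at each timestep and tracking the resulting drift of the bump positions.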