ePoster

How coding constraints affect the shape of neural manifolds

Allan Mancoo, Christian Machens
COSYNE 2022 (2022)
Lisbon, Portugal
Presented: Mar 18, 2022

Abstract

While neural population recordings are increasingly high-dimensional, the bulk of recorded activity is usually structured along lower-dimensional manifolds. These manifolds often appear to be nonlinear, so that linear dimensionality reduction methods such as principal component analysis (PCA) yield embedding dimensionalities (ED) that are much higher than the intrinsic dimensionality (ID) of the manifold [Jazayeri & Ostojic, 2021]. Across datasets, the embedding often generates components that resemble higher-order functions of each other, or ‘higher-order components’ (HOCs), suggesting some generic, if poorly understood, structure of the underlying manifolds. While such manifold structure could reflect underlying mechanisms for information processing, it is unclear why these mechanisms would yield state-space embeddings with HOCs. Here, we investigate the effect of non-negativity constraints (individual neuronal activity is always non-negative) on neural manifold embeddings, while assuming that intrinsic variables should be read out linearly. We consider both standard network models, which enforce non-negative firing rates through static nonlinearities, and network models in which neuronal activities are solutions to constrained optimization problems [Barrett et al., 2013]. We show in simulations that, when overall population activity is limited, the ED always exceeds the ID in both models. Moreover, PCA retrieves HOCs that resemble those found in data, a finding we explain geometrically. Furthermore, we show that the combination of homeostatic and non-negativity constraints on optimal neural codes can yield embeddings of increased complexity, in that the ED grows despite a fixed ID. Finally, to test which of the models best describes neural manifolds, we fitted both to real data within a dimensionality reduction setup. While both outperformed PCA at a given dimensionality, the model with optimal representations outperformed the model with static nonlinearities, suggesting that the nonlinearity of neural manifolds may be partly shaped by constrained optimality principles.
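
To make the first result concrete, here is a minimal sketch (not the authors' code; the population size, random tuning parameters, and 95% variance cutoff are illustrative assumptions) of how a one-dimensional latent variable, encoded through rectified, non-negative tuning curves as in the static-nonlinearity model, yields a PCA embedding whose dimensionality exceeds the intrinsic dimensionality, with leading components that look like successively higher-order functions of the latent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Intrinsic variable: a 1D latent sampled densely on an interval (ID = 1).
z = np.linspace(-1.0, 1.0, 2000)

# Static-nonlinearity model: each neuron applies a ReLU to a random
# affine function of the latent, so firing rates are non-negative.
n_neurons = 100
gains = rng.normal(size=n_neurons)
thresholds = rng.uniform(-1.0, 1.0, size=n_neurons)
rates = np.maximum(0.0, np.outer(z, gains) - thresholds)  # (samples, neurons)

# PCA via SVD of the mean-centered rate matrix.
X = rates - rates.mean(axis=0)
U, s, _ = np.linalg.svd(X, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# Embedding dimensionality: components needed to reach 95% of the variance.
ed = int(np.searchsorted(np.cumsum(var_explained), 0.95)) + 1
print(f"ID = 1, ED (95% variance) = {ed}")  # ED comes out well above 1

# The leading principal-component scores, viewed as functions of z,
# resemble increasingly higher-order functions of the latent (HOCs).
pc_scores = U[:, :4] * s[:4]
```

Intuitively, the rectified population response traces a curved one-dimensional path through the non-negative orthant, which a linear basis can only cover with additional components; this is consistent with the geometric explanation the abstract refers to.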

Unique ID: cosyne-22/coding-constraints-affect-shape-neural-d058db2c