ePoster

Learning accurate models of very large neural codes with shallow adaptive random projection networks

Jonathan Mayzel, Elad Schneidman
COSYNE 2025 (2025)
Montreal, Canada

Abstract

Studying and understanding the code of large neural populations hinges on learning accurate models of population activity. While statistical models based on pairwise interactions are highly accurate in describing the code of tens of neurons, accurate models of >100 neurons require higher-order dependencies. Statistical models based on sparse nonlinear Random Projections (RP) of the population have proved highly accurate and efficient for 100-200 neurons. Scaling these models to larger populations is challenging yet critical for mapping the semantic organization of population codes, their dynamics, and how they change with learning. Here we use an extension of RP models to study the simultaneous and temporal structure of the code of many hundreds of neurons from the mouse and primate cortices. In these reshaped RP models, the random projections are locally optimized in a way that resembles synaptic modification. We show that reshaped RP models are far more accurate than pairwise or RP models for simultaneous spiking patterns of ~500 neurons from the mouse cortex and temporal patterns of over 150 neurons from the monkey cortex, while using far fewer projections. As evaluating models of such large populations is challenging, we use two new metrics to quantify their performance: first, how well they recapitulate the detailed spiking statistics of many randomly selected groups of 10 neurons, and second, how accurately they capture the synchrony of large groups. Following suggestions that large populations may be poised near criticality, and that a renormalization-group approach to pairwise models might reflect such behavior, we applied a coarse-graining approach to learn hierarchical RP models. We found that the reshaped RP models outperform these coarse-grained models on all datasets we considered. Our results suggest that sparse, random, and compact shallow models are sufficient for learning accurate and efficient models of large neural codes.
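The abstract does not spell out the model equations, but RP models of this family are typically maximum-entropy models whose constraints are the mean activations of nonlinear functions of sparse random projections of the population. The Python sketch below illustrates that flavor of model under stated assumptions: a threshold nonlinearity, sparse Gaussian projection weights, and a moment-matching fit of the Lagrange multipliers using a naive Metropolis sampler. The function names, the sampler, and the placeholder data are our own illustrative choices, and the local "reshaping" of the projection weights described in the abstract is only indicated in a comment, not implemented.

import numpy as np

rng = np.random.default_rng(0)
N, K, s = 40, 150, 5          # neurons, projections, nonzero inputs per projection

# Sparse random projections: each row of A has s nonzero Gaussian weights.
A = np.zeros((K, N))
for i in range(K):
    A[i, rng.choice(N, s, replace=False)] = rng.normal(size=s)
theta = rng.normal(size=K)    # projection thresholds
lam = np.zeros(K)             # Lagrange multipliers, learned from data

def f(X):
    """Threshold-nonlinear RP features for binary spike words X of shape (n, N)."""
    return (X @ A.T > theta).astype(float)

def metropolis_sample(n, burn=1000):
    """Approximate samples from P(x) proportional to exp(lam . f(x)), single-bit flips."""
    x = rng.integers(0, 2, N).astype(float)
    e = f(x[None])[0] @ lam
    out = []
    for t in range(burn + n):
        j = rng.integers(N)
        y = x.copy()
        y[j] = 1.0 - y[j]
        ey = f(y[None])[0] @ lam
        if np.log(rng.random()) < ey - e:   # accept with prob min(1, exp(ey - e))
            x, e = y, ey
        if t >= burn:
            out.append(x.copy())
    return np.array(out)

# Maximum-entropy fit by moment matching: nudge lam until the model's mean
# feature activations track the data's. X_data here is a random placeholder;
# in practice it would be recorded spike words.
X_data = (rng.random((2000, N)) < 0.05).astype(float)
for step in range(50):
    X_model = metropolis_sample(500)
    lam += 0.1 * (f(X_data).mean(0) - f(X_model).mean(0))
# "Reshaping" would additionally take local gradient steps on the nonzero
# entries of A (and possibly theta), which is not implemented in this sketch.

The sparsity of each projection is what keeps such a model shallow and compact: each constraint depends on only a handful of neurons, yet the thresholded sums capture higher-order dependencies that pairwise models miss.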
