
Seminar · Neuroscience · Recording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 29, 2023

Modern electrophysiological recordings simultaneously capture the single-unit spiking activity of hundreds of neurons, which exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in the connectivity of local neural circuits. Studying neural network dynamics in the linearized regime, and using tools from the statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of the bulk of connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality, and sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity in macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
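The link between the bulk eigenvalue radius and the richness of pairwise covariances can be illustrated with a minimal sketch (not the speaker's actual analysis): in a linearized rate network x = Wx + noise with uncorrelated unit-variance input, the zero-lag covariance is C = BBᵀ with B = (I − W)⁻¹, and the dispersion of off-diagonal covariances grows as the spectral radius g of W approaches 1. The network size, seed, and g values below are illustrative assumptions.

```python
# Illustrative sketch: covariance dispersion vs. bulk eigenvalue radius g
# in a linearized recurrent network (assumed model, not the talk's code).
import numpy as np

def covariance_dispersion(g, n=200, seed=0):
    """Std of off-diagonal covariances for x = Wx + noise.

    W is Gaussian with spectral radius ~ g; for uncorrelated unit-variance
    input noise, the zero-lag covariance is C = B B^T with B = (I - W)^{-1}.
    """
    rng = np.random.default_rng(seed)
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    B = np.linalg.inv(np.eye(n) - W)
    C = B @ B.T
    off = C[~np.eye(n, dtype=bool)]          # off-diagonal entries only
    return off.std()

weak = covariance_dispersion(0.2)            # weakly recurrent regime
strong = covariance_dispersion(0.8)          # strongly recurrent regime
print(f"dispersion at g=0.2: {weak:.3f}, at g=0.8: {strong:.3f}")
```

Close to the linear instability at g = 1, the factor (I − W)⁻¹ amplifies heterogeneous recurrent pathways, which is why widely dispersed pairwise covariances in recordings point to a strongly recurrent network.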

Seminar · Neuroscience · Recording

High-dimensional geometry of visual cortex

Carsen Stringer
Janelia Research Campus
Jun 25, 2020

Interpreting high-dimensional datasets requires new computational and analytical methods. We developed such methods to extract and analyze neural activity from 20,000 neurons recorded simultaneously in awake, behaving mice. The neural activity was not low-dimensional, as commonly thought, but instead was high-dimensional and obeyed power-law scaling across its eigenvalues. We developed a theory proposing that neural responses to external stimuli maximize information capacity while maintaining a smooth neural code. We then observed power-law eigenvalue scaling in many real-world datasets, and therefore developed a nonlinear manifold embedding algorithm called Rastermap that can capture such high-dimensional structure.
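The power-law eigenvalue scaling described above can be checked on any dataset by fitting the principal-component spectrum on log-log axes. A minimal sketch on synthetic Gaussian data (an assumption, not the actual recordings), with a planted n⁻¹ spectrum and illustrative sizes:

```python
# Sketch: estimate the power-law exponent of a covariance eigenspectrum.
# The data here are synthetic Gaussian samples with a planted n^-1 spectrum;
# sizes, seed, and fit range are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_timepoints = 100, 5000
target = np.arange(1, n_neurons + 1) ** -1.0       # ideal n^-1 spectrum

# Gaussian "activity" whose covariance has the target eigenvalues
X = rng.standard_normal((n_timepoints, n_neurons)) * np.sqrt(target)

# Eigenvalues of the sample covariance, sorted in descending order
eigvals = np.linalg.eigvalsh(X.T @ X / n_timepoints)[::-1]
ranks = np.arange(1, n_neurons + 1)

# Fit the exponent on log-log axes over intermediate ranks,
# avoiding the noisy top eigenvalue and the sampling-noise floor
lo, hi = 2, 50
slope, _ = np.polyfit(np.log(ranks[lo:hi]), np.log(eigvals[lo:hi]), 1)
print(f"fitted power-law exponent: {slope:.2f}")   # near -1 for this spectrum
```

On real recordings, the same rank-vs-eigenvalue fit distinguishes a power-law spectrum from the fast exponential decay expected of a genuinely low-dimensional code.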
