
Covariance


Discover seminars, jobs, and research tagged with covariance across World Wide.
8 curated items · 4 Seminars · 4 ePosters
Updated over 2 years ago
Seminar · Neuroscience · Recording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 28, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
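
For readers who want to experiment with the forward direction of this structure-dynamics relation, here is a minimal sketch in Python. It assumes a generic linear rate model, dx = (W - I)x dt + noise, as an illustrative stand-in rather than the talk's exact setup, and computes stationary pairwise covariances from the connectivity via the Lyapunov equation. The talk's contribution runs the other way, inferring connectivity statistics from subsampled covariances.

```python
# A minimal sketch, assuming a generic linear rate model
# dx = (W - I) x dt + sigma * dxi; an illustrative stand-in,
# not the talk's exact setup.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
N, g = 200, 0.8                 # network size; g sets the bulk eigenvalue radius
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # heterogeneous random connectivity
A = W - np.eye(N)               # linearized dynamics matrix

# The stationary covariance C solves the Lyapunov equation
# A C + C A^T = -sigma^2 I.
sigma = 1.0
C = solve_continuous_lyapunov(A, -sigma**2 * np.eye(N))

# The dispersion of pairwise covariances grows as the bulk radius g
# approaches 1, the point of linear instability.
off_diag = C[~np.eye(N, dtype=bool)]
print(f"bulk eigenvalue radius: {np.abs(np.linalg.eigvals(W)).max():.2f}")
print(f"pairwise covariances: mean {off_diag.mean():.4f}, std {off_diag.std():.4f}")
```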

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 16, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Based on the contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross mapping, but with much shorter computation time than any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
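
A toy version of the idea can be sketched as follows. The edge scores below (raw lag-1 correlations, antisymmetrized to crudely discount reverse causation) are illustrative stand-ins, not the paper's motif-derived PEMs, and the simulated process is a generic discrete-time autoregressive model with slow mean reversion.

```python
# Illustrative sketch only: raw lagged correlations with a crude
# antisymmetric correction, not the paper's motif-derived PEMs.
import numpy as np

rng = np.random.default_rng(1)
N, T, eps, theta = 30, 20000, 0.01, 0.05
A = (rng.random((N, N)) < 0.1).astype(float)  # ground-truth directed network
np.fill_diagonal(A, 0.0)

# Discrete-time autoregressive process with slow mean reversion (small theta):
# x_{t+1} = (1 - theta) x_t + eps * A x_t + noise
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = (1 - theta) * x + eps * (A @ x) + rng.normal(0.0, 1.0, N)
    X[t] = x

Z = (X - X.mean(0)) / X.std(0)                # standardize each time series
R1 = (Z[:-1].T @ Z[1:]) / (T - 1)             # lag-1 correlation matrix
score = R1 - R1.T                             # crude reverse-causation correction

G = A.T                                       # G[j, i] = 1 iff edge j -> i
k = int(G.sum())
top = np.argsort(score.ravel())[::-1][:k]     # k highest-scoring candidate edges
print("precision at |E|:", G.ravel()[top].mean())
```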

Seminar · Neuroscience

Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners

Ivan Simpson-Kent
University of Cambridge, MRC CBU
Jun 1, 2021

Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior. We discuss implications and possible avenues for future studies.
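
As a rough illustration of this kind of regularized partial-correlation network, the sketch below fits a cross-validated graphical LASSO to synthetic data with one shared latent factor standing in for the mutualism structure among cognitive scores; the CALM data, the study's exact estimator, and the bridge-strength and community analyses are not reproduced here.

```python
# A minimal sketch with synthetic data standing in for the CALM measures;
# the study's exact estimator and analyses are not reproduced here.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(2)
n_obs, n_nodes = 805, 8                       # e.g. 8 cognitive task scores
latent = rng.normal(size=(n_obs, 1))          # shared "mutualism" factor
X = 0.6 * latent + rng.normal(size=(n_obs, n_nodes))

model = GraphicalLassoCV().fit(X)             # cross-validated graphical LASSO
P = model.precision_                          # sparse precision matrix

# Partial correlations from the precision matrix: r_ij = -P_ij / sqrt(P_ii P_jj)
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)
np.fill_diagonal(partial, 0.0)

# Node centrality as absolute strength: the sum of |partial correlations|
strength = np.abs(partial).sum(axis=0)
print("upper-triangle edge weights:", partial[np.triu_indices(n_nodes, 1)].round(2))
print("node strengths:", strength.round(2))
```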

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems and is often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only the synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even as the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
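
The sketch below conveys the flavor of such a rule: each synapse is scored using only its own weight and the noise-driven covariance of its two endpoint neurons. The product score and the pruning threshold are illustrative assumptions; the talk's provably spectrum-preserving rule is not reproduced here.

```python
# Illustrative only: the product |W_ij| * |C_ij| is a stand-in local score,
# not the provably spectrum-preserving rule from the talk.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
N = 100
W = rng.normal(0.0, 0.5 / np.sqrt(N), (N, N))  # recurrent weight matrix
np.fill_diagonal(W, 0.0)

# Noise-driven stationary covariance of the linear network dx = (W - I) x dt + dxi
C = solve_continuous_lyapunov(W - np.eye(N), -np.eye(N))

# Each synapse is scored by its own weight and the covariance of its two
# endpoint neurons, both locally available quantities.
score = np.abs(W) * np.abs(C)
threshold = np.quantile(score[W != 0], 0.5)    # prune the weakest half
W_pruned = np.where(score > threshold, W, 0.0)

# Informal check: how much the leading eigenvalues move after pruning
top_eigs = lambda M: np.sort(np.abs(np.linalg.eigvals(M)))[::-1][:5]
print("top |eigenvalues| before:", top_eigs(W).round(3))
print("top |eigenvalues| after: ", top_eigs(W_pruned).round(3))
```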

ePoster

The scale-invariant covariance spectrum of brain-wide activity in larval zebrafish

Zezhen Wang, Weihao Mai, Yuming Chai, Chen Shen, Kexin Qi, Yu Hu, Quan Wen

COSYNE 2023

ePoster

Changes in tuning curves, not neural population covariance, improve category separability in the primate ventral visual pathway

Jenelle Feather, Long Sha, Gouki Okazawa, Nga Yu Lo, SueYeon Chung, Roozbeh Kiani

COSYNE 2025

ePoster

Covariance spectrum in nonlinear recurrent neural networks and transition to chaos

Xuanyu Shen, Yu Hu

COSYNE 2025

ePoster

Structural covariance & graph-learning for the individualized classification of schizophrenia patients

Clara Vetter

Neuromatch 5