Latest

Seminar · Neuroscience · Recording

Neural networks in the replica-mean-field limits

Thibaud Taillefumier
The University of Texas at Austin
Nov 30, 2022

In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
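
As a concrete, heavily simplified illustration of the replica construction, the sketch below simulates M copies of a toy discrete-time spiking network in which every spike is delivered to a uniformly random replica of its target neuron. The dynamics, weights, and rates here are illustrative assumptions, not the talk's actual model; the point is only that the randomized routing decorrelates the inputs to any single neuron as M grows, which is the intuition behind the Poissonian mean-field limit.

```python
import numpy as np

rng = np.random.default_rng(0)

N, M, T = 20, 50, 2000            # neurons, replicas, time steps (toy sizes)
W = rng.uniform(0, 0.02, (N, N))  # hypothetical synaptic weights
np.fill_diagonal(W, 0.0)
base = 0.02                       # baseline spiking probability per step

v = np.zeros((M, N))              # membrane-like state, one copy per replica
spike_counts = np.zeros((M, N))

for t in range(T):
    p = np.clip(base + v, 0.0, 1.0)          # per-step spiking probability
    spikes = rng.random((M, N)) < p          # Bernoulli spikes
    spike_counts += spikes
    v *= 0.9                                 # leak
    # replica routing: each spike is delivered to a uniformly random replica
    for m in range(M):
        for j in np.flatnonzero(spikes[m]):
            targets = np.flatnonzero(W[:, j])
            dest = rng.integers(0, M, size=targets.size)
            v[dest, targets] += W[targets, j]
    v[spikes] = 0.0                          # reset spiking units

rates = spike_counts.mean(axis=0) / T        # replica-averaged firing rates
print("rates:", np.round(rates, 3))
```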

Seminar · Neuroscience

Power in Network Structures

Fuad Aleskerov
HSE University
Apr 28, 2022

We consider new measures of centrality in networks that take into account parameters of the nodes and the group influence of nodes on other nodes. Several examples are discussed.
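
As a toy illustration of what "node parameters" and "group influence" can mean, the sketch below treats a group of in-neighbors as influential over node j only when their combined edge weight reaches a per-node threshold q[j], and splits the credit among group members by weight. The thresholds, weights, and credit-splitting rule are illustrative assumptions, not the indices discussed in the talk.

```python
import itertools
import numpy as np

W = np.array([[0, 3, 0],
              [0, 0, 2],
              [4, 1, 0]], dtype=float)   # W[i, j]: weight of edge i -> j
q = np.array([4.0, 3.0, 2.0])            # hypothetical node thresholds

N = W.shape[0]
influence = np.zeros((N, N))             # influence[i, j]: influence of i on j

for j in range(N):
    neighbors = np.flatnonzero(W[:, j])
    for size in range(1, neighbors.size + 1):
        for group in itertools.combinations(neighbors, size):
            total = W[list(group), j].sum()
            if total >= q[j]:            # group jointly influences node j
                for i in group:          # split credit in proportion to weight
                    influence[i, j] += W[i, j] / total

centrality = influence.sum(axis=1)       # total influence exerted by each node
print("group-influence centrality:", np.round(centrality, 3))
```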

Seminar · Neuroscience · Recording

Bidirectionally connected cores in a mouse connectome: Towards extracting the brain subnetworks essential for consciousness

Jun Kitazono
University of Tokyo
Oct 1, 2021

Where in the brain consciousness resides remains unclear. It has been suggested that the subnetworks supporting consciousness should be bidirectionally (recurrently) connected because both feed-forward and feedback processing are necessary for conscious experience. Accordingly, evaluating which subnetworks are bidirectionally connected and the strength of these connections would likely aid the identification of regions essential to consciousness. Here, we propose a method for hierarchically decomposing a network into cores with different strengths of bidirectional connection, as a means of revealing the structure of the complex brain network. We applied the method to a whole-brain mouse connectome. We found that cores with strong bidirectional connections consisted of regions presumably essential to consciousness (e.g., the isocortical and thalamic regions, and claustrum) and did not include regions presumably irrelevant to consciousness (e.g., cerebellum). In contrast, we did not find such a correspondence between cores and consciousness when we applied other simple methods that ignore bidirectionality. These findings suggest that our method provides novel insight into the relation between bidirectional brain network structures and consciousness. Our recent preprint on this work is here: https://doi.org/10.1101/2021.07.12.452022.
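
A simplified stand-in can convey the idea of the decomposition: score every pair of regions by its bidirectional strength min(w_ij, w_ji), then sweep a threshold and keep the connected components that survive, which yields nested cores ordered by bidirectionality. The talk's actual method optimizes a different, cut-based measure, and the weights below are random placeholders rather than connectome data; the sketch uses networkx for component extraction.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
W = rng.uniform(0, 1, (8, 8))        # placeholder directed "connectome"
np.fill_diagonal(W, 0.0)

B = np.minimum(W, W.T)               # bidirectional strength of each pair

for thr in (0.2, 0.4, 0.6):
    G = nx.Graph()
    G.add_nodes_from(range(W.shape[0]))
    G.add_edges_from(map(tuple, np.argwhere(np.triu(B, 1) > thr)))
    cores = [sorted(c) for c in nx.connected_components(G) if len(c) > 1]
    print(f"threshold {thr:.1f}: cores {cores}")
```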

Seminar · Neuroscience · Recording

A geometric framework to predict structure from function in neural networks

James Fitzgerald
Janelia Research Campus
Feb 3, 2021

The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
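
The steady-state identity underlying the framework can be made concrete. For rectified-linear units, a steady state r must satisfy r = [W r + B x]_+, so once the desired responses are fixed, the constraints on each neuron's incoming weights become linear: equalities where the neuron is active, inequalities where it is silent. The sketch below, with illustrative dimensions and all-active responses, recovers one admissible incoming weight vector by least squares; the talk characterizes the full solution space analytically rather than picking a single point.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, C = 5, 3, 4                    # neurons, inputs, stimulus conditions

R = rng.uniform(0.1, 1.0, (N, C))    # desired steady-state responses r^(c)
X = rng.uniform(0.1, 1.0, (K, C))    # network inputs x^(c)
U = np.vstack([R, X])                # presynaptic vector [r; x] per condition

i = 0                                # solve for neuron i's incoming weights
# with all responses positive, neuron i is active in every condition, so the
# rectification acts as the identity and the constraints are pure equalities:
# w @ U = R[i]; since C < N + K the system is underdetermined, and lstsq
# returns the minimum-norm point of the affine solution space
w, *_ = np.linalg.lstsq(U.T, R[i], rcond=None)

print("one admissible incoming weight vector:", np.round(w, 3))
print("constraint residual:", np.linalg.norm(w @ U - R[i]))
```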

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 25, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only the synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
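
A hedged sketch can convey the flavor of such a rule, though the score below is illustrative and not the talk's exact formula: drive a stable linear network with private noise, read off the steady-state covariance from the Lyapunov equation, keep each synapse with a probability informed by its weight and the activity statistics of the neurons it connects, and rescale survivors so the weight matrix is preserved in expectation.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 40
A = rng.normal(0, 1 / np.sqrt(2 * N), (N, N))   # random stable connectivity

# steady-state covariance C of dx = (A - I) x dt + dW, from the Lyapunov
# equation (A - I) C + C (A - I)^T = -I, solved by vectorization (toy size)
F = A - np.eye(N)
L = np.kron(np.eye(N), F) + np.kron(F, np.eye(N))
C = np.linalg.solve(L, -np.eye(N).flatten()).reshape(N, N)

# illustrative local score: strong synapses between high-variance neurons are
# kept with higher probability (NOT the paper's exact rule)
score = np.abs(A) * np.sqrt(np.outer(np.diag(C), np.diag(C)))
p = np.clip(0.5 * score / score.mean(), 0.0, 1.0)

keep = rng.random((N, N)) < p
A_pruned = np.where(keep, A / np.maximum(p, 1e-12), 0.0)  # unbiased rescaling

print(f"fraction of synapses pruned: {1 - keep.mean():.2f}")
# crude spectral comparison: sorted real parts of the eigenvalues
d = np.sort(np.linalg.eigvals(A).real) - np.sort(np.linalg.eigvals(A_pruned).real)
print("spectral discrepancy:", np.linalg.norm(d))
```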
