network structure
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be processed stably, irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require robust integration of pose and gaze shifts into visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanisms of long-term representational stability, focusing on the roles of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
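The replica construction can be illustrated with a toy simulation; the sketch below is my own illustrative assumption, not the framework from the talk. A small discrete-time network of Poisson-spiking units is copied M times, and at every step each neuron draws each of its recurrent inputs from a uniformly chosen replica. As M grows, within-replica spike-count correlations shrink, which is the qualitative effect the mean-field limit formalizes.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(M, n=5, T=2000, base=0.05, w=0.15):
    """Discrete-time linear-Poisson network replicated M times.

    Each neuron draws every recurrent input from a uniformly random replica
    (the replica routing step); M = 1 recovers the original finite network.
    This is an illustrative toy, not the model from the talk.
    """
    W = w * (rng.random((n, n)) < 0.5)            # fixed excitatory topology
    np.fill_diagonal(W, 0.0)
    spikes = np.zeros((T, M, n))
    s_prev = np.zeros((M, n))
    for t in range(1, T):
        for m in range(M):
            donors = rng.integers(M, size=n)      # presynaptic replica for each input
            pre = s_prev[donors, np.arange(n)]
            spikes[t, m] = rng.poisson(base + W @ pre)
        s_prev = spikes[t]
    return spikes

def mean_pairwise_corr(spikes, bin_size=50):
    """Average pairwise correlation of binned spike counts within replica 0."""
    counts = spikes[:, 0, :].reshape(-1, bin_size, spikes.shape[2]).sum(axis=1)
    c = np.corrcoef(counts.T)
    return c[~np.eye(c.shape[0], dtype=bool)].mean()

for M in (1, 2, 10, 50):
    print(f"M = {M:3d}  mean pairwise correlation = {mean_pairwise_corr(simulate(M)):.4f}")
```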
A parsimonious description of global functional brain organization in three spatiotemporal patterns
Resting-state functional magnetic resonance imaging (fMRI) has yielded seemingly disparate insights into the large-scale organization of the human brain. The brain’s large-scale organization can be divided into two broad categories: zero-lag representations of functional connectivity structure and time-lag representations of traveling wave or propagation structure. In this study, we sought to unify observed phenomena across these two categories in the form of three low-frequency spatiotemporal patterns composed of a mixture of standing and traveling wave dynamics. We showed that a range of empirical phenomena, including functional connectivity gradients, the task-positive/task-negative anti-correlation pattern, the global signal, time-lag propagation patterns, the quasiperiodic pattern, and the functional connectome network structure, are manifestations of these three spatiotemporal patterns. These patterns account for much of the global spatial structure that underlies functional connectivity analyses and unify phenomena in resting-state fMRI previously thought distinct.
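The abstract does not spell out the decomposition pipeline, but one standard way to extract patterns that mix standing and traveling wave dynamics is complex PCA on the analytic (Hilbert-transformed) signal. The sketch below applies that generic technique to synthetic data; the region count, sampling step, and wave frequency are made-up illustration values, and the method may or may not match the authors' own analysis.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)

# Synthetic "resting-state" data: a slow traveling wave across 50 regions plus noise.
# All sizes and frequencies here are illustration values, not from the study.
n_regions, n_timepoints, dt = 50, 600, 0.1
phase_lags = np.linspace(0, np.pi, n_regions)        # systematic time lag across regions
t = np.arange(n_timepoints) * dt
data = np.sin(2 * np.pi * 0.05 * t[None, :] - phase_lags[:, None])
data += 0.5 * rng.standard_normal((n_regions, n_timepoints))
data -= data.mean(axis=1, keepdims=True)

# Complex PCA: the analytic signal lets each component carry both an amplitude
# map (standing part) and a phase map (propagation / time-lag part).
analytic = hilbert(data, axis=1)
U, S, Vh = np.linalg.svd(analytic, full_matrices=False)

explained = S**2 / np.sum(S**2)
for i in range(3):
    amplitude_map = np.abs(U[:, i])    # how strongly each region participates
    phase_map = np.angle(U[:, i])      # relative timing of each region within the pattern
    print(f"pattern {i}: variance explained = {explained[i]:.2f}, "
          f"phase spread = {np.ptp(phase_map):.2f} rad")
```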
Power in Network Structures
We consider new measures of centrality in networks that take into account node parameters and the group influence of nodes on other nodes. Several examples are discussed.
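The abstract does not define the new measures, so the snippet below is only a generic point of comparison: standard centralities from networkx on a toy directed graph, plus one ad hoc way of folding node parameters into the scores. The weighting scheme is a hypothetical illustration, not a measure from the talk.

```python
import networkx as nx

# Small directed graph; edge weights stand in for interaction strength
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A", "B", 1.0), ("B", "C", 2.0), ("C", "A", 1.5),
    ("C", "D", 0.5), ("D", "B", 1.0),
])

# Hypothetical node parameters (e.g., size or intrinsic importance of each node)
node_param = {"A": 1.0, "B": 3.0, "C": 2.0, "D": 0.5}

# Classical centralities for comparison
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
pr = nx.pagerank(G, weight="weight")

def param_weighted(scores, params):
    """Illustrative mix-in of node parameters: rescale each score by the node's
    own parameter and renormalize (not the measure from the talk)."""
    raw = {v: scores[v] * params[v] for v in scores}
    total = sum(raw.values())
    return {v: raw[v] / total for v in raw}

print("eigenvector:", {v: round(s, 3) for v, s in eig.items()})
print("pagerank:   ", {v: round(s, 3) for v, s in pr.items()})
print("param-weighted pagerank:",
      {v: round(s, 3) for v, s in param_weighted(pr, node_param).items()})
```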
Designing temporal networks that synchronize under resource constraints
Being fundamentally a non-equilibrium process, synchronization comes with unavoidable energy costs and has to be maintained under the constraint of limited resources. Such resource constraints are often reflected as a finite coupling budget available in a network to facilitate interaction and communication. In this talk, I will show that introducing temporal variation in the network structure can lead to efficient synchronization even when stable synchrony is impossible in any static network under the given budget. Our strategy is based on an open-loop control scheme and points to a fundamental advantage of temporal networks. Whether this advantage of temporality can be utilized in the brain is an interesting open question.
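As a toy illustration of the budget idea (not the open-loop control scheme from the talk), the sketch below spends a fixed total coupling weight on a different sparse graph in each time window: identical Kuramoto oscillators coupled through a periodically rewired star still lock together even though only a handful of links carry the whole budget at any instant.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 10                      # identical Kuramoto oscillators
budget = 6.0                # total coupling weight available at any instant
dt, T = 0.01, 4000
omega = 1.0                 # common natural frequency

def star_graph(hub, n, budget):
    """Coupling matrix of a star centred on `hub`, spending the whole budget."""
    K = np.zeros((n, n))
    w = budget / (n - 1)
    K[hub, :] = w
    K[:, hub] = w
    K[hub, hub] = 0.0
    return K

theta = rng.uniform(0, 2 * np.pi, n)
switch_every = 50           # steps between rewirings (open-loop, periodic schedule)
order = []
for step in range(T):
    if step % switch_every == 0:
        K = star_graph(hub=(step // switch_every) % n, n=n, budget=budget)
    coupling = (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + coupling)
    order.append(np.abs(np.exp(1j * theta).mean()))

print("order parameter, first 100 steps:", round(np.mean(order[:100]), 3))
print("order parameter, last 100 steps :", round(np.mean(order[-100:]), 3))
```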
Bidirectionally connected cores in a mouse connectome: Towards extracting the brain subnetworks essential for consciousness
Where in the brain consciousness resides remains unclear. It has been suggested that the subnetworks supporting consciousness should be bidirectionally (recurrently) connected, because both feed-forward and feedback processing are necessary for conscious experience. Accordingly, evaluating which subnetworks are bidirectionally connected, and how strong these connections are, would likely aid the identification of regions essential to consciousness. Here, we propose a method for hierarchically decomposing a network into cores with different strengths of bidirectional connection, as a means of revealing the structure of the complex brain network. We applied the method to a whole-brain mouse connectome. We found that cores with strong bidirectional connections consisted of regions presumably essential to consciousness (e.g., the isocortical and thalamic regions, and the claustrum) and did not include regions presumably irrelevant to consciousness (e.g., the cerebellum). In contrast, we could not find such a correspondence between cores and consciousness when we applied other simple methods that ignored bidirectionality. These findings suggest that our method provides novel insight into the relation between bidirectional brain network structures and consciousness. Our recent preprint on this work is here: https://doi.org/10.1101/2021.07.12.452022.
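The decomposition itself is defined in the preprint; the sketch below is a deliberately simplified stand-in that captures the flavor of the idea: score every connection by the weaker of its two directions, keep only reciprocal pairs above a threshold, and read off the strongly connected components as cores at increasing thresholds. The toy graph and the min-based score are assumptions for illustration, not the authors' algorithm.

```python
import networkx as nx

# Toy directed, weighted "connectome"
edges = [
    ("ctx1", "ctx2", 0.9), ("ctx2", "ctx1", 0.8),
    ("ctx2", "thal", 0.7), ("thal", "ctx2", 0.9),
    ("ctx1", "thal", 0.6), ("thal", "ctx1", 0.5),
    ("ctx1", "cb", 0.8),   ("cb", "ctx1", 0.05),   # mostly feed-forward link
]
G = nx.DiGraph()
G.add_weighted_edges_from(edges)

def bidirectional_cores(G, thresholds):
    """For each threshold, keep only reciprocal edge pairs whose weaker direction
    exceeds the threshold, and return the strongly connected components of what
    remains (a simplified stand-in for the decomposition in the talk)."""
    cores = {}
    for th in thresholds:
        H = nx.DiGraph()
        H.add_nodes_from(G.nodes)
        for u, v, w in G.edges(data="weight"):
            w_rev = G[v][u]["weight"] if G.has_edge(v, u) else 0.0
            if min(w, w_rev) >= th:
                H.add_edge(u, v, weight=w)
        cores[th] = [c for c in nx.strongly_connected_components(H) if len(c) > 1]
    return cores

for th, comps in bidirectional_cores(G, [0.1, 0.5, 0.8]).items():
    print(f"threshold {th}: cores = {comps}")
```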
A geometric framework to predict structure from function in neural networks
The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
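A minimal sketch of the underlying steady-state identity, under the simplifying assumption that all specified responses are positive so the rectified-linear units act linearly: the fixed point of r = [W r + B x]_+ then reads R = W R + B X, and with no more response patterns than conditions can pin down, the system is underdetermined, so a pseudoinverse picks one member of the solution space the framework characterizes. This illustrates the setup only, not the paper's analytical construction of all solutions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_neurons, n_inputs, n_conditions = 4, 3, 3   # patterns do not exceed inputs, as in the abstract

# Specified steady-state responses R (neurons x conditions) and inputs X (inputs x conditions).
# Responses are kept positive so rectified-linear units sit in their linear regime.
R = rng.uniform(0.5, 1.5, (n_neurons, n_conditions))
X = rng.uniform(0.5, 1.5, (n_inputs, n_conditions))

# Linear-regime steady state of r = [W r + B x]_+ :  R = W R + B X, i.e. R = [W | B] @ [[R], [X]].
# The pseudoinverse returns one valid connectivity out of the full solution space.
A = np.vstack([R, X])                         # (n_neurons + n_inputs) x n_conditions
WB = R @ np.linalg.pinv(A)                    # one valid [W | B], neurons x (neurons + inputs)
W, B = WB[:, :n_neurons], WB[:, n_neurons:]

# Verify the fixed-point equation R = [W R + B X]_+ directly
R_check = np.maximum(0.0, W @ R + B @ X)
print("max fixed-point error:", np.max(np.abs(R_check - R)))
```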
Using noise to probe recurrent neural network structure and prune synapses
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems and is often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only the synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original weight matrix and hence preserves network dynamics even as the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
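The pruning rule itself is not spelled out in the abstract, so the sketch below only illustrates the ingredients it names: drive a sparse linear recurrent network with noise, estimate the covariance between the neurons each synapse connects (a locally observable quantity), prune the synapses with the lowest weight-times-covariance score, and compare the eigenvalue spectra before and after. The score is an ad hoc stand-in, not the rule from the talk, and no spectrum-preservation guarantee is claimed for it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sparse random linear recurrent network:  x_{t+1} = W x_t + noise
n = 80
W = (rng.random((n, n)) < 0.2) * rng.normal(0.0, 1.0, (n, n))
np.fill_diagonal(W, 0.0)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale for stable dynamics

# Drive the network with white noise and record activity
T = 20000
x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = W @ x + rng.standard_normal(n)
    X[t] = x
C = np.cov(X.T)                                   # noise-driven covariance

# Ad hoc local score (NOT the rule from the talk): each synapse j -> i is scored
# by its weight times the covariance of the two neurons it connects, both of
# which are quantities available at the synapse itself.
score = np.abs(W) * np.abs(C)
existing = np.abs(W) > 0
threshold = np.quantile(score[existing], 0.3)     # prune the weakest 30% of synapses
W_pruned = np.where(score >= threshold, W, 0.0)

eig_before = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
eig_after = np.sort(np.abs(np.linalg.eigvals(W_pruned)))[::-1]
print("synapses kept:", int((np.abs(W_pruned) > 0).sum()), "of", int(existing.sum()))
print("top |eigenvalues| before:", np.round(eig_before[:5], 3))
print("top |eigenvalues| after: ", np.round(eig_after[:5], 3))
```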