Variance

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with variance across World Wide.
39 curated items · 22 Seminars · 17 ePosters
Seminar · Neuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 1, 2025
Seminar · Neuroscience · Recording

Characterizing the causal role of large-scale network interactions in supporting complex cognition

Michal Ramot
Weizmann Inst. of Science
May 6, 2024

Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We address these questions with a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.

Seminar · Neuroscience

Euclidean coordinates are the wrong prior for primate vision

Gary Cottrell
University of California, San Diego (UCSD)
May 9, 2023

The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea with the dropoff of resolution in the periphery, fully 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the inverted face effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
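
The geometric fact this abstract relies on — under a log-polar map, scaling becomes a shift along one axis and rotation a shift along the other — is easy to verify numerically. A minimal sketch (illustrative coordinates, not the authors' model):

```python
import numpy as np

def log_polar(x, y):
    """Map Cartesian (x, y) to log-polar coordinates (log r, theta)."""
    return np.log(np.hypot(x, y)), np.arctan2(y, x)

p = (3.0, 4.0)
s, phi = 2.0, np.pi / 6          # an arbitrary scaling and rotation

lr, th = log_polar(*p)
lr_s, th_s = log_polar(s * p[0], s * p[1])          # scaled point
q = (p[0] * np.cos(phi) - p[1] * np.sin(phi),       # rotated point
     p[0] * np.sin(phi) + p[1] * np.cos(phi))
lr_r, th_r = log_polar(*q)

# Scaling shifts log r by log s (theta unchanged);
# rotation shifts theta by phi (log r unchanged).
print(lr_s - lr, np.log(s))
print(th_r - th, phi)
```

A shift-invariant convolutional network applied in these coordinates therefore inherits scale and rotation invariance, which is the trade the abstract describes (gaining those invariances at the cost of translation invariance).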

Seminar · Neuroscience · Recording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 28, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
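
One signature mentioned here — pairwise covariances becoming far more widely dispersed as the radius of the bulk connectivity eigenvalues approaches instability — can be sketched in a linear rate network. All parameter values below are arbitrary, and the Lyapunov-equation setup is a generic illustration, not the speaker's field-theoretic derivation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50

def stationary_cov(W):
    """Stationary covariance of dx/dt = (W - I) x + white noise, from the
    Lyapunov equation (W - I) C + C (W - I)^T = -2 I (row-major vec)."""
    A = W - np.eye(n)
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(K, (-2 * np.eye(n)).ravel()).reshape(n, n)

disp = []
for g in (0.3, 0.9):  # radius of the bulk connectivity eigenvalues
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    C = stationary_cov(W)
    off = C[~np.eye(n, dtype=bool)]   # pairwise covariances only
    disp.append(off.std())
    print(f"g={g}: dispersion of pairwise covariances = {off.std():.3f}")

# Near the point of linear instability (g -> 1), the same noise input
# produces a much broader distribution of pairwise covariances.
```

The point of the sketch is the direction of the effect: measuring the spread of pairwise covariances in recordings constrains how close the underlying network sits to instability.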

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 16, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
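
The paper's PEM formulas are not reproduced here, but the basic idea — reading directed edges off a lagged correlation matrix, with a correction for reverse causation — can be sketched on a simulated three-node chain. The coupling values and the naive edge score below are purely illustrative (and, unlike the actual PEMs, this score does not correct for indirect-path confounds):

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth coupling: a slow mean-reverting chain 0 -> 1 -> 2.
A = np.array([[0.8, 0.0, 0.0],
              [0.3, 0.8, 0.0],
              [0.0, 0.3, 0.8]])

T = 20_000
x = np.zeros(3)
X = np.empty((T, 3))
for t in range(T):              # VAR(1) process: x_{t+1} = A x_t + noise
    x = A @ x + rng.standard_normal(3)
    X[t] = x

X -= X.mean(axis=0)
X /= X.std(axis=0)
C1 = (X[1:].T @ X[:-1]) / (T - 1)   # lagged correlation: C1[i, j] ~ corr(x_i(t+1), x_j(t))

# Crude directed edge score correcting for reverse causation:
# the difference of lagged correlations in the two directions.
score = C1 - C1.T
print(score.round(2))
```

For this chain the true edges (0→1 and 1→2) come out with positive scores and their reverses negative, which is the kind of cheap, correlation-matrix-level inference the abstract contrasts with Granger causality and transfer entropy.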

Seminar · Neuroscience · Recording

Building System Models of Brain-Like Visual Intelligence with Brain-Score

Martin Schrimpf
MIT
Oct 4, 2022

Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.

Seminar · Neuroscience

Multi-modal biomarkers improve prediction of memory function in cognitively unimpaired older adults

Alexandra N. Trelle
Stanford
Mar 21, 2022

Identifying biomarkers that predict current and future cognition may improve estimates of Alzheimer’s disease risk among cognitively unimpaired older adults (CU). In vivo measures of amyloid and tau protein burden and task-based functional MRI measures of core memory mechanisms, such as the strength of cortical reinstatement during remembering, have each been linked to individual differences in memory in CU. This study assesses whether combining CSF biomarkers with fMRI indices of cortical reinstatement improves estimation of memory function in CU, assayed using three unique tests of hippocampal-dependent memory. Participants were 158 CU (90F, aged 60-88 years, CDR=0) enrolled in the Stanford Aging and Memory Study (SAMS). Cortical reinstatement was quantified using multivoxel pattern analysis of fMRI data collected during completion of a paired associate cued recall task. Memory was assayed by associative cued recall, a delayed recall composite, and a mnemonic discrimination task that involved discrimination between studied ‘target’ objects, novel ‘foil’ objects, and perceptually similar ‘lure’ objects. CSF Aβ42, Aβ40, and p-tau181 were measured with the automated Lumipulse G system (N=115). Regression analyses examined cross-sectional relationships between memory performance in each task and a) the strength of cortical reinstatement in the Default Network (comprised of posterior medial, medial frontal, and lateral parietal regions) during associative cued recall and b) CSF Aβ42/Aβ40 and p-tau181, controlling for age, sex, and education. For mnemonic discrimination, linear mixed effects models were used to examine the relationship between discrimination (d’) and each predictor as a function of target-lure similarity. Stronger cortical reinstatement was associated with better performance across all three memory assays. 
Age and higher CSF p-tau181 were each associated with poorer associative memory and a diminished improvement in mnemonic discrimination as target-lure similarity decreased. When combined in a single model, CSF p-tau181 and Default Network reinstatement strength, but not age, explained unique variance in associative memory and mnemonic discrimination performance, outperforming the single-modality models. Combining fMRI measures of core memory functions with protein biomarkers of Alzheimer’s disease significantly improved prediction of individual differences in memory performance in CU. Leveraging multimodal biomarkers may enhance future prediction of risk for cognitive decline.

Seminar · Neuroscience · Recording

Invariant neural subspaces maintained by feedback modulation

Henning Sprekeler
TU Berlin
Feb 17, 2022

Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models accredit this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. 
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Nov 30, 2021

To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD) – a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance – up to a 0.86 correlation with test data and an R² of 0.75 – in higher-level visual cortex in humans. Moreover, CLIP representations explain distinctly unique variance in these higher-level visual areas as compared to models trained with only images or text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text, may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Untangling Contributions of Distinct Features of Images to Object Processing in Inferotemporal Cortex

Hanxiao Lu
Yale University
Nov 30, 2021

How do humans perceive everyday objects of various features and categorize these seemingly intuitive and effortless mental representations? Prior literature focusing on the role of the inferotemporal region (IT) has revealed object category clustering that is consistent with the predefined semantic structure (superordinate, ordinate, subordinate). It has, however, been debated whether the neural signals in the IT region are a reflection of such a categorical hierarchy [Wen et al., 2018; Bracci et al., 2017]. Visual attributes of images that correlate with semantic and category dimensions may have confounded these prior results. Our study aimed to address this debate by building and comparing models using the DNN AlexNet to explain the variance in the representational dissimilarity matrix (RDM) of neural signals in the IT region. We found that mid- and high-level perceptual attributes of the DNN model contribute the most to neural RDMs in the IT region. Semantic categories, as in the predefined structure, were moderately correlated with mid to high DNN layers (r = 0.24–0.36). Variance partitioning analysis also showed that the IT neural representations were mostly explained by DNN layers, while semantic categorical RDMs brought little additional information. In light of these results, we propose that future work should focus more on the specific role IT plays in facilitating the extraction and coding of visual features that lead to the emergence of categorical conceptualizations.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Nick Sexton (he/him)
University College London
Nov 30, 2021

A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream with layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: If a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates) we found that in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held-out from training data. 
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.

Seminar · Neuroscience · Recording

Suboptimal human inference inverts the bias-variance trade-off for decisions with asymmetric evidence

Tahra Eissa
University of Colorado Boulder
Nov 30, 2021

Solutions to challenging inference problems are often subject to a fundamental trade-off between bias (being systematically wrong) that is minimized with complex inference strategies and variance (being oversensitive to uncertain observations) that is minimized with simple inference strategies. However, this trade-off is based on the assumption that the strategies being considered are optimal for their given complexity and thus has unclear relevance to the frequently suboptimal inference strategies used by humans. We examined inference problems involving rare, asymmetrically available evidence, which a large population of human subjects solved using a diverse set of strategies that were suboptimal relative to the Bayesian ideal observer. These suboptimal strategies reflected an inversion of the classic bias-variance trade-off: subjects who used more complex, but imperfect, Bayesian-like strategies tended to have lower variance but high bias because of incorrect tuning to latent task features, whereas subjects who used simpler heuristic strategies tended to have higher variance because they operated more directly on the observed samples but displayed weaker, near-normative bias. Our results yield new insights into the principles that govern individual differences in behavior that depends on rare-event inference, and, more generally, about the information-processing trade-offs that are sensitive to not just the complexity, but also the optimality of the inference process.
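
The classic trade-off that serves as the baseline here can be illustrated with two estimators of a Gaussian mean: the sample mean (unbiased but higher variance) versus a shrinkage estimator (biased but lower variance). A textbook sketch with arbitrary numbers, not the study's rare-event task:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n_trials, n_obs = 2.0, 10_000, 5

# Repeatedly estimate the mean from only 5 observations per trial.
samples = rng.normal(mu, 1.0, size=(n_trials, n_obs))
simple = samples.mean(axis=1)   # sample mean: unbiased, variance ~ 1/5
shrunk = 0.5 * simple           # shrinkage toward 0: biased, 4x lower variance

for name, est in (("sample mean", simple), ("shrinkage", shrunk)):
    print(f"{name}: bias={est.mean() - mu:+.3f}  variance={est.var():.3f}")
```

The abstract's point is that this tidy picture presumes each strategy is optimal for its complexity; the human subjects' imperfect strategies paired complexity with *high* bias and simplicity with high variance, inverting it.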

Seminar · Neuroscience

The bounded rationality of probability distortion

Laurence T Maloney
NYU
Nov 9, 2021

In decision-making under risk (DMR) participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks and an evident question is, why do we systematically fail in our use of probability and relative frequency information? We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale, (2) boundedness: the range of representations of probability and relative frequency are bounded and the bounds change dynamically with task, and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved to be superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
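
A linear-in-log-odds distortion in the spirit of the BLO assumptions can be sketched as follows; the slope, anchor, and bound values are illustrative placeholders, not fitted parameters from the paper:

```python
import numpy as np

def blo_distort(p, slope=0.6, p0=0.5, bound=3.0):
    """Map probability to log-odds, apply a linear transform anchored at p0,
    clip to a bounded internal range, and map back to probability.
    (Illustrative parameter values; not fits from the BLO paper.)"""
    lo = np.log(p / (1 - p))                 # assumption 1: log-odds scale
    lo0 = np.log(p0 / (1 - p0))
    internal = np.clip(lo0 + slope * (lo - lo0),
                       -bound, bound)        # assumption 2: bounded range
    return 1 / (1 + np.exp(-internal))

probs = np.array([0.01, 0.1, 0.5, 0.9, 0.99])
print(blo_distort(probs))
```

With a compressive slope the mapping overweights small probabilities and underweights large ones, reproducing the familiar inverted-S distortion pattern from the DMR literature.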

Seminar · Neuroscience

Low Dimensional Manifolds for Neural Dynamics

Sara A. Solla
Northwestern University
Jun 8, 2021

The ability to simultaneously record the activity from tens to thousands to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allows for the prediction of various behavioral features, using models whose parameters remain fixed throughout long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
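
The idea of neural modes — dominant covariation patterns spanning a low-dimensional manifold that captures most of the recorded variance — can be illustrated with PCA on synthetic population activity (population size, number of modes, and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_time, n_modes = 100, 2000, 3

# Population activity built from 3 latent modes plus private noise.
latents = rng.standard_normal((n_time, n_modes))     # latent dynamics
modes = rng.standard_normal((n_modes, n_neurons))    # neural modes
activity = latents @ modes + 0.3 * rng.standard_normal((n_time, n_neurons))

# PCA via the covariance spectrum: the dominant covariation patterns.
activity -= activity.mean(axis=0)
cov = activity.T @ activity / (n_time - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]              # descending
explained = eigvals[:n_modes].sum() / eigvals.sum()
print(f"variance captured by top {n_modes} modes: {explained:.1%}")
```

The top few eigenvectors recover the low-dimensional manifold, and their time-dependent projections are the latent dynamics the abstract refers to.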

Seminar · Neuroscience

Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners

Ivan Simpson-Kent
University of Cambridge, MRC CBU
Jun 1, 2021

Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior. We discuss implications and possible avenues for future studies.

Seminar · Neuroscience

Global AND Scale-Free? Spontaneous cortical dynamics between functional networks and cortico-hippocampal communication

Federico Stella
Battaglia lab, Donders Institute
Jan 26, 2021

Recent advancements in anatomical and functional imaging emphasize the presence of whole-brain networks organized according to functional and connectivity gradients, but how such structure shapes activity propagation and memory processes still lacks a satisfactory model. We analyse the fine-grained spatiotemporal dynamics of spontaneous activity in the entire dorsal cortex through simultaneous recordings of wide-field voltage-sensitive dye transients (VS), cortical ECoG, and hippocampal LFP in anesthetized mice. Both VS and ECoG show cortical avalanches. When measuring avalanches from the VS signal, we find a major deviation of the size scaling from the power-law distribution predicted by the criticality hypothesis and well approximated by the results from the ECoG. Breaking from scale-invariance, avalanches can thus be grouped into two regimes. Small avalanches consist of a limited number of co-activation modes involving a subset of cortical networks (related to the Default Mode Network), while larger avalanches involve a substantial portion of the cortical surface and can be clustered into two families: one immediately preceded by Retrosplenial Cortex activation and mostly involving medial-posterior networks, the other initiated by Somatosensory Cortex and extending preferentially along the lateral-anterior region. Rather than only differing in terms of size, these two sets of events appear to be associated with markedly different brain-wide dynamical states: they are accompanied by a shift in the hippocampal LFP, from the ripple band (smaller avalanches) to the gamma band (larger avalanches), and correspond to opposite directionality in the cortex-to-hippocampus causal relationship. These results provide a concrete description of global cortical dynamics, and show how the cortex in its entirety is involved in bi-directional communication with the hippocampus even in sleep-like states.

Seminar · Neuroscience · Recording

The emergence of contrast invariance in cortical circuits

Tatjana Tchumatchenko
Max Planck Institute for Brain Research
Nov 12, 2020

Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations that are orthogonal to the preferred. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast-invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balance limit, the properties of single-neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways – linearly, sublinearly, supralinearly – depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity.
Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in the excitatory-to-excitatory and in the excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release the neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
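
The talk's provable plasticity rule is not reproduced here, but the setting — estimating pairwise covariances from noise-driven activity and scoring each synapse locally from its weight and that covariance — can be sketched as follows. The pruning score below is hypothetical and chosen only to show the shape of such a rule:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20

# A sparse random recurrent weight matrix (no self-connections).
W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.3) / np.sqrt(n)
np.fill_diagonal(W, 0)

# Drive the linear network x_{t+1} = 0.5 * W x_t + noise (leak keeps it
# stable) and estimate pairwise covariances from activity alone.
T = 5000
x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = 0.5 * (W @ x) + rng.standard_normal(n)
    X[t] = x
C = np.cov(X.T)

# Hypothetical local score for synapse i <- j: weight magnitude times the
# noise-driven covariance of its two endpoint neurons. Flag the weakest
# fifth of existing synapses as pruning candidates.
exists = np.abs(W) > 0
score = np.abs(W) * np.abs(C)
candidates = exists & (score < np.quantile(score[exists], 0.2))
print(f"{candidates.sum()} of {exists.sum()} synapses flagged for pruning")
```

The appeal of this family of rules, as the abstract notes, is locality: each synapse needs only its own weight and the observable co-fluctuation of its two neighboring neurons, with the network's noise doing the work of probing global structure.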

ePoster

Effective excitability: a determinant of the network bursting dynamics revealed by parameter invariance

Oleg Vinogradov, Emmanouil Giannakakis, Betül Uysal, Shlomo Ron, Eyal Weinreb, Holger Lerche, Elisha Moses, Anna Levina

Bernstein Conference 2024

ePoster

Deep inverse modeling reveals dynamic-dependent invariances in neural circuit mechanisms

Richard Gao, Michael Deistler, Auguste Schulz, Pedro Gonçalves, Jakob Macke

Bernstein Conference 2024

ePoster

Approximate gradient descent and the brain: the role of bias and variance

COSYNE 2022

ePoster

Inception loops reveal novel spatially-localized phase invariance in mouse primary visual cortex

COSYNE 2022

ePoster

A synaptic plasticity rule based on presynaptic variance to infer input reliability

COSYNE 2022

ePoster

A two-way luminance gain control in the fly brain ensures luminance invariance in dynamic vision

COSYNE 2022

ePoster

The scale-invariant covariance spectrum of brain-wide activity in larval zebrafish

Zezhen Wang, Weihao Mai, Yuming Chai, Chen Shen, Kexin Qi, Yu Hu, Quan Wen

COSYNE 2023

ePoster

Variance-limited scaling laws for plausible learning rules

Alexander Atanasov, Blake Bordelon, Cengiz Pehlevan

COSYNE 2023

ePoster

Changes in tuning curves, not neural population covariance, improve category separability in the primate ventral visual pathway

Jenelle Feather, Long Sha, Gouki Okazawa, Nga Yu Lo, SueYeon Chung, Roozbeh Kiani

COSYNE 2025

ePoster

Covariance spectrum in nonlinear recurrent neural networks and transition to chaos

Xuanyu Shen, Yu Hu

COSYNE 2025

ePoster

Deep inverse modeling reveals dynamic-dependent invariances in neural circuit mechanisms

Richard Gao, Michael Deistler, Auguste Schulz, Pedro Goncalves, Jakob Macke

COSYNE 2025

ePoster

Clustering visual sensory neurons based on their invariances

Mohammad Bashiri, Luca Baroni, Saumil Patel, Andreas S. Tolias, Ján Antolík, Fabian Sinz

FENS Forum 2024

ePoster

Reduction of inter-individual variance in functional magnetic resonance imaging improves the prediction of individual pain ratings

Ole Goltermann, Christian Büchel

FENS Forum 2024

ePoster

Structural covariance & graph-learning for the individualized classification of schizophrenia patients

Clara Vetter

Neuromatch 5