
Theoretical Work

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with theoretical work across World Wide.
18 curated items · 18 Seminars
Seminar · Neuroscience · Recording

Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness

Sharon Gilad-Gutnick
MIT
Jun 19, 2023

Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer's facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one's own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings for supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly once vision enters their lives.

Seminar · Neuroscience

Relations and Predictions in Brains and Machines

Kim Stachenfeld
DeepMind
Apr 6, 2023

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
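The predictive-representation account described here can be illustrated with a successor-representation (SR) sketch; the ring world, discount factor, and goal location below are illustrative assumptions, not details from the talk:

```python
import numpy as np

# Successor representation (SR) on a ring of states: M = (I - gamma*T)^-1
# encodes discounted expected future occupancy. Changing the goal only
# re-weights M, which is how predictive representations support rapid
# adaptation; eigenvectors of T give the smooth "spectral" compression
# associated with entorhinal codes.
n, gamma = 20, 0.9
T = np.zeros((n, n))
for s in range(n):                       # unbiased random walk on the ring
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5
M = np.linalg.inv(np.eye(n) - gamma * T)

reward = np.zeros(n)
reward[7] = 1.0                          # hypothetical goal at state 7
value = M @ reward                       # new values reuse the same SR

evals, evecs = np.linalg.eigh(T)         # Fourier-like, grid-reminiscent modes
```

Moving the goal only changes `reward`; `M` is reused unchanged, which is the sense in which a predictive representation enables rapid adaptation to new goals.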

Seminar · Neuroscience · Recording

Dynamics of cortical circuits: underlying mechanisms and computational implications

Alessandro Sanzeni
Bocconi University, Milan
Jan 24, 2023

A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical work has shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
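As a toy illustration of the classical current-based scaling the abstract starts from (parameters arbitrary, not the talk's models), one can check numerically that with efficacies of order 1/√K the net input fluctuations stay O(1) as K grows:

```python
import numpy as np

# With K excitatory and K inhibitory inputs of efficacy J = 1/sqrt(K), the
# mean excitatory and inhibitory drives each grow as sqrt(K) but cancel,
# leaving input fluctuations of order 1 -- the dynamical origin of irregular
# firing in strongly coupled, current-based balanced networks.
rng = np.random.default_rng(0)
rate = 0.1                                   # spiking probability per time bin
for K in (100, 400, 1600):
    J = 1.0 / np.sqrt(K)
    exc = J * rng.binomial(1, rate, size=(10000, K)).sum(axis=1)
    inh = J * rng.binomial(1, rate, size=(10000, K)).sum(axis=1)
    net = exc - inh                          # balanced net input per bin
    print(K, round(net.mean(), 3), round(net.std(), 3))
```

Each individual drive grows like √K, but the printed net input has near-zero mean and a standard deviation near √(2·r(1−r)) ≈ 0.42 for every K.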

Seminar · Neuroscience

Setting network states via the dynamics of action potential generation

Susanne Schreiber
Humboldt University Berlin, Germany
Oct 4, 2022

To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the often-overlooked homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide initial experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.

Seminar · Neuroscience · Recording

Time as a continuous dimension in natural and artificial networks

Marc Howard
Boston University
May 3, 2022

Neural representations of time are central to our understanding of the world around us. I review cognitive, neurophysiological and theoretical work that converges on three simple ideas. First, the time of past events is remembered via populations of neurons with a continuum of functional time constants. Second, these time constants evenly tile the log time axis. This results in a neural Weber-Fechner scale for time which can support behavioral Weber-Fechner laws and characteristic behavioral effects in memory experiments. Third, these populations appear as dual pairs: one type contains cells whose firing rates change monotonically over time, while the other has circumscribed temporal receptive fields. These ideas can be used to build artificial neural networks that have novel properties. Of particular interest, a convolutional neural network built using these principles can generalize to arbitrary rescaling of its inputs. That is, after learning to perform a classification task on a time series presented at one speed, it successfully classifies stimuli presented slowed down or sped up. This result illustrates the point that this confluence of ideas originating in cognitive psychology and measured in the mammalian brain could have wide-reaching impacts on AI research.
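A minimal sketch of the log-tiled timing code (units and the number of time constants are illustrative): because exp(−a·t/τ) = exp(−t/(τ/a)), rescaling elapsed time merely translates the activity pattern along the log-τ axis without changing its shape.

```python
import numpy as np

# A bank of leaky integrators whose time constants evenly tile log time.
# Replaying an event a times slower shifts the population activity along the
# log-tau axis instead of distorting it -- the scale invariance that lets
# such networks generalize across stimulus speeds.
taus = np.logspace(0, 3, 50)            # 50 time constants, evenly log-spaced

def response(t_since_event, taus):
    """Unit activities t seconds after a transient input."""
    return np.exp(-t_since_event / taus)

r_fast = response(10.0, taus)           # event 10 s in the past
r_slow = response(20.0, taus)           # same event replayed 2x slower
```

The slowed response `r_slow` is exactly the fast response read out at half the time constants, i.e. a pure shift of log(2) along the log-τ axis.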

Seminar · Neuroscience · Recording

Does human perception rely on probabilistic message passing?

Alex Hyafil
CRM, Barcelona
Dec 21, 2021

The idea that perception in humans relies on some form of probabilistic computations has become very popular over the past few decades. However, it has been extremely difficult to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical studies suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some form of probabilistic message passing (i.e. neural sampling, probabilistic population coding, etc.) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that confidence reports in a hierarchical task reveal that uncertainty is represented both at lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples, in agreement with the message passing hypothesis and against standard accounts such as accumulation of sensory evidence or predictive coding.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. 
This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.
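The linearizing effect of heterogeneity invoked above can be sketched with threshold units (thresholds and their spread are illustrative, not the study's biophysical models):

```python
import numpy as np

# A sharp single-cell threshold nonlinearity, averaged over a population with
# diverse thresholds, yields a graded (more linear) population input-output
# function; identical thresholds instead give an abrupt, step-like transition
# of the kind associated with sudden synchronous hyperactivity.
inputs = np.linspace(-3, 3, 201)

def pop_response(inputs, thresholds):
    # fraction of cells firing (above threshold) at each input level
    return (inputs[:, None] > thresholds[None, :]).mean(axis=1)

homog = pop_response(inputs, np.zeros(500))        # zero heterogeneity: a step
rng = np.random.default_rng(1)
heterog = pop_response(inputs, rng.normal(0.0, 1.0, 500))  # diverse thresholds
```

The heterogeneous curve rises smoothly (approximating a Gaussian CDF), while the homogeneous one jumps from silence to full activation at a single input level.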

Seminar · Neuroscience

Homeostatic structural plasticity of neuronal connectivity triggered by optogenetic stimulation

Han Lu
Vlachos lab, University of Freiburg, Germany
Nov 24, 2021

Ever since Bliss and Lømo discovered the phenomenon of long-term potentiation (LTP) in rabbit dentate gyrus in the 1960s, Hebb’s rule—neurons that fire together wire together—gained popularity to explain learning and memory. Accumulating evidence, however, suggests that neural activity is homeostatically regulated. Homeostatic mechanisms are mostly interpreted to stabilize network dynamics. However, recent theoretical work has shown that linking the activity of a neuron to its connectivity within the network provides a robust alternative implementation of Hebb’s rule, although entirely based on negative feedback. In this setting, both natural and artificial stimulation of neurons can robustly trigger network rewiring. We used computational models of plastic networks to simulate the complex temporal dynamics of network rewiring in response to external stimuli. In parallel, we performed optogenetic stimulation experiments in the mouse anterior cingulate cortex (ACC) and subsequently analyzed the temporal profile of morphological changes in the stimulated tissue. Our results suggest that the new theoretical framework combining neural activity homeostasis and structural plasticity provides a consistent explanation of our experimental observations.
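The negative-feedback idea can be sketched as follows (set point, rates, and gains are hypothetical, not the study's calibrated model): each neuron grows synaptic elements when its activity is below a set point and retracts them when it is above, so stimulation alone reshapes connectivity.

```python
import numpy as np

# Homeostatic structural plasticity as pure negative feedback: each neuron's
# synaptic-element count changes in proportion to the deviation of its
# activity (a slow calcium-like trace) from a target set point.
rng = np.random.default_rng(0)
n, target, eta = 50, 0.5, 0.05
activity = rng.uniform(0.0, 1.0, n)      # stand-in for a slow calcium trace
init = activity.copy()
elements = np.full(n, 10.0)              # free synaptic elements per neuron

for _ in range(200):
    elements += eta * (target - activity)    # grow when quiet, retract when busy
    elements = np.clip(elements, 0.0, None)  # element counts cannot go negative
    activity += 0.01 * (target - activity)   # activity relaxes toward set point
# Strongly stimulated (high-activity) neurons end with fewer elements; in the
# full model, new synapses form by randomly pairing free elements (not shown).
```

Despite being entirely negative feedback, the rule links a neuron's activity history to its connectivity, which is what allows optogenetic stimulation to trigger rewiring.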

Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia
Nov 18, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Neuroscience · Recording

A theory for Hebbian learning in recurrent E-I networks

Samuel Eckmann
Gjorgjieva lab, Max Planck Institute for Brain Research, Frankfurt, Germany
May 19, 2021

The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. Although the connectivity is learned purely from the sensory input statistics, the circuit performs meaningful computations. Our work provides a mathematical framework for plasticity in recurrent networks, a setting previously studied only numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
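A single-neuron sketch of the plasticity-rule class used here, Hebbian growth with multiplicative normalization (the input statistics and learning rate are illustrative, not the full E-I network):

```python
import numpy as np

# Hebbian plasticity with multiplicative normalization: weights grow in
# proportion to pre x post activity, then are rescaled so their sum is fixed.
# Weight mass migrates onto the correlated inputs -- a receptive field forms
# without any weight diverging.
rng = np.random.default_rng(0)
n_in, eta = 10, 0.01
w = np.full(n_in, 1.0 / n_in)
corr_mask = np.zeros(n_in)
corr_mask[:3] = 1.0                       # inputs 0-2 share a common signal

for _ in range(2000):
    shared = rng.normal()
    x = np.abs(corr_mask * shared + 0.1 * rng.normal(size=n_in))  # rates >= 0
    y = w @ x                             # linear postsynaptic response
    w += eta * y * x                      # Hebbian growth (pre x post)
    w /= w.sum()                          # multiplicative normalization
# After learning, most of the weight sits on the three correlated inputs.
```

The normalization step is what makes the competition between synapses explicit: growth on one input necessarily shrinks the others, so the correlated group wins.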

Seminar · Neuroscience · Recording

Design principles of adaptable neural codes

Ann Hermundstad
Janelia Research Campus
May 4, 2021

Behavior relies on the ability of sensory systems to infer changing properties of the environment from incoming sensory stimuli. However, the demands that detecting and adjusting to changes in the environment place on a sensory system often differ from the demands associated with performing a specific behavioral task. This necessitates neural coding strategies that can dynamically balance these conflicting needs. I will discuss our ongoing theoretical work to understand how this balance can best be achieved. We connect ideas from efficient coding and Bayesian inference to ask how sensory systems should dynamically allocate limited resources when the goal is to optimally infer changing latent states of the environment, rather than reconstruct incoming stimuli. We use these ideas to explore dynamic tradeoffs between the efficiency and speed of sensory adaptation schemes, and the downstream computations that these schemes might support. Finally, we derive families of codes that balance these competing objectives, and we demonstrate their close match to experimentally-observed neural dynamics during sensory adaptation. These results provide a unifying perspective on adaptive neural dynamics across a range of sensory systems, environments, and sensory tasks.

Seminar · Physics of Life · Recording

Mixed active-passive suspensions: from particle entrainment to spontaneous demixing

Marco Polin
University of Warwick
Feb 16, 2021

Understanding the properties of active matter is a challenge which is currently driving rapid growth in soft-matter and biological physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility to extract net work from active suspensions. Here I will present our work on a paradigmatic example of a mixed active-passive system, where the activity is provided by swimming microalgae. Macroscopic and microscopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. The entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in the swimmers' distribution and orientation has a dramatic effect on the spatial distribution of passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container. We show that this can be used to induce the system to de-mix spontaneously.
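The jump-diffusion picture can be sketched in one dimension (all parameters hypothetical): Brownian motion plus rare, Poisson-distributed entrainment jumps of length L gives diffusive long-time transport with an enhanced effective diffusivity D_eff = D + λL²/2.

```python
import numpy as np

# Colloid transport in an active bath as a jump-diffusion process: thermal
# diffusion plus Poisson-distributed entrainment events of fixed length L.
# The long-time dynamics remain diffusive, with D_eff = D + lam * L**2 / 2
# in one dimension.
rng = np.random.default_rng(0)
D, lam, L = 0.01, 0.1, 1.0               # diffusivity, jump rate, jump length
dt, n_steps, n_part = 0.01, 10000, 500

x = np.zeros(n_part)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.normal(size=n_part)   # thermal diffusion
    n_jumps = rng.poisson(lam * dt, size=n_part)         # rare close encounters
    x += L * rng.choice([-1.0, 1.0], size=n_part) * n_jumps  # entrainment jumps

t = n_steps * dt
D_measured = (x ** 2).mean() / (2 * t)   # diffusive MSD estimate
D_expected = D + lam * L ** 2 / 2
```

With these (made-up) parameters the rare jumps dominate: D_expected = 0.06, six times the bare thermal diffusivity, and the simulated mean-squared displacement matches it.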

Seminar · Neuroscience · Recording

Linking neural representations of space by multiple attractor networks in the entorhinal cortex and the hippocampus

Yoram Burak
Hebrew University
Dec 8, 2020

In the past decade evidence has accumulated in favor of the hypothesis that multiple sub-networks in the medial entorhinal cortex (MEC) are characterized by low-dimensional, continuous attractor dynamics. Much has been learned about the joint activity of grid cells within a module (a module consists of grid cells that share a common grid spacing), but little is known about the interactions between modules. Under typical conditions of spatial exploration in which sensory cues are abundant, all grid cells in the MEC represent the animal's position in space and their joint activity lies on a two-dimensional manifold. However, if the grid cells in a single module mechanistically constitute independent attractor networks, then under conditions in which salient sensory cues are absent, errors could accumulate in the different modules in an uncoordinated manner. Such uncoordinated errors would give rise to catastrophic readout errors when attempting to decode position from the joint grid-cell activity. I will discuss recent theoretical work from our group, in which we explored different mechanisms that could impose coordination across the different modules. One of these mechanisms involves coordination with the hippocampus and must be set up such that it operates across multiple spatial maps that represent different environments. The other mechanism is internal to the entorhinal cortex and independent of the hippocampus.
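The catastrophic-readout argument can be sketched with two modules in one dimension (the spacings and drift sizes are hypothetical): coordinated drift shifts the decoded position slightly, while the same amount of drift in a single module can throw the decoder far from the true position.

```python
import numpy as np

# Position is encoded by the phase of each grid module. A maximum-likelihood
# readout picks the position whose predicted phases best match the observed
# ones (circular error). Uncoordinated phase drift breaks the consistency
# between modules, and the best-matching position can jump far away.
spacings = np.array([3.0, 4.0])                  # two module periods

def decode(phases, positions=np.linspace(0.0, 12.0, 1201)):
    delta = 2 * np.pi * (positions[:, None] / spacings - phases)
    err = np.angle(np.exp(1j * delta))           # wrap phase error to (-pi, pi]
    return positions[(err ** 2).sum(axis=1).argmin()]

true_x = 5.0
phases = (true_x / spacings) % 1.0

x_coord = decode((phases + 0.05 / spacings) % 1.0)   # coordinated drift
uncoord = phases.copy()
uncoord[0] = (uncoord[0] + 0.4) % 1.0                # one module drifts alone
x_uncoord = decode(uncoord)                          # decodes far from true_x
```

Coordinated drift moves the estimate by the drift itself, while a 0.4-cycle drift in the first module alone sends the decoded position several length units away, which is the readout catastrophe the coordination mechanisms are meant to prevent.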

Seminar · Neuroscience

Theory of gating in recurrent neural networks

Kamesh Krishnamurthy
Princeton University
Sep 15, 2020

Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data, and also in neuroscience, to understand the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating, i.e. multiplicative, interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors – important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to provide a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, where the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, thus providing a map for principled parameter choices for ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
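A sketch of the kind of gated RNN analyzed here (a common update-gate/output-gate parameterization, assumed rather than the talk's exact equations): the update gate sets a state-dependent integration timescale, and the output gate multiplies the recurrent drive.

```python
import numpy as np

# Gated RNN dynamics: z is an update gate (dynamic timescale per neuron) and
# r is an output gate (gain modulation of the recurrent input). Both are
# multiplicative interactions absent from classical additive RNNs.
rng = np.random.default_rng(0)
N, g, dt = 200, 4.0, 0.1                       # gain g above the chaos threshold
J = rng.normal(0.0, g / np.sqrt(N), (N, N))    # recurrent couplings
Jz = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) # update-gate couplings
Jr = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N)) # output-gate couplings

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

h = rng.normal(0.0, 1.0, N)
for _ in range(2000):
    phi = np.tanh(h)
    z = sigmoid(Jz @ phi)                      # update gate in (0, 1)
    r = sigmoid(Jr @ phi)                      # output gate in (0, 1)
    h = h + dt * z * (-h + J @ (r * phi))      # gated leaky dynamics
```

When z saturates near 0 the dynamics freeze (long effective timescales), which is the mechanism behind the marginal stability and line attractors the DMFT characterizes.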