
Network Models

Discover seminars, jobs, and research tagged with network models across World Wide.
65 curated items: 47 seminars, 16 ePosters, 2 positions
Updated 2 days ago
Position

Maxime Carrière

Freie Universität Berlin
Berlin, Germany
Dec 5, 2025

The ERC Advanced Grant “Material Constraints Enabling Human Cognition (MatCo)” at the Freie Universität Berlin aims to build network models of the human brain that mimic neurocognitive processes involved in language, communication and cognition. A main strategy is to use neural network models constrained by neuroanatomical and neurophysiological features of the human brain in order to explain aspects of human cognition. To this end, neural network simulations are performed and evaluated in neurophysiological and neurometabolic experiments. This neurocomputational and experimental research targets novel explanations of human language and cognition on the basis of neurobiological principles. In the MatCo project, 3 positions are currently available:

1 full-time position for a Scientific Researcher at the postdoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo100_08-2022

2 part-time positions (65%) for Scientific Researchers at the predoctoral level, fixed-term (until 30.9.2025), Salary Scale 13 TV-L FU, ID: WiMi_MatCo65_08-2022

Position · Computational Neuroscience

Dr Margarita Zachariou

The Cyprus Institute of Neurology and Genetics
Nicosia, Cyprus
Dec 5, 2025

We are looking for a Post-Doctoral Fellow and/or a Laboratory Scientific Officer (research assistant) to join the Bioinformatics Department of the Cyprus Institute of Neurology and Genetics. The team focuses on computational neuroscience, particularly on (1) building biophysical models of neurons and neuronal networks to study neurological diseases and (2) developing state-of-the-art analysis pipelines for neural data across scales, focusing on disease-specific patterns and integrating diverse data modalities. The successful candidate(s) will work on multiscale models of magnetoelectric and ultrasonic effects on neuronal dynamics as part of the EU Horizon-funded META-BRAIN project (https://meta-brain.eu).

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 28, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
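
One geometric quantity from this framework is easy to make concrete: the effective dimensionality of a set of population responses, estimated as the participation ratio of the covariance eigenvalues. The numpy sketch below runs on synthetic data with invented sizes; it is an illustration of the idea, not code from either talk.

```python
import numpy as np

def participation_ratio(responses):
    """Effective dimensionality of a response manifold:
    PR = (sum_i l_i)^2 / sum_i l_i^2, with l_i the eigenvalues of the
    covariance of population responses (rows = samples, cols = neurons)."""
    eig = np.linalg.eigvalsh(np.cov(responses.T))
    eig = np.clip(eig, 0.0, None)            # guard against tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

# Illustrative data: 200 neurons, 500 views of one object ("nuisance"
# variation), responses confined mostly to a 5-dimensional subspace.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 200))
responses = latent @ mixing + 0.1 * rng.normal(size=(500, 200))
print(f"participation ratio ~ {participation_ratio(responses):.1f}")  # ~5
```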

Seminar · Neuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 12, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.

Seminar · Neuroscience

Roles of inhibition in stabilizing and shaping the response of cortical networks

Nicolas Brunel
Duke University
Apr 4, 2024

Inhibition has long been thought to stabilize the activity of cortical networks at low rates, and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons of the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
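
The paradoxical effect in point (1) can be reproduced in a two-population threshold-linear rate model: when recurrent excitation is strong enough that the E subnetwork is unstable alone, extra drive onto inhibitory cells lowers their steady-state rate. The weights below are generic textbook-style choices, not the values fitted in the talk.

```python
def steady_state(g_I, w_EE=2.0, w_EI=2.5, w_IE=3.0, w_II=2.0,
                 g_E=2.0, tau_E=1.0, tau_I=0.5, T=200.0, dt=0.01):
    """Euler-integrate a threshold-linear E-I rate model to steady state."""
    E = I = 0.0
    relu = lambda v: max(v, 0.0)
    for _ in range(int(T / dt)):
        dE = (-E + relu(w_EE * E - w_EI * I + g_E)) / tau_E
        dI = (-I + relu(w_IE * E - w_II * I + g_I)) / tau_I
        E, I = E + dt * dE, I + dt * dI
    return E, I

for g_I in (1.0, 1.5):                 # extra drive onto the I population
    E, I = steady_state(g_I)
    print(f"g_I={g_I:.1f}: E={E:.3f}, I={I:.3f}")
# With w_EE > 1 the excitatory subnetwork is unstable on its own, so the
# circuit is inhibition-stabilized, and extra drive onto I *lowers* the
# steady-state inhibitory rate (from ~1.11 to 1.00 here): the paradoxical effect.
```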

Seminar · Neuroscience · Recording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 7, 2023

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.

Seminar · Neuroscience · Recording

Dynamics of cortical circuits: underlying mechanisms and computational implications

Alessandro Sanzeni
Bocconi University, Milano
Jan 24, 2023

A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical works have shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
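
For orientation, the current-based baseline the abstract starts from can be sketched with a Brunel-style leaky integrate-and-fire network in which efficacy scales as 1/√K; at strong coupling it settles into irregular, asynchronous firing. All parameters below are illustrative defaults, not those used in this work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Current-based LIF network in the balanced regime (Brunel-style sketch)
N_E, N_I, K = 1000, 250, 100           # cells; excitatory in-degree
tau, theta, V_r = 20.0, 20.0, 10.0     # membrane tau (ms), threshold/reset (mV)
J, g = 1.0 / np.sqrt(K), 5.0           # efficacy ~ 1/sqrt(K); rel. inhibition
dt, T = 0.1, 500.0                     # time step and duration (ms)
nu_ext = 2.0 * theta / (J * K * tau)   # external rate per synapse (spikes/ms)

N = N_E + N_I
W = np.zeros((N, N))                   # K exc. and K/4 inh. inputs per neuron
for i in range(N):
    W[i, rng.choice(N_E, K, replace=False)] = J
    W[i, rng.choice(np.arange(N_E, N), K // 4, replace=False)] = -g * J

V = rng.uniform(0.0, theta, N)
spikes, prev = [], np.zeros(N, bool)
for step in range(int(T / dt)):
    ext = rng.poisson(K * nu_ext * dt, N) * J       # external Poisson drive
    V += dt * (-V / tau) + W @ prev + ext           # one-step synaptic delay
    prev = V >= theta
    V[prev] = V_r
    spikes += [(step * dt, i) for i in np.flatnonzero(prev)]

isis = {}
for t, i in spikes:
    isis.setdefault(i, []).append(t)
cvs = [np.std(np.diff(s)) / np.mean(np.diff(s)) for s in isis.values() if len(s) > 5]
rate = len(spikes) / N / (T * 1e-3)
print(f"mean rate ~ {rate:.0f} Hz, mean CV(ISI) ~ {np.mean(cvs):.2f}")
```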

Seminar · Neuroscience · Recording

Bridging the gap between artificial models and cortical circuits

C. B. Currin
IST Austria
Nov 9, 2022

Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach; by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct excitatory and inhibitory neuron populations. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that allow access to individual neurons and the broader population dynamics. We compare these dynamics to two-population artificial networks of single-compartment neurons with random sparse connections and show that they produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to use the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare neural dynamics of disease and control. We found that cultures from epilepsy patients, for example, developed increasingly more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care. This work starts to bridge an important gap between theoretical and experimental neuroscience, advancing our understanding of mammalian neuron dynamics.

Seminar · Neuroscience

Towards multi-system network models for cognitive neuroscience

Robert Guangyu Yang
MIT
Oct 13, 2022

Artificial neural networks can be useful for studying brain functions. In cognitive neuroscience, recurrent neural networks are often used to model cognitive functions. I will first offer my opinion on what is missing in the classical use of recurrent neural networks. Then I will discuss two lines of ongoing efforts in our group to move beyond the classical recurrent neural networks by studying multi-system neural networks (the talk will focus on two-system networks). These are networks that combine modules for several neural systems, such as the visual, auditory, prefrontal, and hippocampal systems. I will showcase how multi-system networks can potentially be constrained by experimental data in fundamental ways and at scale.

Seminar · Neuroscience · Recording

A Game Theoretical Framework for Quantifying​ Causes in Neural Networks

Kayson Fakhar​
ICNS Hamburg
Jul 5, 2022

Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if this intervention led to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. Namely, a node might causally influence another node, but how much and through which mechanistic interactions? Therefore, establishing a causal relationship, however reliable, does not provide a proper causal understanding of the system, because there often exists a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). Then, I present our work in which we employed MSA on an Echo State Network (ESN), quantified how much its nodes were influencing each other, and compared these measures with the underlying synaptic strength. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes. In this case, a mere elucidation of causal relationships did not provide any useful information. 2. Additionally, the full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via other intermediate nodes. Our results show that just elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions. Moreover, quantifying causal interactions requires a systematic and extensive manipulation framework. The framework put forward here benefits from employing neural network models, and in turn, provides explainability for them.
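
The core of MSA is the Shapley value: a node's average marginal contribution to a global performance function, taken over random orderings of perturbations. A generic permutation-sampling estimator might look like the sketch below; the toy performance function is invented, standing in for the ESN analyses of the talk.

```python
import numpy as np

def shapley_values(value_fn, n_nodes, n_perms=2000, seed=0):
    """Estimate Shapley values by permutation sampling (the core of MSA):
    each node's value is its average marginal contribution to network
    'performance' when added to the coalition of already-intact nodes."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_nodes)
    for _ in range(n_perms):
        order = rng.permutation(n_nodes)
        intact = np.zeros(n_nodes, bool)
        prev = value_fn(intact)
        for i in order:
            intact[i] = True
            cur = value_fn(intact)
            phi[i] += cur - prev
            prev = cur
    return phi / n_perms

# Toy performance function: node 0 acts alone, nodes 1 and 2 are redundant
# with each other, node 3 contributes nothing. (Illustrative only; the talk
# applied MSA to an echo state network's units.)
def perf(intact):
    return 1.0 * intact[0] + 0.5 * float(intact[1] or intact[2])

print(np.round(shapley_values(perf, 4), 2))   # -> [1.  0.25 0.25 0.  ]
```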

Seminar · Neuroscience

Multiscale modeling of brain states, from spiking networks to the whole brain

Alain Destexhe
Centre National de la Recherche Scientifique and Paris-Saclay University
Apr 5, 2022

Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).

Seminar · Physics of Life · Recording

Making a Mesh of Things: Using Network Models to Understand the Mechanics of Heterogeneous Tissues

Jonathan Michel
Rochester Institute of Technology
Apr 3, 2022

Networks of stiff biopolymers are an omnipresent structural motif in cells and tissues. A prominent modeling framework for describing biopolymer network mechanics is rigidity percolation theory. This theory describes model networks as nodes joined by randomly placed, springlike bonds. Increasing the number of bonds in a network results in an abrupt, dramatic increase in elastic moduli above a certain threshold – an example of a mechanical phase transition. While homogeneous networks are well studied, many tissues are made of disparate components and exhibit spatial fluctuations in the concentrations of their constituents. In this talk, I will first discuss recent work in which we explained the structural basis of the shear mechanics of healthy and chemically degraded cartilage by coupling a rigidity percolation framework with a background gel. Our model takes into account collagen concentration, as well as the concentration of peptidoglycans in the surrounding polyelectrolyte gel, to produce a structure-property relationship that describes the shear mechanics of both sound and diseased cartilage. I will next discuss the introduction of structural correlation in constructing networks, such that sparse and dense patches emerge. I find that moderate correlation allows a network to become rigid with fewer bonds, while this benefit is partly erased by excessive correlation. We explain this phenomenon through analysis of the spatial fluctuations in strained networks’ displacement fields. Finally, I will address our work’s implications for non-invasive diagnosis of pathology, as well as rational design of prostheses and novel soft materials.
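
A zeroth-order version of the rigidity threshold can be obtained from Maxwell constraint counting, comparing bonds against internal degrees of freedom. The sketch below applies this mean-field count to a bond-diluted triangular lattice; it is only a back-of-the-envelope stand-in for the full rigidity-percolation (pebble-game) analysis used in this line of work.

```python
def maxwell_count(n_sites, n_bonds, d=2):
    """Mean-field (Maxwell) rigidity estimate: a spring network is rigid
    when constraints (bonds) match internal degrees of freedom."""
    dof = d * n_sites - d * (d + 1) // 2      # subtract rigid-body motions
    return n_bonds - dof                       # >= 0 suggests rigidity

# Triangular lattice: 3 bonds per site when fully bonded; dilute to fraction p
n = 10_000
for p in (0.5, 0.6, 2 / 3, 0.75):
    excess = maxwell_count(n, int(3 * n * p))
    print(f"p={p:.2f}: bonds - dof = {excess}  ({'rigid' if excess >= 0 else 'floppy'})")
# Maxwell counting puts the rigidity threshold of the diluted triangular
# lattice near p = 2/3, close to the numerically known transition.
```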

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
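
The underlying quantity, sensitivity to initial conditions, can be estimated even without the full spectrum: the sketch below computes the largest Lyapunov exponent of a classic random rate network (the Sompolinsky-Crisanti-Sommers setup) by repeatedly renormalizing a small perturbation. Sizes and integration settings are arbitrary illustrations, not the machinery of the talk.

```python
import numpy as np

def largest_lyapunov(g, N=200, T=200.0, dt=0.05, seed=0):
    """Benettin-style estimate of the largest Lyapunov exponent of a
    random rate network  dx/dt = -x + W tanh(x),  W_ij ~ N(0, g^2/N)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, g / np.sqrt(N), (N, N))
    x = rng.normal(0, 1, N)
    d0 = 1e-6
    y = x + d0 * rng.normal(0, 1, N) / np.sqrt(N)
    lam, steps = 0.0, int(T / dt)
    for _ in range(steps):
        x = x + dt * (-x + W @ np.tanh(x))
        y = y + dt * (-y + W @ np.tanh(y))
        d = np.linalg.norm(y - x)
        lam += np.log(d / d0)
        y = x + (y - x) * (d0 / d)     # renormalize the perturbation
    return lam / (steps * dt)

for g in (0.5, 1.5):
    print(f"g={g}: lambda_max ~ {largest_lyapunov(g):.3f}")
# g < 1: lambda_max < 0 (stable fixed point); g > 1: lambda_max > 0 (rate chaos)
```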

Seminar · Neuroscience · Recording

Frontal circuit specialisations for information search and decision making

Laurence Hunt
Oxford University
Jan 27, 2022

During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to sample information about their environment, evaluate that information, plan, and decide between different courses of action. What quantities do these circuits compute as information is sampled and a decision is made? And how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
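
A drastically reduced caricature of such attention-weighted evidence integration is a one-dimensional leaky accumulator driven by noisy evidence, which already yields the expected speed and accuracy trends. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def accumulate(drift, leak=0.5, noise=1.0, threshold=1.5, dt=0.01, t_max=10.0):
    """Leaky integration of noisy evidence to a decision bound
    (a one-dimensional reduction of attractor-style decision circuits)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < t_max:
        x += dt * (drift - leak * x) + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if x > 0 else -1), t

# Stronger evidence -> faster and more accurate (chronometric/psychometric trend)
for drift in (0.2, 0.5, 1.0):
    out = [accumulate(drift) for _ in range(500)]
    acc = np.mean([c == 1 for c, _ in out])
    rt = np.mean([t for _, t in out])
    print(f"drift={drift}: accuracy={acc:.2f}, mean RT={rt:.2f}")
```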

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown directly. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition builds on low-level visual processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1 followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
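
The fixed V1-like front-end idea can be caricatured as a bank of quadrature Gabor pairs combined into orientation-energy maps. The numpy/scipy sketch below is a deliberately minimal stand-in: the actual VOneNets also include stochastic units and empirically fitted filter parameter distributions, none of which appear here.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor(size=15, wavelength=6.0, theta=0.0, sigma=3.0, phase=0.0):
    """A 2D Gabor filter, the canonical model of a V1 simple-cell RF."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    xr = X * np.cos(theta) + Y * np.sin(theta)
    yr = -X * np.sin(theta) + Y * np.cos(theta)
    env = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * xr / wavelength + phase)

def v1_front_end(image, n_orient=4):
    """Fixed (untrained) V1-like layer: quadrature Gabor pairs combined into
    phase-invariant 'complex cell' energy maps at several orientations."""
    maps = []
    for k in range(n_orient):
        th = k * np.pi / n_orient
        even = convolve2d(image, gabor(theta=th, phase=0.0), mode="same")
        odd = convolve2d(image, gabor(theta=th, phase=np.pi / 2), mode="same")
        maps.append(np.sqrt(even**2 + odd**2))   # local orientation energy
    return np.stack(maps)                         # (n_orient, H, W)

img = np.random.default_rng(3).normal(size=(64, 64))
print(v1_front_end(img).shape)   # (4, 64, 64)
```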

Seminar · Neuroscience · Recording

Theory of recurrent neural networks – from parameter inference to intrinsic timescales in spiking networks

Alexander van Meegen
Forschungszentrum Jülich
Jan 12, 2022

Seminar · Neuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 8, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates, and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes to synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful for studying other learning processes beyond the calcium-based plasticity rule.
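
The flavor of such a rule can be seen in a minimal threshold-calcium model (in the Graupner-Brunel spirit): pre- and postsynaptic spikes feed one calcium trace, and only near-coincident activity pushes calcium past the potentiation threshold. Amplitudes, thresholds and rates below are illustrative, not the values fitted in this work.

```python
import numpy as np

def run_synapse(pre, post, dt=0.1, tau_ca=20.0, c_amp=0.75,
                theta_d=1.0, theta_p=1.4, gamma_d=0.005, gamma_p=0.05):
    """Threshold calcium model: pre/post spikes feed one calcium trace;
    the weight is depressed while c > theta_d and potentiated while
    c > theta_p. A single spike (c_amp < theta_d) changes nothing; only
    near-coincident pre-post activity crosses the plasticity thresholds."""
    c, w = 0.0, 0.5
    for s_pre, s_post in zip(pre, post):
        c += c_amp * (s_pre + s_post)
        c -= dt * c / tau_ca
        if c > theta_p:
            w += dt * gamma_p * (1 - w)     # LTP, soft-bounded at 1
        elif c > theta_d:
            w -= dt * gamma_d * w           # LTD, soft-bounded at 0
    return w

rng = np.random.default_rng(4)
T, dt, rate = 20_000.0, 0.1, 5.0                   # ms, ms, Hz
n = int(T / dt)
pre = rng.random(n) < rate * dt * 1e-3
print("uncorrelated:", round(run_synapse(pre, rng.random(n) < rate * dt * 1e-3), 3))
print("synchronous :", round(run_synapse(pre, pre.copy()), 3))
# Only synchronous pre-post activity reliably drives net potentiation.
```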

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity

Meenakshi Khosla
Massachusetts Institute of Technology
Nov 30, 2021

Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies with minimal a priori assumptions, which bring neuroscience data directly to bear on the model development process, are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual systems and achieve novel predictive precision, in some cases significantly outperforming the predictive success of state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural (feature visualizations) as well as functional analysis (feature verbalizations) by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, despite no category-level supervision, since the models are solely optimized for brain response prediction from scratch, the units in the networks after optimization act as detectors for semantic concepts like 'faces' or 'words', thereby providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network

Francesca Castaldo
University College London
Nov 30, 2021

The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite ever more-detailed characterizations of spectrally-resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory responses at 40 Hz) coupled in the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency inversely depends on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
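
The central mechanism, collective oscillations pulled below the units' intrinsic 40 Hz by transmission delays, already appears in a handful of delay-coupled phase oscillators. The sketch below simplifies the study's damped 40 Hz units to identical phase oscillators with one uniform delay; all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def collective_freq(n=20, f0=40.0, k=50.0, delay_ms=5.0, T=2.0, dt=1e-4):
    """Identical phase oscillators (intrinsic f0 = 40 Hz) with all-to-all
    delayed coupling: they synchronize at a *reduced* collective frequency,
    solving Omega = omega - k sin(Omega * tau) in the locked state."""
    omega = 2 * np.pi * f0
    d = int(delay_ms * 1e-3 / dt)                  # delay in time steps
    steps = int(T / dt)
    theta = np.zeros((steps, n))
    theta[:d + 1] = omega * (np.arange(d + 1) * dt)[:, None] \
                    + rng.uniform(0, 0.5, n)       # near-synchronous history
    for t in range(d, steps - 1):
        coupling = np.mean(np.sin(theta[t - d][None, :] - theta[t][:, None]), 1)
        theta[t + 1] = theta[t] + dt * (omega + k * coupling)
    phase = theta[steps // 2:, 0]                  # last half, one unit
    return (phase[-1] - phase[0]) / (T / 2) / (2 * np.pi)

for delay in (0.0, 3.0, 6.0):
    print(f"delay={delay} ms -> collective frequency ~ "
          f"{collective_freq(delay_ms=delay):.1f} Hz")
# Longer delays pull the synchronized rhythm further below the intrinsic 40 Hz.
```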

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.

Seminar · Neuroscience

Neural network models of binocular depth perception

Paul Hibbard
University of Essex
Nov 30, 2021

Our visual experience of living in a three-dimensional world is created from the information contained in the two-dimensional images projected into our eyes. The overlapping visual fields of the two eyes mean that their images are highly correlated, and that the small differences that are present represent an important cue to depth. Binocular neurons encode this information in a way that both maximises efficiency and optimises disparity tuning for the depth structures that are found in our natural environment. Neural network models provide a clear account of how these binocular neurons encode the local binocular disparity in images. These models can be expanded to multi-layer models that are sensitive to salient features of scenes, such as the orientations and discontinuities between surfaces. These deep neural network models have also shown the importance of binocular disparity for the segmentation of images into separate objects, in addition to the estimation of distance. These results demonstrate the usefulness of machine learning approaches as a tool for understanding biological vision.
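
The canonical starting point for these models is the binocular energy model: quadrature pairs of left- and right-eye Gabor filters whose summed, squared outputs are tuned to disparity. A minimal 1D sketch with invented filter parameters:

```python
import numpy as np

def gabor1d(x, sigma=5.0, freq=0.1, phase=0.0):
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def energy_response(left, right, x, d_pref):
    """Binocular energy model: a quadrature pair of simple cells whose
    right-eye receptive field is shifted by the preferred disparity."""
    resp = 0.0
    for ph in (0.0, np.pi / 2):                  # quadrature pair
        sL = left @ gabor1d(x, phase=ph)
        sR = right @ gabor1d(x - d_pref, phase=ph)
        resp += (sL + sR) ** 2                   # squared binocular sum
    return resp

rng = np.random.default_rng(6)
x = np.arange(-30, 31, dtype=float)
disparities = np.arange(-10, 11)
tuning = np.zeros(disparities.size)
for _ in range(50):                              # average over random stimuli
    pattern = rng.normal(size=x.size)
    left, right = pattern, np.roll(pattern, 4)   # true disparity = 4 samples
    tuning += [energy_response(left, right, x, d) for d in disparities]
print("preferred disparity:", disparities[np.argmax(tuning)])   # -> 4
```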

Seminar · Neuroscience · Recording

Computational Models of Compulsivity

Frederike Petzschner
Brown University
Nov 10, 2021

Seminar · Neuroscience

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

Tatjana Tchumatchenko
University of Bonn
Nov 9, 2021

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
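
A feel for how facilitating versus depressing synaptic states reshape the network's response to intensity comes from the steady-state efficacy of a Tsodyks-Markram synapse: facilitation makes transmitted drive grow supralinearly over a range of rates, depression makes it saturate. The sketch below uses generic parameter values, not those of the study.

```python
import numpy as np

def steady_state_drive(rate, U, tau_f, tau_d):
    """Steady-state transmitted drive of a Tsodyks-Markram synapse at a
    constant presynaptic rate: u is facilitation, x is depression."""
    u = U * (1 + tau_f * rate) / (1 + U * tau_f * rate)
    x = 1 / (1 + u * tau_d * rate)
    return u * x * rate

rates = np.array([2.0, 10.0, 50.0])   # Hz (with tau_f, tau_d in seconds)
fac = steady_state_drive(rates, U=0.1, tau_f=1.0, tau_d=0.05)   # facilitating
dep = steady_state_drive(rates, U=0.5, tau_f=0.05, tau_d=0.5)   # depressing
print("facilitating synapse drive:", np.round(fac, 2))
print("depressing synapse drive:  ", np.round(dep, 2))
# Facilitation: drive grows supralinearly over low-to-intermediate rates
# (sharpening selectivity); depression: drive saturates (broadening it).
```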

Seminar · Neuroscience · Recording

Optimising spiking interneuron circuits for compartment-specific feedback

Henning Sprekeler
Technische Universität Berlin
Nov 1, 2021

Cortical circuits process information by rich recurrent interactions between excitatory neurons and inhibitory interneurons. One of the prime functions of interneurons is to stabilize the circuit by feedback inhibition, but the level of specificity on which inhibitory feedback operates is not fully resolved. We hypothesized that inhibitory circuits could enable separate feedback control loops for different synaptic input streams, by means of specific feedback inhibition to different neuronal compartments. To investigate this hypothesis, we adopted an optimization approach. Leveraging recent advances in training spiking network models, we optimized the connectivity and short-term plasticity of interneuron circuits for compartment-specific feedback inhibition onto pyramidal neurons. Over the course of the optimization, the interneurons diversified into two classes that resembled parvalbumin (PV) and somatostatin (SST) expressing interneurons. The resulting circuit can be understood as a neural decoder that inverts the nonlinear biophysical computations performed within the pyramidal cells. Our model provides a proof of concept for studying structure-function relations in cortical circuits by a combination of gradient-based optimization and biologically plausible phenomenological models.

Seminar · Neuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 16, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.

Seminar · Neuroscience

Towards a neurally mechanistic understanding of visual cognition

Kohitij Kar
Massachusetts Institute of Technology
Jun 13, 2021

I am interested in developing a neurally mechanistic understanding of how primate brains represent the world through its visual system and how such representations enable a remarkable set of intelligent behaviors. In this talk, I will primarily highlight aspects of my current research that focuses on dissecting the brain circuits that support core object recognition behavior (primates’ ability to categorize objects within hundreds of milliseconds) in non-human primates. On the one hand, my work empirically examines how well computational models of the primate ventral visual pathways embed knowledge of the visual brain function (e.g., Bashivan*, Kar*, DiCarlo, Science, 2019). On the other hand, my work has led to various functional and architectural insights that help improve such brain models. For instance, we have exposed the necessity of recurrent computations in primate core object recognition (Kar et al., Nature Neuroscience, 2019), one that is strikingly missing from most feedforward artificial neural network models. Specifically, we have observed that the primate ventral stream requires fast recurrent processing via ventrolateral PFC for robust core object recognition (Kar and DiCarlo, Neuron, 2021). In addition, I have been currently developing various chemogenetic strategies to causally target specific bidirectional neural circuits in the macaque brain during multiple object recognition tasks to further probe their relevance during this behavior. I plan to transform these data and insights into tangible progress in neuroscience via my collaboration with various computational groups and building improved brain models of object recognition. I hope to end the talk with a brief glimpse of some of my planned future work!

Seminar · Neuroscience

Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners

Ivan Simpson-Kent
University of Cambridge, MRC CBU
Jun 1, 2021

Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior. We discuss implications and possible avenues for future studies.
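
Regularized partial-correlation networks of this kind can be estimated with the graphical lasso, after which strength centrality follows from the absolute edge weights. The sketch below uses scikit-learn's GraphicalLasso on simulated scores standing in for the CALM variables; the ground-truth network, sample size and penalty are invented.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Simulated scores for 6 measures from 300 children, drawn from a sparse
# ground-truth precision (inverse covariance) matrix.
rng = np.random.default_rng(7)
prec_true = np.eye(6)
for i, j, v in [(0, 1, -0.3), (1, 2, -0.25), (2, 3, -0.3), (4, 5, -0.35)]:
    prec_true[i, j] = prec_true[j, i] = v
X = rng.multivariate_normal(np.zeros(6), np.linalg.inv(prec_true), size=300)

model = GraphicalLasso(alpha=0.05).fit(X)
P = model.precision_
partial_corr = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
np.fill_diagonal(partial_corr, 0.0)

print("edge weights (partial correlations):\n", np.round(partial_corr, 2))
print("strength centrality:", np.round(np.abs(partial_corr).sum(0), 2))
```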

Seminar · Neuroscience · Recording

Neuronal variability and spatiotemporal dynamics in cortical network models

Chengcheng Huang
University of Pittsburgh
May 18, 2021

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.

Seminar · Neuroscience · Recording

Finding the needle in the haystack – Functional circuit and network models for neuroscience

Friedemann Zenke
Friedrich Miescher Institute for Biomedical Research, Basel
May 11, 2021

The talk will start at 17:15 (CEST). This session is a double feature of the Cologne Theoretical Neuroscience Forum and the BCCN Berlin.

Seminar · Neuroscience · Recording

How Brain Circuits Function in Health and Disease: Understanding Brain-wide Current Flow

Kanaka Rajan
Icahn School of Medicine at Mount Sinai, New York
Apr 13, 2021

Dr. Rajan and her lab design neural network models based on experimental data, and reverse-engineer them to figure out how brain circuits function in health and disease. They recently developed a powerful framework for tracing neural paths across multiple brain regions, called Current-Based Decomposition (CURBD). This new approach enables the computation of excitatory and inhibitory input currents that drive a given neuron, aiding in the discovery of how entire populations of neurons behave across multiple interacting brain regions. Dr. Rajan’s team has applied this method to studying the neural underpinnings of behavior. As an example, when CURBD was applied to data gathered from an animal model often used to study depression- and anxiety-like behaviors (i.e., learned helplessness), the underlying biology driving adaptive and maladaptive behaviors in the face of stress was revealed. With this framework, Dr. Rajan's team probes for mechanisms at work across brain regions that support both healthy and disease states, as well as identifying key divergences across multiple different nervous systems, including zebrafish, mice, non-human primates, and humans.

Seminar · Neuroscience

Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models

Alexander Weigard
University of Michigan
Mar 29, 2021

The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.

Seminar · Neuroscience · Recording

Untangling brain wide current flow using neural network models

Kanaka Rajan
Mount Sinai
Mar 11, 2021

The Rajan lab designs neural network models constrained by experimental data, and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this powerful framework based on data-constrained multi-region RNNs and CURrent-Based Decomposition (CURBD), we ask whether there are conserved multi-region mechanisms across different species, and identify key divergences.
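
The demixing step at the heart of CURBD is simple once the RNN has been trained: partition the learned interaction matrix into region-to-region blocks and apply each block to the source region's activity. The sketch below shows only that final step on a random stand-in for a trained network; constraining the RNN to reproduce the recordings is the (omitted) substantive part.

```python
import numpy as np

def decompose_currents(J, rates, regions):
    """CURBD-style source demixing: given an interaction matrix J from a
    data-constrained RNN and its unit activity over time, split the input
    current to each target region into contributions from each source region."""
    currents = {}
    for tgt, tgt_idx in regions.items():
        for src, src_idx in regions.items():
            # (n_tgt, n_src) block of J applied to the source activity
            currents[(src, tgt)] = J[np.ix_(tgt_idx, src_idx)] @ rates[src_idx]
    return currents

# Illustrative stand-in for a trained multi-region RNN: 30 units, 3 'regions'
rng = np.random.default_rng(8)
n, T = 30, 500
J = rng.normal(0, 1 / np.sqrt(n), (n, n))
rates = np.tanh(rng.normal(size=(n, T)))
regions = {"A": np.arange(10), "B": np.arange(10, 20), "C": np.arange(20, 30)}

for (src, tgt), c in decompose_currents(J, rates, regions).items():
    print(f"{src} -> {tgt}: mean |current| = {np.abs(c).mean():.3f}")
```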

Seminar · Neuroscience · Recording

Neural network models – analysis of their spontaneous activity and their response to single-neuron stimulation

Benjamin Lindner
Humboldt University Berlin
Feb 10, 2021

Seminar · Neuroscience · Recording

Emergence of long time scales in data-driven network models of zebrafish activity

Remi Monasson
CNRS
Feb 9, 2021

How can neural networks exhibit persistent activity on time scales much larger than allowed by cellular properties? We address this question in the context of larval zebrafish, a model vertebrate that is accessible to brain-scale neuronal recording and high-throughput behavioral studies. We study in particular the dynamics of a bilaterally distributed circuit, the so-called ARTR, comprising hundreds of neurons. ARTR exhibits slow antiphasic alternations between its left and right subpopulations, which can be modulated by the water temperature, and drive the coordinated orientation of swim bouts, thus organizing the fish's spatial exploration. To elucidate the mechanism leading to the slow self-oscillation, we train a network graphical model (Ising) on neural recordings. Sampling the inferred model allows us to generate synthetic oscillatory activity, whose features correctly capture the observed dynamics. A mean-field analysis of the inferred model reveals the existence of several phases; activated crossing of the barriers between those phases controls the long time scales present in the network oscillations. We show in particular how the barrier heights and the nature of the phases vary with the water temperature.
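
The generative use of the inferred model is direct: once fields h and couplings J are fit, Metropolis sampling of the Ising distribution produces synthetic population activity. The toy couplings below form two self-exciting, mutually inhibiting pools as a crude ARTR-like motif; none of the numbers come from the study.

```python
import numpy as np

def sample_ising(h, J, n_sweeps=4000, seed=0):
    """Metropolis sampling from P(s) ~ exp(h.s + s.J.s / 2) with s_i = +/-1;
    the generative step once h and J are inferred from recordings."""
    rng = np.random.default_rng(seed)
    n, s, samples = h.size, rng.choice([-1, 1], h.size), []
    for sweep in range(n_sweeps):
        for i in range(n):
            dE = 2 * s[i] * (h[i] + J[i] @ s)   # energy change if s[i] flips
            if rng.random() < np.exp(-dE):
                s[i] = -s[i]
        if sweep >= n_sweeps // 2:              # keep post-burn-in samples
            samples.append(s.copy())
    return np.array(samples)

# Two self-exciting, mutually inhibiting pools (a toy ARTR-like motif)
n = 20
J = np.full((n, n), -0.08)
J[:10, :10] = J[10:, 10:] = 0.12
np.fill_diagonal(J, 0.0)
S = sample_ising(np.zeros(n), J)
left, right = S[:, :10].mean(1), S[:, 10:].mean(1)
print("left/right correlation:", np.corrcoef(left, right)[0, 1].round(2))
# Typically strongly negative: the pools alternate slowly, on time scales set
# by activated barrier crossing rather than by single-unit dynamics.
```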

Seminar · Neuroscience · Recording

Cellular mechanisms behind stimulus evoked quenching of variability

Brent Doiron
University of Chicago
Jan 26, 2021

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
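
The proposed mechanism fits in one line of membrane physics: for an effectively passive membrane, Var(V) ≈ σ_I² / (2 g_tot C), so an evoked rise in total conductance can quench voltage variability even while current variability grows. A toy simulation with made-up conductances and noise amplitudes:

```python
import numpy as np

rng = np.random.default_rng(9)

def voltage_variance(g_leak, g_syn, sigma_I, T=2000.0, dt=0.01, C=1.0):
    """Passive membrane driven by white-noise current:
    C dV/dt = -(g_leak + g_syn) V + I(t). For this OU process,
    Var(V) = sigma_I^2 / (2 (g_leak + g_syn) C)."""
    g_tot = g_leak + g_syn
    V, Vs = 0.0, []
    for _ in range(int(T / dt)):
        V += dt * (-g_tot * V + sigma_I * rng.normal() / np.sqrt(dt)) / C
        Vs.append(V)
    return np.var(Vs)

# 'Evoked' state: larger synaptic conductance AND larger current fluctuations
spont = voltage_variance(g_leak=0.1, g_syn=0.0, sigma_I=1.0)
evoked = voltage_variance(g_leak=0.1, g_syn=0.4, sigma_I=1.5)
print(f"V variance, spontaneous: {spont:.2f}  (theory 5.00)")
print(f"V variance, evoked:      {evoked:.2f}  (theory 2.25)")
# Current variance grows in the evoked state, yet membrane-potential
# variance is quenched by the conductance increase.
```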

SeminarNeuroscienceRecording

Motor Cortex in Theory and Practice

Mark Churchland
Columbia University, New York
Nov 29, 2020

A central question in motor physiology has been whether motor cortex activity resembles muscle activity, and if not, why not? Over fifty years, extensive observations have failed to provide a concise answer, and the topic remains much debated. To provide a different perspective, we employed a novel behavioral paradigm that affords extensive comparison between time-evolving neural and muscle activity. Single motor-cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid 'trajectory tangling': moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Remarkably, we were able to predict motor cortex activity from muscle activity alone, by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling. Our results argue that motor cortex embeds descending commands in additional structure that ensures low tangling, and thus noise robustness. The dominant structure in motor cortex may thus serve not a representational function (encoding specific variables) but a computational function: ensuring that outgoing commands can be generated reliably. Our results establish the utility of an emerging approach: understanding the structure of neural activity based on properties of population geometry that flow from normative principles such as noise robustness.
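
One common formalization of tangling divides the difference in state derivatives by the difference in states: Q(t) = max over t' of ||x'(t) - x'(t')||^2 / (||x(t) - x(t')||^2 + eps) (cf. Russo et al., Neuron, 2018). Below is a hedged sketch of how one might compute it; the choice of eps and the toy trajectories are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    """Trajectory tangling Q(t) for a trajectory X of shape (T, n_dims).
    High Q means similar states lead to dissimilar futures."""
    Xdot = np.gradient(X, dt, axis=0)
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()       # one common scale-dependent choice
    # Pairwise squared distances between all pairs of time points
    dX  = ((X[:, None, :]    - X[None, :, :])    ** 2).sum(-1)
    dXd = ((Xdot[:, None, :] - Xdot[None, :, :]) ** 2).sum(-1)
    return (dXd / (dX + eps)).max(axis=1)

# Example: a circle never revisits a state (low tangling), while a figure
# eight passes through the same point with different velocities (high tangling)
t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight  = np.stack([np.sin(t), np.sin(2 * t)], axis=1)   # self-intersecting
dt = t[1] - t[0]
print(tangling(circle, dt).max(), tangling(eight, dt).max())
```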

SeminarNeuroscienceRecording

Inferring brain-wide current flow using data-constrained neural network models

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Nov 17, 2020

The Rajan lab designs neural network models constrained by experimental data and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time-series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Different current sources can therefore be demixed, either within the same region or from other regions, potentially brain-wide, which collectively give rise to the population dynamics observed experimentally. Source-demixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURBD, we can ask whether there are conserved multi-region mechanisms across different species, as well as identify key divergences.
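
Schematically, the demixing step reduces to partitioning the trained interaction matrix by region. A hedged sketch of that step only, with generic region labels and random placeholder weights standing in for a trained model:

```python
import numpy as np

def decompose_currents(J, rates, regions):
    """Split the recurrent input to each region into source-specific currents.
    J[i, j] is the trained weight from unit j onto unit i; `rates` has shape
    (N, T). Returns currents[(src, tgt)] of shape (n_tgt_units, T)."""
    currents = {}
    for tgt, tgt_idx in regions.items():
        for src, src_idx in regions.items():
            # Current flowing into region `tgt` that originates in region `src`
            currents[(src, tgt)] = J[np.ix_(tgt_idx, src_idx)] @ rates[src_idx]
    return currents

# Toy usage: a 100-unit "trained" RNN split into two hypothetical regions
rng = np.random.default_rng(2)
N, T = 100, 500
J = rng.standard_normal((N, N)) / np.sqrt(N)     # stands in for trained weights
rates = np.tanh(rng.standard_normal((N, T)))     # stands in for model activity
regions = {"A": np.arange(50), "B": np.arange(50, 100)}
currents = decompose_currents(J, rates, regions)
currents[("B", "A")].shape                        # (50, T): current from B to A
```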

SeminarNeuroscienceRecording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 15, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment-by-moment collective dynamics of the brain instantiate learning and cognition. However, efficiently extracting such a conceptual understanding from large, high-dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high-dimensional statistics and deep learning can aid us in this process. In particular, we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning; how to trade off very different experimental resources, such as numbers of recorded neurons and trials, to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina's response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; and optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
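
As a concrete illustration of the tensor component analysis mentioned above: a rank-1 CP decomposition of a synthetic neurons x time x trials tensor recovers a planted across-trial "learning" trend. The use of the tensorly library and all parameter values are assumptions for this sketch, not the speaker's actual pipeline.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(3)

# Synthetic activity tensor: neurons x within-trial time x trials, with one
# planted component whose amplitude grows slowly across trials ("learning")
n_neurons, n_time, n_trials = 50, 100, 200
neuron_f = np.exp(-np.arange(n_neurons) / 10.0)            # which cells
time_f = np.exp(-((np.arange(n_time) - 40.0) ** 2) / 100)  # when within a trial
trial_f = np.linspace(0.2, 1.0, n_trials)                  # slow drift over trials
data = np.einsum('i,j,k->ijk', neuron_f, time_f, trial_f)
data += 0.05 * rng.standard_normal(data.shape)

# Rank-1 CP decomposition (tensor component analysis) separates three
# interpretable factors: neuron loadings, trial-time course, across-trial trend
weights, factors = parafac(tl.tensor(data), rank=1)
neuron_factor, time_factor, trial_factor = factors
# trial_factor[:, 0] recovers the slow learning-related amplitude change
```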

SeminarNeuroscience

Towards multipurpose biophysics-based mathematical models of cortical circuits

Gaute Einevoll
Norwegian University of Life Sciences
Oct 13, 2020

Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al., Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al., Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials, or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.
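
The models discussed here use biologically detailed neurons; the sketch below is only a toy point-neuron network illustrating the general idea of extracting several observables (spikes plus an LFP-like signal) from one simulation. The summed-absolute-synaptic-current LFP proxy is one simple choice from the proxy literature, and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal random leaky integrate-and-fire network with a crude LFP proxy
N, n_steps, dt = 1000, 2000, 0.1e-3          # 1000 neurons, 200 ms, dt = 0.1 ms
tau_m, v_th, v_reset = 20e-3, 1.0, 0.0       # dimensionless voltage units
n_exc = int(0.8 * N)
J = 0.1 / np.sqrt(N) * rng.random((N, N))
J[:, n_exc:] *= -4.0                         # 20% inhibitory, 4x stronger

v = rng.random(N)
spike_counts = np.zeros(N)
lfp = np.zeros(n_steps)
for t in range(n_steps):
    spikes = v >= v_th
    spike_counts += spikes
    v[spikes] = v_reset
    syn = J @ spikes.astype(float)           # recurrent synaptic input this step
    noise = 0.5 * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    v += dt / tau_m * (1.2 - v) + noise + syn
    lfp[t] = np.abs(syn).sum()               # proxy LFP: summed |synaptic current|
```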

SeminarNeuroscience

Neural and computational principles of the processing of dynamic faces and bodies

Martin Giese
University of Tübingen
Jul 7, 2020

Body motion is a fundamental signal of social communication. This includes facial as well as full-body movements. Combining advanced methods from computer animation with motion capture in humans and monkeys, we synthesized highly realistic monkey avatar models. Our face avatar is perceived by monkeys as almost equivalent to a real animal and, unlike all other avatar models previously used in studies with monkeys, does not induce an 'uncanny valley' effect. Applying machine-learning methods for the control of motion style, we were able to investigate how species-specific shape and dynamic cues influence the perception of human and monkey facial expressions. Human observers showed very fast learning of monkey expressions, and a perceptual encoding of expression dynamics that was largely independent of facial shape. This result is in line with the fact that facial shape evolved faster than neuromuscular control in primate phylogenesis. At the same time, it challenges popular neural network models of the recognition of dynamic faces that assume a joint encoding of facial shape and dynamics. We propose an alternative physiologically inspired neural model that realizes such an orthogonal encoding of facial shape and expression from video sequences. As a second example, we investigated the perception of social interactions from abstract stimuli, similar to those of Heider & Simmel (1944), and also from more realistic stimuli. We developed and validated a new generative model for the synthesis of such social interactions, based on a modification of a human navigation model. We demonstrate that the recognition of such stimuli, including the perception of agency, can be accounted for by a relatively elementary physiologically inspired hierarchical neural recognition model that does not require the assumption of sophisticated inference mechanisms, as postulated by some cognitive theories of social recognition. In summary, this suggests that essential phenomena in social cognition might be accounted for by a small set of simple neural principles that can be easily implemented by cortical circuits. The developed technologies for stimulus control form the basis of electrophysiological studies that can test the specific neural circuits proposed by our theoretical models.

SeminarNeuroscienceRecording

Recurrent network models of adaptive and maladaptive learning

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Apr 7, 2020

During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer the brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that capture communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone; however, the RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the raphe-Hb interactions inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
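
A hedged sketch of the subspace construction described above: PCA across target-region units applied to the source-specific currents from a trained RNN. The region names, sizes, and random stand-in currents are illustrative only, not the study's data or code.

```python
import numpy as np

def communication_subspace(source_currents, n_components=3):
    """PCA on the currents a target population receives from one source
    region; the leading components span a 'communication subspace' within
    the target's activity. source_currents: shape (n_target_units, T)."""
    X = source_currents - source_currents.mean(axis=1, keepdims=True)
    # Principal directions across target units via SVD
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    basis = U[:, :n_components]               # (n_target_units, n_components)
    projection = basis.T @ X                  # activity within the subspace
    return basis, projection

# Hypothetical raphe -> habenula currents from a trained multi-region RNN
rng = np.random.default_rng(5)
currents = rng.standard_normal((50, 1000))    # stands in for J_Hb,raphe @ r_raphe
basis, proj = communication_subspace(currents)
# The norm of `proj` over time tracks engagement of the raphe -> Hb channel
```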

ePoster

Conditions for sequence replay in recurrent network models of CA3

Gaspar Cano, Richard Kempter

Bernstein Conference 2024

ePoster

Evolutionary algorithms support recurrent plasticity in spiking neural network models of neocortical task learning

Ivyer Qu, Huaze Liu, Jiayue Li, Yuqing Zhu

Bernstein Conference 2024

ePoster

Excitatory and inhibitory neurons exhibit distinct roles for task learning, temporal scaling, and working memory in recurrent spiking neural network models of neocortex.

Ulaş Ayyılmaz, Antara Krishnan, Yuqing Zhu

Bernstein Conference 2024

ePoster

Reverse engineering recurrent network models reveals mechanisms for location memory

Ian Hawes, Matt Nolan

Bernstein Conference 2024

ePoster

Fitting recurrent spiking network models to study the interaction between cortical areas

COSYNE 2022

ePoster

Identifying and adaptively perturbing compact deep neural network models of visual cortex

COSYNE 2022

ePoster

Automated identification of data-consistent spiking neural network models

Richard Gao, Michael Deistler, Jakob Macke

COSYNE 2023

ePoster

Homeostatic synaptic scaling optimizes learning in network models of neural population codes

Jonathan Mayzel, Elad Schneidman

COSYNE 2023

ePoster

A novel deep neural network models two streams of visual processing from retina to cortex

Minkyu Choi, Kuan Han, Xiaokai Wang, Zhongming Liu

COSYNE 2023

ePoster

Enhancing the causal predictive power in recurrent network models of neural dynamics

Jiayi Zhang, Tatiana Engel

COSYNE 2025

ePoster

Exploring biophysical and biochemical mechanisms of neuron-astrocyte network models

Tiina Manninen, Jugoslava Aćimović, Marja-Leena Linne

FENS Forum 2024

ePoster

Reverse engineering recurrent network models reveals mechanisms for location memory

Ian Hawes, Matthew Nolan

FENS Forum 2024

ePoster

Spiking neural network models of developmental frequency acceleration in the mouse prefrontal cortex

Gabriel Matias Lorenz, Sebastian Bitzenhofer, Mattia Chini, Pablo Martínez-Cañada, Ileana L. Hanganu-Opatz, Stefano Panzeri

FENS Forum 2024

ePoster

Weighted generative network models of neuronal development

Kayton Rotenberg, Danyal Akarca, Duncan Astle

FENS Forum 2024