
Experimental Data

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with experimental data across World Wide.
36 curated items · 31 Seminars · 3 Positions · 2 ePosters
Updated 1 day ago
Position

Axel Hutt

INRIA
Strasbourg, France
Dec 5, 2025

The new research team NECTARINE at INRIA in Strasbourg, France, aims to create a synergy between clinicians and mathematical researchers to develop new healthcare technologies. The team works on stochastic microscopic network models that describe macroscopic experimental data, such as behavioral and/or encephalographic recordings. It collaborates closely with clinicians and chooses its research focus in line with clinical applications. Major scientific objectives are stochastic multi-scale simulations and mean-field descriptions of neural activity on the macroscopic scale; merging experimental data and numerical models via machine learning techniques is an additional objective. The team's clinical research focuses on neuromodulation of patients suffering from deficits in attention and temporal prediction. The team offers the possibility to apply for a permanent position as Chargé de Recherche (CR) or Directeur de Recherche (DR) in the field of mathematical neuroscience, with a strong focus on stochastic dynamics linking brain network modelling with experimental data.

Seminar · Neuroscience

The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany

Marco Bertamini, David Brainard, Peter Dayan, Andrea van Doorn, Roland Fleming, Pascal Fries, Wilson S Geisler, Robbe Goris, Sheng He, Tadashi Isa, Tomas Knapen, Jan Koenderink, Larry Maloney, Keith May, Marcello Rosa, Jonathan Victor
Aug 11 – 22, 2025

Applications are invited for the third edition of the Systems Vision Science (SVS) summer school, held since 2023 and designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises, and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend and welcome to submit contributions to the symposium.

Seminar · Neuroscience

Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task

Albert Compte
IDIBAPS
Nov 16, 2023

Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To preserve these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity onto the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data. Prediction (2) was validated in PrL through photoinhibition of ACC-to-PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring-attractor regime. We tested signatures of this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
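
The subspace-overlap and angular-distance analyses described above have a standard linear-algebra core. Below is a minimal sketch (not the authors' code) of how principal angles between two low-dimensional population subspaces can be computed; array names and dimensions are hypothetical.

```python
# Hypothetical illustration: principal angles between two neural subspaces.
import numpy as np

def subspace_basis(activity, n_dims=3):
    """Orthonormal basis for the top principal components of (trials x neurons) activity."""
    activity = activity - activity.mean(axis=0)
    _, _, vt = np.linalg.svd(activity, full_matrices=False)
    return vt[:n_dims].T                      # (n_neurons, n_dims)

def principal_angles(basis_a, basis_b):
    """Principal angles (radians) between the subspaces spanned by two bases."""
    # Singular values of A^T B are the cosines of the principal angles.
    svals = np.linalg.svd(basis_a.T @ basis_b, compute_uv=False)
    return np.arccos(np.clip(svals, -1.0, 1.0))

# Synthetic example: two random 3-D subspaces of a 50-neuron space are
# nearly orthogonal, so the angles come out close to 90 degrees.
rng = np.random.default_rng(0)
sample_acts = rng.normal(size=(200, 50))      # stand-in for sample-odor activity
distractor_acts = rng.normal(size=(200, 50))  # stand-in for distractor-odor activity
angles = principal_angles(subspace_basis(sample_acts),
                          subspace_basis(distractor_acts))
print(np.degrees(angles))
```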

Seminar · Neuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 13, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that the firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, coined FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal that helps build temporal models of the environment. FLEX is a general theoretical framework with many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
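
For context, the fixed-temporal-basis TD learning that the abstract argues against fits in a few lines. This is the textbook TD(0) rule with one value per time step within a trial, not the FLEX model; all parameter values are illustrative.

```python
# Textbook TD(0) with a fixed temporal basis (one value per time step).
import numpy as np

n_steps, reward_t = 20, 15
alpha, gamma = 0.1, 0.95
V = np.zeros(n_steps + 1)                 # value of each time step in a trial

for trial in range(500):
    for t in range(n_steps):
        r = 1.0 if t == reward_t else 0.0
        # RPE: received reward plus discounted next value, minus the prediction.
        delta = r + gamma * V[t + 1] - V[t]
        V[t] += alpha * delta

# With training, value ramps backward from the reward time and the RPE at
# reward delivery shrinks toward zero: the RPE-like signature the abstract
# attributes to dopaminergic neurons.
print(np.round(V[:n_steps], 2))
```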

Seminar · Neuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We'll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We'll see that simulations and experimental data confirm these predictions, and we'll describe new predictions that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers for clinical studies.
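
A common way to quantify distance from the critical point in such data is the branching ratio, which is near 1 at criticality. The sketch below is a naive estimator run on synthetic data, not the quasicriticality analysis itself; it ignores subsampling bias, and all names and numbers are hypothetical.

```python
# Naive branching-ratio estimate from binned population activity.
import numpy as np

def branching_ratio(counts):
    """Average ratio of activity in bin t+1 to activity in bin t (t active)."""
    counts = np.asarray(counts, dtype=float)
    active = counts[:-1] > 0               # skip empty bins to avoid 0-division
    return np.mean(counts[1:][active] / counts[:-1][active])

# Synthetic slightly subcritical branching process (true sigma = 0.98).
rng = np.random.default_rng(1)
a = [10]
for _ in range(10_000):
    a.append(rng.poisson(0.98 * a[-1]) if a[-1] > 0 else rng.poisson(1.0))

print(branching_ratio(a))                  # should land near 0.98
```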

Seminar · Neuroscience · Recording

Network mechanisms underlying representational drift in area CA1 of hippocampus

Alex Roxin
CRM, Barcelona
Feb 1, 2022

Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
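
The "simple model with Gaussian input statistics" can be caricatured in a few lines: a cell is part of the code on a given session whenever its summed input crosses a threshold, with the non-spatial component drifting between sessions while the spatial component stays fixed. This is a toy reconstruction from the abstract, not the authors' fitted model; all parameter values are made up.

```python
# Toy Gaussian-input threshold model of representational drift.
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_sessions = 2000, 10
rho = 0.9                                  # session-to-session input stability
spatial = rng.normal(size=n_cells)         # stable, spatially tuned input
nonspatial = rng.normal(size=n_cells)      # plastic, non-spatial input
theta = 1.0                                # activation threshold

active = []
for s in range(n_sessions):
    # Non-spatial input drifts as an AR(1) process (random synaptic turnover).
    nonspatial = rho * nonspatial + np.sqrt(1 - rho**2) * rng.normal(size=n_cells)
    active.append(spatial + nonspatial > theta)
active = np.array(active)

# Fraction of session-0 coding cells still active s sessions later: it decays,
# but cells with strong spatial input remain stable throughout.
overlap = [(active[0] & active[s]).mean() / active[0].mean()
           for s in range(n_sessions)]
print(np.round(overlap, 2))
```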

Seminar · Neuroscience

Integrated Information Theory and Its Implications for Free Will

Giulio Tononi
University of Wisconsin-Madison
Jun 24, 2021

Integrated information theory (IIT) takes as its starting point phenomenology, rather than behavioral, functional, or neural correlates of consciousness. The theory characterizes the essential properties of phenomenal existence—which is immediate and indubitable. These are translated into physical properties, expressed operationally as cause-effect power, which must be satisfied by the neural substrate of consciousness. On this basis, the theory can account for clinical and experimental data about the presence and absence of consciousness. Current work aims at accounting for specific qualities of different experiences, such as spatial extendedness and the flow of time. Several implications of IIT have ethical relevance. One is that functional equivalence does not imply phenomenal equivalence—computers may one day be able to do everything we do, but they will not experience anything. Another is that we do have free will in the fundamental, metaphysical sense—we have true alternatives and we, not our neurons, are the true cause of our willed actions.

Seminar · Neuroscience

Bayesian distributional regression models for cognitive science

Paul Bürkner
University of Stuttgart
May 25, 2021

The assumed data-generating models (response distributions) of experimental or observational data in cognitive science have become increasingly complex over the past decades. This trend follows a revolution in model estimation methods and a drastic increase in the computing power available to researchers. Today, higher-level cognitive functions can be well captured by, and understood through, computational cognitive models, a common example being drift diffusion models for decision processes. Such models are often expressed as the combination of two modeling layers. The first layer is the response distribution, with distributional parameters tailored to the cognitive process under investigation. The second layer consists of latent models of the distributional parameters that capture how those parameters vary as a function of design, stimulus, or person characteristics, often in an additive manner. Such cognitive models can thus be understood as special cases of distributional regression models, where multiple distributional parameters, rather than just a single centrality parameter, are predicted by additive models. Because of their complexity, distributional models are quite complicated to estimate, but recent advances in Bayesian estimation methods and corresponding software make them increasingly feasible. In this talk, I will speak about the specification, estimation, and post-processing of Bayesian distributional regression models and how they can help us better understand cognitive processes.
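
The two-layer structure described here (a response distribution plus an additive model for each of its parameters) is easy to see in code. The sketch below fits a Gaussian distributional regression by maximum likelihood; the talk concerns fully Bayesian estimation (e.g., with the speaker's brms package), so this toy only illustrates the model class.

```python
# Distributional regression toy: both the mean and the standard deviation
# of a Gaussian response are modeled as functions of a covariate.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.uniform(-2, 2, size=500)
y = rng.normal(loc=1.0 + 0.5 * x, scale=np.exp(-0.2 + 0.4 * x))

def neg_log_lik(theta):
    b0, b1, g0, g1 = theta
    mu = b0 + b1 * x                # layer 2: additive model for the location
    sigma = np.exp(g0 + g1 * x)     # log link keeps the scale positive
    return -norm.logpdf(y, loc=mu, scale=sigma).sum()   # layer 1: Gaussian

fit = minimize(neg_log_lik, x0=np.zeros(4))
print(fit.x)   # approximately (1.0, 0.5, -0.2, 0.4)
```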

Seminar · Physics of Life · Recording

Energy landscapes, order and disorder, and protein sequence coevolution: From proteins to chromosome structure

Jose Onuchic
Rice University
May 13, 2021

In vivo, the human genome folds into a characteristic ensemble of 3D structures. The mechanism driving the folding process remains unknown. A theoretical model for chromatin (the minimal chromatin model) is presented that explains the folding of interphase chromosomes and generates chromosome conformations consistent with experimental data. The energy landscape of the model was derived using the maximum entropy principle and relies on two experimentally derived inputs: a classification of loci into chromatin types and a catalog of the positions of chromatin loops. This model was generalized by utilizing a neural network to infer these chromatin types from the epigenetic marks present at a locus, as assayed by ChIP-Seq. The ensemble of structures resulting from these simulations agrees with Hi-C data and exhibits unknotted chromosomes, phase separation of chromatin types, and a tendency for open chromatin to lie at the periphery of chromosome territories. Although this theoretical methodology was trained on one cell line, the human GM12878 lymphoblastoid line, it has successfully predicted the structural ensembles of multiple human cell lines. Finally, going beyond Hi-C, our predicted structures are also consistent with microscopy measurements. Analysis of structures from both simulation and microscopy reveals that short segments of chromatin make two-state transitions between closed conformations and open dumbbell conformations. For gene-active segments, the vast majority of genes appear clustered in the linker region of the chromatin segment, allowing us to speculate about possible mechanisms by which chromatin structure and dynamics may be involved in controlling gene expression. * Supported by the NSF

Seminar · Neuroscience · Recording

How Brain Circuits Function in Health and Disease: Understanding Brain-wide Current Flow

Kanaka Rajan
Icahn School of Medicine at Mount Sinai, New York
Apr 13, 2021

Dr. Rajan and her lab design neural network models based on experimental data and reverse-engineer them to figure out how brain circuits function in health and disease. They recently developed a powerful framework for tracing neural paths across multiple brain regions, called Current-Based Decomposition (CURBD). This new approach enables the computation of the excitatory and inhibitory input currents that drive a given neuron, aiding in the discovery of how entire populations of neurons behave across multiple interacting brain regions. Dr. Rajan's team has applied this method to studying the neural underpinnings of behavior. As an example, when CURBD was applied to data gathered from an animal model often used to study depression- and anxiety-like behaviors (i.e., learned helplessness), it revealed the underlying biology driving adaptive and maladaptive behaviors in the face of stress. With this framework, Dr. Rajan's team probes for mechanisms at work across brain regions that support both healthy and disease states, and identifies key divergences across multiple nervous systems, including zebrafish, mice, non-human primates, and humans.

Seminar · Neuroscience · Recording

Untangling brain wide current flow using neural network models

Kanaka Rajan
Mount Sinai
Mar 11, 2021

The Rajan lab designs neural network models constrained by experimental data and reverse-engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore, different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this powerful framework based on data-constrained multi-region RNNs and CURrent-Based Decomposition (CURBD), we ask whether there are conserved multi-region mechanisms across different species, and identify key divergences.
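
The decomposition step at the heart of this framework can be sketched compactly: once a trained RNN's interaction matrix is in hand, the current into each region is split by source region. The code below is an illustrative reconstruction from the description above, not the published CURBD implementation; all names, sizes, and the random stand-ins for trained quantities are hypothetical.

```python
# Sketch of source de-mixing from a model-derived directed interaction matrix.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_time = 60, 100
regions = {"A": slice(0, 20), "B": slice(20, 40), "C": slice(40, 60)}
J = rng.normal(scale=1 / np.sqrt(n_units),
               size=(n_units, n_units))     # stand-in for a trained matrix
rates = rng.normal(size=(n_units, n_time))  # stand-in for model unit activity

# Current into region tgt that originates in region src, at every time step.
currents = {
    (tgt, src): J[regions[tgt], regions[src]] @ rates[regions[src]]
    for tgt in regions for src in regions
}
print(currents[("A", "B")].shape)   # (20, 100): B's contribution to each unit in A
```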

Seminar · Neuroscience

Distinct synaptic plasticity mechanisms determine the diversity of cortical responses during behavior

Michele Insanally
University of Pittsburgh School of Medicine
Jan 14, 2021

Spike trains recorded from the cortex of behaving animals can be complex, highly variable from trial to trial, and therefore challenging to interpret. A fraction of cells exhibit trial-averaged responses with obvious task-related features such as pure tone frequency tuning in auditory cortex. However, a substantial number of cells (including cells in primary sensory cortex) do not appear to fire in a task-related manner and are often neglected from analysis. We recently used a novel single-trial, spike-timing-based analysis to show that both classically responsive and non-classically responsive cortical neurons contain significant information about sensory stimuli and behavioral decisions, suggesting that non-classically responsive cells may play an underappreciated role in perception and behavior. We now expand this investigation to explore the synaptic origins and potential contribution of these cells to network function. To do so, we trained a novel spiking recurrent neural network model that incorporates spike-timing-dependent plasticity (STDP) mechanisms to perform the same task as behaving animals. By leveraging excitatory and inhibitory plasticity rules, this model reproduces neurons with response profiles that are consistent with previously published experimental data, including classically responsive and non-classically responsive neurons. We found that both classically responsive and non-classically responsive neurons encode behavioral variables in their spike times, as seen in vivo. Interestingly, plasticity in excitatory-to-excitatory synapses increased the proportion of non-classically responsive neurons and may play a significant role in determining response profiles. Finally, our model also makes predictions about the synaptic origins of classically and non-classically responsive neurons, which we can compare to in vivo whole-cell recordings taken from the auditory cortex of behaving animals. This approach successfully recapitulates heterogeneous response profiles measured from behaving animals and provides a powerful lens for exploring large-scale neuronal dynamics and the plasticity rules that shape them.
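
As a point of reference for the plasticity mechanisms mentioned above, a pair-based STDP window can be written in a few lines. This is the generic textbook rule, not the specific excitatory and inhibitory rules of the abstract's model; parameter values are illustrative.

```python
# Generic pair-based STDP window (illustrative parameters).
import numpy as np

A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
tau_plus, tau_minus = 20.0, 20.0   # exponential window time constants (ms)

def stdp_dw(dt):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms."""
    if dt > 0:   # pre fires before post: potentiate (LTP)
        return A_plus * np.exp(-dt / tau_plus)
    else:        # post fires before (or with) pre: depress (LTD)
        return -A_minus * np.exp(dt / tau_minus)

print(stdp_dw(5.0), stdp_dw(-5.0))   # LTP for causal pairs, LTD for acausal
```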

Seminar · Neuroscience

Feedforward and feedback computations in the olfactory bulb and olfactory cortex: computational model and experimental data

Zhaoping Li
Max Planck Institute of Biological Cybernetics, Tübingen, Germany
Dec 6, 2020

Seminar · Neuroscience · Recording

Inferring brain-wide current flow using data-constrained neural network models

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Nov 17, 2020

The Rajan lab designs neural network models constrained by experimental data and reverse-engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore, different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURrent-Based Decomposition (CURBD), we can ask if there are conserved multi-region mechanisms across different species, as well as identify key divergences.

Seminar · Physics of Life · Recording

Biology is “messy”. So how can we take theory in biology seriously and plot predictions and experiments on the same axes?

Workshop, Multiple Speakers
Emory University
Sep 23, 2020

Many of us came to biology from physics. There we have been trained on such classic examples as muon g-2, where experimental data and theoretical predictions agree to many significant digits. Now, working in biology, we routinely hear that it is messy, most details matter, and that the best hope for theory in biology is to be semi-qualitative, predict general trends, and to forgo the hope of ever making quantitative predictions with the precision that we are used to in physics. Colloquially, we should be satisfied even if data and models differ so much that plotting them on the same plot makes little sense. However, some of us won’t be satisfied by this. So can we take theory in biology seriously and predict experimental outcomes within (small) error bars? Certainly, we won’t be able to predict everything, but this is never required, even in traditional physics. But we should be able to choose some features of data that are nontrivial and interesting, and focus on them. We also should be able to find different classes of models --- maybe even null models --- that match biology better, and thus allow for a better agreement. It is even possible that large-dimensional datasets of modern high-throughput experiments, and the ensuing “more is different” statistical physics style models will make quantitative, precise theory easier. To explore the role of quantitative theory in biology, in this workshop, eight speakers will address some of the following general questions based on their specific work in different corners of biology: Which features of biological data are predictable? Which types of models are best suited to making quantitative predictions in different fields? Should theorists interested in quantitative predictions focus on different questions, not typically asked by biologists? Do large, multidimensional datasets make theories (and which theories?) more or less likely to succeed? This will be an unapologetically theoretical physics workshop — we won’t focus on a specific subfield of biology, but will explore these questions across the fields, hoping that the underlying theoretical frameworks will help us find the missing connections.

Seminar · Neuroscience

Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus

Andrea Navas-Olive
Instituto Cajal CSIC
Jul 18, 2020

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are disclosing a so-far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understanding how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological, and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical, and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active, and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic, and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up-/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g., phase-locking preference) that matches non-fitted experimental data.
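
The genetic-algorithm loop described here follows a standard score-select-mutate pattern. The sketch below replaces the NEURON simulation and feature extraction with a toy error function, so it only illustrates the optimization scaffold; the target values and GA parameters are hypothetical.

```python
# Toy GA for fitting conductance parameters (score - select - mutate).
import numpy as np

rng = np.random.default_rng(4)
target = np.array([120.0, 36.0, 0.3])   # made-up "true" conductances

def fitness(params):
    # In the real pipeline this would run a NEURON simulation and compare
    # extracted electrophysiological features to experimental values.
    return -np.sum((params - target) ** 2)

pop = rng.uniform(0, 150, size=(50, 3))              # initial random population
for gen in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the 10 fittest
    children = parents[rng.integers(0, 10, size=40)]  # resample parent sets
    children = children + rng.normal(scale=2.0, size=children.shape)  # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)   # converges toward the target parameter set
```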

Seminar · Physics of Life · Recording

Can machine learning learn new physics, or do we need to put it in by hand?

Workshop, Multiple Speakers
Emory University
Jun 3, 2020

There has been a surge of publications on using machine learning (ML) on experimental data from physical systems: social, biological, statistical, and quantum. However, can these methods discover fundamentally new physics? It can be that their biggest impact is in better data preprocessing, while inferring new physics is unrealistic without specifically adapting the learning machine to find what we are looking for — that is, without the “intuition” — and hence without having a good a priori guess about what we will find. Is machine learning a useful tool for physics discovery? Which minimal knowledge should we endow the machines with to make them useful in such tasks? How do we do this? Eight speakers below will anchor the workshop, exploring these questions in contexts of diverse systems (from quantum to biological), and from general theoretical advances to specific applications. Each speaker will deliver a 10 min talk with another 10 minutes set aside for moderated questions/discussion. We expect the talks to be broad, bold, and provocative, discussing where the field is heading, and what is needed to get us there.

Seminar · Neuroscience · Recording

Recurrent network models of adaptive and maladaptive learning

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Apr 7, 2020

During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort-expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: the early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.

ePoster

Augmented Gaussian process variational autoencoders for multi-modal experimental data

Rabia Gondur, Evan Schaffer, Mikio Aoi, Stephen Keeley

COSYNE 2023

ePoster

Neuronal travelling waves explain rotational dynamics in experimental datasets and modelling

Yekaterina Kuzmina, Dmitrii Kriukov, Mikhail Lebedev

FENS Forum 2024