Stochastic

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with stochastic across World Wide.
32 curated items: 17 ePosters · 14 Seminars · 1 Position
Updated 1 day ago

32 results
Position · Computational Neuroscience

Dr Cian O'Donnell

Ulster University
Derry, Northern Ireland, UK
Dec 5, 2025

Build stochastic computational/mathematical models of gene expression in dendritic neurons to understand whether and how synapses can store information stably. Perform data analysis on longitudinal in vivo and in vitro imaging and electron-microscopy data of synapse dynamics for comparison with the models.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 28, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models.

Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus-independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response-time patterns with an optimal balance between learning capacity and performance.

Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic generalized linear models (PsyTrack) and GLM hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning.

Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, which aim to integrate multimodal data and computational principles into cohesive “digital brains.”
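The policy-compression idea described above can be illustrated with a toy Blahut-Arimoto-style iteration. This is a sketch, not Gershman's implementation: the Q-values, state distribution, and trade-off parameter `beta` are invented for illustration.

```python
import math

# Toy policy compression: the policy pi(a|s) trades expected reward
# against the mutual information I(S;A) between states and actions.
# All numbers below are made-up demo values.
Q = [[1.0, 0.0],          # Q[state][action] for a 2-state, 2-action task
     [0.0, 1.0]]
p_s = [0.5, 0.5]          # state distribution
beta = 2.0                # reward-vs-information trade-off parameter

p_a = [0.5, 0.5]          # marginal action distribution
pi = []
for _ in range(200):
    # pi(a|s) is proportional to p(a) * exp(beta * Q(s, a))
    pi = []
    for s in range(2):
        w = [p_a[a] * math.exp(beta * Q[s][a]) for a in range(2)]
        z = sum(w)
        pi.append([x / z for x in w])
    # update the marginal: p(a) = sum_s p(s) * pi(a|s)
    p_a = [sum(p_s[s] * pi[s][a] for s in range(2)) for a in range(2)]

print(pi)
```

With low `beta` the policy collapses toward the state-independent marginal (perseveration and "default" actions); with high `beta` it tracks the Q-values closely.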

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Reese Kneeland
Jan 4, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: Brain-optimized inference improves reconstructions of fMRI brain activity

Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.

Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab.

Paper link: https://arxiv.org/abs/2312.07705
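The sample-select-narrow loop described in the abstract can be caricatured in a few lines. In this sketch, "images" are 2-D vectors, the diffusion model is replaced by Gaussian jitter around the seed, and the encoding model is an arbitrary linear map; only the structure of the refinement algorithm is kept.

```python
import math
import random

random.seed(0)

def encode(img):
    # hypothetical stand-in encoding model: image -> "brain activity"
    return [img[0] + 0.5 * img[1], img[1] - 0.2 * img[0]]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

target = encode([1.0, -2.0])   # measured activity for a hidden image
seed = [0.0, 0.0]              # seed reconstruction from a base decoder
width = 1.0                    # stochasticity of the image distribution

while width > 0.01:            # stopping criterion on distribution "width"
    # sample a small library around the seed and keep the candidate whose
    # predicted activity best matches the measured activity
    library = [[s + random.gauss(0, width) for s in seed] for _ in range(64)]
    seed = min(library, key=lambda img: dist(encode(img), target))
    width *= 0.7               # reduce stochasticity each iteration

print(seed)  # should approach the hidden image [1.0, -2.0]
```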

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 16, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
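As a toy version of the general setup (not the corrected PEMs themselves, which add terms for confounding and reverse causation), one can simulate a small linear stochastic process and score candidate edges by lagged correlation:

```python
import random

random.seed(1)

# Ground-truth coupling: node 0 drives node 1 in a 3-node VAR(1) process
A = [[0.5, 0.0, 0.0],
     [0.4, 0.5, 0.0],
     [0.0, 0.0, 0.5]]

T, n = 20000, 3
x = [0.0] * n
series = []
for _ in range(T):
    x = [sum(A[i][j] * x[j] for j in range(n)) + random.gauss(0, 1)
         for i in range(n)]
    series.append(x)

def lagged_corr(i, j, lag=1):
    # correlation between x_j(t) and x_i(t + lag): a crude score for j -> i
    xi = [s[i] for s in series[lag:]]
    xj = [s[j] for s in series[:-lag]]
    mi, mj = sum(xi) / len(xi), sum(xj) / len(xj)
    cov = sum((a - mi) * (b - mj) for a, b in zip(xi, xj))
    si = sum((a - mi) ** 2 for a in xi) ** 0.5
    sj = sum((b - mj) ** 2 for b in xj) ** 0.5
    return cov / (si * sj)

print(lagged_corr(1, 0))  # true edge 0 -> 1: clearly positive
print(lagged_corr(2, 0))  # absent edge 0 -> 2: near zero
```

The appeal noted in the abstract is that such scores come directly from lagged correlation matrices, with no model fitting per edge.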

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
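The point about noise geometry can be made concrete with a toy example: two simulated "subjects" whose trial-averaged responses are identical, but whose noise correlations differ, are indistinguishable to a purely mean-based comparison. Everything below is illustrative, not the talk's actual method.

```python
import random

random.seed(4)

def trials(corr, n=4000):
    # two-neuron responses: same mean (0, 0), noise correlation `corr`
    out = []
    for _ in range(n):
        a = random.gauss(0, 1)
        b = corr * a + (1 - corr ** 2) ** 0.5 * random.gauss(0, 1)
        out.append((a, b))
    return out

def mean(tr):
    n = len(tr)
    return (sum(a for a, _ in tr) / n, sum(b for _, b in tr) / n)

def noise_corr(tr):
    ma, mb = mean(tr)
    cov = sum((a - ma) * (b - mb) for a, b in tr) / len(tr)
    sa = (sum((a - ma) ** 2 for a, _ in tr) / len(tr)) ** 0.5
    sb = (sum((b - mb) ** 2 for _, b in tr) / len(tr)) ** 0.5
    return cov / (sa * sb)

s1, s2 = trials(0.0), trials(0.8)
print(mean(s1), mean(s2))              # means agree up to sampling noise
print(noise_corr(s1), noise_corr(s2))  # noise geometry clearly differs
```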

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. 
This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.
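The linearizing effect of heterogeneity on input-output functions mentioned above can be demonstrated with a toy population of sigmoidal units; the threshold spreads below are arbitrary demo values, not the measured ones.

```python
import math

# Averaging sigmoidal single-cell activation curves whose thresholds
# are dispersed flattens (linearizes) the population input-output gain.
def pop_response(inp, spread, n=201):
    # mean sigmoidal response over n cells with thresholds tiled +/- spread
    thetas = [-spread + 2.0 * spread * k / (n - 1) for k in range(n)]
    return sum(1.0 / (1.0 + math.exp(-(inp - t))) for t in thetas) / n

def max_gain(spread):
    # steepest finite-difference slope of the population curve
    xs = [0.1 * i - 4.0 for i in range(81)]
    ys = [pop_response(x, spread) for x in xs]
    return max((y2 - y1) / 0.1 for y1, y2 in zip(ys, ys[1:]))

print(max_gain(0.1), max_gain(3.0))  # homogeneous vs. heterogeneous gain
```

The homogeneous population inherits the steep single-cell gain, while the heterogeneous one responds more gradually, which is one intuition for why heterogeneity can buffer sudden transitions into synchrony.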

Seminar · Neuroscience · Recording

On the implicit bias of SGD in deep learning

Amir Globerson
Tel Aviv University
Oct 19, 2021

Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data. I will describe our recent work showing that gradient-descent optimization indeed leads to 'simpler' models, where simplicity is captured by lower weight norm and in some cases clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning boolean functions and learning convolutional pattern detection architectures.
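A minimal instance of this implicit bias, with invented toy numbers: gradient descent from zero initialization on an overparameterized linear problem converges to the minimum-norm interpolating solution.

```python
# One data point, two parameters: infinitely many (w1, w2) satisfy
# w1*x1 + w2*x2 = y, but gradient descent from zero stays in the span
# of the data and so picks out the minimum-norm interpolant.
x = [3.0, 4.0]   # training input
y = 10.0         # target
w = [0.0, 0.0]
lr = 0.01
for _ in range(2000):
    err = w[0] * x[0] + w[1] * x[1] - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]

# minimum-norm solution is w* = y * x / ||x||^2 = (1.2, 1.6)
print(w)
```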

Seminar · Physics of Life

Molecular mechanisms to overcome stochasticity in endosomal networks

Senthil Arumugam
Monash University
Jan 21, 2021
Seminar · Physics of Life · Recording

Building a synthetic cell: Understanding the clock design and function

Qiong Yang
U Michigan - Ann Arbor
Oct 19, 2020

Clock networks containing the same central architectures may vary drastically in their potential to oscillate, raising the question of what controls robustness, one of the essential functions of an oscillator. We computationally generated an atlas of oscillators and found that, while core topologies are critical for oscillations, local structures substantially modulate the degree of robustness. Strikingly, two local structures, incoherent and coherent inputs, can modify a core topology to promote and attenuate its robustness, additively. These findings underscore the importance of local modifications to the performance of the whole network and may explain why auxiliary structures not required for oscillations are evolutionarily conserved. We also extend this computational framework to search for hidden network motifs underlying other clock functions, such as tunability, which relates to a clock's capability to adjust its timing to external cues.

Experimentally, we developed an artificial cell system in water-in-oil microemulsions, within which we reconstitute mitotic cell cycles that can perform self-sustained oscillations for 30 to 40 cycles over multiple days. The oscillation profiles, such as period, amplitude, and shape, can be quantitatively varied with the concentrations of clock regulators, energy levels, droplet sizes, and circuit design. Such innate flexibility makes the system well suited to studying the clock functions of tunability and stochasticity at the single-cell level. Combined with a pressure-driven multi-channel tuning setup and long-term time-lapse fluorescence microscopy, this system enables high-throughput exploration of a multi-dimensional continuous parameter space and single-cell analysis of clock dynamics and functions. We integrate this experimental platform with mathematical modeling to elucidate the topology-function relation of biological clocks.
With FRET and optogenetics, we also investigate spatiotemporal cell-cycle dynamics in both homogeneous and heterogeneous microenvironments by reconstructing subcellular compartments.

Seminar · Physics of Life

Measuring transcription at a single gene copy reveals hidden drivers of bacterial individuality

Ido Golding
University of Illinois Urbana-Champaign, IL, USA
Jul 28, 2020

Single-cell measurements of mRNA copy numbers inform our understanding of stochastic gene expression, but these measurements coarse-grain over the individual copies of the gene, where transcription and its regulation take place stochastically. We recently combined single-molecule quantification of mRNA and gene loci to measure the transcriptional activity of an endogenous gene in individual Escherichia coli bacteria. When interpreted using a theoretical model for mRNA dynamics, the single-cell data allowed us to obtain the probabilistic rates of promoter switching, transcription initiation and elongation, mRNA release and degradation. Unexpectedly, we found that gene activity can be strongly coupled to the transcriptional state of another copy of the same gene present in the cell, and to the event of gene replication during the bacterial cell cycle. These gene-copy and cell-cycle correlations demonstrate the limits of mapping whole-cell mRNA numbers to the underlying stochastic gene activity and highlight the contribution of previously hidden variables to the observed population heterogeneity.
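The kind of model referred to here, a two-state ("telegraph") promoter with stochastic transcription and mRNA degradation, can be simulated with a Gillespie-style algorithm. The rate constants below are illustrative demo values, not the inferred E. coli rates.

```python
import random

random.seed(2)

k_on, k_off = 0.05, 0.15   # promoter OFF->ON and ON->OFF switching rates
k_tx, k_deg = 2.0, 0.1     # transcription (while ON) and mRNA degradation

def simulate(t_end=500.0):
    # exact stochastic simulation of one cell's promoter state and mRNA count
    t, on, m = 0.0, 0, 0
    while t < t_end:
        rates = [k_on * (1 - on), k_off * on, k_tx * on, k_deg * m]
        total = sum(rates)
        t += random.expovariate(total)          # time to next reaction
        r = random.uniform(0, total)            # which reaction fires
        if r < rates[0]:
            on = 1
        elif r < rates[0] + rates[1]:
            on = 0
        elif r < rates[0] + rates[1] + rates[2]:
            m += 1
        else:
            m -= 1
    return m

samples = [simulate() for _ in range(200)]
m_mean = sum(samples) / len(samples)
m_var = sum((s - m_mean) ** 2 for s in samples) / len(samples)
print(m_mean, m_var / m_mean)  # a Fano factor > 1 signals bursty expression
```

Slow promoter switching relative to degradation produces the super-Poissonian mRNA variability that such single-cell measurements are used to dissect.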

Seminar · Neuroscience · Recording

Neural Stem Cell Lineage Progression in Developing Cerebral Cortex

Simon Hippenmeyer
Institute of Science and Technology Austria
Jun 14, 2020

The concerted production of the correct number and diversity of neurons and glia by neural stem cells is essential for intricate neural circuit assembly. In the developing cerebral cortex, radial glia progenitors (RGPs) are responsible for producing all neocortical neurons and certain glia lineages. We recently performed a clonal analysis by exploiting the genetic MADM (Mosaic Analysis with Double Markers) technology and discovered a high degree of non-stochasticity, and thus a deterministic mode, of RGP behaviour. However, the cellular and molecular mechanisms controlling RGP lineage progression remain unknown. To this end, we use quantitative MADM-based genetic paradigms at single-cell resolution to define the cell-autonomous functions of signaling pathways controlling cortical neuron/glia genesis and postnatal stem cell behaviour in health and disease. Here I will outline our current understanding of the mechanistic framework instructing neural stem cell lineage progression and discuss new data on the role of genomic imprinting, an epigenetic phenomenon, in cortical development.

Seminar · Neuroscience · Recording

Inferring Brain Rhythm Circuitry and Burstiness

Andre Longtin
University of Ottawa
Apr 14, 2020

Bursts in gamma and other frequency ranges are thought to contribute to the efficiency of working memory or communication tasks. Abnormalities in bursts have also been associated with motor and psychiatric disorders. The determinants of burst generation are not known, specifically how single-cell and connectivity parameters influence burst statistics and the corresponding brain states. We first present a generic mathematical model for burst generation in an excitatory-inhibitory (EI) network with self-couplings. The resulting equations for the stochastic phase and envelope of the rhythm’s fluctuations are shown to depend on only two meta-parameters that combine all the network parameters. They allow us to identify different regimes of amplitude excursions, and to highlight the supportive role that network finite-size effects and noisy inputs to the EI network can have. We discuss how burst attributes, such as their durations and peak frequency content, depend on the network parameters.

In practice, the problem above presupposes the challenge of fitting such E-I spiking networks to single-neuron or population data. Thus, the second part of the talk will discuss a novel method for fitting mesoscale dynamics using single-neuron data along with a low-dimensional, and hence statistically tractable, single-neuron model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous ‘pools’ of neurons, and modelling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used either to optimize parameters by gradient ascent on the log-likelihood, or to perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using an E-I network of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived. We show that both single-neuron and connectivity parameters can be adequately recovered from simulated data.
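The Bayesian-inference step can be illustrated with a stripped-down example: a Metropolis-Hastings chain recovering a single firing-rate parameter from simulated Poisson spike counts. The real method infers many single-neuron and connectivity parameters from mesoscopic dynamics; this toy keeps only the MCMC machinery, and all numbers are made up.

```python
import math
import random

random.seed(3)

def poisson(lam):
    # Knuth's algorithm for Poisson-distributed samples
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

true_rate = 5.0
counts = [poisson(true_rate) for _ in range(100)]   # simulated spike counts

def loglik(rate):
    if rate <= 0:
        return float("-inf")
    # Poisson log-likelihood up to a rate-independent constant
    return sum(c * math.log(rate) - rate for c in counts)

rate, ll, chain = 1.0, loglik(1.0), []
for _ in range(5000):
    prop = rate + random.gauss(0, 0.3)      # symmetric random-walk proposal
    prop_ll = loglik(prop)
    if math.log(random.random()) < prop_ll - ll:
        rate, ll = prop, prop_ll            # accept
    chain.append(rate)

posterior_mean = sum(chain[1000:]) / len(chain[1000:])
print(posterior_mean)  # sits near the generating rate
```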

ePoster

Bayesian inference and arousal modulation in spatial perception to mitigate stochasticity and volatility

David Meijer, Fabian Dorok, Roberto Barumerli, Burcu Bayram, Michelle Spierings, Ulrich Pomper, Robert Baumgartner

Bernstein Conference 2024

ePoster

Inferring stochastic low-rank recurrent neural networks from neural data

Matthijs Pals, A Sağtekin, Felix Pei, Manuel Gloeckler, Jakob Macke

Bernstein Conference 2024

ePoster

Mean Field Analysis of a Stochastic STDP model

Pascal Helson, Etienne Tanré, Romain Veltz

Bernstein Conference 2024

ePoster

Stochastic Process Model derived indicators of overfitting for deep architectures: Applicability to small sample recalibration of sEMG decoders

Stephan Lehmler, Muhammad Saif-Ur-Rehman, Ioannis Iossifidis

Bernstein Conference 2024

ePoster

Stochastic phase reduction for brain oscillations

Pierre Houzelstein, Boris Gutkin, Alberto Pérez-Cervera

Bernstein Conference 2024

ePoster

Bayesian Inference in High-Dimensional Time-Series with the Orthogonal Stochastic Linear Mixing Model

COSYNE 2022

ePoster

Fine-tuning hierarchical circuits through learned stochastic co-modulation

COSYNE 2022

ePoster

Reduced stochastic models reveal the mechanisms underlying drifting cell assemblies

COSYNE 2022

ePoster

Dynamics-neutral growth of stochastic stabilized supralinear networks

Puria Radmard, Wayne WM Soo, Máté Lengyel

COSYNE 2023

ePoster

Representational dissimilarity metric spaces for stochastic neural networks

Jingyang Zhou, Lyndon Duong, Josue Nassar, Jules Berman, Jeroen Olieslagers, Alex Williams

COSYNE 2023

ePoster

Spiking and bursting activity in stochastic recurrent networks

Audrey Teasley & Gabriel Ocker

COSYNE 2023

ePoster

Contextual Feature Selection with Conditional Stochastic Gates

Ram Dyuthi Sristi, Ofir Lindenbaum, Shira Lifshitz, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty

COSYNE 2025

ePoster

Inferring stochastic low-rank recurrent neural networks from neural data

Matthijs Pals, A Erdem Sagtekin, Felix Pei, Manuel Gloeckler, Florian Mormann, Stefanie Liebe, Jakob Macke

COSYNE 2025

ePoster

Understanding stochastic decision-making in competitive multi-agent environments

Joanna Aloor, Oliver Gauld, Joseph Warren, Matthew Mower, Olga Mavromati, Chunyu A. Duan

COSYNE 2025

ePoster

Stochastic model for the optimal fusion of social and sensory information in transparent interactions

Selma Kouaiche, Fred Wolf, Matthias Haering

FENS Forum 2024