Causal Inference

Topic spotlight · Neuro

Discover seminars, jobs, and research tagged with causal inference across Neuro.
9 curated items · 9 seminars
Updated about 2 years ago

Latest · 9 results
Seminar · Neuroscience · Recording

Multisensory perception, learning, and memory

Ladan Shams
UCLA
Dec 7, 2023

Note the later start time!

Seminar · Neuroscience · Recording

Visual-vestibular cue comparison for perception of environmental stationarity

Paul MacNeilage
University of Nevada, Reno
Oct 26, 2023

Note the later time!

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 18, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
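
The abstract describes the pipeline only at a high level, so the exact PEM formulas cannot be reconstructed from it. The sketch below, with hypothetical choices throughout, only illustrates that general pipeline: simulate a linear stochastic process (a VAR(1) model) on a known network, compute a lagged correlation matrix, and rank candidate directed edges by a naive lagged-correlation score standing in for a PEM.

# A minimal sketch of lagged-correlation-based network inference, assuming a
# VAR(1) process x_{t+1} = A x_t + noise. The PEMs from the talk are not
# reproduced here; the score below simply ranks directed edges j -> i by the
# lagged correlation corr(x_j(t), x_i(t+1)).
import numpy as np

rng = np.random.default_rng(0)
n, T = 10, 5000

# Ground-truth coupling matrix A (sparse, stable): A[i, j] != 0 means j -> i.
A = 0.3 * (rng.random((n, n)) < 0.15) * rng.standard_normal((n, n))
np.fill_diagonal(A, 0.6)                                    # slow mean-reversion
A /= max(1.0, 1.1 * np.max(np.abs(np.linalg.eigvals(A))))   # enforce stability

# Simulate the linear stochastic process.
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + rng.standard_normal(n)

def lagged_corr_matrix(x, lag=1):
    """Entry (i, j) is corr(x_j(t), x_i(t + lag))."""
    x0, x1 = x[:-lag], x[lag:]
    x0 = (x0 - x0.mean(0)) / x0.std(0)
    x1 = (x1 - x1.mean(0)) / x1.std(0)
    return (x1.T @ x0) / len(x0)

C1 = lagged_corr_matrix(x, lag=1)
scores = np.abs(C1)                  # naive pairwise edge scores (not the talk's PEMs)
np.fill_diagonal(scores, 0)

# Compare the top-scoring edges with the ground truth (off-diagonal entries).
truth = (np.abs(A) > 0) & ~np.eye(n, dtype=bool)
k = truth.sum()
top = np.dstack(np.unravel_index(np.argsort(scores, axis=None)[::-1][:k], scores.shape))[0]
hits = sum(truth[i, j] for i, j in top)
print(f"{hits}/{k} true edges recovered by the naive lagged-correlation score")

The corrections for confounding and for reverse causation mentioned in the abstract would replace this naive score; they are not reconstructed here.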

Seminar · Neuroscience · Recording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 7, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects, etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
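
The core mechanism described here, one sensory stream serving as the supervisory signal for a prediction made from another, can be illustrated with a deliberately simplified sketch. This is not the compartmentalized-neuron model from the talk, and all parameters below are hypothetical: a "visual" estimate of a latent variable trains, via a purely local delta rule, a linear readout of a "self-motion" population code; after learning, the readout alone tracks the variable when the visual signal is removed (the "dark" condition).

# A hypothetical minimal sketch of one sensory stream supervising another.
# A noisy "visual" estimate y of a latent variable s teaches a linear readout
# of a "self-motion" population code x, using only a local delta rule; after
# learning, the readout recovers s without the visual teacher.
import numpy as np

rng = np.random.default_rng(1)
n_cells, T, eta = 50, 20000, 0.02
centers = np.linspace(-1, 1, n_cells)          # tuning-curve centers of stream x

def population_code(s):
    """Gaussian tuning curves plus noise: the 'self-motion' stream."""
    return np.exp(-(s - centers) ** 2 / 0.02) + 0.05 * rng.standard_normal(n_cells)

w = np.zeros(n_cells)
for _ in range(T):
    s = rng.uniform(-1, 1)                      # latent variable (e.g., heading)
    x = population_code(s)                      # stream 1: self-motion code
    y = s + 0.05 * rng.standard_normal()        # stream 2: noisy visual estimate
    w += eta * (y - w @ x) * x                  # local, self-supervised delta rule

# "In the dark": predict s from the self-motion stream alone.
s_test = rng.uniform(-1, 1, 500)
err = np.array([w @ population_code(s) - s for s in s_test])
print("RMS error without the visual teacher:", np.sqrt(np.mean(err ** 2)))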

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain

Guangyao Qi
Institute of Neuroscience, Chinese Academy of Sciences
Dec 2, 2021

Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
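
The trial-by-trial computation sketched here, judging from the spatial disparity between visual and proprioceptive signals how likely a common source is, and carrying that belief forward across trials, can be written down compactly. The sketch below is not the model from the talk, and all noise parameters are hypothetical; it only illustrates how the current disparity and history combine in the estimate.

# A hypothetical sketch (not the model from the talk) of trial-by-trial causal
# inference from visuo-proprioceptive disparity: each trial's posterior
# probability of a common source becomes, after mild smoothing toward 0.5, the
# prior on the next trial, so current evidence and history are combined.
import numpy as np

rng = np.random.default_rng(2)
sigma_common, sigma_separate = 3.0, 15.0   # expected disparity spread (deg), hypothetical
p_common = 0.5                             # initial prior on a common source
alpha = 0.8                                # how strongly history carries over

def normal_pdf(d, sigma):
    return np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Simulated disparities: a block of small-disparity trials, then large ones.
disparities = np.concatenate([rng.normal(0, 3.0, 15), rng.normal(0, 20.0, 15)])

for trial, d in enumerate(disparities, 1):
    like_c = normal_pdf(d, sigma_common)      # disparity expected if one source
    like_s = normal_pdf(d, sigma_separate)    # disparity expected if two sources
    posterior = like_c * p_common / (like_c * p_common + like_s * (1 - p_common))
    p_common = alpha * posterior + (1 - alpha) * 0.5   # history feeds the next prior
    print(f"trial {trial:2d}  disparity {d:6.1f}  P(common source) = {posterior:.2f}")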

Seminar · Neuroscience · Recording

Conflict in Multisensory Perception

Salvador Soto-Faraco
Universitat Pompeu Fabra
Nov 11, 2021

Multisensory perception is often studied through the effects of inter-sensory conflict, such as in the McGurk effect, the Ventriloquist illusion, and the Rubber Hand Illusion. Moreover, Bayesian approaches to cue fusion and causal inference overwhelmingly draw on cross-modal conflict to measure and to model multisensory perception. Given the prevalence of conflict, it is remarkable that accounts of multisensory perception have so far neglected the theory of conflict monitoring and cognitive control, established about twenty years ago. I hope to make a case for the role of conflict monitoring and resolution during multisensory perception. To this end, I will present EEG and fMRI data showing that cross-modal conflict in speech, resulting in either integration or segregation, triggers neural mechanisms of conflict detection and resolution. I will also present data supporting a role of these mechanisms during perceptual conflict in general, using Binocular Rivalry, surrealistic imagery, and cinema. Based on this preliminary evidence, I will argue that it is worth considering the potential role of conflict in multisensory perception and its incorporation in a causal inference framework. Finally, I will raise some potential problems associated with this proposal.

Seminar · Neuroscience · Recording

The neural dynamics of causal inference across the cortical hierarchy

Uta Noppeney
Donders Institute for Brain, Cognition and Behaviour
May 27, 2021

Seminar · Neuroscience · Recording

How multisensory perception is shaped by causal inference and serial effects

Christoph Kayser
Bielefeld University
Apr 22, 2021

Seminar · Neuroscience

Multisensory Perception: Behaviour, Computations and Neural Mechanisms

Uta Noppeney
Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
Jan 18, 2021

Our senses are constantly bombarded with a myriad of diverse signals. Transforming this sensory cacophony into a coherent percept of our environment relies on solving two computational challenges: First, we need to solve the causal inference problem - deciding whether signals come from a common cause and thus should be integrated, or come from different sources and be treated independently. Second, when there is a common cause, we should integrate signals across the senses weighted in proportion to their sensory reliabilities. I discuss recent research at the behavioural, computational and neural systems level that investigates how the brain addresses these two computational challenges in multisensory perception.
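
Both computational challenges named here have a standard Bayesian formalization in the causal-inference literature on cue combination (e.g. Kording et al., 2007). The sketch below, with hypothetical cue values, noise levels, and prior, computes the posterior probability that two cues share a common cause, and the reliability-weighted fused estimate appropriate when they do; it illustrates the general model class, not the specific models discussed in the talk.

# A worked sketch in the style of standard Bayesian causal-inference models of
# cue combination. All numbers below are hypothetical illustration values.
import numpy as np

def causal_inference(x_a, x_v, sigma_a, sigma_v, sigma_prior=10.0, p_common=0.5):
    """Posterior probability that cues x_a and x_v share a common cause,
    plus the reliability-weighted fused estimate."""
    # Likelihood under a common cause (C = 1): both cues are noisy observations
    # of one source s, with a zero-mean Gaussian prior on s (marginalized out).
    var_sum = sigma_a**2 * sigma_v**2 + sigma_prior**2 * (sigma_a**2 + sigma_v**2)
    like_c1 = (np.exp(-0.5 * ((x_a - x_v) ** 2 * sigma_prior**2
                              + x_a**2 * sigma_v**2
                              + x_v**2 * sigma_a**2) / var_sum)
               / (2 * np.pi * np.sqrt(var_sum)))
    # Likelihood under independent causes (C = 2): each cue has its own source.
    like_c2 = (np.exp(-0.5 * x_a**2 / (sigma_a**2 + sigma_prior**2))
               / np.sqrt(2 * np.pi * (sigma_a**2 + sigma_prior**2))
               * np.exp(-0.5 * x_v**2 / (sigma_v**2 + sigma_prior**2))
               / np.sqrt(2 * np.pi * (sigma_v**2 + sigma_prior**2)))
    post_common = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Challenge 2: reliability-weighted fusion, weights proportional to 1/variance
    # (the prior is ignored here for simplicity).
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    s_fused = w_a * x_a + (1 - w_a) * x_v
    return post_common, s_fused

print(causal_inference(x_a=2.0, x_v=3.0, sigma_a=2.0, sigma_v=1.0))   # small conflict: likely common cause
print(causal_inference(x_a=-8.0, x_v=8.0, sigma_a=2.0, sigma_v=1.0))  # large conflict: likely separate causes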
