Authors & Affiliations
Sacha Sokoloski, Ruben Coen-Cagli
Abstract
Understanding the neural code depends on capturing latent computational features within the highly complex dynamics of neural activity. While many features of neural dynamics are relevant to a given computation, others are not. In contrast with existing methods that describe how latent dynamics depend on experimental variables, we present a model designed to factor out variables that are irrelevant to the latent dynamics. We propose the fluctuating hidden Markov model (FHMM), which simultaneously learns an HMM that captures latent dynamics and a dynamic baseline firing rate that captures other drivers of the observed activity. We leverage the theory of exponential families to derive a nearly exact implementation of expectation maximization for FHMMs, and to increase model flexibility beyond Poisson spike counts.
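To make the model structure concrete, below is a minimal sketch of one way an FHMM's generative process and E-step could look. The factorization of the Poisson rate into a time-varying baseline times a state-dependent gain, and all parameter values, are illustrative assumptions rather than the paper's exact formulation; the recursions shown are the standard scaled forward-backward equations, applied with time-varying emission likelihoods.

```python
# Hedged sketch of an FHMM-style model: Poisson counts whose rate is a
# shared time-varying baseline b_t times a state-dependent gain g_k.
# The baseline/gain factorization and all numbers are assumptions for
# illustration, not the paper's exact parameterization.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)

T, K = 200, 2                                         # time bins, hidden states
baseline = 5.0 + 4.0 * np.exp(-np.arange(T) / 20.0)   # hypothetical onset transient
gains = np.array([0.5, 2.0])                          # hypothetical state gains
pi = np.array([0.5, 0.5])                             # initial state distribution
A = np.array([[0.95, 0.05],                           # state transition matrix
              [0.05, 0.95]])

# --- simulate latent states and Poisson spike counts ---
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(K, p=A[states[t - 1]])
counts = rng.poisson(baseline * gains[states])

# --- E-step: scaled forward-backward with time-varying likelihoods ---
# The fluctuating baseline enters only through the per-bin emission
# likelihoods, so the recursions are the usual HMM ones.
lik = poisson.pmf(counts[:, None], baseline[:, None] * gains[None, :])  # (T, K)
alpha = np.zeros((T, K)); beta = np.ones((T, K)); c = np.zeros(T)
alpha[0] = pi * lik[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * lik[t]
    c[t] = alpha[t].sum(); alpha[t] /= c[t]
for t in range(T - 2, -1, -1):
    beta[t] = (A @ (lik[t + 1] * beta[t + 1])) / c[t + 1]
posterior = alpha * beta                               # P(state_t | counts)
print("log-likelihood:", np.log(c).sum())
print("decoding accuracy:", np.mean(posterior.argmax(1) == states))
```

Because the baseline only rescales the per-bin emission likelihoods, the state posteriors remain tractable with standard recursions; the exponential-family machinery the abstract describes would additionally supply closed-form M-step updates.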
As a demonstration, we apply FHMMs to study how the activity of single neurons in macaque primary visual cortex (V1) represents bimodal probability distributions. We first identified images (termed ambiguous images) that elicit bimodal response distributions across trials in target neurons. We then studied the within-trial dynamics of responses to ambiguous images, and tested whether the neural dynamics stay within a single mode during each individual stimulus presentation (slow sampling) or visit both modes within a presentation (fast sampling). We show that FHMMs learn to correctly factor out onset transients in neural activity and capture the latent sampling dynamics between the two modes.
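One hedged way to operationalize the slow/fast distinction from a fitted FHMM is to count within-trial switches in the decoded state path; the function below is a hypothetical illustration, not the analysis pipeline from the study.

```python
# Hypothetical slow/fast labeling rule: a trial is "slow" if the most
# probable state path never switches modes within the presentation,
# "fast" otherwise. The rule and names are assumptions for illustration.
import numpy as np

def classify_trial(posterior):
    """Label a trial from its (T, K) per-bin FHMM state posteriors."""
    hard_path = posterior.argmax(axis=1)           # most probable state per bin
    n_switches = int(np.sum(np.diff(hard_path) != 0))
    return ("slow" if n_switches == 0 else "fast"), n_switches

# Toy posterior: a trial that visits both modes within the presentation.
toy = np.vstack([np.tile([0.9, 0.1], (50, 1)), np.tile([0.1, 0.9], (50, 1))])
label, n = classify_trial(toy)
print(label, n)   # -> fast 1
```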
We are currently applying FHMMs to systematically assess the role and relative frequency of slow and fast sampling in V1 populations. More generally, our model can be extended to whole neural populations and continuous latent states, and should prove useful to researchers trying to disentangle relevant computational features from task-irrelevant variables or incidental neural dynamics.