ePoster

Computing mutual-information rates by maximum-entropy-inspired models

Tobias Kühn, Gabriel Mahuas, Ulisse Ferrari
Bernstein Conference 2024
Goethe University, Frankfurt, Germany

Abstract

Information in sensory neurons is conveyed by spiking activity that varies in time. This is quantified by the mutual-information rate (MIR), defined as $\mathrm{MIR} := \lim_{\Delta t \rightarrow \infty} \frac{\mathcal{I}(\Delta t)}{\Delta t}$, where $\mathcal{I}(\Delta t)$ is the mutual information between the activity of a spiking neuron and a stimulus over a time window $\Delta t$. To compute it from spiking data, the spike train is typically discretized with a bin size $\mathrm{dt}$. Because $\Delta t/\mathrm{dt}$ is typically much larger than $1$, many time bins with correlated activity have to be taken into account. This makes the computation of the MIR challenging: it requires estimating entropies, which is difficult for correlated, poorly sampled data, where estimates are prone to biases.

In our work, we introduce a method inspired by maximum-entropy modeling to compute the MIR. We test it on artificial data from a generalized linear model mimicking the activity of retinal ganglion cells and demonstrate that it closely approximates the exact result in the well-sampled regime. Importantly, our method introduces only a limited bias even for sample sizes attainable in experiments, about 80 to 100. Applying it to data from ex-vivo electrophysiological recordings of rat retinal ganglion cells stimulated by black-and-white checkerboards, we obtain information rates of about 5 to 12 nats/s for every neuron (cf. panel d).

On the technical side, we investigate the dependence of the MIR on the bin size $\mathrm{dt}$, whose choice should correspond to the noise auto-correlations (cf. panel b), which, in turn, are related to the neuron's refractory period. As visible in panel d, the MIR estimate saturates somewhat above the typical value of the refractory period (compare in particular the inset, where we plot the MIR as a function of the number of time bins $\Delta t/\mathrm{dt}$). Within our framework, it is possible to choose relatively large time bins without losing information because, contrary to conventional maximum-entropy approaches in neuroscience, we take integer spike counts into account instead of only binary ones. In particular, we believe that this feature will make it easy to extend our technique to populations of neurons, for which more time scales have to be accommodated.
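
As background on the estimation problem (a standard information-theoretic identity, not specific to this work): the mutual information in a window splits into a total entropy and a noise (stimulus-conditioned) entropy, so the MIR is a difference of entropy rates,

$$\mathcal{I}(\Delta t) = H\!\left(R_{\Delta t}\right) - H\!\left(R_{\Delta t} \mid S\right), \qquad \mathrm{MIR} = \lim_{\Delta t \rightarrow \infty} \frac{H\!\left(R_{\Delta t}\right) - H\!\left(R_{\Delta t} \mid S\right)}{\Delta t},$$

where $R_{\Delta t}$ denotes the binned response in the window and $S$ the stimulus. Both entropies have to be estimated from data, which is where the sampling bias mentioned above enters.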
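
To make the estimation problem concrete, the following minimal Python sketch implements the classic plug-in ("direct") estimator over integer spike counts. This is not the maximum-entropy-inspired method of the poster; it is precisely the kind of naive estimator whose sampling bias that method is designed to avoid. All function names, array shapes, and parameters are illustrative assumptions.

```python
# Minimal sketch (assumption: NOT the maximum-entropy method of this poster,
# but the classic plug-in "direct" estimator it improves upon).
import numpy as np

def bin_spikes(spike_times, t_max, dt):
    """Discretize a spike train into integer counts per bin of width dt."""
    n_bins = int(np.ceil(t_max / dt))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, t_max))
    return counts  # integer counts; NOT clipped to {0, 1}

def plugin_entropy(words):
    """Plug-in entropy (nats) of a sample of response words (one per row)."""
    _, freq = np.unique(words, axis=0, return_counts=True)
    p = freq / freq.sum()
    return -np.sum(p * np.log(p))

def direct_mir(counts, dt, word_len):
    """Naive MIR estimate (nats/s) from repeated stimulus presentations.

    counts: (n_repeats, n_bins) integer array, one row per repeat of the
    same stimulus; words of word_len bins play the role of Delta t.
    """
    n_rep, n_bins = counts.shape
    n_words = n_bins // word_len
    words = counts[:, :n_words * word_len].reshape(n_rep, n_words, word_len)
    # Total entropy: pool response words over repeats and stimulus times.
    h_total = plugin_entropy(words.reshape(-1, word_len))
    # Noise entropy: variability across repeats at a fixed stimulus time.
    h_noise = np.mean([plugin_entropy(words[:, t]) for t in range(n_words)])
    # With few repeats this difference is strongly biased, which is the
    # problem a maximum-entropy-inspired model is meant to address.
    return (h_total - h_noise) / (word_len * dt)
```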
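
A hypothetical usage on synthetic data, scanning the number of time bins $\Delta t/\mathrm{dt}$ in the spirit of the inset of panel d (purely illustrative; in practice one would use recorded repeats of the checkerboard stimulus):

```python
# Synthetic check (illustrative only): Poisson counts with a shared,
# time-varying rate standing in for the repeated stimulus.
rng = np.random.default_rng(0)
dt = 0.01  # assumed 10 ms bins
rate = 20.0 + 15.0 * np.sin(np.linspace(0.0, 20.0, 1000))  # spikes/s
counts = rng.poisson(rate * dt, size=(90, 1000))  # ~90 repeats

for word_len in (1, 2, 4, 8):
    print(word_len, direct_mir(counts, dt, word_len))
```

Short words neglect temporal correlations, while long words leave the plug-in entropies undersampled and hence biased; the trade-off between the two is exactly what the method presented here targets.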

Unique ID: bernstein-24/computing-mutual-information-rates-611ed9d1