Authors & Affiliations
Junji Ito, Jonas Oberste-Frielinghaus, Anno Kurth, Sonja Grün
Abstract
Synfire chains have been postulated as a model for the stable propagation of synchronous spikes through cortical networks [1,2,3]. Synfire-chain-like activity can also be found in spiking artificial neural networks trained for a classification task [4]. Understanding the mechanism that generates such activity would provide better insight into the functioning of both real brains and artificial neural networks. Here we consider an analytically tractable network of binary units to study the conditions for the emergence of synchronous spikes and their stable propagation.
Our network is organized in layers of $N$ threshold units, each taking a state $x\in\{0,1\}$ depending on its input $I$ as $x=H(I-\theta)$ ($H$: Heaviside step function, $\theta$: threshold). The connections from layer $l$ to layer $l+1$ are represented by a matrix $W^l$, whose elements are Gaussian IID random variables with mean 0 and variance $1/N$. The states of all units are initially set to 0. Then a fraction $P^1$ of the layer 1 units are activated (their states set to 1) at different times. We interpret a unit's state change as the generation of a spike by that unit. The spikes generated in layer $l$ are propagated to layer $l+1$ through the matrix $W^l$, providing time-varying inputs that activate layer $l+1$ units and generate their spikes.
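The layer-to-layer propagation described above can be simulated directly. The following is a minimal sketch, not the authors' code: the function name `propagate` and all parameter values are illustrative assumptions, and the temporal aspect (units activated at different times) is simplified by activating all layer 1 units at a single time step.

```python
import numpy as np

def propagate(N=1000, layers=10, theta=0.1, p1=0.5, seed=0):
    """Sketch of the layered binary network: each layer's states are
    the thresholded inputs from the previous layer via Gaussian weights."""
    rng = np.random.default_rng(seed)
    # Layer 1: a fraction p1 of units set active (state 1)
    x = (rng.random(N) < p1).astype(float)
    fractions = [x.mean()]
    for _ in range(layers - 1):
        # Gaussian IID weights with mean 0 and variance 1/N
        W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
        I = W @ x                      # input to the next layer
        x = (I > theta).astype(float)  # Heaviside threshold H(I - theta)
        fractions.append(x.mean())
    return fractions
```

Tracking the fraction of active units per layer in this way gives the empirical counterpart of the mean-field quantity $p^l$ discussed below.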
Based on the formalism laid out in [5], we derive a relation between the fractions $p^l(t)$ and $p^{l+1}(t)$ of active units at time $t$ in layers $l$ and $l+1$, respectively, as $p^{l+1}(t)=\mathrm{erfc}\big(\theta/\sqrt{2p^l(t)}\big)/2$ (Eq. 1). Iteratively applying this relation results in the activity converging either to the stable fixed point $p^\infty(t)=p_s$, if $p^1(t)\geq p_u$, or to $p^\infty(t)=0$, if $p^1(t)<p_u$, where $p_u$ denotes the unstable fixed point of Eq. 1.
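The iteration of Eq. 1 can be sketched numerically. This is a minimal illustration, assuming a threshold value (here $\theta=0.3$) for which the map has a nonzero stable fixed point; the function names and the number of iterations are hypothetical choices, not part of the abstract.

```python
import math

def mean_field_map(p, theta=0.3):
    # Eq. 1: p^{l+1} = erfc(theta / sqrt(2 p^l)) / 2, with p = 0 mapped to 0
    if p <= 0.0:
        return 0.0
    return math.erfc(theta / math.sqrt(2.0 * p)) / 2.0

def iterate(p1, theta=0.3, layers=200):
    # Apply Eq. 1 repeatedly to mimic propagation through many layers
    p = p1
    for _ in range(layers):
        p = mean_field_map(p, theta)
    return p
```

For this illustrative $\theta$, numerically iterating the map shows bistability: starting fractions above the unstable fixed point (roughly $p_u \approx 0.02$–$0.03$) converge to a stable fixed point $p_s \approx 0.29$, while smaller starting fractions decay to 0.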