ePoster

Maximizing memory capacity in heterogeneous networks

Kaining Zhang, Gaia Tavoni
Bernstein Conference 2024 (2024)
Goethe University, Frankfurt, Germany

Abstract

A central question in systems neuroscience is what features of neural networks determine their memory capacity and whether these features are optimized in the brain. Here, we provide an analytical estimate of the memory capacity for a general class of network models. Our derivation extends previous theoretical results [1], which assumed homogeneous connectivity and coding levels (i.e., cell activation probabilities in the memory patterns), to models with arbitrary network architectures (i.e., different constraints on the arrangement of connections between cells) and heterogeneous coding levels across the neurons.

For perceptrons, simple feedforward networks that learn to associate N-dimensional input patterns with binary output labels (or classes), we extend previous results for the capacity [1,2,3] ($\alpha_p := C_p/N$, the maximal number of pattern-label pairs that can be stored per input unit) to the general case of heterogeneous distributions of input states and arbitrary distributions of binary output labels. The capacity depends solely on the expected value of the output labels, $m^{out}$, and is independent of the input-state distribution:

$$\alpha_p \left( m^{out} \right) = 2 \left\{ 1 - m^{out} \int_{-M}^{M} Dy \right\}^{-1}, \qquad \frac{1+m^{out}}{2} \int_{M}^{\infty} Dy \, (y-M) = \frac{1-m^{out}}{2} \int_{-M}^{\infty} Dy \, (y+M)$$

where $Dy$ denotes the Gaussian measure and the threshold $M$ is fixed by the self-consistent condition on the right.

We then derive the capacity of Hopfield models with arbitrary network architectures and heterogeneous coding levels. Each unit in a Hopfield network can be mapped to the output of one perceptron, and the units projecting to it can be mapped to that perceptron's inputs. This mapping defines N independent perceptrons and allows us to compute the capacity of a Hopfield model as the smallest of the capacities of the corresponding N perceptrons:

$$\alpha_{H} = \frac{1}{N} \min_{i} \big( C_p(m_i, N_i) \big) = \min_{i} \left( \frac{N_i}{N} \, \alpha_p(2p_i - 1) \right)$$

where $N_i$ is the in-degree and $p_i$ the coding level of unit $i$. We use this analytical result to estimate the capacity of brain-inspired networks. We show that (a) in a two-layer model simulating the hippocampal-cortical system, coding levels and network weights that are consistent with the formation of memory "indices" in the hippocampus, as posited by the hippocampal index theory, support joint maximization of memory and information capacity (Fig. B-D); and (b) in heterogeneous networks, the in-degree (number of input connections) and the coding level of each neuron must be correlated to maximize memory capacity (Fig. E-G).
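As an illustration (not the authors' code), the sketch below evaluates the two capacity formulas numerically. It assumes $Dy$ is the standard Gaussian measure, as in Gardner-type calculations, so the partial moments reduce to closed forms in the normal pdf and cdf; the function names `perceptron_capacity` and `hopfield_capacity` are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm


def capacity_constraint(M, m_out):
    """Residual of the self-consistent equation that fixes the threshold M.

    With Dy the standard Gaussian measure, the partial moments have the
    closed forms:
        int_M^inf  Dy (y - M) = pdf(M) - M * sf(M)
        int_{-M}^inf Dy (y + M) = pdf(M) + M * cdf(M)
    """
    phi = norm.pdf(M)
    lhs = 0.5 * (1.0 + m_out) * (phi - M * norm.sf(M))
    rhs = 0.5 * (1.0 - m_out) * (phi + M * norm.cdf(M))
    return lhs - rhs


def perceptron_capacity(m_out):
    """alpha_p(m_out) = 2 / (1 - m_out * int_{-M}^{M} Dy)."""
    if abs(m_out) < 1e-12:
        return 2.0  # unbiased labels recover the classical result alpha_p = 2
    M = brentq(capacity_constraint, -40.0, 40.0, args=(m_out,))
    return 2.0 / (1.0 - m_out * (norm.cdf(M) - norm.cdf(-M)))


def hopfield_capacity(coding_levels, in_degrees, N):
    """alpha_H = min_i (N_i / N) * alpha_p(2 p_i - 1):
    the network capacity is set by its weakest effective perceptron."""
    return min(n_i / N * perceptron_capacity(2.0 * p_i - 1.0)
               for p_i, n_i in zip(coding_levels, in_degrees))


# Hypothetical example: a 1000-unit network in which sparser units
# (lower coding level p_i) receive more input connections N_i.
p = [0.05, 0.20, 0.50]        # heterogeneous coding levels p_i
N_in = [900, 700, 500]        # heterogeneous in-degrees N_i
print(perceptron_capacity(0.0))            # -> 2.0 (Cover's bound)
print(hopfield_capacity(p, N_in, N=1000))
```

Because the Gaussian partial moments are available in closed form, the only numerical step is a one-dimensional root find for $M$ at each output bias.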
