Authors & Affiliations
Anoop Praturu, Tatyana Sharpee
Abstract
Information theory provides a powerful theoretical framework for a quantitative understanding of the structure and function of neural circuits. Despite this, the application of these ideas to neuroscience and machine learning has been hampered by the fact that computing and maximizing mutual information is analytically tractable only for simple coding problems, under the assumption of small network sizes and low-dimensional stimuli. Using tools from the theory of spin glasses, we present a novel approach to estimating the entropy of large neural populations in response to arbitrary stimuli. We derive a variational lower bound on the entropy that saturates when entropy is maximized and can be optimized via simple gradient-descent techniques. We show that this bound can be used to implicitly maximize the mutual information between the inputs and outputs of a network, and we apply these formulas to derive maximally informative neural representations of data. We show that maximally informative representations can reconstruct stimuli via simple population vectors with far greater accuracy and interpretability than back-propagation neural networks.
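The claim that maximizing output entropy implicitly maximizes mutual information rests on the standard decomposition of mutual information, which the abstract leaves implicit. Writing $X$ for the stimulus and $R$ for the population response,

$$ I(X; R) \;=\; H(R) \;-\; H(R \mid X), $$

so whenever the noise entropy $H(R \mid X)$ is fixed (or does not depend on the encoding parameters), raising a lower bound on the response entropy $H(R)$ also raises the mutual information.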
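The abstract does not spell out the variational bound itself, but the optimization loop it describes can be sketched generically. Below is a minimal illustration of gradient ascent on a stand-in entropy surrogate, here the sum of pairwise log-distances between representation points, which spreads the representation out over a bounded response space; this surrogate and all names in the snippet are hypothetical placeholders, not the bound derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_entropy_grad(R):
    """Gradient of sum_{i<j} log ||r_i - r_j|| with respect to each point.
    Maximizing pairwise log-distances pushes points apart, a simple
    stand-in for maximizing a lower bound on the population entropy."""
    diff = R[:, None, :] - R[None, :, :]        # (N, N, d) pairwise differences
    sq = (diff ** 2).sum(-1)                    # squared pairwise distances
    np.fill_diagonal(sq, np.inf)                # exclude self-pairs from the sum
    return (diff / sq[..., None]).sum(axis=1)   # sum_j (r_i - r_j) / ||r_i - r_j||^2

# Optimize N representation points in a bounded d-dimensional response space.
N, d, lr = 200, 2, 1e-3
R = rng.uniform(-1, 1, size=(N, d))
for _ in range(2000):
    R += lr * surrogate_entropy_grad(R)         # gradient ascent on the surrogate
    R = np.clip(R, -1.0, 1.0)                   # keep responses in a bounded range
```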
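The abstract reports that stimuli can be reconstructed from the learned representation with simple population vectors. The classic population-vector readout estimates the stimulus as the response-weighted average of each neuron's preferred stimulus; a minimal sketch follows, in which the response values and `preferred_stimuli` are illustrative stand-ins rather than quantities taken from the paper.

```python
import numpy as np

def population_vector_decode(responses, preferred_stimuli):
    """Classic population-vector readout: estimate the stimulus as the
    response-weighted average of each neuron's preferred stimulus."""
    responses = np.asarray(responses, dtype=float)
    weights = responses / responses.sum()       # normalize responses to weights
    return weights @ np.asarray(preferred_stimuli, dtype=float)

# Example: 3 neurons tuned to points on a 2-D stimulus plane.
prefs = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(population_vector_decode([0.2, 0.7, 0.1], prefs))  # -> [0.1, 0.7]
```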