What should a neuron aim for? Designing local objective functions based on information theory
Andreas Schneider and 6 co-authors
Bernstein Conference 2024
Goethe University, Frankfurt, Germany
Abstract
In modern deep neural networks, the intricate learning dynamics of individual neurons often remain obscured due to global optimization methods. This contrasts with biological neural networks, which achieve robustness and efficiency through self-organized, local learning processes using limited global information. To bridge this gap, a local learning framework inspired by the self-organizing principles of biological neurons has been proposed [1]. This framework allows for the investigation of how local information-processing goals can drive the emergence of complex network-level functions.
The framework is based on Partial Information Decomposition (PID), an information-theoretic approach that decomposes the information that multiple source variables carry about a target into unique, redundant, and synergistic contributions [2]. By parameterizing the local objective function with PID [3], neurons can learn to integrate information selectively from multiple input classes (feedforward, feedback, and lateral), depending on whether their contributions are unique, redundant, or synergistic.
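For the two-source case, the decomposition is constrained by the standard consistency equations of the Williams–Beer framework. The sketch below uses this common notation for illustration; it is not necessarily the exact measure employed in [2].

```latex
% Two-source PID (Williams & Beer convention): the joint mutual
% information about a target T splits into four atoms, and each
% single-source mutual information into two of them.
\begin{align}
  I(T; S_1, S_2) &= I_{\mathrm{unq}}(T; S_1 \setminus S_2)
                  + I_{\mathrm{unq}}(T; S_2 \setminus S_1) \nonumber \\
                 &\quad + I_{\mathrm{red}}(T; S_1, S_2)
                  + I_{\mathrm{syn}}(T; S_1, S_2) \\
  I(T; S_1) &= I_{\mathrm{unq}}(T; S_1 \setminus S_2) + I_{\mathrm{red}}(T; S_1, S_2) \\
  I(T; S_2) &= I_{\mathrm{unq}}(T; S_2 \setminus S_1) + I_{\mathrm{red}}(T; S_1, S_2)
\end{align}
```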
This methodology enables the formation of versatile 'infomorphic' neural networks, in which each neuron's learning objective is defined as a weighted sum of PID terms. We show that the weights parameterizing this goal function can be derived through intuitive reasoning or optimized numerically for specific tasks, thereby providing insight into the local information processing required for a global objective. Infomorphic networks constructed this way can handle supervised, unsupervised, and memory-based learning tasks, and in larger networks achieve performance on par with traditional backpropagation methods, while maintaining neuron-level interpretability.
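As a minimal sketch of this parameterization (all names and numbers below are illustrative assumptions, not the authors' implementation), the goal function reduces to a weighted sum over a neuron's estimated PID atoms:

```python
# Hypothetical sketch: a neuron's goal function G = sum_i gamma_i * Pi_i.
# The atom estimates would come from a PID estimator as in [2]; here they
# are plain inputs, and all names are illustrative, not an actual API.

def goal_function(pid_atoms: dict[str, float], gamma: dict[str, float]) -> float:
    """Weighted sum over PID atoms.

    pid_atoms: estimated information atoms for one neuron, e.g.
               {"unq_ff": ..., "unq_ctx": ..., "red": ..., "syn": ...}
    gamma:     weights for each atom, chosen by hand or optimized
               numerically for a given task.
    """
    return sum(gamma[name] * pid_atoms[name] for name in pid_atoms)

# Example: a goal that rewards information unique to the feedforward input
# and synergy between feedforward and context, while ignoring redundancy.
atoms = {"unq_ff": 0.30, "unq_ctx": 0.05, "red": 0.12, "syn": 0.08}
weights = {"unq_ff": 1.0, "unq_ctx": 0.0, "red": 0.0, "syn": 1.0}
print(goal_function(atoms, weights))  # ~0.38
```

Hand-picking the weights encodes an intuitive hypothesis about what a neuron should extract from its inputs; optimizing them numerically instead searches over such hypotheses for a given task.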
Our findings highlight the potential of infomorphic networks as a robust platform for advancing the understanding of both artificial and biological neural systems. By illuminating local learning mechanisms, this work establishes a principled information-theoretic foundation for developing more interpretable and adaptable neural networks, and ultimately deepens our understanding of the relationship between global objectives and local learning strategies.