Authors & Affiliations
Chanwoo Chun, Christian Grussler, Daniel Lee
Abstract
Numerous biologically inspired models of neurons have been developed over the years, from the Hodgkin-Huxley equations to the McCulloch-Pitts model, spanning a wide range of computational complexity. In particular, Rosenblatt's perceptron has endured as a standard neuron model in machine learning. In this work, we show how perceptron models of differing complexity can be derived by viewing a biological neuron as a bang-off optimal controller.
We consider a single neuron as a control agent with two states, firing and quiescent, embedded in a closed-loop system consisting of networks of other neurons and a physical external environment (Moore 2024). We derive the optimal decision boundary of a binary controller that regulates a discounted quadratic cost of a linear discrete-time system. In this framework, we prove that the Rosenblatt perceptron model, with its linear decision boundary, emerges as the optimal neuron model when the discount factor is small.
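The small-discount limit can be made concrete with a short sketch. The derivation below is an illustration, not the paper's actual proof: when the discount factor vanishes, the optimal binary action simply minimizes the one-step cost of the successor state, and for a linear system with quadratic cost that comparison is linear in the state, i.e., a perceptron-style rule. All matrices and function names here are made-up placeholders.

```python
import numpy as np

def myopic_binary_control(x, A, B, Q):
    """Greedy (small-discount-limit) binary control of x_{t+1} = A x + B u.

    With one-step quadratic cost x'Qx, choosing u in {0, 1} reduces to
    comparing the costs of the two successor states. The difference
    J(u=1) - J(u=0) expands to 2 B'QA x + B'QB, which is linear in x:
    fire (u=1) iff w'x + b < 0, a perceptron-style decision rule.
    """
    w = 2.0 * B.T @ Q @ A          # linear weights on the state
    b = float(B.T @ Q @ B)         # constant offset (bias term)
    return 1 if float(w @ x) + b < 0 else 0

# Illustrative 2-D system (made-up numbers, for demonstration only).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([-1.0, 0.5])
Q = np.eye(2)
u = myopic_binary_control(np.array([2.0, -1.0]), A, B, Q)
```

The linear weights `w` and bias `b` play exactly the role of a perceptron's parameters; in the paper's framework they are induced by the system matrices rather than learned.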
We then consider a more general control problem with a nonlinear system and a large discount factor. Using Pontryagin's Maximum Principle (PMP), we show that the optimal controller can be approximated by a deep neural network with a nonlinear readout, and we propose an objective for optimizing this network. Our framework offers a viable alternative to policy-gradient methods, especially when a model of the system dynamics is available. By learning the optimal policy directly, we avoid estimating a value function, which can be difficult.
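The idea of optimizing a policy directly against a known model, without a value function, can be sketched as follows. This is a hedged toy example, not the paper's proposed objective: it relaxes the binary action to a sigmoid policy, rolls it through known linear dynamics, and descends the discounted rollout cost by finite differences. All names, matrices, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rollout_cost(theta, x0, A, B, Q, gamma=0.9, T=20):
    """Discounted quadratic cost of rolling out the smooth policy
    u_t = sigmoid(theta' x_t) through known dynamics x_{t+1} = A x + B u."""
    x, cost = x0.copy(), 0.0
    for t in range(T):
        u = 1.0 / (1.0 + np.exp(-theta @ x))   # smooth relaxation of the binary action
        cost += (gamma ** t) * float(x @ Q @ x)
        x = A @ x + B * u
    return cost

def train(theta, x0, A, B, Q, lr=0.01, steps=100, eps=1e-5):
    """Direct policy search: finite-difference gradient descent on the
    rollout cost -- no value-function estimate is ever formed."""
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            e = np.zeros_like(theta)
            e[i] = eps
            grad[i] = (rollout_cost(theta + e, x0, A, B, Q)
                       - rollout_cost(theta - e, x0, A, B, Q)) / (2 * eps)
        theta = theta - lr * grad
    return theta

# Illustrative setup (made-up numbers).
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([-1.0, 0.5])
Q = np.eye(2)
x0 = np.array([2.0, -1.0])
theta0 = np.zeros(2)
theta = train(theta0, x0, A, B, Q)
```

In practice one would replace the single linear-sigmoid layer with a deep network and the finite-difference gradient with automatic differentiation, but the structure of the objective, a differentiable rollout through a known model, is the same.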