ePoster

Principled credit assignment with strong feedback through Deep Feedback Control

Alexander Meulemans, Matilde Tristany Farinha, María R. Cervera, João Sacramento, Benjamin F. Grewe
COSYNE 2022 (2022)
Lisbon, Portugal
Presented: Mar 17, 2022


Abstract

The success of deep learning sparked interest in whether the brain learns its hierarchical representations in a similar way. However, current biologically plausible models for hierarchical credit assignment (HCA), i.e., determining how to adjust synapses across hierarchies, assume that the effect of feedback on forward processing is negligible. This weak-feedback assumption is problematic in biologically realistic noisy environments and is at odds with experimental evidence showing that the effect of feedback on neural activity can be strong. To overcome this limitation, we revisit the recent Deep Feedback Control (DFC) method. In DFC, a feedback controller nudges a deep neural network to match a desired output target and uses the resulting control signal for HCA through a learning rule that is local in space and time. Unlike the original DFC, we now let feedback strongly influence neural activity by taking the supervised label as the target instead of a nudged output, thereby invalidating the original theoretical foundation of DFC. Using the implicit function theorem, we show that DFC with strong feedback gradually reduces the amount of feedback required from the controller, yielding a novel view of learning that can be intuitively understood as help minimization. Further, we show that overcoming the need for help is equivalent to achieving zero output loss under a traditional training objective. We complement our theory with standard computer-vision experiments, showing performance competitive with less biologically plausible methods such as backpropagation and standard DFC. To summarize, by drawing inspiration from how feedback affects neural activity in the brain, and by combining dynamical systems and optimization theory, we offer a new theoretical framework for investigating how the brain can learn hierarchical representations through principled optimization.
This initiates a novel line of research that can lead to testable experimental predictions, such as the presence of feedback that substantially changes neural activity and whose magnitude decreases with learning.
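The mechanism described above (a feedback controller that nudges network activity until the output matches the supervised label, a learning rule local in space, and a feedback "help" signal whose magnitude shrinks with learning) can be illustrated with a toy simulation. Everything below is an illustrative sketch, not the authors' exact formulation: the layer sizes, the leaky integral controller, the choice of the forward-weight transpose as the feedback pathway, and the particular local update rules are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network; all sizes and rates are illustrative choices.
n_in, n_hid, n_out = 5, 8, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))

def run_controller(x, target, steps=300, dt=0.1, k=1.0, leak=0.1):
    """Leaky integral controller: the feedback signal u nudges the hidden
    layer until the network output matches the supervised target."""
    u = np.zeros(n_out)
    Q = W2.T  # assumed feedback pathway: transpose of the forward weights
    h = np.tanh(W1 @ x)
    for _ in range(steps):
        h = np.tanh(W1 @ x + Q @ u)         # feedback strongly shapes activity
        y = W2 @ h
        u += dt * (k * (target - y) - leak * u)  # integrate the output error
    return h, u

x = rng.normal(size=n_in)
target = np.array([0.5, -0.3])

lr = 0.05
help_norms = []  # magnitude of controller feedback ("help") per epoch
for _ in range(200):
    h_ff = np.tanh(W1 @ x)              # purely feedforward activity
    h, u = run_controller(x, target)    # controlled (nudged) activity
    # Local updates: pull the feedforward activity toward the controlled
    # state, and fit the output weights to the target from that state.
    W1 += lr * np.outer(h - h_ff, x)
    W2 += lr * np.outer(target - W2 @ h, h)
    help_norms.append(np.linalg.norm(u))
```

In this sketch, `help_norms` records how much feedback the controller must inject at each epoch; as the forward weights learn to produce the target on their own, that magnitude falls, mirroring the help-minimization view and the experimental prediction that feedback magnitude decreases with learning.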

Unique ID: cosyne-22/principled-credit-assignment-with-strong-a72d9c32