Authors & Affiliations
Klara Kaleb, Claudia Clopath
Abstract
Learning in cortical multi-layer networks is thought to occur through spatially and temporally local synaptic plasticity, aided by ubiquitous feedback connections from upper layers. However, known biologically plausible learning rules for such networks have yet to match the high performance of machine learning algorithms such as backpropagation [1] on the difficult tasks that the brain is capable of solving. It has been shown that the 'weight transport' problem, one of the obstacles to implementing backpropagation-like algorithms in the brain, can be overcome with initially random feedback connections [2] undergoing local synaptic plasticity [3,4]. Nevertheless, a performance gap remains [4,5]. Here, we use an optimization framework to further explore the space of effective biologically plausible feedback learning rules. More precisely, we use gradient descent to meta-learn a parametric function that defines the feedback learning rule, building on previous work on forward learning rules [6,7,8,9]. First, we show that we can rediscover a simpler variant of a previously proposed feedback learning rule [4]. We then extend our approach to learning rules that encompass previously unexplored terms as well as increased biological realism, such as coupled forward and feedback computation. In summary, we show that meta-learning of learning rules can also be applied to the plasticity of feedback connections, and we provide insights into how multi-layer learning may be orchestrated in the brain.
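To make the setup concrete, below is a minimal sketch (not the authors' code) of meta-learning a parametric feedback plasticity rule by gradient descent through an unrolled inner training loop: forward weights are trained with a feedback-alignment-style update, the feedback matrix B evolves under a rule whose coefficients theta are the meta-parameters, and theta is optimized on the post-training task loss. The network sizes, the candidate plasticity terms (a Hebbian pre-post term, a term pulling B toward the transposed forward weights, and a decay term), the toy teacher task, and all hyperparameters are illustrative assumptions.

```python
# Hedged sketch: meta-learning coefficients of a parametric feedback
# plasticity rule via gradient descent through an unrolled inner loop.
# All names, terms, and hyperparameters below are illustrative assumptions.
import torch

torch.manual_seed(0)

n_in, n_hid, n_out = 20, 30, 5
inner_steps, meta_steps = 20, 200
lr_fwd, lr_fb, lr_meta = 0.05, 0.05, 0.01

# Meta-parameters: weights on candidate local terms in the feedback update
# dB = theta[0]*(pre*post) + theta[1]*W2^T + theta[2]*B (decay).
theta = torch.zeros(3, requires_grad=True)
meta_opt = torch.optim.Adam([theta], lr=lr_meta)

def inner_loop(theta, x, y):
    # Fresh forward weights and a random feedback matrix for each episode.
    W1 = torch.randn(n_hid, n_in) * 0.1
    W2 = torch.randn(n_out, n_hid) * 0.1
    B = torch.randn(n_hid, n_out) * 0.1
    for _ in range(inner_steps):
        h = torch.tanh(x @ W1.T)            # hidden activity
        out = h @ W2.T                      # linear readout
        err = out - y                       # output error
        # Error delivered to the hidden layer through the feedback weights B.
        delta_h = (err @ B.T) * (1 - h ** 2)
        # Local, feedback-alignment-style forward updates.
        W2 = W2 - lr_fwd * err.T @ h / x.shape[0]
        W1 = W1 - lr_fwd * delta_h.T @ x / x.shape[0]
        # Parametric feedback plasticity rule (the object being meta-learned):
        # a weighted sum of local candidate terms.
        dB = (theta[0] * (h.T @ err) / x.shape[0]
              + theta[1] * W2.T
              + theta[2] * B)
        B = B + lr_fb * dB
    # Meta-objective: task loss after the unrolled inner training.
    out = torch.tanh(x @ W1.T) @ W2.T
    return ((out - y) ** 2).mean()

# A fixed teacher network defines a toy regression task.
T1 = torch.randn(n_hid, n_in)
T2 = torch.randn(n_out, n_hid)

for step in range(meta_steps):
    x = torch.randn(64, n_in)
    y = torch.tanh(x @ T1.T) @ T2.T
    loss = inner_loop(theta, x, y)
    meta_opt.zero_grad()
    loss.backward()   # gradient flows through the unrolled inner updates into theta
    meta_opt.step()
    if step % 50 == 0:
        print(f"meta step {step:3d}  loss {loss.item():.4f}  theta {theta.data}")
```

With theta initialized to zero the feedback weights stay at their random initialization (plain feedback alignment), so any improvement found by the meta-optimizer reflects the discovered feedback rule; richer rule parameterizations or coupled forward/feedback dynamics, as mentioned in the abstract, would replace the simple three-term combination here.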