Authors & Affiliations
Carlos Stein N Brito, Daniel McNamee
Abstract
We propose a novel dual-model framework that integrates reinforcement learning (RL) and adaptive control to model cerebellar function in motor learning and adaptation. Our approach distinguishes between the roles of the basal ganglia and cerebellum, addressing limitations of current models in complex motor control tasks. We introduce a decomposition of the value function where the RL component optimizes the mean expected value while the cerebellar module minimizes deviations from optimal latent trajectories. This decomposition approximates a normative information-theoretic objective where the cerebellar module acts as a secondary policy maximizing mutual information between the RL policy and future latent states. The framework operates through a learned latent space interface between modules, analogous to thalamic circuits connecting basal ganglia and cerebellum. To enable stable controller learning, we develop a dynamic action cost mechanism that ensures more naturalistic behavior for RL models. Through simulations of complex motor tasks in physics engines, we demonstrate that our model replicates key behavioral properties of cerebellar function and dysfunction. Similar to cerebellar degeneration, ablation of the control module leads to ataxia-like symptoms when faced with time-varying perturbations, while intact models show rapid adaptation and error correction. Our model offers insights into cerebellar computation, particularly addressing the high-dimensional error signals observed in the inferior olive, and extends optimal feedback control theory to include learning and adaptation in non-stationary conditions. By bridging RL and control theory in a neuroscience context, our work advances the understanding of cerebellar computation and multi-area coordination in motor control, providing a unified perspective on cerebellar involvement in motor learning and adaptation.
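The dual-module idea in the abstract can be caricatured in a few lines: a slow RL policy drives a latent state toward a goal, while a fast corrective controller cancels deviations from the trajectory the policy expected. The sketch below is purely illustrative and not the authors' implementation; the linear dynamics, the proportional policy, the correction gain, and all names (`rl_policy`, `cerebellar_correction`, etc.) are assumptions chosen only to show why ablating the corrective module inflates error under time-varying perturbations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear latent dynamics: z_{t+1} = A z_t + a_t + w_t.
# All quantities here are illustrative, not taken from the paper.
A = np.array([[0.9, 0.1],
              [0.0, 0.95]])
target = np.array([1.0, 1.0])

def rl_policy(z):
    """Slow, reward-optimizing component: a crude proportional step toward the goal."""
    return 0.5 * (target - z)

def cerebellar_correction(z_pred, z_obs, gain=0.8):
    """Fast control module: cancel part of the deviation from the predicted trajectory."""
    return gain * (z_pred - z_obs)

def run(ablate_cerebellum, steps=50, noise=0.2):
    """Return the mean distance to the goal over a perturbed rollout."""
    z = np.zeros(2)
    errors = []
    for _ in range(steps):
        a = rl_policy(z)
        z_pred = A @ z + a                        # trajectory the RL policy expects
        z = A @ z + a + rng.normal(0.0, noise, 2)  # time-varying perturbation
        if not ablate_cerebellum:
            z = z + cerebellar_correction(z_pred, z)  # online error correction
        errors.append(np.linalg.norm(target - z))
    return float(np.mean(errors))

err_intact = run(ablate_cerebellum=False)
err_ablated = run(ablate_cerebellum=True)  # "ataxia-like": larger, erratic errors
```

Under this toy model, the intact system suppresses most of each perturbation before it compounds through the dynamics, so its mean error stays close to the unperturbed steady-state residual, while the ablated system accumulates the full perturbation at every step.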