Authors & Affiliations
Sofia Pereira da Silva, Denis Alevi, Friedrich Schuessler, Henning Sprekeler
Abstract
By causally mapping neural activity to behavior [1], Brain-Computer Interfaces (BCIs) offer a means to study the dynamics of sensorimotor learning. Here, we combine computational modeling and data analysis to study the neural learning algorithm [2] that monkeys use to adapt to a changed output mapping in a center-out reaching task. We exploit the fact that the mapping from neural space (ca. 100 dimensions) to the 2D cursor position poses an underconstrained credit assignment problem [3]: changes along a large number of output-null dimensions do not influence the behavioral output. We hypothesized that different, but equally well-performing, learning algorithms can be distinguished by the changes they generate in these output-null dimensions. We study this idea in network models for three different learning rules (gradient descent, model-based feedback alignment, and reinforcement learning) and three different network architectures that reflect distinct learning strategies (re-aiming [4], remodeling [5], recurrent dynamics). We find that different combinations of rules and architectures lead to changes in different low-dimensional subspaces of neural activity. Comparing these changes in neural activity and their subspaces with available data from BCI experiments [6, 7, 8] suggests that monkeys employ a combination of distinct strategies to learn BCI tasks.
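The key observation above is that a linear readout from ~100 neural dimensions to a 2D cursor leaves a large output-null space in which activity can change without any behavioral consequence. A minimal sketch of this idea, assuming a hypothetical linear decoder `W` (the abstract does not specify the decoder form used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_out = 100, 2              # ca. 100-d neural space, 2D cursor

# Hypothetical linear BCI decoder: cursor = W @ activity
W = rng.standard_normal((n_out, n_neurons))

# Output-null space: activity directions that W maps to zero.
# The rows of Vt beyond rank(W) form an orthonormal basis for it.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[n_out:]                # shape (98, 100)

activity = rng.standard_normal(n_neurons)
# Perturb the activity only within the output-null space.
perturbation = null_basis.T @ rng.standard_normal(n_neurons - n_out)

cursor_before = W @ activity
cursor_after = W @ (activity + perturbation)

# The cursor is unchanged: learning-induced changes confined to the
# null space are behaviorally invisible, which is why different
# learning algorithms can be equally good at the task yet leave
# distinct signatures in these dimensions.
assert np.allclose(cursor_before, cursor_after)
```

This is why output-null activity is a useful probe: any two learning rules that reach the same cursor performance are free to differ arbitrarily in the remaining ~98 dimensions, and those differences are what the modeling comparison exploits.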