Biomechanical Model
NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning and vice versa?
In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. Humans, however, are often believed to learn motor control through directed, error-based learning. In this setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal, unlike reward-based learning, in which errors are assumed to be unsigned, encoding only relative successes and failures. Here, we try to understand how these two approaches, reward- and error-based learning, relate to ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted model-based deterministic policy gradient (MB-DPG), which draws inspiration from error-based learning and qualitatively reproduces human reaching performance under mirror-reversal. Next, we show that MB-DPG outperforms the other canonical (deep) RL algorithms on single- and multi-target ballistic reaching tasks based on a biomechanical model of the human arm. Finally, we propose that MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
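The contrast between the two learning settings can be made concrete with a minimal sketch of an error-based, MB-DPG-style update: a deterministic policy produces an action, a differentiable model maps the action to an outcome, and the signed error between outcome and target is backpropagated through the model into the policy. All names here (W, M, target) and the linear forms are illustrative assumptions, not the algorithm from the talk:

```python
import numpy as np

# Hypothetical minimal sketch of an error-based (MB-DPG-style) update.
# Deterministic linear policy a = W @ s; known differentiable "arm model"
# x = M @ a mapping action to hand position; signed error e = x - target
# backpropagated through the model into the policy weights W.
# (A reward-based learner would instead see only the unsigned scalar ||e||.)

rng = np.random.default_rng(0)
s = np.array([1.0, 0.5])           # context / state
target = np.array([0.3, -0.2])     # desired hand position
M = np.array([[1.0, 0.0],          # visuomotor model; negating a column
              [0.0, 1.0]])         # would emulate mirror-reversed feedback
W = rng.normal(scale=0.1, size=(2, 2))
lr = 0.1

for _ in range(200):
    a = W @ s                      # deterministic policy
    x = M @ a                      # model-predicted outcome
    e = x - target                 # signed error
    # Chain rule for E = 0.5 * ||e||^2:  dE/dW = M^T e s^T
    W -= lr * np.outer(M.T @ e, s)

print(np.allclose(M @ (W @ s), target, atol=1e-3))  # reach converges
```

Because the update uses the model's gradient, a mirror-reversal (sign flip in M) immediately reverses the direction of learning, which is the property exploited to compare against human adaptation.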
Reverse engineering neural control of movement in Hydra
Hydra is a fascinating model organism for neuroscience. It is transparent; new genetic lines allow one to image activity in both neurons (Dupre and Yuste, 2017) and muscle cells (Szymanski and Yuste, 2019); it exhibits rich behavior; and it continually rebuilds itself. Hydra's fairly simple physical structure as a two-layered, fluid-filled hydrostat, together with the accessibility of information about neural and muscle activity, opens the possibility of a complete model of the neural control of behavior. This requires understanding the transformations that occur in the muscle cell layers as well as a biomechanical model of the body column. We show that we can use this modeling to reverse engineer how neural activity drives behavior.
A data-driven biomechanical modeling and optimization pipeline for studying salamander locomotion
COSYNE 2025