© 2025 World Wide

Open knowledge for all • Started with World Wide Neuro • A 501(c)(3) Non-Profit Organization

Seminar · Recording Available · Neuroscience

NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning and vice-versa?

Michele Garibbo

Graduate Student

University of Bristol

Schedule

Wednesday, December 1, 2021
4:30 AM America/New_York

Host: Neuromatch 4

Watch the seminar

Recording provided by the organiser.

Event Information

Domain: Neuroscience
Original Event: View source
Host: Neuromatch 4
Duration: 15 minutes

Abstract

In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. However, humans are often believed to learn motor control through directed error-based learning. Within this learning setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal. This is unlike reward-based learning, in which errors are assumed to be unsigned, encoding relative successes and failures. Here, we try to understand the relation between these two approaches, reward- and error-based learning, in the context of ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted as model-based deterministic policy gradient (MB-DPG). This RL algorithm draws inspiration from error-based learning to qualitatively reproduce human reaching performance under mirror-reversal. Next, we show MB-DPG outperforms the other canonical (deep) RL algorithms on a single- and a multi-target ballistic reaching task, based on a biomechanical model of the human arm. Finally, we propose MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
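The distinction the abstract draws can be sketched in a toy NumPy example: an error-based learner updates the motor command with the signed error and its known gradient, while a reward-based learner sees only a scalar reward and must estimate the gradient from noisy perturbations. This is a minimal illustration, not code from the talk; the one-dimensional reach model, learning rates, and trial counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 1.0  # hypothetical reach target (illustrative, not from the talk)

def reach(u):
    """Toy 'arm': hand position equals the motor command."""
    return u

# Error-based learning: the learner sees the signed error and knows its
# gradient with respect to the command (here d(reach)/du = 1).
u = 0.0
for _ in range(50):
    e = reach(u) - target
    u -= 0.25 * 2.0 * e          # exact gradient step on the squared error
error_based = abs(reach(u) - target)

# Reward-based learning: the learner sees only an unsigned scalar reward and
# must estimate the gradient by perturbing the command (REINFORCE-style).
u = 0.0
sigma = 0.1                      # exploration noise
for _ in range(200):
    eps = rng.normal(0.0, sigma)
    r = -(reach(u + eps) - target) ** 2   # scalar success signal only
    b = -(reach(u) - target) ** 2         # baseline to reduce variance
    u += 0.05 * (r - b) * eps / sigma**2  # noisy gradient estimate
reward_based = abs(reach(u) - target)
```

The error-based learner converges quickly and precisely because it exploits a signed, differentiable error; the reward-based learner needs many more, noisier trials. This gap is the contrast between the two learning settings that motivates the talk's comparison.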

Topics

ballistic arm reaches, deep reinforcement learning, error-based learning, human reaching performance, mirror-reversal, motor learning, reward-based learning, sensorimotor perturbation

About the Speaker

Michele Garibbo

Graduate Student

University of Bristol

Contact & Resources

Personal Website: research-information.bris.ac.uk/en/persons/michele-garibbo

Related Seminars

  • Knight ADRC Seminar · Washington University in St. Louis, Neurology · Jan 20, 2025
  • TBD · King's College London · Jan 20, 2025
  • Guiding Visual Attention in Dynamic Scenes · Haifa U · Jan 20, 2025