Topic · Neuro

deep reinforcement learning

13 ePosters · 4 Seminars

Latest

Seminar · Neuroscience · Recording

NMC4 Short Talk: What can deep reinforcement learning tell us about human motor learning, and vice versa?

Michele Garibbo
University of Bristol
Dec 1, 2021

In the deep reinforcement learning (RL) community, motor control problems are usually approached from a reward-based learning perspective. However, humans are often believed to learn motor control through directed, error-based learning. In this setting, the control system is assumed to have access to exact error signals and their gradients with respect to the control signal. This is unlike reward-based learning, in which errors are assumed to be unsigned, encoding relative successes and failures. Here, we try to understand the relation between these two approaches, reward- and error-based learning, in the context of ballistic arm reaches. To do so, we test canonical (deep) RL algorithms on a well-known sensorimotor perturbation in neuroscience: mirror-reversal of visual feedback during arm reaching. This test leads us to propose a potentially novel RL algorithm, denoted model-based deterministic policy gradient (MB-DPG), which draws inspiration from error-based learning to qualitatively reproduce human reaching performance under mirror-reversal. Next, we show that MB-DPG outperforms the other canonical (deep) RL algorithms on single- and multi-target ballistic reaching tasks based on a biomechanical model of the human arm. Finally, we propose that MB-DPG may provide an efficient computational framework to help explain error-based learning in neuroscience.
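
The contrast between the two learning signals can be illustrated on a toy one-dimensional reach. This is a minimal sketch, not the talk's MB-DPG implementation: the identity "plant", the linear policy, and all hyperparameters are illustrative assumptions. The error-based learner uses the signed error and a known model of the arm to compute an exact gradient, while the reward-based learner only receives an unsigned scalar success signal and must estimate the gradient by perturbing its actions (a REINFORCE-style score-function estimate).

```python
import numpy as np

rng = np.random.default_rng(0)
target = 1.0

def plant(u):
    """Toy 'arm': hand position equals the motor command (identity dynamics)."""
    return u

w_err = 0.0          # weight of the error-based learner (policy: u = w * target)
w_rew = 0.0          # weight of the reward-based learner
baseline = 0.0       # running reward baseline, for variance reduction
lr_err, lr_rew, sigma = 0.1, 0.02, 0.3

for _ in range(1000):
    # Error-based: a signed error plus a known model gives the exact gradient.
    err = plant(w_err * target) - target          # signed, directional error
    w_err -= lr_err * err * target                # d err / d w = target

    # Reward-based: only an unsigned scalar success signal is available,
    # so the gradient is estimated by perturbing the action.
    noise = rng.normal(0.0, sigma)
    reward = -abs(plant(w_rew * target + noise) - target)
    w_rew += lr_rew * (reward - baseline) * noise / sigma**2
    baseline += 0.1 * (reward - baseline)

# Both weights should approach 1 (a perfect reach), but the error-based
# update is exact while the reward-based one is a noisy estimate.
```

The same asymmetry is what makes signed perturbations such as mirror-reversal diagnostic: flipping the sign of the visual error misleads a learner that follows error gradients, but not one that only tracks scalar success.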

Seminar · Neuroscience · Recording

E-prop: A biologically inspired paradigm for learning in recurrent networks of spiking neurons

Franz Scherr
Technische Universität Graz
Aug 31, 2020

Transformative advances in deep learning, such as deep reinforcement learning, usually rely on gradient-based methods such as backpropagation through time (BPTT) as a core learning algorithm. However, BPTT is not considered biologically plausible, since it requires propagating gradients backwards in time and across neurons. Here, we propose e-prop, a novel gradient-based learning method with local and online weight update rules for recurrent neural networks, and in particular recurrent spiking neural networks (RSNNs). As a result, e-prop has the potential to bring a substantial fraction of the power of deep learning to RSNNs. In this presentation, we will motivate e-prop from recent insights in neuroscience and show how these can be combined into an algorithm for online gradient descent. The mathematical results will be supported by empirical evidence on supervised and reinforcement learning tasks. We will also discuss how limitations inherited from gradient-based learning methods, such as poor sample efficiency, can be addressed by an evolution-like optimization that enhances learning on particular task families. The emerging learning architecture can learn tasks from a single demonstration, hence enabling one-shot learning.
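
The core idea — replacing backpropagated gradients with a forward-running eligibility trace multiplied by a top-down learning signal — can be sketched on a single leaky unit. This is a minimal illustration under assumed dynamics and hyperparameters, not the RSNN formulation from the talk: for this simple linear case the local update happens to recover the exact gradient, which e-prop only approximates in general.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, lr = 0.8, 0.02          # leak factor and learning rate (assumed values)
w = 0.0                        # single input weight of a leaky unit

for _ in range(500):
    x = rng.normal(size=10)                        # input sequence
    y = np.sum(alpha ** np.arange(9, -1, -1) * x)  # target: leaky-filtered input (w* = 1)

    h, trace = 0.0, 0.0
    for x_t in x:
        h = alpha * h + w * x_t      # leaky hidden state (forward pass only)
        trace = alpha * trace + x_t  # eligibility trace dh/dw, updated online

    learning_signal = h - y              # error broadcast at readout time
    w -= lr * learning_signal * trace    # local product: no backprop through time
```

Everything the weight update needs is available locally and online: the trace runs forward alongside the state, so no gradients ever flow backwards in time.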

ePoster · Neuroscience

How Do Bees See the World? A (Normative) Deep Reinforcement Learning Model for Insect Navigation

Stephan Lochner, Andrew Straw

Bernstein Conference 2024

ePoster · Neuroscience

Competition and integration of sensory signals in a deep reinforcement learning agent

Sandhiya Vijayabaskaran, Sen Cheng

Bernstein Conference 2024

ePoster · Neuroscience

Deep Reinforcement Learning mimics Neural Strategies for Limb Movements

Muhammad Noman Almani, Shreya Saxena

COSYNE 2022

ePoster · Neuroscience

Integrating deep reinforcement learning agents with the C. elegans nervous system

Chenguang Li, Gabriel Kreiman, Sharad Ramanathan

COSYNE 2022

ePoster · Neuroscience

Time cell encoding in deep reinforcement learning agents depends on mnemonic demands

Dongyan Lin, Blake Richards

COSYNE 2022

ePoster · Neuroscience

Cortical dopamine enables deep reinforcement learning and leverages dopaminergic heterogeneity

Jack Lindsey & Ashok Litwin-Kumar

COSYNE 2023

ePoster · Neuroscience

Modelling ecological constraints on visual processing with deep reinforcement learning

Sacha Sokoloski, Jure Majnik, Thomas Euler, Philipp Berens

COSYNE 2023

ePoster · Neuroscience

Deep reinforcement learning trains agents to track odor plumes with active sensing

Lawrence Jianqiao Hu, Elliott Abe, Harsha Gurnani, Daniel Sitonic, Floris van Breugel, Edgar Y. Walker, Bing Brunton

COSYNE 2025

ePoster · Neuroscience

A GPU-Accelerated Deep Reinforcement Learning Pipeline for Simulating Animal Behavior

Charles Zhang, Elliott Abe, Jason Foat, Bing Brunton, Talmo Pereira, Bence Olveczky, Emil Warnberg

COSYNE 2025

ePoster · Neuroscience

Modeling the sensorimotor system with deep reinforcement learning

Alessandro Marin Vargas, Alberto Silvio Chiappa, Alexander Mathis

FENS Forum 2024

ePoster · Neuroscience

Deep Reinforcement Learning for anatomically accurate musculoskeletal models to investigate neural control of movement across animal species

Muhammad Noman Almani

Neuromatch 5

deep reinforcement learning coverage

17 items: 13 ePosters, 4 Seminars
Domain spotlight

Explore how deep reinforcement learning research is advancing inside Neuro.
