
Recurrent Network Model


Discover seminars, jobs, and research tagged with recurrent network model across World Wide.
12 curated items: 7 seminars, 5 ePosters
Updated about 2 years ago
Seminar · Neuroscience

A recurrent network model of planning predicts hippocampal replay and human behavior

Marcelo Mattar
NYU
Oct 19, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by, and in turn adaptively affect, prefrontal dynamics.
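The planning-by-rollouts idea in this abstract can be sketched in a few lines: an agent scores candidate actions by sampling imagined action sequences from its own policy and comparing their discounted returns. Everything below (the toy 1-D track, the hand-set policy preferences, the rollout horizon) is an illustrative assumption, not the authors' meta-reinforcement-learning agent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D track: states 0..4, reward whenever the agent sits on the goal.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4  # actions: 0 = left, 1 = right

def step(s, a):
    s_next = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == GOAL)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# A stand-in "learned" policy: a fixed preference for moving right.
logits = np.zeros((N_STATES, N_ACTIONS))
logits[:, 1] = 2.0

def rollout(s, horizon=4, gamma=0.9):
    """Imagine one action sequence sampled from the agent's own policy."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        a = rng.choice(N_ACTIONS, p=softmax(logits[s]))
        s, r = step(s, a)
        ret += discount * r
        discount *= gamma
    return ret

def plan_and_act(s, n_rollouts=100, gamma=0.9):
    """Score each first action by imagined rollouts, then pick the best."""
    values = np.zeros(N_ACTIONS)
    for a in range(N_ACTIONS):
        s1, r1 = step(s, a)
        values[a] = r1 + gamma * np.mean([rollout(s1) for _ in range(n_rollouts)])
    return int(np.argmax(values))

print(plan_and_act(0))  # rollouts should favor the rightward action
```

In the full model, whether to spend time on another rollout is itself a learned decision; here the number of rollouts is simply fixed.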

Seminar · Neuroscience

A recurrent network model of planning explains hippocampal replay and human behavior

Guillaume Hennequin
University of Cambridge, UK
May 30, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by, and in turn adaptively affect, prefrontal dynamics.

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 21, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
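The "incremental grouping" problem mentioned here is a natural fit for recurrent dynamics: activity injected at a cued location spreads step by step along lateral connections, but only over pixels that belong to the same contour, so one object gets tagged over time. A minimal numpy sketch (the toy image and the update rule are assumptions for illustration, not the authors' trained circuit model):

```python
import numpy as np

# Two contours; the left one is connected to the cue, the right one is not.
img = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1],
    [0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0],
], dtype=float)

h = np.zeros_like(img)
h[0, 0] = 1.0  # cue injected on the left contour

for _ in range(10):  # recurrent time steps
    spread = np.zeros_like(h)
    spread[1:, :] += h[:-1, :]   # lateral input from the pixel above
    spread[:-1, :] += h[1:, :]   # ... from below
    spread[:, 1:] += h[:, :-1]   # ... from the left
    spread[:, :-1] += h[:, 1:]   # ... from the right
    h = np.clip(h + spread, 0, 1) * img  # gate by the image: spread stays on the contour

print(h[3, 2], h[1, 3])  # 1.0 0.0 — left contour tagged, right contour untouched
```

A feedforward network with a fixed receptive field cannot solve this for arbitrarily long contours, whereas the recurrent spread handles any length given enough time steps.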

Seminar · Neuroscience · Recording

Frontal circuit specialisations for information search and decision making

Laurence Hunt
Oxford University
Jan 27, 2022

During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to sample information about their environment, evaluate that information, plan, and decide between different courses of action. What quantities do these circuits compute as information is being sampled and a decision is being made? And how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
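The microcircuit claim, that value correlates evolve as recurrent circuits temporally integrate incoming evidence, can be illustrated with a leaky recurrent integrator fed attention-weighted momentary evidence. All parameter values and the attention scheme below are illustrative assumptions, not the fitted cortical model from the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def integrate(evidence, w_rec=0.95, noise=0.02):
    """Leaky recurrent integrator: the recurrent weight sustains past input."""
    x, trace = 0.0, []
    for e in evidence:
        x = w_rec * x + e + noise * rng.standard_normal()
        trace.append(x)
    return np.array(trace)

# Attention-weighted evidence: the attended option is weighted more strongly.
value_left, value_right = 0.8, 0.5
attend_left = rng.random(200) < 0.7   # attend the left option 70% of the time
momentary = np.where(attend_left,
                     value_left - 0.3 * value_right,
                     0.3 * value_left - value_right) * 0.05
trace = integrate(momentary)
choice = "left" if trace[-1] > 0 else "right"
print(choice)
```

Because the higher-valued option is also attended more often, the integrated trace drifts toward it; biasing attention toward the lower-valued option would slow or even reverse that drift, which is the signature of attention-weighted integration.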

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events (spikes) that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties, which have been shown to accurately model the response of biological neurons, while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo.
Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
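The link between input variability and log-normal rate distributions can be illustrated directly: if the net input varies roughly Gaussian-wise across neurons and the effective input-to-rate transfer function is close to exponential, the instantaneous rates come out log-normal. A sketch under those assumptions (the exponential transfer and the parameter values are illustrative, not taken from the derivation in the talk):

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian input across a population, passed through an (assumed) exponential
# transfer function, yields log-normally distributed instantaneous rates.
mu, sigma = 0.0, 0.5                      # mean and across-neuron spread of input
inputs = mu + sigma * rng.standard_normal(100_000)
rates = 2.0 * np.exp(inputs)              # exponential transfer, 2 Hz scale

log_rates = np.log(rates)
print(log_rates.mean(), log_rates.std())  # ≈ log(2) ≈ 0.693, and ≈ sigma = 0.5
```

The key point is that the spread of the log-rates is set by the across-neuron input variance, so heavier input heterogeneity directly widens the log-normal rate distribution.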

Seminar · Neuroscience · Recording

Frontal circuit specialisations for decision making

Laurence Hunt
University of Oxford
May 26, 2021

During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to plan, evaluate, and decide between different courses of action. But what do these circuits compute as a decision is being made, and how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.

Seminar · Neuroscience · Recording

Recurrent network models of adaptive and maladaptive learning

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Apr 7, 2020

During periods of persistent and inescapable stress, animals can switch from active to passive coping strategies to manage effort-expenditure. Such normally adaptive behavioural state transitions can become maladaptive in disorders such as depression. We developed a new class of multi-region recurrent neural network (RNN) models to infer brain-wide interactions driving such maladaptive behaviour. The models were trained to match experimental data across two levels simultaneously: brain-wide neural dynamics from 10-40,000 neurons and the real-time behaviour of the fish. Analysis of the trained RNN models revealed a specific change in inter-area connectivity between the habenula (Hb) and raphe nucleus during the transition into passivity. We then characterized the multi-region neural dynamics underlying this transition. Using the interaction weights derived from the RNN models, we calculated the input currents from different brain regions to each Hb neuron. We then computed neural manifolds spanning these input currents across all Hb neurons to define subspaces within the Hb activity that captured communication with each other brain region independently. At the onset of stress, there was an immediate response within the Hb/raphe subspace alone. However, RNN models identified no early or fast-timescale change in the strengths of interactions between these regions. As the animal lapsed into passivity, the responses within the Hb/raphe subspace decreased, accompanied by a concomitant change in the interactions between the raphe and Hb inferred from the RNN weights. This innovative combination of network modeling and neural dynamics analysis points to dual mechanisms with distinct timescales driving the behavioural state transition: early response to stress is mediated by reshaping the neural dynamics within a preserved network architecture, while long-term state changes correspond to altered connectivity between neural ensembles in distinct brain regions.
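The analysis step described here, turning trained inter-area RNN weights into region-to-region input currents and then into communication subspaces, can be sketched with synthetic data. The array shapes, the random stand-in "trained" weights, and the SVD-based subspace below are all illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setup: 30 Hb neurons, 20 raphe neurons, rates over 500 time
# points, and a trained raphe-to-Hb weight block from a multi-region RNN.
T, n_hb, n_raphe = 500, 30, 20
rates_raphe = rng.random((T, n_raphe))
W_hb_from_raphe = 0.1 * rng.standard_normal((n_hb, n_raphe))

# Input current each Hb neuron receives from raphe at every time point.
I_raphe_to_hb = rates_raphe @ W_hb_from_raphe.T        # shape (T, n_hb)

# A low-dimensional subspace of Hb activity devoted to raphe communication:
# the leading principal directions of these region-specific input currents.
I_centered = I_raphe_to_hb - I_raphe_to_hb.mean(axis=0)
_, s, vt = np.linalg.svd(I_centered, full_matrices=False)
subspace = vt[:3]                                       # top 3 directions
var_explained = (s[:3] ** 2).sum() / (s ** 2).sum()
print(subspace.shape, round(var_explained, 2))
```

Projecting Hb activity onto such a subspace for each source region is what lets the response to stress be tracked separately per communication channel, as in the Hb/raphe analysis above.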

ePoster

Conditions for sequence replay in recurrent network models of CA3

Gaspar Cano, Richard Kempter

Bernstein Conference 2024

ePoster

Reverse engineering recurrent network models reveals mechanisms for location memory

Ian Hawes, Matt Nolan

Bernstein Conference 2024

ePoster

Barcode activity in a recurrent network model of the hippocampus enables efficient memory binding

Ching Fang, Jack Lindsey, Larry Abbott, Dmitriy Aronov, Selmaan Chettih

COSYNE 2025

ePoster

Enhancing the causal predictive power in recurrent network models of neural dynamics

Jiayi Zhang, Tatiana Engel

COSYNE 2025

ePoster

Reverse engineering recurrent network models reveals mechanisms for location memory

Ian Hawes, Matthew Nolan

FENS Forum 2024