Topic · Neuro

attractor neural network

2 Seminars · 2 ePosters

Latest

Seminar · Neuroscience · Recording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 21, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for these experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present results from an experiment in which participants perform an orientation discrimination task followed by a confidence judgment. We show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times, and confidence. The network also accounts for the confidence-specific sequential effects observed in the experiment (participants are faster on trials that follow high-confidence trials), as well as for non-confidence-specific sequential effects. Remarkably, these effects arise as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (such as a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
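
A minimal sketch of the kind of attractor decision network the abstract describes: two self-exciting, mutually inhibitory rate pools race to a threshold, and the balance of evidence at decision time serves as a confidence proxy. All parameters (weights, gain, noise, threshold) are illustrative assumptions, not the values fitted to participants in the talk.

```python
import numpy as np

# Hedged sketch of a two-pool attractor decision network (a reduced
# winner-take-all rate model); parameters are illustrative only.

def simulate_trial(coherence=0.1, r0=None, dt=1e-3, t_max=2.0,
                   w_self=2.0, w_cross=-1.5, tau=0.06,
                   noise=0.3, threshold=0.8, rng=None):
    rng = rng or np.random.default_rng()
    r = np.zeros(2) if r0 is None else r0.copy()    # firing rates of the two pools
    stim = 0.5 + coherence * np.array([1.0, -1.0])  # evidence for option 0 vs 1
    for step in range(int(t_max / dt)):
        drive = w_self * r + w_cross * r[::-1] + stim
        f = 1.0 / (1.0 + np.exp(-4.0 * (drive - 1.0)))  # sigmoidal gain
        r += dt / tau * (-r + f) + np.sqrt(dt) * noise * rng.normal(size=2)
        r = np.clip(r, 0.0, None)
        if r.max() > threshold:                     # first pool to cross wins
            confidence = abs(r[0] - r[1])           # balance-of-evidence proxy
            return int(np.argmax(r)), step * dt, confidence, r
    return None, t_max, abs(r[0] - r[1]), r         # no decision within t_max
```

Because the final state r of one trial can be passed in as r0 for the next (after partial relaxation during the inter-trial interval), successive trials are not independent; this carry-over in the dynamics, with no change of parameters between trials, is the kind of mechanism the abstract invokes for the sequential effects.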

Seminar · Neuroscience · Recording

A robust neural integrator based on the interactions of three time scales

Bard Ermentrout
University of Pittsburgh
Nov 11, 2020

Neural integrators are circuits that can encode analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. Classic models built on recurrent excitation require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a recurrently connected circuit that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states, and that the firing rates of the neurons in these states are much closer to those seen in recordings from animals. The mechanism can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways; for example, in a spatially distributed network it is possible to code location and amplitude simultaneously. Finally, I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
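
To make the fine-tuning problem concrete, here is a hedged sketch of the classic single-unit rate integrator, tau·dx/dt = −x + w·x + I(t): with w exactly 1 the leak cancels and the unit holds whatever the input pulses deposit, while a 2% mistuning of w makes the stored value decay or grow. The values below are illustrative, not from the talk.

```python
import numpy as np

# Sketch of the fine-tuning problem in a classic recurrent-excitation
# integrator: tau * dx/dt = -x + w * x + I(t). Illustrative values only.

def integrate(w, pulses, tau=0.1, dt=1e-3, t_max=5.0):
    """Drive the unit with brief pulses; return the stored value over time."""
    n = int(t_max / dt)
    x = np.zeros(n)
    I = np.zeros(n)
    for t_on, amp in pulses:                 # (time in s, stored increment)
        I[int(t_on / dt)] = amp * tau / dt   # delta-like pulse of area amp*tau
    for k in range(1, n):
        x[k] = x[k-1] + dt / tau * ((w - 1.0) * x[k-1] + I[k-1])
    return x

pulses = [(0.5, 1.0), (2.0, 1.0)]
held    = integrate(1.00, pulses)  # w = 1: each pulse adds 1 and is held
leaky   = integrate(0.98, pulses)  # 2% low: memory decays toward zero
runaway = integrate(1.02, pulses)  # 2% high: stored value grows exponentially
```

The remedy proposed in the talk subjects the recurrence to a slow subthreshold oscillation instead of a static weight; per the abstract, the resulting stability structure is that of the Mathieu equation, x'' + (a − 2q·cos 2t)·x = 0, whose alternating stable and unstable parameter regions ("tongues") are what delimit the many discrete attracting states.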

ePoster · Neuroscience

Attractor neural networks with metastable synapses

Yu Feng, Nicolas Brunel

COSYNE 2022

ePoster · Neuroscience

Manifold representation in continuous attractor neural networks: a general constructive approach

Federico Claudi, Sarthak Chandra, Ila Fiete

COSYNE 2023

attractor neural network coverage

4 items: 2 Seminars, 2 ePosters