
Nonlinear Dynamics

Topic spotlight · World Wide

nonlinear dynamics

Discover seminars, jobs, and research tagged with nonlinear dynamics across World Wide.
5 curated items · 3 Seminars · 2 Positions
Position · Computational Neuroscience

Dr Panayiota Poirazi

University of Crete
Crete, Greece
Dec 5, 2025

The successful applicant will build a simulation model of the rodent visual cortex and use it to assess the role of dendritic nonlinearities in the connectivity and activity properties of the resulting memory engrams. Selected model predictions will be tested in head-fixed behaving animals performing a visual discrimination task. For more information, see: https://www.smartnets-etn.eu/role-of-dendritic-nonlinearities-in-v1-network-properties-after-visual-learning/
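
For readers unfamiliar with the term, the sketch below illustrates one common abstraction of a dendritic nonlinearity: a "two-layer" rate neuron whose dendritic branches each apply a sigmoid to their summed synaptic input before the soma adds the branch outputs. It is a generic illustration with arbitrary parameters, not the model to be developed in this project.

import numpy as np

def dendritic_sigmoid(x, gain=4.0, thresh=0.5):
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

def two_layer_neuron(inputs, weights, gain=4.0, thresh=0.5):
    # inputs, weights: arrays of shape (n_branches, n_synapses_per_branch)
    branch_drive = (inputs * weights).sum(axis=1)               # linear sum within each branch
    branch_out = dendritic_sigmoid(branch_drive, gain, thresh)  # per-branch nonlinearity
    return branch_out.sum()                                     # linear somatic summation

rng = np.random.default_rng(0)
x = rng.random((5, 20))            # 5 dendritic branches, 20 synapses each
w = rng.random((5, 20)) * 0.1
print(two_layer_neuron(x, w))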

Position · Computational Neuroscience

Simona Olmi

Institute for Complex Systems, National Research Council
Florence, Italy
Dec 5, 2025

The Institute for Complex Systems at the National Research Council in Florence (Italy) invites applications for a one-year Postdoctoral Scholar in the fields of computational neuroscience, complex networks, and nonlinear dynamics. The successful applicant is expected to work closely with a multidisciplinary research team led by Dr. Simona Olmi on problems related to neuroscience. Specific topics of interest include, but are not limited to, the investigation of biologically realistic large-scale brain activity, the emergence of coupling between neural oscillations in neural architectures, the derivation of neural mass models in the presence of short-term synaptic plasticity and/or spike-frequency adaptation, as well as applications to brain structural connectivity matrices. Successful candidates are expected to primarily conduct computational and data-driven research, taking advantage of our international network of experimental collaborators and/or clinical partners.
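
As a rough illustration of the kind of object meant by a "neural mass model", here is a minimal Wilson-Cowan-style excitatory-inhibitory rate model. It is a generic sketch with arbitrary parameters, not the specific mean-field models (with short-term plasticity or spike-frequency adaptation) developed by this group.

import numpy as np

def pop_rate(x):
    # sigmoidal population response function
    return 1.0 / (1.0 + np.exp(-x))

def simulate_mass_model(T=2.0, dt=1e-3, wEE=12.0, wEI=10.0, wIE=9.0, wII=3.0,
                        IE=1.0, II=0.0, tauE=0.01, tauI=0.02):
    E, I = 0.1, 0.1
    trace = np.empty((int(T / dt), 2))
    for k in range(trace.shape[0]):
        dE = (-E + pop_rate(wEE * E - wEI * I + IE)) / tauE
        dI = (-I + pop_rate(wIE * E - wII * I + II)) / tauI
        E, I = E + dt * dE, I + dt * dI
        trace[k] = E, I
    return trace

print(simulate_mass_model()[-1])   # final excitatory and inhibitory rates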

Seminar · Neuroscience · Recording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 20, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results of an experiment in which participants performed an orientation discrimination task followed by a confidence judgment. We show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times, and confidence. The attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (which would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
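
The sketch below shows a schematic two-population attractor model of the general kind referred to in this abstract: two rate populations with self-excitation and mutual inhibition, where a decision corresponds to the network falling into one of two winner-take-all attractors, and the activity gap at decision time is used as a crude confidence proxy. The parameters, noise injection, and confidence read-out are illustrative assumptions, not the model presented in the talk.

import numpy as np

def f(x, gain=6.0, theta=0.5):
    # sigmoidal population gain function
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

def run_trial(coherence=0.1, dt=1e-3, tau=0.02, w_self=1.0, w_inh=1.0,
              I0=0.4, noise=0.02, threshold=0.8, t_max=3.0, rng=None):
    rng = rng or np.random.default_rng()
    r1 = r2 = 0.35                       # start near the symmetric (undecided) state
    for step in range(int(t_max / dt)):
        xi1, xi2 = rng.normal(0.0, noise, 2) * np.sqrt(dt) / tau
        dr1 = (-r1 + f(w_self * r1 - w_inh * r2 + I0 * (1 + coherence))) / tau
        dr2 = (-r2 + f(w_self * r2 - w_inh * r1 + I0 * (1 - coherence))) / tau
        r1 += dt * dr1 + xi1
        r2 += dt * dr2 + xi2
        if max(r1, r2) > threshold:      # the network has fallen into one attractor
            choice = 1 if r1 > r2 else 2
            rt = (step + 1) * dt
            return choice, rt, abs(r1 - r2)   # activity gap as a confidence proxy
    return 0, t_max, abs(r1 - r2)             # no decision within t_max

print(run_trial(coherence=0.2, rng=np.random.default_rng(1)))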

Seminar · Neuroscience

Modularity of attractors in inhibition-dominated TLNs

Carina Curto
The Pennsylvania State University
Apr 18, 2021

Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
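
For concreteness, here is a minimal sketch of the TLN dynamics referenced above, dx_i/dt = -x_i + [ (Wx)_i + theta ]_+, with W built from a directed graph via two parameters (epsilon, delta) in the inhibition-dominated regime. The example graph (a 3-cycle) and the parameter values are illustrative choices, not taken from the talk.

import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    # adj[i, j] = 1 if the directed graph has an edge j -> i
    W = np.where(adj == 1, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate_tln(W, theta=1.0, dt=1e-3, T=20.0, x0=None):
    n = W.shape[0]
    x = np.full(n, 0.1) if x0 is None else np.asarray(x0, dtype=float)
    traj = np.empty((int(T / dt), n))
    for k in range(traj.shape[0]):
        x = x + dt * (-x + np.maximum(W @ x + theta, 0.0))   # threshold-linear update
        traj[k] = x
    return traj

# A 3-cycle (1 -> 2 -> 3 -> 1): a standard small example supporting a limit-cycle attractor.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]])
traj = simulate_tln(ctln_weights(adj))
print(traj[-5:])   # late-time activity cycles through the three neurons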