Dynamics

Topic spotlight · World Wide

dynamics

Discover seminars, jobs, and research tagged with dynamics across World Wide.
105 curated items · 60 Seminars · 40 ePosters · 5 Positions
Updated in 6 days
Seminar · Neuroscience

Consciousness at the edge of chaos

Martin Monti
University of California Los Angeles
Dec 11, 2025

Over the last 20 years, neuroimaging and electrophysiology techniques have become central to understanding the mechanisms that accompany loss and recovery of consciousness. Much of this research is performed in the context of healthy individuals with neurotypical brain dynamics. Yet, a true understanding of how consciousness emerges from the joint action of neurons has to account for how severely pathological brains, often showing phenotypes typical of unconsciousness, can nonetheless generate a subjective viewpoint. In this presentation, I will start from the context of Disorders of Consciousness and discuss recent work aimed at finding generalizable signatures of consciousness that are reliable across a spectrum of brain electrophysiological phenotypes, focusing in particular on the notion of edge-of-chaos criticality.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
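
The core loop of predictive processing — predict, compare, update on the error — can be sketched in a few lines. This is a hypothetical scalar illustration, not the models discussed in the talk; the learning rate and noise level below are arbitrary.

```python
import random

def predictive_coding_estimate(observations, learning_rate=0.1):
    """Track a hidden cause by repeatedly correcting a prediction
    with its prediction error."""
    estimate = 0.0
    errors = []
    for y in observations:
        error = y - estimate               # prediction error
        estimate += learning_rate * error  # update the internal model
        errors.append(abs(error))
    return estimate, errors

random.seed(0)
true_cause = 5.0
obs = [true_cause + random.gauss(0, 0.5) for _ in range(200)]
est, errs = predictive_coding_estimate(obs)
# early errors are large; as the internal model converges on the
# hidden cause, errors shrink toward the noise floor
```

Hierarchical predictive coding stacks such error-correcting loops, with each layer predicting the activity of the layer below.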

Position · Neuroscience

Prof. Carmen Varela

Florida Atlantic University
Jupiter, Florida
Dec 5, 2025

Gain expertise in rodent electrophysiology and behavior studying thalamic cellular and network mechanisms of sleep and memory consolidation. We have several openings to study the mechanisms of synaptic plasticity and cellular spike dynamics that contribute to episodic memory consolidation during sleep. Trainees will gain expertise in systems neuroscience using electrophysiology (cell ensemble and LFP recording) and behavior in rats, as well as expertise in the thalamic molecular and cellular mechanisms underlying normal and disrupted sleep-dependent memory consolidation and the use of non-invasive technologies to regulate them. Some of the projects are part of collaborations with Harvard University and the Scripps Florida Institute.

Position · Robotics

N/A

Institute of Robotics and Cognitive Systems, University of Lübeck
University of Lübeck, Germany
Dec 5, 2025

The Institute of Robotics and Cognitive Systems at the University of Lübeck has a vacancy for an Assistant Professorship (Juniorprofessur, Tenure Track W2) for Robotics, for an initial period of three years with an option to extend for a further three years. The future holder of the position should represent the field of robotics in research and teaching and establish their own working group at the Institute of Robotics and Cognitive Systems. Applicants should have a very good doctorate and demonstrable scientific experience in one or more of the following research areas: modelling, simulation, and control of robots; robot kinematics and dynamics; robot sensor technology (e.g., force and torque sensing); robotic systems (e.g., telerobotic systems, humanoid robots); soft robotics and continuum robotics; AI and machine learning methods in robotics; human-robot collaboration and safe autonomous robot systems; AR/VR in robotics; and applications of AI and robotics in medicine. The range of tasks also includes the acquisition of third-party funding and the assumption of project management. The applicant is expected to be scientifically involved in the research focus areas of the institute and the profile areas of the university, especially in the context of projects acquired by the institute itself (public funding, industrial cooperations, etc.). The position holder is expected to be willing to cooperate with the "Lübeck Innovation Hub for Robotic Surgery" (LIROS), the "Center for Doctoral Studies Lübeck", and the "Open Lab for Robotics and Imaging in Industry and Medicine" (OLRIM). In teaching, participation in the degree programme "Robotics and Autonomous Systems" (German-language Bachelor's, English-language Master's) as well as the other degree programmes of the university's STEM sections is expected.

Position

N/A

University Centre of Excellence “Dynamics, Mathematical Analysis and Artificial Intelligence” - Nicolaus Copernicus University
Nicolaus Copernicus University, Toruń, Poland
Dec 5, 2025

The Director of the Excellence Center "Dynamics, Mathematical Analysis and Artificial Intelligence" announces a contest for five 6- or 12-month grants for young researchers from abroad. The grants are expected to commence on 1 October 2024, with the possibility of extending the post-doc position by one year. The starting date can be reconsidered.

Seminar · Psychology

Digital Traces of Human Behaviour: From Political Mobilisation to Conspiracy Narratives

Lukasz Piwek
University of Bath & Cumulus Neuroscience Ltd
Jul 6, 2025

Digital platforms generate unprecedented traces of human behaviour, offering new methodological approaches to understanding collective action, polarisation, and social dynamics. Through analysis of millions of digital traces across multiple studies, we demonstrate how online behaviours predict offline action: Brexit-related tribal discourse responds to real-world events, machine learning models achieve 80% accuracy in predicting real-world protest attendance from digital signals, and social validation through "likes" emerges as a key driver of mobilization. Extending this approach to conspiracy narratives reveals how digital traces illuminate psychological mechanisms of belief and community formation. Longitudinal analysis of YouTube conspiracy content demonstrates how narratives systematically address existential, epistemic, and social needs, while examination of alt-tech platforms shows how emotions of anger, contempt, and disgust correlate with violence-legitimating discourse, with significant differences between narratives associated with offline violence versus peaceful communities. This work establishes digital traces as both methodological innovation and theoretical lens, demonstrating that computational social science can illuminate fundamental questions about polarisation, mobilisation, and collective behaviour across contexts from electoral politics to conspiracy communities.

Seminar · Neuroscience

“Brain theory, what is it or what should it be?”

Prof. Guenther Palm
University of Ulm
Jun 26, 2025

In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation is normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it (for example humans, animals, and plants), can be adequately treated by physics, and therefore theoretical physics is the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions on the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which is inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.

Seminar · Neuroscience

Neural circuits underlying sleep structure and functions

Antoine Adamantidis
University of Bern
Jun 12, 2025

Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the underlying mechanisms of the relationship between (REM) sleep and emotion encoding to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a disinhibition of their dendritic arbors selectively during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance is optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
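
The multi-attractor-to-single-attractor transition described above can be caricatured in one dimension. This is not the clustered spiking network from the talk — just a rate-model toy in which a gain parameter (standing in, inversely, for arousal) controls how many stable fixed points the dynamics x' = -x + tanh(gain·x) supports.

```python
import math

def attractors(gain, n_inits=20, steps=500, dt=0.1):
    """Integrate x' = -x + tanh(gain * x) from a grid of initial
    conditions and collect the distinct stable fixed points reached."""
    finals = set()
    for i in range(n_inits):
        x = -2.0 + 4.0 * i / (n_inits - 1)   # grid of starting points
        for _ in range(steps):
            x += dt * (-x + math.tanh(gain * x))
        finals.add(round(x, 2))
    return sorted(finals)

low_arousal = attractors(gain=2.0)    # two stable states (multi-attractor)
high_arousal = attractors(gain=0.5)   # one stable state at the origin
```

At gain 1 the two nonzero fixed points merge with the origin — the analogue of the transition point at which the model locates optimal performance.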

Seminar · Neuroscience

Dopaminergic Network Dynamics

Veronica Alvarez & Anders Borgkvist
National Institute of Mental Health; Karolinska Institutet
Apr 24, 2025
Seminar · Neuroscience

Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics

Shaul Druckmann
Stanford department of Neurobiology and department of Psychiatry and Behavioral Sciences
Apr 22, 2025

Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other's dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.

Seminar · Artificial Intelligence · Recording

Computational modelling of ocular pharmacokinetics

Arto Urtti
School of Pharmacy, University of Eastern Finland
Apr 21, 2025

Pharmacokinetics in the eye is an important factor in the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration and the dosing levels and intervals, and they have an impact on eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, per oral) administration. Mathematical models integrate the joint impact of various barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and for probing disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).

Seminar · Neuroscience

Where are you Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics

Björn Jörges
York University
Feb 5, 2025
Seminar · Neuroscience · Recording

Dynamics of braille letter perception in blind readers

Santani Teng
Smith-Kettlewell Eye Research Institute
Jan 22, 2025
Seminar · Neuroscience

Mapping the neural dynamics of dominance and defeat

Annegret Falkner
Princeton Neuroscience Institute, USA
Dec 11, 2024

Social experiences can have lasting changes on behavior and affective state. In particular, repeated wins and losses during fighting can facilitate and suppress future aggressive behavior, leading to persistent high aggression or low aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.

Seminar · Neuroscience

The Brain Prize winners' webinar

Larry Abbott, Haim Sompolinsky, Terry Sejnowski
Columbia University; Harvard University / Hebrew University; Salk Institute
Nov 29, 2024

This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
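
A minimal instance of the attractor networks Sompolinsky discusses is the Hopfield model: Hebbian weights carve stable activity patterns into the dynamics, and a corrupted input relaxes back to the stored pattern. This is a textbook sketch, not material from the webinar; the network size and corruption level are arbitrary.

```python
import random

def hopfield_weights(patterns):
    """Hebbian outer-product weights storing +/-1 patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, sweeps=5):
    """Deterministic asynchronous updates until the state settles."""
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

random.seed(1)
n = 50
p1 = [random.choice([-1, 1]) for _ in range(n)]
p2 = [random.choice([-1, 1]) for _ in range(n)]
W = hopfield_weights([p1, p2])

noisy = list(p1)
for i in random.sample(range(n), 10):  # flip 10 of the 50 units
    noisy[i] = -noisy[i]
recovered = recall(W, noisy)           # relaxes back toward p1
```

Working memory in the webinar's sense corresponds to the network holding such a pattern after the input is removed; continuous attractors replace the discrete stored states with a continuum.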

Seminar · Neuroscience

Learning and Memory

Nicolas Brunel, Ashok Litwin-Kumar, Julijana Gjorgieva
Duke University; Columbia University; Technical University Munich
Nov 28, 2024

This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
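
The STDP mentioned among Brunel's classical observations has a standard pair-based functional form: potentiation decaying exponentially when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitudes and time constant below are illustrative defaults, not values from the talk.

```python
import math

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window. dt = t_post - t_pre in milliseconds:
    pre-before-post (dt >= 0) potentiates, post-before-pre depresses,
    both decaying exponentially with the spike-time difference."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

ltp = stdp(10.0)    # pre leads post -> positive weight change
ltd = stdp(-10.0)   # post leads pre -> negative weight change
```

Calcium-based plasticity models of the kind Brunel discusses reproduce this window as a special case, with the calcium transient's amplitude determining whether the depression or potentiation threshold is crossed.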

Seminar · Neuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 12, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.

Seminar · Neuroscience

Unmotivated bias

William Cunningham
University of Toronto
Nov 11, 2024

In this talk, I will explore how social affective biases arise even in the absence of motivational factors as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling discovered that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs group level categories. I will end presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.

Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 15, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—and methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
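
A toy version of the point about conflicting comparison measures: take a ground-truth representation and a rotated copy of it. A geometry-based measure (correlating pairwise-distance matrices, in the style of representational similarity analysis) calls them identical, while a naive dimension-by-dimension correlation calls them unrelated. The data and setup here are illustrative, not from the talk.

```python
import math
import random

def pdist(X):
    """Condensed pairwise Euclidean distances between rows of X."""
    return [math.dist(X[i], X[j])
            for i in range(len(X)) for j in range(i + 1, len(X))]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

random.seed(0)
A = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(100)]
B = [[-y, x] for x, y in A]        # the same code, rotated 90 degrees

# Measure 1: distance-geometry comparison -> the systems look identical
rsa = pearson(pdist(A), pdist(B))

# Measure 2: naive per-dimension correlation -> they look unrelated
dim_corr = pearson([r[0] for r in A], [r[0] for r in B])
```

Which verdict is "right" depends on the use case: a hypothesis about population geometry licenses the rotation-invariant measure, while a hypothesis about single-neuron correspondences does not.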

Seminar · Open Source · Recording

Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms

Priya Soneji
Georgia Institute of Technology
Oct 7, 2024

Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10μm to 2mm across a 325 square centimeter area for extended durations—ranging from hours to days—at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.

Seminar · Psychology

Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag

Lukas Huber
University of Bern
Sep 22, 2024

Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end-result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process of how these representations emerge—that is, the behavioral changes and intermediate stages observed during the acquisition—is less often directly and empirically compared. In this talk, I'm going to report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to immediately acquire generalizable representations without a preliminary phase of learning training set-specific information that is only later transferred to novel data.

Seminar · Neuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 11, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning-based dynamical systems models.

Seminar · Psychology

Exploring Lifespan Memory Development and Intervention Strategies for Memory Decline through a Unified Model-Based Assessment

Anaïs Capik
University of Washington
May 5, 2024

Understanding and potentially reversing memory decline necessitates a comprehensive examination of memory's evolution throughout life. Traditional memory assessments, however, suffer from a lack of comparability across different age groups due to the diverse nature of the tests employed. Addressing this gap, our study introduces a novel, ACT-R model-based memory assessment designed to provide a consistent metric for evaluating memory function across the lifespan, from 5- to 85-year-olds. This approach allows for direct comparison across various tasks and materials tailored to specific age groups. Our findings reveal a pronounced U-shaped trajectory of long-term memory function, with performance at age 5 mirroring that observed in elderly individuals with impairments, highlighting critical periods of memory development and decline. Leveraging this unified assessment method, we further investigate the therapeutic potential of rs-fMRI-guided TBS targeting area 8AV in individuals with early-onset Alzheimer’s Disease—a region implicated in memory deterioration and mood disturbances in this population. This research not only advances our understanding of memory's lifespan dynamics but also opens new avenues for targeted interventions in Alzheimer’s Disease, marking a significant step forward in the quest to mitigate memory decay.
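
A model-based assessment of this kind builds on ACT-R's declarative memory equations; the central one is base-level learning, where a chunk's activation is the log of summed, power-law-decaying traces of its past uses. A minimal sketch of that equation (the 0.5 decay is ACT-R's conventional default; the practice times are made up):

```python
import math

def base_level_activation(practice_times, now, decay=0.5):
    """ACT-R base-level learning: B = ln(sum_j (now - t_j)^(-d)).
    Each past use leaves a trace that decays as a power law."""
    return math.log(sum((now - t) ** (-decay) for t in practice_times))

recent = base_level_activation([9.0], now=10.0)              # one recent use
remote = base_level_activation([1.0], now=10.0)              # one old use
frequent = base_level_activation([2.0, 5.0, 9.0], now=10.0)  # several uses
# recency and frequency both raise activation: frequent > recent > remote
```

Fitting the model's activation parameters to each participant, rather than comparing raw test scores, is what makes performance on age-tailored tasks commensurable across the lifespan.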

SeminarNeuroscienceRecording

There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm

Anne Danielsen
University of Oslo, Norway
Apr 28, 2024

How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.

SeminarNeuroscienceRecording

Cell-type-specific plasticity shapes neocortical dynamics for motor learning

Shouvik Majumder
Max Planck Florida Institute of Neuroscience, USA
Apr 17, 2024

How do cortical circuits acquire new dynamics that drive learned movements? This webinar will focus on mouse premotor cortex in relation to learned lick-timing and explore high-density electrophysiology using our silicon neural probes alongside region and cell-type-specific acute genetic manipulations of proteins required for synaptic plasticity.

SeminarNeuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 5, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing an improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelation of initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine that captures the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure of the neural activity. In contrast, we show that sequence models trained by gradient-based learning, such as Long Short-Term Memory networks (LSTMs) and Transformers, do not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
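The progressive decorrelation described above can be quantified very simply: compare population activity vectors for matched positions on the two tracks and watch their similarity fall with learning. A toy sketch with synthetic vectors (the numbers and variable names are illustrative, not from the study):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two population activity vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
# Hypothetical 50-neuron population vectors for the same position
# on two track variants, early vs. late in learning.
early_a = rng.normal(size=50)
early_b = early_a + 0.1 * rng.normal(size=50)  # nearly identical early on
late_a = rng.normal(size=50)
late_b = rng.normal(size=50)                   # decorrelated after learning

sim_early = cosine_similarity(early_a, early_b)
sim_late = cosine_similarity(late_a, late_b)
```

An orthogonalized state machine corresponds to the late regime: representations of the two contexts become nearly perpendicular, so downstream circuits can read them out without interference.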

SeminarNeuroscience

Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions

Marija Markicevic
Yale
Jan 17, 2024

Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.

SeminarNeuroscienceRecording

Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex

Samuel Post
University of California, Riverside
Nov 28, 2023

Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or read out from a centralized clock-like mechanism. Indeed, the mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address these questions, as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal-pattern sensory discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task, progressing from naive to intermediate to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, they did not account for the increased performance but were in fact correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
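The intrinsic "population clock" model described above implies that elapsed time can be read out from where the network state sits along its trajectory. A minimal sketch of such a readout, using a synthetic trajectory of time-tuned units (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 50, 20
# A smooth population trajectory: each of 20 units is tuned to a preferred
# time bin, so the joint state sweeps through state space as time elapses.
centers = np.linspace(0, T, n)
t_axis = np.arange(T)
template = np.exp(-((t_axis[:, None] - centers[None, :]) ** 2) / (2 * 5.0 ** 2))

def decode_time(state, template):
    """Nearest-neighbour readout: elapsed time = closest template state."""
    return int(np.argmin(np.linalg.norm(template - state, axis=1)))

# A noisy observation of the population state at time bin 30:
noisy = template[30] + 0.02 * rng.normal(size=n)
t_hat = decode_time(noisy, template)
```

The key property is that no dedicated clock is needed: time is implicit in the reproducible path the population takes through state space, which is the signature the study tests for in V1.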

SeminarNeuroscienceRecording

Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment

Mattia Rosso
Ghent University, IPEM Institute for Systematic Musicology
Nov 28, 2023

Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajandran and Schupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. 
In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
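The quantity at the heart of ERFA, the instantaneous frequency of an oscillatory component, is conventionally obtained from the phase of the analytic signal. A self-contained sketch (a synthetic tempo change standing in for the perturbed metronome; this is the generic computation, not the authors' full pipeline):

```python
import numpy as np

def instantaneous_frequency(signal, fs):
    """Instantaneous frequency (Hz) from the analytic signal's phase.
    The analytic signal is built via the FFT (the same construction as
    a standard Hilbert transform)."""
    n = len(signal)
    spectrum = np.fft.fft(signal)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    analytic = np.fft.ifft(spectrum * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

# A 2 Hz oscillation that speeds up to 2.5 Hz halfway through, a toy
# stand-in for an entrained component tracking a tempo change.
fs = 250.0
t = np.arange(0, 8, 1 / fs)
freq = np.where(t < 4, 2.0, 2.5)
phase = 2 * np.pi * np.cumsum(freq) / fs
x = np.sin(phase)
inst_f = instantaneous_frequency(x, fs)
```

A frequency adjustment then appears as a deviation of the instantaneous-frequency trace after a perturbation, in the direction of the tempo change, which is exactly what ERFA responses quantify.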

SeminarPsychology

Perceptions of responsiveness and rejection in romantic relationships. What are the implications for individuals and relationship functioning?

Marianne Richter
University of Fribourg
Nov 26, 2023

From birth, human beings need to be embedded into social ties to function best, because other individuals can provide us with a sense of belonging, which is a fundamental human need. One of the closest bonds we build throughout our life is with our intimate partners. When the relationship involves intimacy and when both partners accept and support each other’s needs and goals (through perceived responsiveness) individuals experience an increase in relationship satisfaction as well as physical and mental well-being. However, feeling rejected by a partner may impair the feeling of connectedness and belonging, and affect emotional and behavioural responses. When we perceive our partner to be responsive to our needs or desires, in turn we naturally strive to respond positively and adequately to our partner’s needs and desires. This implies that individuals are interdependent, and changes in one partner prompt changes in the other. Evidence suggests that partners regulate themselves and co-regulate each other in their emotional, psychological, and physiological responses. However, such processes may threaten the relationship when partners face stressful situations or interactions, like the transition to parenthood or rejection. Therefore, in this presentation, I will provide evidence for the role of perceptions of being accepted or rejected by a significant other on individual and relationship functioning, while considering the contextual settings. The three studies presented here explore romantic relationships, and how perceptions of rejection and responsiveness from the partner impact both individuals, their physiological and their emotional responses, as well as their relationship dynamics.

SeminarNeuroscience

Divergent Recruitment of Developmentally-Defined Neuronal Ensembles Supports Memory Dynamics

Flavio Donato
Biozentrum of the University of Basel, Basel, Switzerland
Nov 22, 2023
SeminarNeuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
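The "4D window multi-head self-attention" mentioned above operates on non-overlapping spatiotemporal windows of the fMRI volume. The tokenization step can be sketched in a few lines of array manipulation (the volume and window sizes here are illustrative, not SwiFT's actual configuration):

```python
import numpy as np

def partition_4d_windows(volume, window):
    """Split a (T, X, Y, Z) fMRI array into non-overlapping 4D windows,
    the tokenization a Swin-style windowed attention operates on."""
    T, X, Y, Z = volume.shape
    wt, wx, wy, wz = window
    assert T % wt == 0 and X % wx == 0 and Y % wy == 0 and Z % wz == 0
    # Reshape each axis into (blocks, within-block), then group the four
    # within-block axes together so each row is one flattened window.
    v = volume.reshape(T // wt, wt, X // wx, wx, Y // wy, wy, Z // wz, wz)
    v = v.transpose(0, 2, 4, 6, 1, 3, 5, 7)
    return v.reshape(-1, wt * wx * wy * wz)

vol = np.arange(4 * 8 * 8 * 8, dtype=float).reshape(4, 8, 8, 8)
windows = partition_4d_windows(vol, (2, 4, 4, 4))
```

Attention is then computed within each 128-element window rather than across all 2,048 voxel-timepoints at once, which is what keeps memory and computation tractable as volumes grow.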

SeminarNeuroscience

Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task

Albert Compte
IDIBAPS
Nov 16, 2023

Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information that is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of the learning of WM orthogonalization, and of the underlying mechanisms, by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association (DPA) task with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both the sample and the distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of the sample and distractor WM subspaces and (2) the orthogonalization of each subspace with respect to irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces across learning in the data.
Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
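The "angular distance between subspaces" used here is typically computed via principal angles. A standard sketch (synthetic bases; in the study, the subspaces would instead be estimated from the sample- and distractor-encoding dimensions of the imaging data):

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B,
    from the SVD of Qa.T @ Qb — a standard measure of how close to
    orthogonal two neural activity subspaces are."""
    Qa, _ = np.linalg.qr(A)          # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)          # orthonormal basis for span(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(5)
basis = rng.normal(size=(100, 4))    # 100 hypothetical neurons
same = principal_angles(basis[:, :2], basis[:, :2])         # identical planes
random_pair = principal_angles(basis[:, :2], basis[:, 2:])  # unrelated planes
```

Angles near zero indicate overlapping subspaces (the early-training regime described above); angles approaching 90 degrees indicate the orthogonalized, distraction-resistant regime.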

SeminarArtificial IntelligenceRecording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 13, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation, from large to small vessels and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-and-effect relationships among interacting factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions that make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, which limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but lack scalability, while data-driven models excel at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

SeminarNeuroscienceRecording

State-of-the-Art Spike Sorting with SpikeInterface

Samuel Garcia and Alessio Buccino
CNRS, Lyon, France and Allen Institute for Neural Dynamics, Seattle, USA
Nov 6, 2023

This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
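As a flavor of what happens inside such a pipeline, the sketch below implements its simplest ingredient, threshold-based spike detection with a robust noise estimate, in plain numpy. This is a conceptual illustration only, not the SpikeInterface API (the webinar covers the real framework):

```python
import numpy as np

def detect_spikes(trace, fs, thresh_mads=5.0, refractory_ms=1.0):
    """Detect negative threshold crossings on a single filtered channel,
    using a MAD-based noise estimate — the detection step that precedes
    feature extraction and clustering in a typical sorting pipeline."""
    noise = np.median(np.abs(trace)) / 0.6745        # robust std estimate
    threshold = -thresh_mads * noise
    crossings = np.flatnonzero(
        (trace[1:] < threshold) & (trace[:-1] >= threshold)) + 1
    # Enforce a refractory period so each spike is counted once.
    min_gap = int(refractory_ms * fs / 1000)
    kept, last = [], -min_gap - 1
    for idx in crossings:
        if idx - last > min_gap:
            kept.append(idx)
            last = idx
    return np.array(kept)

rng = np.random.default_rng(3)
fs = 30000
trace = rng.normal(0, 1, fs)            # 1 s of synthetic noise
true_times = np.array([5000, 12000, 21000])
for t0 in true_times:
    trace[t0:t0 + 30] -= 8.0            # insert three large negative spikes

spike_idx = detect_spikes(trace, fs, thresh_mads=6.0)
```

Real sorters add filtering, whitening, drift correction, clustering, and the quality metrics mentioned above; frameworks like SpikeInterface exist precisely so you do not have to hand-roll these steps.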

SeminarNeuroscience

Identifying mechanisms of cognitive computations from spikes

Tatiana Engel
Princeton
Nov 2, 2023

Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.

SeminarNeuroscience

The role of CNS microglia in health and disease

Kyrargyri Vassiliki
Department of Immunology, Laboratory of Molecular Genetics, Hellenic Pasteur Institute, Athens, Greece
Oct 24, 2023

Microglia are the resident macrophages of the brain parenchyma. They have many and opposing roles in health and disease, ranging from inflammatory to anti-inflammatory and protective functions, depending on the developmental stage and the disease context. In Multiple Sclerosis, microglia are involved in key hallmarks of the disease, such as inflammation, demyelination, axonal damage and remyelination; however, the exact mechanisms controlling their transformation towards a protective or devastating phenotype during disease progression remain largely unknown. We wish to understand how brain microglia respond to demyelinating insults and how their behaviour changes during recovery. To do so, we developed a novel 3D histopathological analysis approach and a cell-based analysis tool that, when applied to the cuprizone model of demyelination, revealed region- and disease-dependent changes in microglial dynamics in the brain grey matter during demyelination and remyelination. We now use similar approaches with the aim of unravelling sensitive changes in microglial dynamics during neuroinflammation in the EAE model. Furthermore, we employ constitutive knockout and tamoxifen-inducible gene-targeting approaches, immunological techniques, genetics and bioinformatics, and currently seek to clarify the specific role of the brain-resident microglial NF-κB molecular pathway versus that of other tissue macrophages in EAE.

SeminarNeuroscience

Brain Connectivity Workshop

Ed Bullmore, Jianfeng Feng, Viktor Jirsa, Helen Mayberg, Pedro Valdes-Sosa
Sep 19, 2023

Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.

SeminarNeuroscienceRecording

Self as Processes (BACN Mid-career Prize Lecture 2023)

Jie Sui
University of Aberdeen, UK
Sep 12, 2023

An understanding of the self helps explain not only human thoughts, feelings, and attitudes but also many aspects of everyday behaviour. This talk focuses on a viewpoint: self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and its realist orientation. We combine psychological experiments and data mining to understand the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition and how it is modulated by various social experiences and situational factors.

SeminarNeuroscience

NeuroAI from model to understanding: revealing the emergence of computations from the collective dynamics of interacting neurons

Surya Ganguli
Stanford University
Sep 12, 2023
SeminarNeuroscienceRecording

Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing

Pulin Gong
The University of Sydney
Aug 10, 2023

The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.

SeminarNeuroscience

In vivo direct imaging of neuronal activity at high temporospatial resolution

Jang-Yeon Park
Sungkyunkwan University, Suwon, Korea
Jun 27, 2023

Advanced noninvasive neuroimaging methods provide valuable information on brain function, but each has clear pros and cons in terms of temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) based on the blood-oxygenation-level-dependent (BOLD) effect offers good spatial resolution on the order of millimeters, but poor temporal resolution on the order of seconds, because it relies on slow hemodynamic responses to neuronal activation and thus provides only indirect information on neuronal activity. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but their spatial information is limited to centimeter scales. There has therefore been a longstanding demand for noninvasive brain imaging methods capable of detecting neuronal activity at both high temporal and high spatial resolution. In this talk, I will introduce Direct Imaging of Neuronal Activity (DIANA), a novel MRI approach that can dynamically image neuronal spiking activity with millisecond precision, achieved by a data-acquisition scheme of rapid 2D line scans synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4T animal scanner during electrical whisker-pad stimulation. DIANA responses at millisecond temporal resolution correlated strongly with neuronal spiking activity, and could also capture the sequential propagation of neuronal activity along the thalamocortical pathway of brain networks. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was sensitive to changes in membrane-potential-associated tissue relaxation times such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.

SeminarNeuroscience

Why “pauses” matter: breaks in respiratory behavior orchestrate piriform network dynamics

Lisa Roux
University of Bordeaux
Jun 18, 2023
SeminarNeuroscience

Movement planning as a window into hierarchical motor control

Katja Kornysheva
Centre for Human Brain Health (CHBH) at the University of Birmingham, UK
Jun 14, 2023

The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When this ability is compromised, e.g. in stroke, neurodegenerative and developmental disorders, individuals' study, work and day-to-day living are impacted, at high societal cost. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of the serial elements of the upcoming sequence, rather than preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower- and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.

SeminarNeuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 13, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce the coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating the dynamics of central neuronal interactions, as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
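A classic building block of such spinal circuit models is the half-centre oscillator: two mutually inhibiting populations whose slow adaptation makes them burst in alternation, like the flexor and extensor phases of a step cycle. Below is a minimal Matsuoka-style sketch; the parameter values are generic textbook choices, not those of the models presented in the talk:

```python
import numpy as np

def matsuoka_step(u, v, dt, s=1.0, beta=2.5, w=2.5, tau_u=0.1, tau_v=0.2):
    """One Euler step of a two-unit Matsuoka half-centre oscillator:
    u = membrane state, v = slow self-adaptation, s = tonic drive,
    w = mutual inhibition, beta = adaptation strength."""
    y = np.maximum(u, 0.0)                         # rectified firing rates
    du = (-u - beta * v - w * y[::-1] + s) / tau_u  # y[::-1]: the other unit
    dv = (-v + y) / tau_v
    return u + dt * du, v + dt * dv

dt, steps = 0.001, 20000
u, v = np.array([0.1, 0.0]), np.zeros(2)           # slight asymmetry to start
y1 = np.empty(steps)
y2 = np.empty(steps)
for i in range(steps):
    u, v = matsuoka_step(u, v, dt)
    y1[i], y2[i] = np.maximum(u, 0.0)

# Alternating bursts: after a transient, the two outputs are rarely co-active.
coactive = np.mean((y1[5000:] > 0.05) & (y2[5000:] > 0.05))
```

Varying the tonic drive `s` or adding feedback terms shifts burst frequency and duty cycle, the kinds of manipulations such models can use to explore speed-dependent gait expression.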

SeminarNeuroscience

NOTE: DUE TO A CYBER ATTACK OUR UNIVERSITY WEB SYSTEM IS SHUT DOWN - TALK WILL BE RESCHEDULED

Susanne Schoch McGovern
Universität Bonn
Jun 6, 2023

The size and structure of the dendritic arbor play important roles in determining how the synaptic inputs of neurons are converted to action potential output and how neurons are integrated into the surrounding neuronal network. Accordingly, neurons with aberrant morphology have been associated with neurological disorders. Dysmorphic, enlarged neurons are, for example, a hallmark of focal epileptogenic lesions such as focal cortical dysplasia (FCDIIb) and gangliogliomas (GG). However, the regulatory mechanisms governing the development of dendrites are insufficiently understood. The evolutionarily conserved Ste20/Hippo kinase pathway has been proposed to play an important role in regulating the formation and maintenance of dendritic architecture. A key element of this pathway, Ste20-like kinase (SLK), regulates cytoskeletal dynamics in non-neuronal cells and is strongly expressed throughout neuronal development. Nevertheless, its function in neurons is unknown. We found that during the development of mouse cortical neurons, SLK has a surprisingly specific role in the proper elaboration of higher-order (≥ 3rd-order) dendrites, both in cultured neurons and in living mice. Moreover, SLK is required to maintain the excitation-inhibition balance. Specifically, SLK knockdown causes a selective loss of inhibitory synapses and of functional inhibition after postnatal day 15, while excitatory neurotransmission is unaffected. This mechanism may be relevant for human disease, as dysmorphic neurons within human cortical malformations exhibit a significant loss of SLK expression. To uncover the signaling cascades underlying the action of SLK, we combined phosphoproteomics, protein interaction screens and single-cell RNA-seq. Overall, our data identify SLK as a key regulator both of dendritic complexity during development and of inhibitory synapse maintenance.

SeminarNeuroscience

A recurrent network model of planning explains hippocampal replay and human behavior

Guillaume Hennequin
University of Cambridge, UK
May 30, 2023

When interacting with complex environments, humans can rapidly adapt their behavior to changes in task or context. To facilitate this adaptation, we often spend substantial periods of time contemplating possible futures before acting. For such planning to be rational, the benefits of planning to future behavior must at least compensate for the time spent thinking. Here we capture these features of human behavior by developing a neural network model where not only actions, but also planning, are controlled by prefrontal cortex. This model consists of a meta-reinforcement learning agent augmented with the ability to plan by sampling imagined action sequences drawn from its own policy, which we refer to as 'rollouts'. Our results demonstrate that this agent learns to plan when planning is beneficial, explaining the empirical variability in human thinking times. Additionally, the patterns of policy rollouts employed by the artificial agent closely resemble patterns of rodent hippocampal replays recently recorded in a spatial navigation task, in terms of both their spatial statistics and their relationship to subsequent behavior. Our work provides a new theory of how the brain could implement planning through prefrontal-hippocampal interactions, where hippocampal replays are triggered by - and in turn adaptively affect - prefrontal dynamics.
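The rollout mechanism described in the abstract can be sketched in a few lines. The toy environment, policy, and horizon below are hypothetical illustrations, not the model from the talk: the agent samples imagined action sequences from its own policy and scores them by the reward they reach, which is the quantity a rational planner would weigh against the time spent thinking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D track: states 0..4, reward at state 4; actions -1 (left), +1 (right).
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, 1.0 if s2 == GOAL else 0.0

def rollout(policy, s, horizon=6):
    """One imagined action sequence sampled from the agent's own policy."""
    total = 0.0
    for _ in range(horizon):
        a = rng.choice([-1, 1], p=policy[s])
        s, r = step(s, a)
        total += r
        if r > 0:          # the imagined trajectory reached the goal
            break
    return total

# A policy biased toward moving right reaches the reward in most rollouts;
# comparing such imagined returns against the cost of thinking is what
# makes planning rational in the sense described above.
policy = np.tile([0.2, 0.8], (N_STATES, 1))   # P(left), P(right) per state
mean_return = np.mean([rollout(policy, s=0) for _ in range(200)])
print(mean_return)
```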

SeminarNeuroscience

The Geometry of Decision-Making

Iain Couzin
University of Konstanz, Germany
May 23, 2023

Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial to how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between the remaining options (e.g., a very small difference in neuronal activity in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.

SeminarNeuroscience

The role of sub-population structure in computations through neural dynamics

Srdjan Ostojic
École normale supérieure
May 18, 2023

Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.

SeminarNeuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health such as epilepsy, Alzheimer's disease, and depression. One complication with this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to adapt homeostatically near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality that nonetheless maintains optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and I will describe new predictions that could soon be tested. More importantly, we will see how this organizing principle could aid the search for biomarkers that could soon be tested in clinical studies.
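The sub/super/critical distinction in the abstract can be illustrated with a minimal branching-process simulation (a standard toy model of neuronal avalanches, not the specific model from the talk). Each active unit activates a Poisson-distributed number of units in the next step, with mean equal to the branching ratio sigma; avalanche sizes blow up as sigma crosses 1.

```python
import numpy as np

rng = np.random.default_rng(42)

def avalanche_size(sigma, cap=10_000):
    """One avalanche as a Galton-Watson branching process: each active
    unit activates Poisson(sigma) units in the next time step."""
    active, size = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return size

# Mean avalanche size grows sharply as the branching ratio sigma
# crosses the critical point sigma = 1 (sizes are capped to keep the
# supercritical case finite).
means = {sigma: np.mean([avalanche_size(sigma) for _ in range(2000)])
         for sigma in (0.8, 1.0, 1.2)}
print(means)
```

For sigma = 0.8 the mean size stays near the analytic value 1/(1 - sigma) = 5, while at and above sigma = 1 the distribution becomes heavy-tailed and cap-dominated.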

SeminarNeuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 2, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
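The claim that continuously valued factors can arise from seemingly stochastic spiking can be illustrated with a minimal sketch (my own toy construction, not the authors' training method): generate Poisson spikes whose rates live on a two-dimensional latent subspace, then check that PCA on the spike counts concentrates variance in that subspace despite the discrete, variable spiking.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, K = 500, 80, 2          # time bins, neurons, latent factors

# Smooth, continuously valued factors (the population-level description).
t = np.linspace(0, 4 * np.pi, T)
factors = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)          # (T, K)
loading = rng.normal(size=(K, N))                                 # factor-to-neuron map
rates = np.clip(5.0 + 3.0 * factors @ loading, 0.1, None)         # positive firing rates
spikes = rng.poisson(rates)                                       # discrete, variable spikes

# PCA on mean-centered spike counts: variance concentrates in a
# low-dimensional subspace even though each neuron spikes stochastically.
X = spikes - spikes.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / np.sum(s**2)
print(explained[:3])   # first two components dominate
```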

SeminarNeuroscience

Epigenomic (re)programming of the brain and behavior by ovarian hormones

Marija Kundakovic
Fordham University
May 1, 2023

Rhythmic changes in sex hormone levels across the ovarian cycle exert powerful effects on the brain and behavior, and confer female-specific risks for neuropsychiatric conditions. In this talk, Dr. Kundakovic will discuss the role of fluctuating ovarian hormones as a critical biological factor contributing to the increased depression and anxiety risk in women. Cycling ovarian hormones drive brain and behavioral plasticity in both humans and rodents, and the talk will focus on animal studies in Dr. Kundakovic’s lab that are revealing the molecular and receptor mechanisms that underlie this female-specific brain dynamic. She will highlight the lab’s discovery of sex hormone-driven epigenetic mechanisms, namely chromatin accessibility and 3D genome changes, that dynamically regulate neuronal gene expression and brain plasticity but may also prime the (epi)genome for psychopathology. She will then describe functional studies, including hormone replacement experiments and the overexpression of an estrous cycle stage-dependent transcription factor, which provide the causal link(s) between hormone-driven chromatin dynamics and sex-specific anxiety behavior. Dr. Kundakovic will also highlight an unconventional role that chromatin dynamics may have in regulating neuronal function across the ovarian cycle, including in sex hormone-driven X chromosome plasticity and hormonally induced epigenetic priming. In summary, these studies provide a molecular framework to understand ovarian hormone-driven brain plasticity and increased female risk for anxiety and depression, opening new avenues for sex- and gender-informed treatments for brain disorders.

SeminarNeuroscienceRecording

Estimating repetitive spatiotemporal patterns from resting-state brain activity data

Yusuke Takeda
Computational Brain Dynamics Team, RIKEN Center for Advanced Intelligence Project, Japan; Department of Computational Brain Imaging, ATR Neural Information Analysis Laboratories, Japan
Apr 27, 2023

Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how this information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process to integrate the information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a large dataset.
- Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65.
- Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
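The core idea of finding repeated patterns in ongoing activity can be sketched with simple template matching (a toy stand-in for the estimation method in the papers, which jointly learns the patterns and their onsets): slide a normalized correlation over a noisy trace and greedily pick the best-matching, non-overlapping windows. The trace, pattern, and onsets below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic single-channel "resting-state" trace with one pattern
# embedded at three latent onsets, plus background noise.
pattern = np.sin(np.linspace(0, 2 * np.pi, 50))
signal = 0.3 * rng.normal(size=1000)
onsets = [100, 450, 800]
for t0 in onsets:
    signal[t0:t0 + 50] += pattern

def match_scores(x, tmpl):
    """Sliding-window normalized correlation with the template."""
    L = len(tmpl)
    z = (tmpl - tmpl.mean()) / tmpl.std()
    scores = np.empty(len(x) - L + 1)
    for i in range(len(scores)):
        w = x[i:i + L]
        scores[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), z) / L
    return scores

# Greedy peak picking: take the best match, suppress its neighborhood, repeat.
scores = match_scores(signal, pattern)
detected, s = [], scores.copy()
for _ in range(len(onsets)):
    i = int(np.argmax(s))
    detected.append(i)
    s[max(0, i - 50):i + 50] = -np.inf
print(sorted(detected))   # recovered onsets, close to the true ones
```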

SeminarNeuroscienceRecording

My evolution in invasive human neurophysiology: From basal ganglia single units to chronic electrocorticography; Therapies orchestrated by patients' own rhythms

Philip A. Starr, MD, PhD & Prof. Hayriye Cagnan, PhD
University of California, San Francisco, USA / University of Oxford, UK
Apr 26, 2023

On Thursday, April 27th, we will host Hayriye Cagnan and Philip A. Starr. Hayriye Cagnan, PhD, is an associate professor at the MRC Brain Network Dynamics Unit and University of Oxford. She will tell us about “Therapies orchestrated by patients’ own rhythms”. Philip A. Starr, MD, PhD, is a neurosurgeon and professor of Neurological Surgery at the University of California San Francisco. Besides his scientific presentation on “My evolution in invasive human neurophysiology: from basal ganglia single units to chronic electrocorticography”, he will give us a glimpse at the person behind the science. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!

SeminarNeuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 13, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
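One way to see why gating lets a linear network compute nonlinear functions: a ReLU network is a linear network whose pathways are gated by its own binary activation pattern, so once the gates are fixed the input-output map is linear, but the gates change with the input. The tiny network below uses random placeholder weights, purely to illustrate this equivalence rather than the exact formalism from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)
W1 = rng.normal(size=(10, 4))   # input-to-hidden weights
W2 = rng.normal(size=(1, 10))   # hidden-to-output weights

def relu_net(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def gated_linear_net(x):
    g = (W1 @ x > 0).astype(float)   # binary gates = the activation pattern
    return (W2 * g) @ (W1 @ x)       # purely linear once the gates are fixed

x = rng.normal(size=4)
print(relu_net(x), gated_linear_net(x))   # identical outputs
```

Because different inputs switch different gates on, the composite map is nonlinear even though every gated pathway is linear, which is what makes exact reductions of the learning dynamics tractable.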

ePoster

How connection probability shapes fluctuations of neural population dynamics

Nils Greven, Jonas Ranft, Tilo Schwalger

Bernstein Conference 2024

ePoster

Controlled sampling of non-equilibrium brain dynamics: modeling and estimation from neuroimaging signals

Matthieu Gilson

Bernstein Conference 2024

ePoster

Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons

Ferdinand Tixidre, Gianluigi Mongillo, Alessandro Torcini

Bernstein Conference 2024

ePoster

Cortex-wide high density ECoG recordings from rat reveal diverse generators of sleep-spindles with characteristic anatomical topographies and non-stationary subcycle dynamics

Arash Shahidi, Ramon Garcia-Cortadella, Gerrit Schwesig, Anna Umurzakova, Mudra Deshpande, Ekaterina Sonia, Anton Sirota

Bernstein Conference 2024

ePoster

Decision making: describing the dynamics of working memory

Alejandro Sospedra, Santiago Canals, Encarni Marcos

Bernstein Conference 2024

ePoster

Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron

Christian Schmid, James Murray

Bernstein Conference 2024

ePoster

Distributed dynamics and cognition in the multiregional neocortex

Xiao-Jing Wang

Bernstein Conference 2024

ePoster

Dynamics of specialization in neural modules under resource constraints

Gabriel Béna, Dan Goodman

Bernstein Conference 2024

ePoster

Effects of global inhibition on models of neural dynamics

Antonio de Candia, Silvia Scarpetta, Ludovico Minati

Bernstein Conference 2024

ePoster

Enhanced simulations of whole-brain dynamics using hybrid resting-state structural connectomes

Thanos Manos, Sandra Diaz-Pier, Igor Fortel, Ira Driscoll, Liang Zhan, Alex Leow

Bernstein Conference 2024

ePoster

Neural Dynamics of Memory Formation in the Primate Hippocampus

Elizabeth Buffalo

Bernstein Conference 2024

ePoster

Neuronal bursting from an interplay of fast voltage and slow concentration dynamics mediated by the Na+/K+-ATPase

Mahraz Behbood, Louisiane Lemaire, Jan-Hendrik Schleimer, Susanne Schreiber

Bernstein Conference 2024

ePoster

Gradient and network structure of lagged correlations in band-limited cortical dynamics

Paul Hege, Markus Siegel

Bernstein Conference 2024

ePoster

Optimal control of oscillations and synchrony in nonlinear models of neural population dynamics

Lena Salfenmoser, Klaus Obermayer

Bernstein Conference 2024

ePoster

Homeostatic regulation through aggregate synaptic dynamics at multiple timescales

Petros Vlachos, Jochen Triesch

Bernstein Conference 2024

ePoster

Identifying the impact of local connectivity features on network dynamics

Yuxiu Shao, David Dahmen, Stefano Recanatesi, Eric Shea-Brow, Srdjan Ostojic

Bernstein Conference 2024

ePoster

Identifying task-specific dynamics in recurrent neural networks using Dynamical Similarity Analysis

Alireza Ghalambor, Mohammad Taha Fakharian, Roxana Zeraati, Shervin Safavi

Bernstein Conference 2024

ePoster

Influence of Collective Network Dynamics on Stimulus Separation

Lars Schutzeichel, Jan Bauer, Peter Bouss, David Dahmen, Simon Musall, Moritz Helias

Bernstein Conference 2024

ePoster

Local E/I Balance and Spontaneous Dynamics in Neuronal Networks

Shreya Agarwal, Richmond Crisostomo, Ulrich Egert, Samora Okujeni

Bernstein Conference 2024

ePoster

Modeling gait dynamics with switching non-linear dynamical systems

Heike Stein, Njiva Andrianarivelo, Clarisse Batifol, Jeremy Gabillet, Ali Jalil, Michael Graupner, N. Alex Cayco Gajic

Bernstein Conference 2024

ePoster

Multi-scale single-cycle analysis of cortex-wide wave dynamics reveals complex spatio-temporal structure

Anna Umurzakova, Ramon Garcia-Cortadella, Gerrit Schweisig, Arash Shahidi, Jose Garrido, Anton Sirota

Bernstein Conference 2024

ePoster

Population Dynamics and Network Behaviour of ON- and OFF-cells in the Rostral Ventral Medulla

Carl Ashworth, Caitlynn De Preter, Melissa Martenson, Zhigang Shi, Mary Heinricher, Flavia Mancini

Bernstein Conference 2024

ePoster

Preference dynamics in economic decision-making explained by dopaminergic distributional codes

Mehrdad Salmasi, Raymond Dolan

Bernstein Conference 2024

ePoster

Presynaptic Activity-dependent calcium dynamics in cytosol & ER, and a brief proposal for a morphodynamic model of growth cone motility

Nicole Flores-Pretell, Ranjita Dutta Roy, Daniel Gonzalez-Esparza, Dmitry Logashenko, Markus Breit, Markus Knodel, Gabriel Wittum

Bernstein Conference 2024

ePoster

Quantifying the learning dynamics of single subjects in a reversal learning task with change point analysis

Nicolas Diekmann, Metin Uengoer, Sen Cheng

Bernstein Conference 2024

ePoster

Replay of Chaotic Dynamics through Differential Hebbian Learning with Transmission Delays

Georg Reich, Pau Vilimelis Aceituno, Matthew Cook

Bernstein Conference 2024

ePoster

Short-Distance Connections Enhance Neural Network Dynamics

Mohmmad Sharif Hussainyar, Dong Li, Claus Hilgetag

Bernstein Conference 2024

ePoster

Slow Manifold Dynamics for Working Memory are near Continuous Attractors

Ábel Ságodi, Guillermo Martin, Piotr Sokół, Il Park

Bernstein Conference 2024

ePoster

Synaptic Upscaling Amplifies Chaotic Dynamics in Recurrent Networks of Rate Neurons

Farhad Razi, Fleur Zeldenrust

Bernstein Conference 2024

ePoster

Unifying fast and slow temporal dynamics of AMPARs during Long-Term Potentiation

Surbhit Wagle, Nataliya Kraynyukova, Maximilian Kracht, Anne-Sophie Hafner, Amparo Acker-Palmer, Erin Schuman, Tatjana Tchumatchenko

Bernstein Conference 2024

ePoster

Capturing the evolution of low-dimensional dynamics in large scale neural recordings with sliceTCA

COSYNE 2022

ePoster

Cerebellum learns to drive cortical dynamics: a computational lesson

COSYNE 2022

ePoster

Comparable theta phase coding dynamics along the CA1 transverse axis

COSYNE 2022

ePoster

Coordinated cortico-cerebellar neural dynamics underlying neuroprosthetic learning

COSYNE 2022

ePoster

Disentangling neural dynamics with fluctuating hidden Markov models

COSYNE 2022

ePoster

Distinct dynamics in projection-specific midbrain dopamine populations for learning and motivation

COSYNE 2022

ePoster

Dynamics of interhemispheric prefrontal coordination underlying serial dependence in working memory

COSYNE 2022

ePoster

Efficient learning of low dimensional latent dynamics in multiscale spiking and LFP population activity

COSYNE 2022

ePoster

Emergent behavior and neural dynamics in artificial agents tracking turbulent plumes

COSYNE 2022

ePoster

Effective excitability: a determinant of the network bursting dynamics revealed by parameter invariance

Oleg Vinogradov, Emmanouil Giannakakis, Betül Uysal, Shlomo Ron, Eyal Weinreb, Holger Lerche, Elisha Moses, Anna Levina

Bernstein Conference 2024