
Flexible Computation

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with flexible computation across World Wide.
5 curated items · 4 Seminars · 1 Position
Updated 2 days ago
Position · Computational Neuroscience

Dr. Udo Ernst

University of Bremen
University of Bremen, Bremen, Germany
Dec 5, 2025

The Computational Neurophysics lab at the University of Bremen, headed by Dr. Udo Ernst, offers, starting at the earliest possible date, a Postdoc / PhD student position in Computational Neuroscience for 3 years. In this project we want to study the organization and optimization of flexible information processing in neural networks, with a specific focus on the visual system. You will use network modelling, numerical simulation, and mathematical analysis to investigate fundamental aspects of flexible computation, such as task-dependent coordination of multiple brain areas for efficient information processing, as well as the emergence of flexible circuits from learning schemes that simultaneously optimize for function and flexibility. These studies will be complemented by biophysically realistic modelling and data analysis in collaboration with experimental work. Here we will investigate selective attention as a central aspect of flexibility in the visual system, involving task-dependent coordination of multiple visual areas.

Seminar · Neuroscience

Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory

James Whittington
Stanford University / University of Oxford
Feb 13, 2024

Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, controllable neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
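
The synapse-versus-activity duality at the heart of this talk can be illustrated with a toy example. This is a minimal sketch, not the authors' model; all names and sizes are illustrative. The same patterns are stored once in a Hebbian weight matrix and retrieved by pattern completion, and once in a persistent activity vector partitioned into slots and retrieved by selecting a slot.

    import numpy as np

    rng = np.random.default_rng(0)
    n, n_items = 64, 3
    items = rng.standard_normal((n_items, n))  # random "event" patterns

    # Episodic-memory style: store items in synapses via Hebbian outer
    # products, then retrieve by pattern completion from a noisy cue.
    W = sum(np.outer(x, x) for x in items) / n
    cue = items[1] + 0.3 * rng.standard_normal(n)
    recalled = W @ cue  # overlaps most strongly with items[1]

    # Working-memory style: hold the same items in ongoing activity,
    # one "slot" per item; a control signal selects which slot to read.
    activity = np.concatenate(items)

    def read_slot(i):
        # A controllable readout selects one activity subspace ("slot").
        return activity[i * n:(i + 1) * n]

    print(np.corrcoef(recalled, items[1])[0, 1])      # close to 1
    print(np.corrcoef(read_slot(1), items[1])[0, 1])  # exactly 1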

Seminar · Neuroscience

The role of sub-population structure in computations through neural dynamics

Srdjan Ostojic
École normale supérieure
May 18, 2023

Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.

Seminar · Neuroscience · Recording

The role of population structure in computations through neural dynamics

Alexis Dubreuil
French National Centre for Scientific Research (CNRS), Bordeaux
Nov 1, 2022

Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
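
One way to see how gain modulation of subpopulations can reshape collective dynamics is a toy rank-one network. This is a sketch in the spirit of the low-rank framework used in this work, not the trained networks themselves; all numbers are illustrative. With connectivity J = m nᵀ / N and two subpopulations whose entries of n have opposite signs, a context-dependent gain on each subpopulation flips the effective feedback along m.

    import numpy as np

    N = 1000
    half = N // 2

    # Rank-one connectivity J = m n^T / N, with two subpopulations that
    # pair the output vector m with the input-selection vector n in
    # opposite ways.
    m = np.ones(N)
    n = np.concatenate([np.ones(half), -np.ones(half)])

    def effective_coupling(gain):
        # With per-neuron gains g, the linearised dynamics along the m
        # direction follow the scalar coupling (n * g) @ m / N.
        return (n * gain) @ m / N

    # Context A boosts the gain of subpopulation 1, context B that of
    # subpopulation 2; the same synapses implement opposite dynamics.
    g_a = np.concatenate([2.0 * np.ones(half), 0.5 * np.ones(half)])
    g_b = np.concatenate([0.5 * np.ones(half), 2.0 * np.ones(half)])
    print(effective_coupling(g_a))  #  0.75: positive feedback along m
    print(effective_coupling(g_b))  # -0.75: negative feedback along m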

Seminar · Neuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
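
As a concrete picture of one such motif, here is a minimal ring-attractor sketch; this is a classic textbook construction, not the trained multitask networks from the talk, and all parameters are illustrative. A transient cue sets the phase of an activity bump, and the recurrent cosine-shaped connectivity then maintains that continuous circular variable after the cue is removed.

    import numpy as np

    N = 200
    theta = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred angles

    # Ring-attractor connectivity: uniform inhibition plus local excitation.
    J = (-1.0 + 3.0 * np.cos(theta[:, None] - theta[None, :])) / N

    def remembered_angle(cue_angle, steps=500, dt=0.1):
        r = np.zeros(N)
        for t in range(steps):
            # A transient cue pins the bump; afterwards the recurrent
            # dynamics alone sustain it.
            inp = 2.0 * np.cos(theta - cue_angle) if t < 100 else 0.0
            r += dt * (-r + np.tanh(J @ r + inp))
        # Decode the stored angle from the population vector.
        return np.angle(np.exp(1j * theta) @ r)

    print(remembered_angle(1.0))  # ~1.0: bump persists at the cued angle
    print(remembered_angle(2.5))  # ~2.5: same attractor, new phase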