
Flexibility

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with flexibility across World Wide.
89 curated items: 54 seminars, 35 ePosters
Updated 3 months ago
89 results
Seminar · Neuroscience

Brain-Wide Compositionality and Learning Dynamics in Biological Agents

Kanaka Rajan
Harvard Medical School
Nov 12, 2024

Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.

Seminar · Neuroscience · Recording

Principles of Cognitive Control over Task Focus and Task Switching

Tobias Egner
Duke University, USA
Sep 10, 2024

2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
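
The recency heuristic described above can be illustrated with a simple delta-rule learner. The sketch below is an editorial illustration, not code from the lecture; the variable names, learning rate, and trial structure are assumptions.

```python
# Hypothetical sketch of the "recency heuristic": an incremental, online learner
# that nudges a control setting up or down based on the recent history of demands.
# Names, learning rate, and trial sequence are illustrative assumptions.

def update_control(control, demand, lr=0.2):
    """Delta-rule update: move the control level toward the observed demand."""
    return control + lr * (demand - control)

focus, switch_readiness = 0.5, 0.5    # independent stability / flexibility settings
trial_history = [                      # (conflict observed, switch required) per trial
    (1, 0), (1, 0), (0, 1), (0, 1), (1, 0),
]
for conflict, switched in trial_history:
    focus = update_control(focus, conflict)                         # stability tracks recent conflict
    switch_readiness = update_control(switch_readiness, switched)   # flexibility tracks recent switches
    print(f"focus={focus:.2f}  switch_readiness={switch_readiness:.2f}")
```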

Seminar · Psychology

Where Cognitive Neuroscience Meets Industry: Navigating the Intersections of Academia and Industry

Mirta Stantic
Royal Holloway, University of London
Feb 18, 2024

In this talk, Mirta will share her journey from her education at a mathematically focused high school to her current, unconventional career in London, emphasizing the evolution from a local education in Croatia to international experiences in the US and UK. We will explore the concept of interdisciplinary careers in the modern world, viewing them through the framework of increasing demand, flexibility, and dynamism in the current workplace. We will underscore the significance of interdisciplinary research for launching careers outside of academia, and bolstering those within. I will challenge the conventional norm of working either in academia or industry, and encourage discussion about the opportunities for combining the two across a myriad of career paths. I’ll use examples from my own and others’ research to highlight opportunities for early-career researchers to extend their work into practical applications. Such an approach leverages the strengths of both sectors, fostering innovation and practical applications of research findings. I hope these insights can offer valuable perspectives for those looking to navigate the evolving demands of the global job market, illustrating the advantages of a versatile skill set that spans multiple disciplines and allows extensions into exciting career options.

Seminar · Neuroscience

Visual mechanisms for flexible behavior

Marlene Cohen
University of Chicago
Jan 25, 2024

Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.

Seminar · Neuroscience

Movement planning as a window into hierarchical motor control

Katja Kornysheva
Centre for Human Brain Health (CHBH) at the University of Birmingham, UK
Jun 14, 2023

The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When compromised, e.g. in stroke, neurodegenerative and developmental disorders, the individuals’ study, work and day-to-day living are impacted with high societal costs. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of serial elements of the upcoming sequence, rather than a preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.

Seminar · Cognition

Prosody in the voice, face, and hands changes which words you hear

Hans Rutger Bosker
Donders Institute of Radboud University
May 22, 2023

Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) and suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.

Seminar · Cognition

Beyond Volition

Patrick Haggard
University College London
Apr 26, 2023

Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other hand. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.

Seminar · Neuroscience

Establishment and aging of the neuronal DNA methylation landscape in the hippocampus

Sara Zocher, PhD
German Center for Neurodegenerative Diseases (DZNE), Dresden
Apr 11, 2023

The hippocampus is a brain region with key roles in memory formation, cognitive flexibility and emotional control. Yet hippocampal function is severely impaired during aging and in neurodegenerative diseases, and impairments in hippocampal function underlie age-related cognitive decline. Accumulating evidence suggests that the deterioration of the neuron-specific epigenetic landscape during aging contributes to progressive, age-related neuronal dysfunction. For instance, we have recently shown that aging is associated with pronounced alterations of neuronal DNA methylation patterns in the hippocampus. Because neurons are generated mostly during development with limited replacement in the adult brain, they are particularly long-lived cells and have to maintain their cell-type specific gene expression programs life-long in order to preserve brain function. Understanding the epigenetic mechanisms that underlie the establishment and long-term maintenance of neuron-specific gene expression programs will help us to comprehend the sources and consequences of their age-related deterioration. In this talk, I will present our recent work that investigated the role of DNA methylation in the establishment of neuronal gene expression programs and neuronal function, using adult neurogenesis in the hippocampus as a model. I will then describe the effects of aging on the DNA methylation landscape in the hippocampus and discuss the malleability of the aging neuronal methylome to lifestyle and environmental stimulation.

Seminar · Neuroscience

Relations and Predictions in Brains and Machines

Kim Stachenfeld
DeepMind
Apr 6, 2023

Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to be due in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
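
The predictive representations and spectral compression described in the first part of the talk map closely onto the successor representation and its eigendecomposition. The toy sketch below illustrates those two standard ideas under that assumption; it is not the speaker's code, and the ring-shaped environment, discount factor, and goal location are arbitrary choices.

```python
# Illustrative sketch: a predictive "successor representation" (SR) over states,
# rapid re-evaluation under a new goal, and a spectral compression of the SR
# reminiscent of smooth, grid-like basis functions. All parameters are made up.
import numpy as np

n_states = 20
# Random-walk transition matrix on a ring of states
T = np.zeros((n_states, n_states))
for s in range(n_states):
    T[s, (s - 1) % n_states] = 0.5
    T[s, (s + 1) % n_states] = 0.5

gamma = 0.95
# SR: expected discounted future occupancy, M = (I - gamma * T)^(-1)
M = np.linalg.inv(np.eye(n_states) - gamma * T)

# Rapid adaptation to a new goal: value is the fixed SR times a reward vector,
# so changing the goal does not require relearning M.
reward = np.zeros(n_states); reward[7] = 1.0
value = M @ reward

# Spectral compression: low-frequency eigenvectors of the SR give smooth basis
# functions that generalize among nearby (related) states.
eigvals, eigvecs = np.linalg.eig(M)
order = np.argsort(-eigvals.real)
low_dim_basis = eigvecs[:, order[:5]].real
print(value.round(2))
print("compressed basis shape:", low_dim_basis.shape)
```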

Seminar · Neuroscience · Recording

Developmentally structured coactivity in the hippocampal trisynaptic loop

Roman Huszár
Buzsáki Lab, New York University
Apr 4, 2023

The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information incorporates itself into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.

Seminar · Neuroscience

Ambient noise reveals rapid flexibility in marmoset vocal behavior

Julia Löschner
Mar 9, 2023
Seminar · Neuroscience

Age differences in cortical network flexibility and motor learning ability

Kazumasa Uehara
Mar 9, 2023
Seminar · Neuroscience · Recording

Prefrontal top-down projections control context-dependent strategy selection

Olivier Gschwend
Medidee Services SA (former postdoc at Cold Spring Harbor Laboratory)
Dec 6, 2022

The rules governing behavior often vary with behavioral contexts. As a result, an action rewarded in one context may be discouraged in another. Animals and humans are capable of switching between behavioral strategies under different contexts and acting adaptively according to the variable rules, a flexibility that is thought to be mediated by the prefrontal cortex (PFC). However, how the PFC orchestrates the context-dependent switch of strategies remains unclear. Here we show that pathway-specific projection neurons in the medial PFC (mPFC) differentially contribute to context-instructed strategy selection. In mice trained in a decision-making task in which a previously established rule and a newly learned rule are associated with distinct contexts, the activity of mPFC neurons projecting to the dorsomedial striatum (mPFC-DMS) encodes the contexts and further represents decision strategies conforming to the old and new rules. Moreover, mPFC-DMS neuron activity is required for the context-instructed strategy selection. In contrast, the activity of mPFC neurons projecting to the ventral midline thalamus (mPFC-VMT) does not discriminate between the contexts, and represents the old rule even if mice have adopted the new one. Furthermore, these neurons act to prevent the strategy switch under the new rule. Our results suggest that mPFC-DMS neurons promote flexible strategy selection guided by contexts, whereas mPFC-VMT neurons favor fixed strategy selection by preserving old rules.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
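
To make the contrast concrete, the toy sketch below compares two simulated "subjects" in two ways: a standard RSA-style similarity of their condition geometry, and a separate comparison of their trial-to-trial noise-covariance geometry, the second-order feature the abstract argues is neglected by current methods. This is an editorial illustration, not the new framework described in the talk; all simulation parameters and names are assumptions.

```python
# Toy comparison of two subjects: first-order (condition-mean) geometry vs.
# second-order (trial-to-trial noise) geometry. Purely illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

n_cond, n_trials, n_neurons = 8, 40, 30
shared_tuning = np.random.default_rng(0).normal(size=(n_cond, n_neurons))  # common "signal"

def simulate_subject(seed):
    r = np.random.default_rng(seed)
    noise_axis = r.normal(size=n_neurons)            # subject-specific correlated noise
    return np.stack([shared_tuning[c]
                     + r.normal(size=(n_trials, n_neurons))
                     + np.outer(r.normal(size=n_trials), noise_axis)
                     for c in range(n_cond)])        # shape (condition, trial, neuron)

def condition_rdm(x):                                # first-order: geometry of condition means
    return pdist(x.mean(axis=1))

def noise_cov(x):                                    # second-order: trial-to-trial noise structure
    resid = x - x.mean(axis=1, keepdims=True)
    return np.cov(resid.reshape(-1, n_neurons).T)

a, b = simulate_subject(1), simulate_subject(2)
print("similarity of condition geometry (RSA-style):",
      round(spearmanr(condition_rdm(a), condition_rdm(b)).correlation, 2))
print("similarity of noise-covariance geometry:",
      round(float(np.corrcoef(noise_cov(a).ravel(), noise_cov(b).ravel())[0, 1]), 2))
```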

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
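
For orientation, switching an existing Brian 2 script to the GPU backend is intended to require only selecting the device, as sketched below. The toy model and its parameters are illustrative assumptions, not from the talk; consult the Brian2CUDA documentation for the authoritative interface.

```python
# Hedged usage sketch: running a small Brian 2 model on the Brian2CUDA backend.
# The neuron model below is an arbitrary toy example.
from brian2 import *
import brian2cuda                    # registers the "cuda_standalone" device

set_device("cuda_standalone")        # generate and run CUDA code instead of CPU code

# A small leaky integrate-and-fire population with constant suprathreshold drive
eqs = "dv/dt = (1.1 - v) / (10*ms) : 1"
group = NeuronGroup(1000, eqs, threshold="v > 1", reset="v = 0", method="exact")
mon = SpikeMonitor(group)

run(1 * second)
print("total spikes:", mon.num_spikes)
```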

Seminar · Neuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
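
One standard step of the dynamical systems analyses mentioned above is locating approximate fixed points of a trained network by minimizing the speed of its dynamics. The sketch below illustrates the idea on a stand-in vanilla RNN with random weights; it is not the authors' code, and the network size, context input, and optimizer settings are assumptions.

```python
# Illustrative fixed-point search: minimize the speed q(h) = ||F(h, x) - h||^2
# of an RNN's dynamics for a fixed context input x.
import torch

torch.manual_seed(0)
n = 64
W_rec = torch.randn(n, n) / n**0.5           # stand-in recurrent weights (assumed trained)
W_in = torch.randn(n, 3) / 3**0.5
x_context = torch.tensor([1.0, 0.0, 0.0])    # fixed task/context input

def step(h, x):
    """One step of a vanilla tanh RNN: h' = tanh(W_rec h + W_in x)."""
    return torch.tanh(W_rec @ h + W_in @ x)

h = torch.randn(n, requires_grad=True)       # start from a random (or recorded) state
opt = torch.optim.Adam([h], lr=0.01)
for i in range(2000):
    opt.zero_grad()
    q = torch.sum((step(h, x_context) - h) ** 2)   # "speed" of the dynamics at h
    q.backward()
    opt.step()

print(f"final speed q(h) = {q.item():.2e}")  # near zero -> approximate fixed point
# Linearizing step() around such points (its Jacobian) is what reveals attractors,
# decision boundaries, and rotational motifs of the kind described above.
```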

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
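
A minimal sketch of the kind of non-linear accumulation described above, in the spirit of normative change-point (hazard-rate) models of belief updating; the specific functional form, hazard rate, and noise level are illustrative assumptions rather than the model used in the talk.

```python
# Non-linear evidence accumulation in a changing environment: the previous belief
# is discounted by the probability that the hidden state changed (hazard rate)
# before adding the new sample. Parameters are made up for illustration.
import numpy as np

def prior_from_belief(L, hazard):
    """Discount the previous belief L (log posterior odds) by the change probability."""
    return L + np.log((1 - hazard) / hazard + np.exp(-L)) \
             - np.log((1 - hazard) / hazard + np.exp(L))

rng = np.random.default_rng(1)
hazard, n_samples = 0.1, 200
state, L = 1, 0.0
for t in range(n_samples):
    if rng.random() < hazard:                    # hidden state occasionally flips
        state *= -1
    llr = rng.normal(loc=state, scale=1.5)       # noisy log-likelihood ratio sample
    L = prior_from_belief(L, hazard) + llr       # non-linear accumulation

# With hazard -> 0 this reduces to perfect linear integration; a larger hazard
# keeps the belief flexible by limiting how much past evidence is retained.
print(f"final belief (log odds): {L:.2f}, true state: {state}")
```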

Seminar · Neuroscience

Chemistry of the adaptive mind: lessons from dopamine

Roshan Cools, PhD
Donders Institute for Brain, Cognition and Behaviour, Radboudumc, Department of ...
Jun 13, 2022

The human brain faces a variety of computational dilemmas, including the flexibility/stability, the speed/accuracy, and the labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control, and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity, and on opportunity cost, respectively.

Seminar · Neuroscience

Unchanging and changing: hardwired taste circuits and their top-down control

Hao Jin
Columbia
May 24, 2022

The taste system detects five major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry to decipher how taste signals are encoded to drive innate response and modulated to facilitate adaptive response. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled-lines?). Leveraging these genetic handles, I demonstrated one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines aptly adapted to the internal state and external environment? I studied the modulation of taste signals by conflicting taste qualities in the concurrence of sweet and bitter to understand how adaptive taste responses emerge from hardwired taste circuits. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation of NST taste signals by the taste cortex and amygdala. While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via the amygdala onto the incoming sweet signal in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.

Seminar · Neuroscience · Recording

The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)

Elizabeth Jefferies
Department of Psychology, University of York, UK
May 24, 2022

Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.

Seminar · Neuroscience

How does a neuron decide when and where to make a synapse?

Peter R. Hiesinger
Free University, Berlin, Germany
Feb 15, 2022

Precise synaptic connectivity is a prerequisite for the function of neural circuits, yet individual neurons, taken out of their developmental context, readily form unspecific synapses. How does genetically encoded brain wiring deal with this apparent contradiction? Brain wiring is a developmental growth process that is not only characterized by precision, but also flexibility and robustness. As in any other growth process, cellular interactions are restricted in space and time. Correspondingly, molecular and cellular interactions are restricted to those that 'get to see' each other during development. This seminar will explore the question how neurons decide when and where to make synapses using the Drosophila visual system as a model. New findings reveal that pattern formation during growth and the kinetics of live neuronal interactions restrict synapse formation and partner choice for neurons that are not otherwise prevented from making incorrect synapses in this system. For example, cell biological mechanisms like autophagy as well as developmental temperature restrict inappropriate partner choice through a process of kinetic exclusion that critically contributes to wiring specificity. The seminar will explore these and other neuronal strategies for deciding when and where to make synapses during developmental growth, strategies that contribute to precise, flexible and robust outcomes in brain wiring.

Seminar · Neuroscience · Recording

New Mechanisms of Extracellular Matrix Remodeling

Silvio Rizzoli
University of Goettingen School of Medicine
Jan 30, 2022

In the adult brain, synapses are tightly enwrapped by lattices of extracellular matrix that consist of extremely long-lived molecules. These lattices are deemed to stabilize synapses, restrict the reorganization of their transmission machinery, and prevent them from undergoing structural or morphological changes. At the same time, they are expected to retain some degree of flexibility to permit occasional events of synaptic plasticity. The recent understanding that structural changes to synapses are significantly more frequent than previously assumed (occurring even on a timescale of minutes) has called for a mechanism that allows continual and energy-efficient remodeling of the ECM at synapses. In the talk, I review our recent work showcasing such a process, based on the constitutive recycling of synaptic ECM molecules. I discuss the key characteristics of this mechanism, focusing on its roles in mediating synaptic transmission and plasticity, and speculate on additional potential functions in neuronal signaling.

Seminar · Neuroscience · Recording

NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation

Vivek Jayaraman
Janelia Research Campus
Nov 30, 2021

A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.

Seminar · Neuroscience · Recording

Deep kernel methods

Laurence Aitchison
University of Bristol
Nov 24, 2021

Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods without that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
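
The connection to infinitely wide networks can be made concrete with the layer-by-layer kernel recursion of an infinite-width ReLU network, sketched below. This shows only the "kernel at every layer" structure; the joint, regularised optimization of those kernels that defines a deep kernel machine is not implemented here, and the closed-form formula and depth are standard textbook choices rather than anything specific to the talk.

```python
# Illustrative sketch: recursive kernel of an infinitely wide ReLU network,
# i.e. one kernel per layer, each obtained in closed form from the previous one.
import numpy as np

def relu_layer_kernel(K):
    """Map one layer's Gram matrix to the next layer's Gram matrix
    (expectation over infinitely many ReLU units with Gaussian weights)."""
    diag = np.sqrt(np.diag(K))
    norm = np.outer(diag, diag)
    cos_theta = np.clip(K / norm, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    return (norm / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 toy inputs, 3 features
K = X @ X.T                          # layer-0 (linear) kernel
for layer in range(3):               # a kernel at every layer
    K = relu_layer_kernel(K)
print(np.round(K, 3))
```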

Seminar · Neuroscience · Recording

Norse: A library for gradient-based learning in Spiking Neural Networks

Jens Egholm Pedersen
KTH Royal Institute of Technology
Nov 2, 2021

We introduce Norse: An open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. Users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. We strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules composes. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress to support network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation, using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.
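
A minimal sketch of the PyTorch-style workflow this describes, assuming Norse's LIF cell abstraction with an (output, state) forward signature; class names and exact signatures may differ across versions, so treat this as an assumption-laden illustration rather than the library's documented API.

```python
# Hedged sketch: mixing an ordinary PyTorch layer with a spiking LIF cell and
# unrolling over time, so standard backprop / optimizers apply via surrogate
# gradients. Shapes and hyperparameters are arbitrary.
import torch
import norse.torch as norse

lin = torch.nn.Linear(10, 4)      # ordinary PyTorch layer
lif = norse.LIFCell()             # spiking nonlinearity with internal neuron state

x = torch.randn(100, 1, 10)       # (time steps, batch, features)
state, outputs = None, []
for t in range(x.shape[0]):       # unroll over time, carrying the neuron state
    z, state = lif(lin(x[t]), state)
    outputs.append(z)
spikes = torch.stack(outputs)
loss = spikes.sum()               # toy loss; gradients flow through spike times
loss.backward()
```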

Seminar · Psychology · Recording

Removing information from working memory

Jarrod Lewis-Peacock
University of Texas at Austin
Sep 23, 2021

Holding information in working memory is essential for cognition, but removing unwanted thoughts is equally important. There is great flexibility in how we can manipulate information in working memory, but the processes and consequences of these operations are poorly understood. In this talk I will discuss our recent findings using multivariate pattern analyses of fMRI brain data to demonstrate the successful removal of information from working memory using three different strategies: suppressing a specific thought, replacing a thought with a different one, and clearing the mind of all thought. These strategies are supported by distinct brain regions and have differential consequences on the encoding of new information. I will discuss implications of these results on theories of memory and I will highlight some new directions involving the use of real-time neurofeedback to investigate causal links between brain and behavior.

Seminar · Open Source · Recording

SimBA for Behavioral Neuroscientists

Sam A. Golden
University of Washington, Department of Biological Structure
Jul 15, 2021

Several excellent computational frameworks exist that enable high-throughput and consistent tracking of freely moving unmarked animals. SimBA introduces and distributes a plug-and-play pipeline that enables users to use these pose-estimation approaches in combination with behavioral annotation for the generation of supervised machine-learning behavioral predictive classifiers. SimBA was developed for the analysis of complex social behaviors, but includes the flexibility for users to generate predictive classifiers across other behavioral modalities with minimal effort and no specialized computational background. SimBA has a variety of extended functions for large-scale batch video pre-processing, generating descriptive statistics from movement features, and interactive modules for user-defined regions of interest and visualizing classification probabilities and movement patterns.

Seminar · Neuroscience · Recording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jun 30, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
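
The core optimization step, aligning objects between source and target analogs by gradient descent on a graph-matching objective, can be sketched in a few lines. The toy example below is an editorial illustration (it omits, for instance, doubly-stochastic constraints and the attributed-graph extraction from pixels); all names and parameters are assumptions, not the PAM implementation.

```python
# Toy "analogical mapping as graph matching": optimize a soft source->target
# assignment that rewards both attribute similarity and relational consistency.
import torch

torch.manual_seed(0)
n = 4
feats_src = torch.randn(n, 3)                        # node attributes (source analog)
perm = torch.randperm(n)                             # hidden correspondence
feats_tgt = feats_src[perm] + 0.05 * torch.randn(n, 3)
A_src = (torch.rand(n, n) > 0.5).float().triu(1)
A_src = A_src + A_src.T                              # symmetric source relations
A_tgt = A_src[perm][:, perm]                         # consistently relabelled target relations

sim = feats_src @ feats_tgt.T                        # object (attribute) similarity
logits = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for step in range(300):
    opt.zero_grad()
    P = torch.softmax(logits, dim=1)                 # soft source->target mapping
    attribute_score = (P * sim).sum()
    relation_score = (A_src * (P @ A_tgt @ P.T)).sum()   # relational consistency
    (-(attribute_score + relation_score)).backward()
    opt.step()

print("recovered mapping:", torch.softmax(logits, 1).argmax(1).tolist())
print("true mapping:     ", torch.argsort(perm).tolist())
```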

Seminar · Neuroscience

Psychological mechanisms and functions of 5-HT and SSRIs in potential therapeutic change: Lessons from the serotonergic modulation of action selection, learning, affect, and social cognition

Clark Roberts
University of Cambridge, Department of Psychology
May 25, 2021

Uncertainty regarding which psychological mechanisms are fundamental in mediating SSRI treatment outcomes and wide-ranging variability in their efficacy has raised more questions than it has answered. Since subjective mood states are an abstract scientific construct, only available through self-report in humans, and likely involving input from multiple top-down and bottom-up signals, it has been difficult to model at what level SSRIs interact with this process. Converging translational evidence indicates a role for serotonin in modulating context-dependent parameters of action selection, affect, and social cognition; and concurrently supporting learning mechanisms, which promote adaptability and behavioural flexibility. We examine the theoretical basis, ecological validity, and interaction of these constructs and how they may or may not exert a clinical benefit. Specifically, we bridge crucial gaps between disparate lines of research, particularly findings from animal models and human clinical trials, which often seem to present irreconcilable differences. In determining how SSRIs exert their effects, our approach examines the endogenous functions of 5-HT neurons, how 5-HT manipulations affect behaviour in different contexts, and how their therapeutic effects may be exerted in humans – which may illuminate issues of translational models, hierarchical mechanisms, idiographic variables, and social cognition.

Seminar · Neuroscience

State-dependent cortical circuits

Jess Cardin
Yale School of Medicine
May 13, 2021

Spontaneous and sensory-evoked cortical activity is highly state-dependent, promoting the functional flexibility of cortical circuits underlying perception and cognition. Using neural recordings in combination with behavioral state monitoring, we find that arousal and motor activity have complementary roles in regulating local cortical operations, providing dynamic control of sensory encoding. These changes in encoding are linked to altered performance on perceptual tasks. Neuromodulators, such as acetylcholine, may regulate this state-dependent flexibility of cortical network function. We therefore recently developed an approach for dual mesoscopic imaging of acetylcholine release and neural activity across the entire cortical mantle in behaving mice. We find spatiotemporally heterogeneous patterns of cholinergic signaling across the cortex. Transitions between distinct behavioral states reorganize the structure of large-scale cortico-cortical networks and differentially regulate the relationship between cholinergic signals and neural activity. Together, our findings suggest dynamic state-dependent regulation of cortical network operations at the levels of both local and large-scale circuits.

Seminar · Neuroscience · Recording

Prefrontal circuits underlying cognitive flexibility

Timothy Spellman
Weill Cornell Medical College
May 4, 2021
Seminar · Neuroscience

State-dependent cortical circuits

Jessica Cardin
Yale School of Medicine
Jan 17, 2021

Spontaneous and sensory-evoked cortical activity is highly state-dependent, promoting the functional flexibility of cortical circuits underlying perception and cognition. Using neural recordings in combination with behavioral state monitoring, we find that arousal and motor activity have complementary roles in regulating local cortical operations, providing dynamic control of sensory encoding. These changes in encoding are linked to altered performance on perceptual tasks. Neuromodulators, such as acetylcholine, may regulate this state-dependent flexibility of cortical network function. We therefore recently developed an approach for dual mesoscopic imaging of acetylcholine release and neural activity across the entire cortical mantle in behaving mice. We find spatiotemporally heterogeneous patterns of cholinergic signaling across the cortex. Transitions between distinct behavioral states reorganize the structure of large-scale cortico-cortical networks and differentially regulate the relationship between cholinergic signals and neural activity. Together, our findings suggest dynamic state-dependent regulation of cortical network operations at the levels of both local and large-scale circuits.

Seminar · Neuroscience · Recording

State-dependent regulation of cortical circuits

Jessica Cardin
Yale School of Medicine
Nov 10, 2020

Spontaneous and sensory-evoked cortical activity is highly state-dependent, promoting the functional flexibility of cortical circuits underlying perception and cognition. Using neural recordings in combination with behavioral state monitoring, we find that arousal and motor activity have complementary roles in regulating local cortical operations, providing dynamic control of sensory encoding. These changes in encoding are linked to altered performance on perceptual tasks. Neuromodulators, such as acetylcholine, may regulate this state-dependent flexibility of cortical network function. We therefore recently developed an approach for dual mesoscopic imaging of acetylcholine release and neural activity across the entire cortical mantle in behaving mice. We find spatiotemporally heterogeneous patterns of cholinergic signaling across the cortex. Transitions between distinct behavioral states reorganize the structure of large-scale cortico-cortical networks and differentially regulate the relationship between cholinergic signals and neural activity. Together, our findings suggest dynamic state-dependent regulation of cortical network operations at the levels of both local and large-scale circuits.

Seminar · Physics of Life · Recording

Building a synthetic cell: Understanding the clock design and function

Qiong Yang
U Michigan - Ann Arbor
Oct 19, 2020

Clock networks containing the same central architectures may vary drastically in their potential to oscillate, raising the question of what controls robustness, one of the essential functions of an oscillator. We computationally generated an atlas of oscillators and found that, while core topologies are critical for oscillations, local structures substantially modulate the degree of robustness. Strikingly, two local structures, incoherent and coherent inputs, can modify a core topology to promote and attenuate its robustness, additively. The findings underscore the importance of local modifications to the performance of the whole network. It may explain why auxiliary structures not required for oscillations are evolutionarily conserved. We also extend this computational framework to search hidden network motifs for other clock functions, such as tunability, which relates to the capability of a clock to adjust its timing to external cues. Experimentally, we developed an artificial cell system in water-in-oil microemulsions, within which we reconstitute mitotic cell cycles that can perform self-sustained oscillations for 30 to 40 cycles over multiple days. The oscillation profiles, such as period, amplitude, and shape, can be quantitatively varied with the concentrations of clock regulators, energy levels, droplet sizes, and circuit design. Such innate flexibility is crucial for studying clock functions such as tunability and stochasticity at the single-cell level. Combined with a pressure-driven multi-channel tuning setup and long-term time-lapse fluorescence microscopy, this system enables a high-throughput exploration of a multi-dimensional continuous parameter space and single-cell analysis of the clock dynamics and functions. We integrate this experimental platform with mathematical modeling to elucidate the topology-function relation of biological clocks. With FRET and optogenetics, we also investigate spatiotemporal cell-cycle dynamics in both homogeneous and heterogeneous microenvironments by reconstructing subcellular compartments.

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
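
Two of the ingredients described above, the alpha-shaped synaptic kernel and the first-neuron-to-spike readout, can be illustrated directly; the sketch below is a toy example with made-up weights and threshold, not the trained network from the paper.

```python
# Toy illustration: an alpha-shaped postsynaptic kernel, and a classification
# readout where the first output neuron to cross threshold determines the class.
import numpy as np

t = np.linspace(0, 10, 1001)

def alpha_kernel(t, tau=1.0):
    """Alpha-shaped postsynaptic potential: rises, peaks at t = tau, then decays."""
    return (t / tau) * np.exp(1 - t / tau) * (t >= 0)

# Two output neurons receive the same input spike at t=0 through different weights;
# the membrane potential is weight * alpha kernel, and the first crossing wins.
weights, threshold = np.array([0.8, 1.2]), 0.9
potentials = weights[:, None] * alpha_kernel(t)[None, :]
first_crossing = [t[np.argmax(v > threshold)] if (v > threshold).any() else np.inf
                  for v in potentials]
print("spike times:", first_crossing, "-> predicted class:", int(np.argmin(first_crossing)))
```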

Seminar · Neuroscience · Recording

Mean-field models for finite-size populations of spiking neurons

Tilo Schwalger
TU Berlin
Jun 7, 2020

Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike timing effects on the microscopic level such as spike synchronization and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics on the microscopic level and coarse-grained FR models on the population level can be bridged by mean-field theory formally valid for infinitely many neurons. It remains however challenging to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated in the mesoscopic mean-field framework. On the other hand, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.

Seminar · Neuroscience · Recording

Neural control of vocal interactions in songbirds

Daniela Vallentin
Max Planck Institute for Ornithology
May 14, 2020

During conversations we rapidly switch between listening and speaking, which often requires withholding or delaying our speech in order to hear others and avoid overlapping. This capacity for vocal turn-taking is exhibited by non-linguistic species as well; however, the neural circuit mechanisms that enable us to regulate the precise timing of our vocalizations during interactions are unknown. We aim to identify the neural mechanisms underlying the coordination of vocal interactions. To this end, we paired zebra finches with a vocal robot (1 Hz call playback) and measured the birds' call response times. We found that individual birds called with a stereotyped delay with respect to the robot call. Pharmacological inactivation of the premotor nucleus HVC revealed its necessity for the temporal coordination of calls. We further investigated the contributing neural activity within HVC by performing intracellular recordings from premotor neurons and inhibitory interneurons in calling zebra finches. We found that inhibition precedes excitation before and during call onset. To test whether inhibition guides call timing, we pharmacologically limited the impact of inhibition on premotor neurons. As a result, zebra finches converged on a similar delay time, i.e., birds called more rapidly after the vocal robot call, suggesting that HVC inhibitory interneurons regulate the coordination of social contact calls. In addition, we aim to investigate the vocal turn-taking capabilities of the common nightingale. Male nightingales learn over 100 different song motifs, which are used to attract mates or defend territories. It has previously been shown that nightingales counter-sing with each other following a temporal structure similar to human vocal turn-taking. These animals are also able to spontaneously imitate a motif of another nightingale. The neural mechanisms underlying this behaviour are not yet understood. In my lab, we further probe the capabilities of these animals in order to assess the dynamic range of their vocal turn-taking flexibility.

ePoster

The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network

Behnam Ghazinouri, Sen Cheng

Bernstein Conference 2024

ePoster

Defining the role of a locus coeruleus-orbitofrontal cortex circuit in behavioral flexibility

COSYNE 2022

ePoster

Neural network size balances representational drift and flexibility during Bayesian sampling

COSYNE 2022

ePoster

Revisiting the flexibility-stability dilemma in recurrent networks using a multiplicative plasticity rule

COSYNE 2022

ePoster

Thalamic role in human cognitive flexibility and routing of abstract information.

COSYNE 2022

ePoster

Cortical-bulbar feedback supports behavioral flexibility during rule reversal

Diego Eduardo Hernández Trejo, Andrei Ciuparu, Pedro Garcia da Silva, Cristina Velázquez, Raul Mureşan, Dinu Albeanu

COSYNE 2023

ePoster

Dynamic gating of perceptual flexibility by non-classically responsive cortical neurons

Blake Sidleck, Jack Toth, Priya Agarwal, Olivia Lombardi, Danyall Saeed, Dylan Leonard, Abraham Eldo, Badr Albanna, Michele Insanally

COSYNE 2023

ePoster

Harnessing the flexibility of neural networks to predict meaningful theoretical parameters in a multi-armed bandit task

Yoav Ger, Eliya Nachmani, Lior Wolf, Nitzan Shahar

COSYNE 2023

ePoster

Intracranial electrophysiological evidence for a novel neuro-computational mechanism of cognitive flexibility in humans

Xinyuan Yan, Seth Koneig, Becket Ebitz, Benjamin Hayden, David Darrow, Alexander Herman

COSYNE 2023

ePoster

Maturing neurons and dual structural plasticity enable flexibility and stability of olfactory memory

Bennet Sakelaris, Hermann Riecke

COSYNE 2023

ePoster

Flexibility of signaling across and within visual cortical areas V1 and V2

Aravind Krishna, Evren Gokcen, Anna Jasper, Byron Yu, Christian Machens, Adam Kohn

COSYNE 2025

ePoster

A model linking neural population activity to flexibility in sensorimotor control

Hari Teja Kalidindi, Frederic Crevecoeur

COSYNE 2025

ePoster

Modeling the flexibility of cortical control of motor units

Clay Surmeier, Elom Amematsro, Najja Marshall, Mark Churchland, Joshua Glaser

COSYNE 2025

ePoster

Multiplicative thalamocortical couplings facilitate rapid computation and cognitive flexibility

Xiaohan Zhang, Michael Halassa, Zhe Chen

COSYNE 2025

ePoster

Network Gain Regulates Stability and Flexibility in a Ring Attractor Network

Harshith Nagaraj, Mark P. Brandon

COSYNE 2025

ePoster

An activity-dependent local transport regulation via local synthesis of kinesin superfamily proteins (KIFs) underlying cognitive flexibility

Suguru Iwata, Momo Morikawa, Tetsuya Sasaki, Yosuke Takei

FENS Forum 2024

ePoster

Adolescent stress impairs behavioural flexibility in adults through population-specific alterations to ventral hippocampal circuits

Gabrielle Gregoriou, Karyna Mishchanchuk, Dhaval Joshi, Candela Sánchez-Bellot, Andrew MacAskill

FENS Forum 2024

ePoster

Attentional set-shifting task: An approach to assess prefrontal activity patterns during behavioral flexibility in aged mice

Francisca García, Pablo Fuentealba, Wael El-Deredy, Ignacio Negrón-Oyarzo

FENS Forum 2024

ePoster

Cognitive flexibility and anterior cingulate gray matter volumes correlate with serum levels of brevican in healthy humans

Anni Richter, Margarita Darna, Björn H. Schott, Constanze I. Seidenbecher

FENS Forum 2024

ePoster

Continued treatment of D-Pinitol ameliorates cognitive spatial flexibility of Alzheimer’s disease 5xFAD transgenic mice

Dina Medina-Vera, Cristina Rosell-Valle, Antonio J. López-Gambero, Juan A. Navarro, Carlos Sanjuan, Elena Baixeras, Patricia Rivera, Francisco J. Pavon, Fernando Rodríguez de Fonseca

FENS Forum 2024

ePoster

Developmental cell death of interneurons and oligodendroglia is required for cognitive flexibility in mice

Cristobal Ibaceta, Hesni Khelfaoui, Maria Cecilia Angulo

FENS Forum 2024

ePoster

Dissecting prefrontal contributions to behavioral flexibility in freely moving mice

Megan Schneck, Brice De La Crompe, Hao Zhu, Joschka Boedecker, Ilka Diester

FENS Forum 2024

ePoster

The estrogen-immune axis: A key regulator of behavioural inflexibility

Mairead Sullivan, Sarita Dam, Angela Maria Ottomana, Martina Presta, Julia van Heck, Ilse Van de Vondervoort, Simone Macrì, Jeffrey C. Glennon

FENS Forum 2024

ePoster

HBK-15, a multimodal compound, mitigates cognitive flexibility deficits in mice

Oluwatobi Adeyemo, Aleksandra Koszałka, Henryk Marona, Karolina Pytka

FENS Forum 2024

ePoster

Higher-order thalamo-motor cortex circuit supports behavioral flexibility by reinforcing decision-value

Margaux Giraudet, Elisabete Augusto, Vladimir Kouskoff, Nicolas Chenouard, Lucille Alonso, Alexy Louis, Léa Peltier, Aron de Miranda, Frédéric Gambino

FENS Forum 2024

ePoster

Impaired flexibility during social learning in NLGN3-R451C ASD model

Suin Lim, Carolyn Von-Walter, McLean Bolton

FENS Forum 2024

ePoster

Neuronal signature of cognitive flexibility in the prefrontal-dorsal raphe circuit

Claudia Espinoza, Gloria Fuhrmann, Hugo Malagon-Vina, Thomas Klausberger

FENS Forum 2024

ePoster

A novel touch-panel-based serial reversal learning task for assessing cognitive flexibility in mice

Hiroyuki Okuno, Yusuke Suzuki, Takeru Suzuki, Itaru Imayoshi, Masaki Kakeyama, Yuji Kiyama

FENS Forum 2024

ePoster

Overtraining enhances behavioural flexibility on a serial reversal learning task

Silvia Maggi, Jacco Renstrom, Rachel Grasmeder Allen, Jacob Juty, Tobias Bast

FENS Forum 2024

ePoster

Projections from the medial prefrontal cortex to the ventral midline thalamus are crucial for cognitive flexibility in rats

Elodie Panzer, Laurine Boch, Brigitte Cosquer, Anne Pereira de Vasconcelos, Aline Stéphan, Jean-Christophe Cassel

FENS Forum 2024

ePoster

A shared neural code for flexible shifts in attention, motor actions, and goal setting? The role of theta and alpha oscillations for human flexibility

Jakob Kaiser, Simone Schütz-Bosbach

FENS Forum 2024

ePoster

Training a nonstationary recurrent neuronal network for inferring neuronal dynamics during flexibility in a value-based decision-making

Cristian Estarellas Martin, Max Ingo Thurm, Dimitrios Mariatos Metaxas, Lukas Eisenmann, Hugo Malagón-Viña, Daniel Durstewitz, Thomas Klausberger

FENS Forum 2024