Flexibility
Scaling Up Bioimaging with Microfluidic Chips
Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.
Brain-Wide Compositionality and Learning Dynamics in Biological Agents
Biological agents continually reconcile the internal states of their brain circuits with incoming sensory and environmental evidence to evaluate when and how to act. The brains of biological agents, including animals and humans, exploit many evolutionary innovations, chiefly modularity—observable at the level of anatomically-defined brain regions, cortical layers, and cell types among others—that can be repurposed in a compositional manner to endow the animal with a highly flexible behavioral repertoire. Accordingly, their behaviors show their own modularity, yet such behavioral modules seldom correspond directly to traditional notions of modularity in brains. It remains unclear how to link neural and behavioral modularity in a compositional manner. We propose a comprehensive framework—compositional modes—to identify overarching compositionality spanning specialized submodules, such as brain regions. Our framework directly links the behavioral repertoire with distributed patterns of population activity, brain-wide, at multiple concurrent spatial and temporal scales. Using whole-brain recordings of zebrafish brains, we introduce an unsupervised pipeline based on neural network models, constrained by biological data, to reveal highly conserved compositional modes across individuals despite the naturalistic (spontaneous or task-independent) nature of their behaviors. These modes provided a scaffolding for other modes that account for the idiosyncratic behavior of each fish. We then demonstrate experimentally that compositional modes can be manipulated in a consistent manner by behavioral and pharmacological perturbations. Our results demonstrate that even natural behavior in different individuals can be decomposed and understood using a relatively small number of neurobehavioral modules—the compositional modes—and elucidate a compositional neural basis of behavior. 
This approach aligns with recent progress in understanding how reasoning capabilities and internal representational structures develop over the course of learning or training, offering insights into the modularity and flexibility in artificial and biological agents.
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture. Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
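The incremental, online learner invoked by the recency heuristic can be sketched as a simple delta rule. This is an illustrative toy, not the speaker's implementation; the learning rate and the 0/1 coding of task demands are assumptions:

```python
# Delta-rule sketch of a recency-weighted control regulator: the control
# setting is nudged toward the demand observed on each trial, so recent
# trials dominate (a recency heuristic).

def update_control(control, demand, alpha=0.3):
    """Nudge the control level toward the observed task demand."""
    return control + alpha * (demand - control)

def run(demands, control=0.5, alpha=0.3):
    """Apply the update across a trial sequence; return the trajectory."""
    trajectory = [control]
    for d in demands:
        control = update_control(control, d, alpha)
        trajectory.append(control)
    return trajectory

# High-conflict trials (demand = 1) push control up;
# low-conflict trials (demand = 0) let it decay back down.
high = run([1, 1, 1, 1, 1])
low = run([0, 0, 0, 0, 0])
```

The exponential weighting implied by this update is what makes the estimate recency-based: each past trial's influence decays geometrically with its age.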
Neural mechanisms governing the learning and execution of avoidance behavior
The nervous system orchestrates adaptive behaviors by intricately coordinating responses to internal cues and environmental stimuli. This involves integrating sensory input, managing competing motivational states, and drawing on past experiences to anticipate future outcomes. While traditional models attribute this complexity to interactions between the mesocorticolimbic system and hypothalamic centers, the specific nodes of integration have remained elusive. Recent research, including our own, sheds light on the midline thalamus's overlooked role in this process. We propose that the midline thalamus integrates internal states with memory and emotional signals to guide adaptive behaviors. Our investigations into midline thalamic neuronal circuits have provided crucial insights into the neural mechanisms behind flexibility and adaptability. Understanding these processes is essential for deciphering human behavior and conditions marked by impaired motivation and emotional processing. Our research aims to contribute to this understanding, paving the way for targeted interventions and therapies to address such impairments.
Where Cognitive Neuroscience Meets Industry: Navigating the Intersections of Academia and Industry
In this talk, Mirta will share her journey from her education at a mathematically focused high school to her current, unconventional career in London, emphasizing the evolution from a local education in Croatia to international experiences in the US and UK. We will explore the concept of interdisciplinary careers in the modern world, viewing them through the framework of increasing demand, flexibility, and dynamism in the current workplace. We will underscore the significance of interdisciplinary research for launching careers outside of academia, and bolstering those within. I will challenge the conventional norm of working either in academia or industry, and encourage discussion about the opportunities for combining the two in a myriad of career opportunities. I’ll use examples from my own and others’ research to highlight opportunities for early career researchers to extend their work into practical applications. Such an approach leverages the strengths of both sectors, fostering innovation and practical applications of research findings. I hope these insights can offer valuable perspectives for those looking to navigate the evolving demands of the global job market, illustrating the advantages of a versatile skill set that spans multiple disciplines and allows extensions into exciting career options.
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible mappings from sensory inputs to actions: many-to-one (many features to one inference) and one-to-many (one feature to many choices). Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When this ability is compromised, e.g. in stroke or in neurodegenerative and developmental disorders, individuals’ study, work, and day-to-day living are impacted, at high societal cost. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of serial elements of the upcoming sequence, rather than a preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.
Prosody in the voice, face, and hands changes which words you hear
Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) and suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.
Beyond Volition
Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. 
Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.
Establishment and aging of the neuronal DNA methylation landscape in the hippocampus
The hippocampus is a brain region with key roles in memory formation, cognitive flexibility and emotional control. Yet hippocampal function is severely impaired during aging and in neurodegenerative diseases, and impairments in hippocampal function underlie age-related cognitive decline. Accumulating evidence suggests that the deterioration of the neuron-specific epigenetic landscape during aging contributes to the progressive, age-related dysfunction of neurons. For instance, we have recently shown that aging is associated with pronounced alterations of neuronal DNA methylation patterns in the hippocampus. Because neurons are generated mostly during development with limited replacement in the adult brain, they are particularly long-lived cells and have to maintain their cell-type-specific gene expression programs lifelong in order to preserve brain function. Understanding the epigenetic mechanisms that underlie the establishment and long-term maintenance of neuron-specific gene expression programs will help us to comprehend the sources and consequences of their age-related deterioration. In this talk, I will present our recent work that investigated the role of DNA methylation in the establishment of neuronal gene expression programs and neuronal function, using adult neurogenesis in the hippocampus as a model. I will then describe the effects of aging on the DNA methylation landscape in the hippocampus and discuss the malleability of the aging neuronal methylome to lifestyle and environmental stimulation.
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to stem in part from the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
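The predictive representations referred to here are commonly formalized as the successor representation (SR), M = Σ_t γ^t T^t = (I − γT)^{-1}, which summarizes expected discounted future state occupancy under a transition model T. A toy sketch (the 3-state ring and discount factor are my illustrative assumptions, not the speaker's model):

```python
# Toy successor representation (SR) for a 3-state ring: 0 -> 1 -> 2 -> 0.
# M[i][j] gives the expected, discounted number of future visits to state j
# starting from state i, and satisfies the fixed point M = I + gamma * T * M.

GAMMA = 0.9
# Deterministic transition matrix of the ring (row i: next-state probabilities).
T = [[0, 1, 0],
     [0, 0, 1],
     [1, 0, 0]]

def successor_representation(T, gamma, iters=300):
    """Solve M = I + gamma * T * M by fixed-point iteration."""
    n = len(T)
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(iters):
        M = [[(1.0 if i == j else 0.0)
              + gamma * sum(T[i][k] * M[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return M

M = successor_representation(T, GAMMA)
# On the ring, T**3 = I, so the closed form is M[0][0] = 1 / (1 - gamma**3).
```

Because reward-weighted sums over M give state values for any reward placement, a learned SR supports the rapid adaptation to new goals described above without relearning the transition structure.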
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
Ambient noise reveals rapid flexibility in marmoset vocal behavior
Age differences in cortical network flexibility and motor learning ability
Prefrontal top-down projections control context-dependent strategy selection
The rules governing behavior often vary with behavioral contexts. As a result, an action rewarded in one context may be discouraged in another. Animals and humans are capable of switching between behavioral strategies under different contexts and acting adaptively according to the variable rules, a flexibility that is thought to be mediated by the prefrontal cortex (PFC). However, how the PFC orchestrates the context-dependent switch of strategies remains unclear. Here we show that pathway-specific projection neurons in the medial PFC (mPFC) differentially contribute to context-instructed strategy selection. In mice trained in a decision-making task in which a previously established rule and a newly learned rule are associated with distinct contexts, the activity of mPFC neurons projecting to the dorsomedial striatum (mPFC-DMS) encodes the contexts and further represents decision strategies conforming to the old and new rules. Moreover, mPFC-DMS neuron activity is required for the context-instructed strategy selection. In contrast, the activity of mPFC neurons projecting to the ventral midline thalamus (mPFC-VMT) does not discriminate between the contexts, and represents the old rule even if mice have adopted the new one. Furthermore, these neurons act to prevent the strategy switch under the new rule. Our results suggest that mPFC-DMS neurons promote flexible strategy selection guided by contexts, whereas mPFC-VMT neurons favor fixed strategy selection by preserving old rules.
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
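The distinction between the "average" response and trial-by-trial variability can be made concrete with signal versus noise correlations. The sketch below is a fabricated toy (all numbers are invented for illustration, not data from this work): two neurons whose stimulus tuning is anti-correlated but whose trial-to-trial fluctuations are shared, showing how noise-correlation geometry carries structure that stimulus-averaged analyses miss:

```python
# Toy illustration of signal vs. noise correlations for two neurons over
# repeated trials of two stimuli (A and B). A shared trial-by-trial
# fluctuation makes the noise correlation positive by construction, even
# though the neurons' mean tuning runs in opposite directions.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

shared = [-1.0, -0.5, 0.5, 1.0]                 # shared per-trial "state"
n1 = [1 + f for f in shared] + [5 + f for f in shared]   # neuron 1: A then B
n2 = [10 + f for f in shared] + [2 + f for f in shared]  # neuron 2: A then B

# Signal correlation: correlate the stimulus-averaged mean responses.
sig_corr = pearson([sum(n1[:4]) / 4, sum(n1[4:]) / 4],
                   [sum(n2[:4]) / 4, sum(n2[4:]) / 4])

# Noise correlation: correlate fluctuations around each stimulus mean.
res1 = [r - 1 for r in n1[:4]] + [r - 5 for r in n1[4:]]
res2 = [r - 10 for r in n2[:4]] + [r - 2 for r in n2[4:]]
noise_corr = pearson(res1, res2)
```

Here the two quantities fully dissociate (tuning anti-correlated, noise perfectly correlated), which is exactly the kind of structure that trial-averaged methods such as standard representational similarity analysis discard.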
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
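The per-neuron state update that a backend like Brian2CUDA parallelizes across thousands of GPU threads can be illustrated with a plain forward-Euler leaky integrate-and-fire loop. This is a CPU-side conceptual sketch, not Brian2CUDA's generated code, and all parameter values are arbitrary:

```python
# Forward-Euler integration of a leaky integrate-and-fire (LIF) neuron:
#   tau * dV/dt = (E_L - V) + R*I,  spike and reset when V crosses threshold.
# A GPU backend applies this same per-neuron update in parallel across a
# whole population; here one neuron is simulated serially for illustration.

TAU = 10.0       # membrane time constant (ms)
E_L = -70.0      # resting potential (mV)
V_TH = -50.0     # spike threshold (mV)
V_RESET = -70.0  # reset potential (mV)
DRIVE = 30.0     # constant input drive R*I (mV)
DT = 0.1         # integration step (ms)

def simulate(t_total=100.0):
    v = E_L
    spikes = []
    for step in range(int(t_total / DT)):
        v += DT / TAU * (E_L - v + DRIVE)
        if v >= V_TH:
            spikes.append(step * DT)
            v = V_RESET
    return spikes

spikes = simulate()
# Analytic inter-spike interval: TAU * ln(DRIVE / (DRIVE - (V_TH - E_L)))
# = 10 * ln(3), i.e. about 11 ms with a suprathreshold constant drive.
```

Because every neuron runs the identical update on independent state, this step is embarrassingly parallel; synaptic event propagation, which couples neurons, is the harder part of the GPU implementation.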
Hunger state-dependent modulation of decision-making in larval Drosophila
It is critical for all animals to make appropriate, but also flexible, foraging decisions, especially when facing starvation. Sensing olfactory information is essential to evaluate food quality before ingestion. Previously, we found that Drosophila larvae switch their response to certain odors from aversion to attraction when food deprived. The neural mechanism underlying this switch in behavior involves serotonergic modulation and reconfiguration of odor processing in the early olfactory sensory system. We now investigate whether a change in hunger state also influences other behavioral decisions. Since it had been shown that fly larvae can perform cannibalism, we investigate the effect of food deprivation on feeding on dead conspecifics. We find that fed fly larvae rarely use dead conspecifics as a food source. However, food deprivation markedly enhances this behavior. We will now also investigate the underlying neural mechanisms that mediate this enhancement and compare it to the already described mechanism for a switch in olfactory choice behavior. Generally, this flexibility in foraging behavior enables larvae to explore a broader range of stimuli and expand their feeding choices to overcome starvation.
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
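The kind of reduced compartmental model Dendrify targets can be sketched with two coupled compartments. This is a minimal illustration, not Dendrify's actual API, and all parameters are arbitrary; it shows one hallmark property point neurons lack: dendritic input reaches the soma attenuated.

```python
# Minimal two-compartment neuron (soma + dendrite) integrated with forward
# Euler. Voltages are deviations from rest; COUPLING is the axial conductance
# linking the compartments. Current injected into the dendrite depolarizes
# the soma, but attenuated -- a basic signature of dendritic integration.

TAU = 10.0      # membrane time constant (ms)
COUPLING = 0.5  # dimensionless soma-dendrite coupling
DT = 0.1        # integration step (ms)

def simulate(i_dend=10.0, t_total=200.0):
    v_soma, v_dend = 0.0, 0.0
    for _ in range(int(t_total / DT)):
        dvs = (-v_soma + COUPLING * (v_dend - v_soma)) / TAU
        dvd = (-v_dend + COUPLING * (v_soma - v_dend) + i_dend) / TAU
        v_soma += DT * dvs
        v_dend += DT * dvd
    return v_soma, v_dend

v_soma, v_dend = simulate()
# Steady state for these parameters: v_dend = 7.5, v_soma = 2.5,
# i.e. a threefold attenuation from dendrite to soma.
```

Adding a handful of such compartments per neuron keeps simulations far cheaper than morphologically detailed models while retaining dendrite-specific integration, which is the balance Dendrify aims for.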
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
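The ring-attractor motif described above can be sketched as a toy rate model. This is my illustration; the network size, cosine weight kernel, and divisive normalization are assumptions, not the trained networks studied in this work. The key property is that a bump of activity persists at whichever angle it is cued, so the same motif stores any value of a continuous circular variable:

```python
import math

# Toy ring attractor: N rate units on a circle with cosine-tuned recurrent
# weights. After a transient cue, a bump of activity persists at the cued
# angle with no further input -- a memory of a continuous circular variable.

N = 32
THETA = [2 * math.pi * i / N for i in range(N)]
W = [[math.cos(THETA[i] - THETA[j]) for j in range(N)] for i in range(N)]

def step(r):
    """One update: recurrent input, rectification, divisive normalization."""
    h = [sum(W[i][j] * r[j] for j in range(N)) for i in range(N)]
    h = [max(0.0, x) for x in h]
    total = sum(h)
    return [x / total for x in h]

def run(cue_unit, steps=50):
    """Cue one unit, then let the recurrent dynamics evolve freely."""
    r = [0.0] * N
    r[cue_unit] = 1.0
    for _ in range(steps):
        r = step(r)
    return r

bump = run(10)
# The bump peak stays at the cued unit; cueing a different unit parks the
# bump there instead, so one motif serves every angle.
```

Reuse then amounts to different task contexts reading from and writing to the same bump, which is the sense in which a single dynamical motif supports multiple computations.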
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
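One normative formulation of this stability/flexibility tradeoff is the hazard-rate-dependent accumulator of Glaze, Kable and Gold (2015). The sketch below is my illustration of that formulation, not the speakers' exact model; the hazard rate and evidence sequence are arbitrary:

```python
import math

# Perfect vs. adaptive evidence accumulation in a changing environment.
# With hazard rate H, the normative update (after Glaze et al., 2015) is
#   L_t = psi(L_{t-1}) + LLR_t,
# where psi discounts old evidence so the belief can track change-points;
# perfect accumulation simply sums the log-likelihood ratios.

H = 0.1  # assumed probability of a state change between samples

def psi(L, h=H):
    k = (1 - h) / h
    return L + math.log(k + math.exp(-L)) - math.log(k + math.exp(L))

def accumulate(llrs):
    perfect, adaptive = 0.0, 0.0
    for llr in llrs:
        perfect += llr
        adaptive = psi(adaptive) + llr
    return perfect, adaptive

# 20 samples favoring state A (+1), then a change-point and 15 favoring B (-1).
llrs = [1.0] * 20 + [-1.0] * 15
perfect, adaptive = accumulate(llrs)
# The adaptive belief saturates and flips sign soon after the change-point;
# the perfect accumulator stays dominated by the stale early evidence.
```

The saturation imposed by psi is the stability/flexibility compromise: beliefs are stable against noise yet bounded, so fresh conflicting evidence can overturn them quickly after a hidden change.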
Pynapple: a light-weight python package for neural data analysis - webinar + tutorial
In systems neuroscience, datasets are multimodal and include data-streams of various origins: multichannel electrophysiology, one- or two-photon calcium imaging, behavior, etc. Often, the exact nature of the data streams is unique to each lab, if not each project. Analyzing these datasets in an efficient and open way is crucial for collaboration and reproducibility. In this combined webinar and tutorial, Adrien Peyrache and Guillaume Viejo will present Pynapple, a Python-based data analysis pipeline for systems neuroscience. Designed for flexibility and versatility, Pynapple allows users to perform cross-modal neural data analysis via a common programming approach which facilitates easy sharing of both analysis code and data.
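The core abstraction behind cross-modal analysis of this kind, restricting timestamped data to epochs of interest, can be sketched in plain Python. This is a toy illustration of the pattern, not Pynapple's actual API:

```python
# Toy version of the restrict-to-epochs pattern common in systems
# neuroscience: keep only the samples of a timestamped series that fall
# inside a set of time intervals (e.g., "running" epochs). Libraries like
# Pynapple wrap this pattern in dedicated time-series and interval objects,
# so the same operation applies to spikes, imaging, and behavior alike.

def restrict(times, values, epochs):
    """Return (times, values) falling within any [start, end] epoch."""
    kept_t, kept_v = [], []
    for t, v in zip(times, values):
        if any(start <= t <= end for start, end in epochs):
            kept_t.append(t)
            kept_v.append(v)
    return kept_t, kept_v

times = list(range(10))          # samples at 0..9 s
values = [t * t for t in times]  # arbitrary data stream
epochs = [(2, 4), (7, 8)]        # seconds of interest
r_times, r_values = restrict(times, values, epochs)
# r_times -> [2, 3, 4, 7, 8]
```

Because every data stream is reduced to timestamps plus values, one restrict operation works identically across modalities, which is what makes a common programming approach possible.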
Chemistry of the adaptive mind: lessons from dopamine
The human brain faces a variety of computational dilemmas, including the flexibility/stability, speed/accuracy, and labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how the effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity and on opportunity cost, respectively.
Unchanging and changing: hardwired taste circuits and their top-down control
The taste system detects five major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry point to decipher how taste signals are encoded to drive innate responses and modulated to facilitate adaptive responses. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled lines?). Leveraging these genetic handles, I demonstrated a one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But how are these labeled lines adapted to the internal state and external environment? To understand how adaptive taste responses emerge from hardwired taste circuits, I studied the modulation of taste signals by conflicting taste qualities when sweet and bitter stimuli occur concurrently. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation of NST taste signals by the taste cortex and amygdala.
While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via the amygdala onto the incoming sweet signal in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter-evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.
The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)
Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information that we retrieve so it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly-related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network centred on left inferior frontal gyrus and left posterior middle temporal gyrus is associated with controlled semantic retrieval, responding more strongly when weak associations are required or there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, this network specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex which supports executive control across domains, although challenging semantic tasks recruit both networks. 
The semantic control network is juxtaposed between regions of default mode network that might be sufficient for the retrieval of strong semantic relationships and multiple-demand regions in the left hemisphere, suggesting that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients that capture systematic functional transitions that are repeated in temporal, parietal and frontal cortex.
PiSpy: An Affordable, Accessible, and Flexible Imaging Platform for the Automated Observation of Organismal Biology and Behavior
A great deal of understanding can be gleaned from direct observation of organismal growth, development, and behavior. However, direct observation can be time-consuming and can influence the organism through unintentional stimuli. Additionally, video capture equipment is often prohibitively expensive, difficult to modify to one’s specific needs, and laden with unnecessary features. Here, we describe the PiSpy, a low-cost, automated video acquisition platform that uses a Raspberry Pi computer and camera to record video or images at specified time intervals or when externally triggered. All settings and controls, such as programmable light cycling, are accessible to users with no programming experience through an easy-to-use graphical user interface. Importantly, the entire PiSpy system can be assembled for less than $100 using laser-cut and 3D-printed components. We demonstrate the broad applications and flexibility of the PiSpy across a range of model and non-model organisms. Designs, instructions, and code can be accessed through an online repository, where a global community of PiSpy users can also contribute their own unique customizations and help grow the community of open-source research solutions.
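The scheduling logic of such a capture platform — record at fixed intervals, plus once per external trigger — can be sketched as below. The clock and camera are hypothetical stand-ins injected as functions (the real PiSpy talks to the Raspberry Pi camera and GPIO pins); this is a sketch of the control flow only.

```python
# Sketch of the scheduling logic behind an automated capture platform like
# PiSpy: capture at fixed intervals or when an external trigger fires.
# The clock and camera are fake, injected callables -- the real system
# uses the Raspberry Pi camera stack and GPIO triggers.

def run_session(duration_s, interval_s, tick, capture, triggers=()):
    """Capture every `interval_s` seconds, plus once per external trigger."""
    t, next_shot = 0.0, 0.0
    shots = []
    trigger_times = sorted(triggers)
    while t <= duration_s:
        if trigger_times and t >= trigger_times[0]:
            shots.append(capture(t, reason="trigger"))
            trigger_times.pop(0)
        if t >= next_shot:
            shots.append(capture(t, reason="interval"))
            next_shot += interval_s
        t += tick()  # advance by the clock source's time step
    return shots

# Fake clock (fixed 1 s ticks) and fake camera for illustration.
log = run_session(duration_s=10, interval_s=5, tick=lambda: 1.0,
                  capture=lambda t, reason: (t, reason),
                  triggers=[2.5])
print(log)
```

In a real deployment the `capture` callable would write an image to disk and the trigger list would be replaced by a GPIO interrupt, but the interval/trigger bookkeeping is the same.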
Interdisciplinary College
The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme, "Flexibility", covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.
How does a neuron decide when and where to make a synapse?
Precise synaptic connectivity is a prerequisite for the function of neural circuits, yet individual neurons, taken out of their developmental context, readily form unspecific synapses. How does genetically encoded brain wiring deal with this apparent contradiction? Brain wiring is a developmental growth process that is characterized not only by precision, but also by flexibility and robustness. As in any other growth process, cellular interactions are restricted in space and time. Correspondingly, molecular and cellular interactions are restricted to those that 'get to see' each other during development. This seminar will explore the question of how neurons decide when and where to make synapses, using the Drosophila visual system as a model. New findings reveal that pattern formation during growth and the kinetics of live neuronal interactions restrict synapse formation and partner choice for neurons that are not otherwise prevented from making incorrect synapses in this system. For example, cell biological mechanisms like autophagy, as well as developmental temperature, restrict inappropriate partner choice through a process of kinetic exclusion that critically contributes to wiring specificity. The seminar will explore these and other neuronal strategies for deciding when and where to make synapses during developmental growth that contribute to precise, flexible and robust outcomes in brain wiring.
New Mechanisms of Extracellular Matrix Remodeling
In the adult brain, synapses are tightly enwrapped by lattices of extracellular matrix (ECM) that consist of extremely long-lived molecules. These lattices are thought to stabilize synapses, restrict the reorganization of their transmission machinery, and prevent them from undergoing structural or morphological changes. At the same time, they are expected to retain some degree of flexibility to permit occasional events of synaptic plasticity. The recent understanding that structural changes to synapses are significantly more frequent than previously assumed (occurring even on a timescale of minutes) has called for a mechanism that allows continual and energy-efficient remodeling of the ECM at synapses. In this talk, I review our recent work showcasing such a process, based on the constitutive recycling of synaptic ECM molecules. I discuss the key characteristics of this mechanism, focusing on its roles in mediating synaptic transmission and plasticity, and speculate on additional potential functions in neuronal signaling.
NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation
A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.
Deep kernel methods
Deep neural networks (DNNs) with the flexibility to learn good top-layer representations have eclipsed shallow kernel methods without that flexibility. Here, we take inspiration from deep neural networks to develop a new family of deep kernel methods. In a deep kernel method, there is a kernel at every layer, and the kernels are jointly optimized to improve performance (with strong regularisation). We establish the representational power of deep kernel methods by showing that they perform exact inference in an infinitely wide Bayesian neural network or deep Gaussian process. Next, we conjecture that the deep kernel machine objective is unimodal, and give a proof of unimodality for linear kernels. Finally, we exploit the simplicity of the deep kernel machine loss to develop a new family of optimizers, based on a matrix equation from control theory, that converges in around 10 steps.
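The layered-kernel idea can be sketched by stacking an RBF kernel on the distance induced by the previous layer's kernel. This toy (with no learned parameters and no regularised objective, unlike the deep kernel machines in the talk) also shows why naive stacking is not enough: with depth, all similarities drift toward a fixed point, one motivation for jointly optimizing the layer kernels.

```python
import math

# Toy "deep kernel": each layer applies an RBF in the feature space
# induced by the previous layer's kernel. Illustrative only -- the talk's
# deep kernel machines jointly optimize these layers, which this does not.

def rbf(d2, lengthscale=1.0):
    return math.exp(-d2 / (2 * lengthscale ** 2))

def deep_kernel(x, y, depth):
    """Compose RBF layers: the layer-(l+1) distance comes from the layer-l kernel."""
    k_xx = k_yy = 1.0
    k_xy = rbf((x - y) ** 2)          # layer 1: RBF on raw inputs
    for _ in range(depth - 1):
        d2 = k_xx + k_yy - 2 * k_xy   # squared RKHS distance under layer-l kernel
        k_xy = rbf(d2)
        k_xx = k_yy = rbf(0.0)        # self-similarity stays 1
    return k_xy

# Similar inputs stay similar at every depth...
near = [deep_kernel(0.0, 0.1, depth=d) for d in (1, 2, 3)]
# ...but dissimilar inputs are pulled toward similarity as depth grows,
# a degeneracy that motivates optimizing the kernels rather than fixing them.
far = [deep_kernel(0.0, 3.0, depth=d) for d in (1, 2, 3)]
print(near)  # all close to 1
print(far)
```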
Novel word generalization in comparison designs: How do young children align stimuli when they learn object nouns and relational nouns?
It is well established that the opportunity to compare learning stimuli in a novel word learning/extension task elicits a larger number of conceptually relevant generalizations than standard no-comparison conditions. I will present results suggesting that the effectiveness of comparison depends on factors such as semantic distance, number of training items, dimension distinctiveness, and interactions with age. I will address these issues in the case of familiar and unfamiliar object nouns and relational nouns. The alignment strategies followed by children during learning and at test (i.e., when learning items are compared and how children reach a solution) will be described with eye-tracking data. We will also assess the extent to which children’s performance in these tasks is associated with executive functions (inhibition and flexibility) and world knowledge. Finally, we will consider these issues in children with cognitive deficits (intellectual disability, DLD).
Norse: A library for gradient-based learning in Spiking Neural Networks
We introduce Norse: an open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators, which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. First, users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. Second, we strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules compose. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress in supporting network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.
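The neuron primitive that such libraries wrap as differentiable modules is the leaky integrate-and-fire (LIF) update, sketched below in plain Python. This is an illustration of the dynamics only; Norse's actual implementation operates on PyTorch tensors and uses surrogate gradients so the spike nonlinearity can be trained with backpropagation.

```python
# A leaky integrate-and-fire (LIF) step in plain Python -- the primitive
# that libraries like Norse expose as differentiable PyTorch modules.
# Illustrative sketch only; parameters are arbitrary.

def lif_step(v, i_in, tau=20.0, dt=1.0, v_th=1.0, v_reset=0.0):
    """One Euler step of membrane dynamics; returns (new_v, spiked)."""
    v = v + dt / tau * (-v + i_in)   # leak toward 0, driven by input current
    if v >= v_th:                    # threshold crossing emits a spike...
        return v_reset, 1            # ...and resets the membrane
    return v, 0

# Drive the neuron with a constant current and collect its spike train.
v, spikes = 0.0, []
for t in range(100):
    v, s = lif_step(v, i_in=1.5)
    spikes.append(s)

print(sum(spikes), "spikes in 100 steps")
```

Composing many such stateful steps over time is what makes spiking networks recurrent in time even when the architecture is feed-forward, which is why a functional API that threads state explicitly composes well.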
Removing information from working memory
Holding information in working memory is essential for cognition, but removing unwanted thoughts is equally important. There is great flexibility in how we can manipulate information in working memory, but the processes and consequences of these operations are poorly understood. In this talk I will discuss our recent findings using multivariate pattern analyses of fMRI brain data to demonstrate the successful removal of information from working memory using three different strategies: suppressing a specific thought, replacing a thought with a different one, and clearing the mind of all thought. These strategies are supported by distinct brain regions and have differential consequences on the encoding of new information. I will discuss implications of these results on theories of memory and I will highlight some new directions involving the use of real-time neurofeedback to investigate causal links between brain and behavior.
SimBA for Behavioral Neuroscientists
Several excellent computational frameworks exist that enable high-throughput and consistent tracking of freely moving, unmarked animals. With SimBA, we introduce and distribute a plug-and-play pipeline that enables users to combine these pose-estimation approaches with behavioral annotation to generate supervised machine-learning behavioral classifiers. SimBA was developed for the analysis of complex social behaviors, but includes the flexibility for users to generate predictive classifiers across other behavioral modalities with minimal effort and no specialized computational background. SimBA has a variety of extended functions for large-scale batch video pre-processing, generating descriptive statistics from movement features, and interactive modules for user-defined regions of interest and for visualizing classification probabilities and movement patterns.
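The step between pose estimation and a supervised classifier is feature engineering: turning per-frame (x, y) keypoints into quantities like speeds and inter-animal distances. The sketch below illustrates that step on toy tracks; the feature names are illustrative and not SimBA's actual feature set.

```python
import math

# Sketch of the kind of movement features a pipeline like SimBA derives
# from pose-estimation output before training a behavioral classifier.
# Feature names are illustrative, not SimBA's actual feature set.

def movement_features(track_a, track_b):
    """Per-frame speed of animal A and inter-animal distance from (x, y) tracks."""
    feats = []
    for i in range(1, len(track_a)):
        ax, ay = track_a[i]
        px, py = track_a[i - 1]
        bx, by = track_b[i]
        speed = math.hypot(ax - px, ay - py)      # displacement per frame
        distance = math.hypot(ax - bx, ay - by)   # proximity between animals
        feats.append({"speed": speed, "distance": distance})
    return feats

# Toy pose tracks: animal A approaches a stationary animal B.
a = [(0, 0), (1, 0), (2, 0), (3, 0)]
b = [(4, 0)] * 4
feats = movement_features(a, b)
for f in feats:
    print(f)
```

A table of such features per frame, paired with human behavioral annotations, is exactly what a supervised classifier is then trained on.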
Zero-shot visual reasoning with probabilistic analogical mapping
There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
Psychological mechanisms and functions of 5-HT and SSRIs in potential therapeutic change: Lessons from the serotonergic modulation of action selection, learning, affect, and social cognition
Uncertainty regarding which psychological mechanisms are fundamental in mediating SSRI treatment outcomes, together with the wide-ranging variability in their efficacy, has raised more questions than it has solved. Since subjective mood states are an abstract scientific construct, only available through self-report in humans, and likely involve input from multiple top-down and bottom-up signals, it has been difficult to model at what level SSRIs interact with this process. Converging translational evidence indicates a role for serotonin in modulating context-dependent parameters of action selection, affect, and social cognition, and in concurrently supporting learning mechanisms that promote adaptability and behavioural flexibility. We examine the theoretical basis, ecological validity, and interaction of these constructs and how they may or may not exert a clinical benefit. Specifically, we bridge crucial gaps between disparate lines of research, particularly findings from animal models and human clinical trials, which often seem to present irreconcilable differences. In determining how SSRIs exert their effects, our approach examines the endogenous functions of 5-HT neurons, how 5-HT manipulations affect behaviour in different contexts, and how their therapeutic effects may be exerted in humans, which may illuminate issues of translational models, hierarchical mechanisms, idiographic variables, and social cognition.
State-dependent cortical circuits
Spontaneous and sensory-evoked cortical activity is highly state-dependent, promoting the functional flexibility of cortical circuits underlying perception and cognition. Using neural recordings in combination with behavioral state monitoring, we find that arousal and motor activity have complementary roles in regulating local cortical operations, providing dynamic control of sensory encoding. These changes in encoding are linked to altered performance on perceptual tasks. Neuromodulators, such as acetylcholine, may regulate this state-dependent flexibility of cortical network function. We therefore recently developed an approach for dual mesoscopic imaging of acetylcholine release and neural activity across the entire cortical mantle in behaving mice. We find spatiotemporally heterogeneous patterns of cholinergic signaling across the cortex. Transitions between distinct behavioral states reorganize the structure of large-scale cortico-cortical networks and differentially regulate the relationship between cholinergic signals and neural activity. Together, our findings suggest dynamic state-dependent regulation of cortical network operations at the levels of both local and large-scale circuits.
Prefrontal circuits underlying cognitive flexibility
Free-falling dynamically scaled models: Foraminifera as a test case
The settling speeds of small biological particles influence the distribution of organisms such as plants, corals, and phytoplankton, but these speeds are difficult to quantify without magnification. In this talk, I highlight my novel method, which uses dynamic scaling principles and 3D-printed models to solve this problem. Dynamic scaling involves creating models that differ in size from the original system while matching the physical forces acting on the model to those acting on the original. I discuss the methodology behind the technique and show how it differs from previous work using dynamically scaled models. I show the flexibility of the technique and suggest how it can be applied to other free-falling particles (e.g. seeds and spores).
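The matching condition behind dynamic scaling is that model and original share the same Reynolds number, Re = ρUL/μ, so the flow around them is dynamically similar. The sketch below solves for the settling speed a scaled-up model needs; all numbers are illustrative, not measurements from the talk.

```python
# Sketch of dynamic similarity: a model reproduces the original particle's
# flow regime when it matches the Reynolds number Re = rho * U * L / mu.
# All numbers below are illustrative, not data from the talk.

def reynolds(density, speed, length, viscosity):
    """Dimensionless Reynolds number of a particle moving through fluid."""
    return density * speed * length / viscosity

def matched_speed(re_target, density, length, viscosity):
    """Settling speed a scaled model needs to reproduce the target Re."""
    return re_target * viscosity / (density * length)

# Original: a ~0.5 mm particle settling at ~1 cm/s in a seawater-like fluid.
re_original = reynolds(density=1025.0, speed=0.01, length=5e-4, viscosity=1e-3)

# Model: 100x larger, in a fluid 100x more viscous; solve for its speed.
u_model = matched_speed(re_original, density=1200.0, length=5e-2, viscosity=0.1)
re_model = reynolds(1200.0, u_model, 5e-2, 0.1)

print(re_original, re_model)  # equal by construction
```

Because the model can be centimeters rather than microns across, its settling can be filmed without magnification, and the measured behavior transfers back to the original particle through the shared Re.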
Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective
Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, these theories miss a rationale for why such constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In continuing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences in the balance between cognitive stability and flexibility.
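The attractor-depth intuition can be simulated in a few lines: put the control state in a double-well potential V(x) = depth·(x² − 1)², apply a constant drive toward the other well, and vary the well depth. This is a toy illustration of the tradeoff, not the formal model from the talk; all parameters are arbitrary.

```python
# Toy dynamical-systems sketch of the stability-flexibility tradeoff:
# a control state x sits in a double-well potential V(x) = depth*(x^2-1)^2,
# and a constant drive u pushes it toward the other task's attractor.
# Deeper attractors resist distraction (stability) but also resist
# task switching (inflexibility). Illustrative parameters only.

def settle(depth, drive, x0=-1.0, dt=0.01, steps=4000):
    x = x0
    for _ in range(steps):
        force = -4.0 * depth * x * (x * x - 1.0)  # -dV/dx
        x += dt * (force + drive)                  # Euler integration
    return x

deep = settle(depth=1.0, drive=0.5)      # deep wells: the drive cannot
                                          # clear the barrier; x stays < 0
shallow = settle(depth=0.2, drive=0.5)   # shallow wells: x switches to the
                                          # attractor near +1
print(deep, shallow)
```

The same drive that fails to dislodge the deep attractor flips the shallow one, which is the tradeoff in one picture: well depth plays the role of the constraint on control allocation.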
Electronics on the brain
One of the most important scientific and technological frontiers of our time is the interfacing of electronics with the human brain. This endeavour promises to help understand how the brain works and deliver new tools for diagnosis and treatment of pathologies including epilepsy and Parkinson’s disease. Current solutions, however, are limited by the materials that are brought in contact with the tissue and transduce signals across the biotic/abiotic interface. Recent advances in electronics have made available materials with a unique combination of attractive properties, including mechanical flexibility, mixed ionic/electronic conduction, enhanced biocompatibility, and capability for drug delivery. Professor Malliaras will present examples of novel devices for recording and stimulation of neurons and show that organic electronic materials offer tremendous opportunities to study the brain and treat its pathologies.
State-dependent regulation of cortical circuits
Spontaneous and sensory-evoked cortical activity is highly state-dependent, promoting the functional flexibility of cortical circuits underlying perception and cognition. Using neural recordings in combination with behavioral state monitoring, we find that arousal and motor activity have complementary roles in regulating local cortical operations, providing dynamic control of sensory encoding. These changes in encoding are linked to altered performance on perceptual tasks. Neuromodulators, such as acetylcholine, may regulate this state-dependent flexibility of cortical network function. We therefore recently developed an approach for dual mesoscopic imaging of acetylcholine release and neural activity across the entire cortical mantle in behaving mice. We find spatiotemporally heterogeneous patterns of cholinergic signaling across the cortex. Transitions between distinct behavioral states reorganize the structure of large-scale cortico-cortical networks and differentially regulate the relationship between cholinergic signals and neural activity. Together, our findings suggest dynamic state-dependent regulation of cortical network operations at the levels of both local and large-scale circuits.
Building a synthetic cell: Understanding the clock design and function
Clock networks containing the same central architectures may vary drastically in their potential to oscillate, raising the question of what controls robustness, one of the essential functions of an oscillator. We computationally generated an atlas of oscillators and found that, while core topologies are critical for oscillations, local structures substantially modulate the degree of robustness. Strikingly, two local structures, incoherent and coherent inputs, can modify a core topology to promote and attenuate its robustness, additively. These findings underscore the importance of local modifications to the performance of the whole network and may explain why auxiliary structures not required for oscillations are evolutionarily conserved. We also extend this computational framework to search for hidden network motifs underlying other clock functions, such as tunability, which relates to the capability of a clock to adjust its timing to external cues. Experimentally, we developed an artificial cell system in water-in-oil microemulsions, within which we reconstitute mitotic cell cycles that can perform self-sustained oscillations for 30 to 40 cycles over multiple days. The oscillation profiles, such as period, amplitude, and shape, can be quantitatively varied with the concentrations of clock regulators, energy levels, droplet sizes, and circuit design. Such innate flexibility makes this system well suited to studying the clock functions of tunability and stochasticity at the single-cell level. Combined with a pressure-driven multi-channel tuning setup and long-term time-lapse fluorescence microscopy, this system enables high-throughput exploration of a multi-dimensional continuous parameter space and single-cell analysis of clock dynamics and functions. We integrate this experimental platform with mathematical modeling to elucidate the topology-function relation of biological clocks.
With FRET and optogenetics, we also investigate spatiotemporal cell-cycle dynamics in both homogeneous and heterogeneous microenvironments by reconstructing subcellular compartments.
The geometry of abstraction in hippocampus and pre-frontal cortex
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
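The operational definition of abstraction used here — cross-condition generalization of a decoder — can be sketched with a toy population code. Below, conditions are combinations of two binary variables, the "neural" code is factorized (one axis per variable), and a linear decoder for one variable is trained on some conditions and tested on held-out ones. The representation and labels are illustrative, not the monkey data from the talk.

```python
# Sketch of the cross-condition generalization test that defines
# abstraction operationally: train a linear decoder for one variable on a
# subset of conditions, test on conditions never seen during training.
# The toy "neural" code is factorized (abstract geometry), so it works.

def perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a linear decoder; labels are -1/+1."""
    w, b = [0.0] * len(samples[0]), 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def decode(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Population activity for conditions (A, B): one coding axis per variable.
conditions = {(a, bv): (2 * a - 1, 2 * bv - 1) for a in (0, 1) for bv in (0, 1)}

# Train a decoder for variable A using only the B = 0 conditions...
train = [(conditions[(a, 0)], 2 * a - 1) for a in (0, 1)]
w, b = perceptron([x for x, _ in train], [y for _, y in train])

# ...and test on the held-out B = 1 conditions: the factorized geometry
# lets the decoder generalize across conditions it never saw.
correct = [decode(w, b, conditions[(a, 1)]) == 2 * a - 1 for a in (0, 1)]
print(correct)
```

If the four conditions were instead placed in a tangled (non-factorized) geometry, the same decoder could fit the training conditions yet fail on the held-out ones, which is exactly what the generalization metric is designed to detect.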
On temporal coding in spiking neural networks with alpha synaptic function
The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual spikes. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
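The forward pass of such a neuron can be sketched directly: the membrane potential is a weighted sum of alpha kernels (t - t_i) * exp(-(t - t_i)) for each presynaptic spike at time t_i, and the output code is the time of the first threshold crossing. The weights, spike times and threshold below are illustrative assumptions, and the crossing is found by simple time stepping rather than the closed-form solution a trainable implementation would use.

```python
import math

def first_spike_time(inputs, threshold=0.5, dt=0.001, t_max=5.0):
    """inputs: list of (presynaptic spike time, synaptic weight) pairs."""
    t = 0.0
    while t < t_max:
        # Alpha synaptic kernel: rises, peaks at 1 time unit, then decays.
        v = sum(w * (t - ti) * math.exp(-(t - ti))
                for ti, w in inputs if t > ti)
        if v >= threshold:
            return t  # earlier first spike = stronger/faster response
        t += dt
    return None  # the neuron never reaches threshold

weak = first_spike_time([(0.0, 1.5)])
strong = first_spike_time([(0.0, 1.5), (0.2, 1.0)])
# Adding a second, near-coincident input advances the output spike time,
# which is exactly the quantity the temporal code carries.
```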
The complexity of the ordinary – neural control of locomotion
Today, considerable information is available on the organization and operation of the neural networks that generate the motor output for animal locomotion, such as swimming, walking, or flying. In recent years, the question of which neural mechanisms are responsible for task-specific and flexible adaptations of locomotor patterns has gained increased attention in the field of motor control. I will report on advances we have made on this topic for walking in insects, i.e. the leg muscle control system of phasmids and fruit flies. I will present insights into the neural basis of speed control, heading, walking direction, and the role of ground contact in insect walking, both for local control and intersegmental coordination. Modifications in the processing of sensory feedback signals play a pivotal role in these changes in motor activity, for instance for movement and load signals during heading and curve walking, or for movement signals that contribute to intersegmental coordination. Our recent findings prompt future investigations that aim to elucidate the mechanisms by which descending and intersegmental signals interact with local networks in the generation of motor flexibility during walking in animals.
The geometry of abstraction in artificial and biological neural networks
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, which are critical for generalization and in turn confer cognitive flexibility.
Mean-field models for finite-size populations of spiking neurons
Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike-timing effects on the microscopic level, such as spike synchronization, and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics on the microscopic level and coarse-grained FR models on the population level can be bridged by mean-field theory, formally valid for infinitely many neurons. It remains, however, challenging to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (the mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated in the mesoscopic mean-field framework. Furthermore, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.
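A toy illustration of why finite-size corrections matter: for N independent neurons spiking stochastically at a common rate, the empirical population activity A(t) = n_spikes / (N * dt) fluctuates around the mean-field rate with a standard deviation that shrinks as 1/sqrt(N). The rate, bin size and population sizes below are illustrative assumptions; a mesoscopic model must describe the fluctuations that the infinite-N limit throws away.

```python
import random
import statistics

random.seed(1)
RATE, DT, BINS = 10.0, 0.005, 400   # Hz, seconds, number of time bins

def population_activity(n_neurons):
    """Binned population activity of n_neurons independent Bernoulli spikers."""
    p = RATE * DT  # spike probability per neuron per bin
    acts = []
    for _ in range(BINS):
        spikes = sum(1 for _ in range(n_neurons) if random.random() < p)
        acts.append(spikes / (n_neurons * DT))  # activity in Hz
    return acts

small = population_activity(100)
large = population_activity(2500)
# Both traces fluctuate around ~10 Hz, but the larger population's activity
# fluctuates far less: finite-size noise scales as 1/sqrt(N).
```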
Neural control of vocal interactions in songbirds
During conversations we rapidly switch between listening and speaking, which often requires withholding or delaying our speech in order to hear others and avoid overlapping. This capacity for vocal turn-taking is exhibited by non-linguistic species as well; however, the neural circuit mechanisms that enable us to regulate the precise timing of our vocalizations during interactions are unknown. We aim to identify the neural mechanisms underlying the coordination of vocal interactions. Therefore, we paired zebra finches with a vocal robot (1 Hz call playback) and measured the bird's call response times. We found that individual birds called with a stereotyped delay with respect to the robot call. Pharmacological inactivation of the premotor nucleus HVC revealed its necessity for the temporal coordination of calls. We further investigated the contributing neural activity within HVC by performing intracellular recordings from premotor neurons and inhibitory interneurons in calling zebra finches. We found that inhibition precedes excitation before and during call onset. To test whether inhibition guides call timing, we pharmacologically limited the impact of inhibition on premotor neurons. As a result, zebra finches converged on a similar delay time, i.e., birds called more rapidly after the vocal robot call, suggesting that HVC inhibitory interneurons regulate the coordination of social contact calls. In addition, we aim to investigate the vocal turn-taking capabilities of the common nightingale. Male nightingales learn over 100 different song motifs, which they use to attract mates or defend territories. Previously, it has been shown that nightingales counter-sing with each other following a temporal structure similar to human vocal turn-taking. These animals are also able to spontaneously imitate a motif of another nightingale. The neural mechanisms underlying this behaviour are not yet understood.
In my lab, we further probe the capabilities of these animals in order to assess the dynamic range of their vocal turn-taking flexibility.
The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network
Bernstein Conference 2024
Defining the role of a locus coeruleus-orbitofrontal cortex circuit in behavioral flexibility
COSYNE 2022
Neural network size balances representational drift and flexibility during Bayesian sampling
COSYNE 2022
Revisiting the flexibility-stability dilemma in recurrent networks using a multiplicative plasticity rule
COSYNE 2022
Thalamic role in human cognitive flexibility and routing of abstract information
COSYNE 2022
Cortical-bulbar feedback supports behavioral flexibility during rule reversal
COSYNE 2023
Dynamic gating of perceptual flexibility by non-classically responsive cortical neurons
COSYNE 2023
Harnessing the flexibility of neural networks to predict meaningful theoretical parameters in a multi-armed bandit task
COSYNE 2023
Intracranial electrophysiological evidence for a novel neuro-computational mechanism of cognitive flexibility in humans
COSYNE 2023
Maturing neurons and dual structural plasticity enable flexibility and stability of olfactory memory
COSYNE 2023
Flexibility of signaling across and within visual cortical areas V1 and V2
COSYNE 2025
A model linking neural population activity to flexibility in sensorimotor control
COSYNE 2025
Modeling the flexibility of cortical control of motor units
COSYNE 2025
Multiplicative thalamocortical couplings facilitate rapid computation and cognitive flexibility
COSYNE 2025
Network Gain Regulates Stability and Flexibility in a Ring Attractor Network
COSYNE 2025
An activity-dependent local transport regulation via local synthesis of kinesin superfamily proteins (KIFs) underlying cognitive flexibility
FENS Forum 2024
Adolescent stress impairs behavioural flexibility in adults through population-specific alterations to ventral hippocampal circuits
FENS Forum 2024
Attentional set-shifting task: An approach to assess prefrontal activity patterns during behavioral flexibility in aged mice
FENS Forum 2024
Cognitive flexibility and anterior cingulate gray matter volumes correlate with serum levels of brevican in healthy humans
FENS Forum 2024
Continued treatment of D-Pinitol ameliorates cognitive spatial flexibility of Alzheimer’s disease 5xFAD transgenic mice
FENS Forum 2024
Developmental cell death of interneurons and oligodendroglia is required for cognitive flexibility in mice
FENS Forum 2024
Dissecting prefrontal contributions to behavioral flexibility in freely moving mice
FENS Forum 2024
The estrogen-immune axis: A key regulator of behavioural inflexibility
FENS Forum 2024
HBK-15, a multimodal compound, mitigates cognitive flexibility deficits in mice
FENS Forum 2024
Higher-order thalamo-motor cortex circuit supports behavioral flexibility by reinforcing decision-value
FENS Forum 2024
Impaired flexibility during social learning in NLGN3-R451C ASD model
FENS Forum 2024
Neuronal signature of cognitive flexibility in the prefrontal-dorsal raphe circuit
FENS Forum 2024
A novel touch-panel-based serial reversal learning task for assessing cognitive flexibility in mice
FENS Forum 2024
Overtraining enhances behavioural flexibility on a serial reversal learning task
FENS Forum 2024
Projections from the medial prefrontal cortex to the ventral midline thalamus are crucial for cognitive flexibility in rats
FENS Forum 2024
A shared neural code for flexible shifts in attention, motor actions, and goal setting? The role of theta and alpha oscillations for human flexibility
FENS Forum 2024
Training a nonstationary recurrent neuronal network for inferring neuronal dynamics during flexibility in a value-based decision-making
FENS Forum 2024