Stability
Tina Eliassi-Rad
The RADLAB at Northeastern University’s Network Science Institute has two postdoctoral positions available. We are looking for exceptional candidates who are interested in the following programs: 1. Trustworthy Network Science: As the use of machine learning in network science grows, so do the issues of stability, robustness, explainability, transparency, and fairness, to name a few. We address issues of trustworthy ML in network science. 2. Just Machine Learning: Machine learning systems are not islands. They are part of broader complex systems. To understand and mitigate the risks and harms of using machine learning, we remove our optimization blinders and study the broader complex systems in which machine learning systems operate.
Brain circuits for spatial navigation
In this webinar on spatial navigation circuits, three researchers—Ann Hermundstad, Ila Fiete, and Barbara Webb—discussed how diverse species solve navigation problems using specialized yet evolutionarily conserved brain structures. Hermundstad illustrated the fruit fly’s central complex, focusing on how hardwired circuit motifs (e.g., sinusoidal steering curves) enable rapid, flexible learning of goal-directed navigation. This framework combines internal heading representations with modifiable goal signals, leveraging activity-dependent plasticity to adapt to new environments. Fiete explored the mammalian head-direction system, demonstrating how population recordings reveal a one-dimensional ring attractor underlying continuous integration of angular velocity. She showed that key theoretical predictions—low-dimensional manifold structure, isometry, uniform stability—are experimentally validated, underscoring parallels to insect circuits. Finally, Webb described honeybee navigation, featuring path integration, vector memories, route optimization, and the famous waggle dance. She proposed that allocentric velocity signals and vector manipulation within the central complex can encode and transmit distances and directions, enabling both sophisticated foraging and inter-bee communication via dance-based cues.
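The one-dimensional ring attractor at the center of Fiete's talk can be sketched as a minimal rate model. The connectivity, gains, and normalization below are illustrative assumptions for a toy demonstration, not parameters from the talk:

```python
import numpy as np

def ring_attractor_heading(ang_vel, n=64, dt=0.01, tau=0.05):
    """Integrate angular velocity with a bump of activity on a neural ring."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # local excitation plus global inhibition sustains a single bump
    W = 1.2 * np.cos(theta[:, None] - theta[None, :]) - 0.4
    # asymmetric ("rotation") weights shift the bump when gated by velocity
    W_shift = np.sin(theta[:, None] - theta[None, :])
    r = np.exp(np.cos(theta))                       # initial bump at heading 0
    for v in ang_vel:
        inp = (W + v * W_shift) @ r / n
        r = r + dt / tau * (-r + np.maximum(inp, 0.0))
        r = r / r.mean()                            # keep bump amplitude fixed
    return np.angle(np.sum(r * np.exp(1j * theta)))  # decoded heading (rad)
```

A positive angular-velocity input rotates the decoded heading counterclockwise and a negative one rotates it clockwise, so the bump acts as a continuous integrator of angular velocity, the computation attributed to the head-direction system.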
Principles of Cognitive Control over Task Focus and Task Switching
2024 BACN Mid-Career Prize Lecture Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).
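The recency heuristic described in the abstract is, at its core, a delta-rule learner. A minimal sketch, in which the variable names, starting level, and learning rate are illustrative assumptions rather than the lecture's fitted model:

```python
def recency_control(demand_history, alpha=0.2, c0=0.5):
    """Delta-rule ("recency heuristic") regulation of control: nudge the
    current control level toward each newly experienced task demand,
    where demands are coded 0 (low) to 1 (high)."""
    c = c0
    trace = []
    for d in demand_history:
        c += alpha * (d - c)   # incremental, online update from recent demands
        trace.append(c)
    return trace
```

After a run of high-demand trials the predicted control level approaches its ceiling, and it decays again once demands drop, matching control to the recent history of task demands.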
Stability of visual processing in passive and active vision
The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed, irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts into visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanisms of long-term representational stability, focusing on the roles of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible mappings from sensory inputs to actions: many (features) to one (inference), and one (feature) to many (choices). Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which relates to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
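At retrieval time, a contrastively trained decoder of this kind reduces to nearest-neighbor search by cosine similarity in the shared embedding space. A toy sketch with synthetic embeddings (an illustration of the retrieval metric, not the paper's trained model):

```python
import numpy as np

def top1_retrieval_accuracy(meg_emb, img_emb):
    """Top-1 retrieval: for each MEG embedding, pick the image embedding
    with the highest cosine similarity; row i should match image i."""
    m = meg_emb / np.linalg.norm(meg_emb, axis=1, keepdims=True)
    g = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    sim = m @ g.T                                  # cosine similarity matrix
    return float(np.mean(np.argmax(sim, axis=1) == np.arange(len(m))))
```

With synthetic embeddings in which each "MEG" vector is a noisy copy of its image embedding, top-1 accuracy is far above the 1/N chance level; with unrelated vectors it drops back to chance.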
Self as Processes (BACN Mid-career Prize Lecture 2023)
An understanding of the self helps explain not only human thoughts, feelings, and attitudes, but also many aspects of everyday behaviour. This talk focuses on a viewpoint: self as processes. This viewpoint emphasizes the dynamics of the self, which best connect with the development of the self over time and with its realist orientation. We combine psychological experiments and data mining to comprehend the stability and adaptability of the self across various populations. In this talk, I draw on evidence from experimental psychology, cognitive neuroscience, and machine learning approaches to demonstrate why and how self-association affects cognition and how it is modulated by various social experiences and situational factors.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
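The link between covariance statistics and connectivity heterogeneity can be illustrated in the linearized setting: for dynamics x' = (W - I)x + noise, the stationary covariance solves a Lyapunov equation, and the spread of its off-diagonal entries grows as the bulk eigenvalue radius g of W approaches the instability point g = 1. A toy numerical sketch (not the authors' field-theoretic derivation; network size, noise, and seed are arbitrary):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def pairwise_covariance_spread(g, N=200, seed=0):
    """Spread of pairwise covariances in a linear network x' = (W - I)x + noise,
    with random coupling of spectral radius ~ g (Girko's circular law)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))
    A = W - np.eye(N)
    # stationary covariance C solves A C + C A^T + I = 0 (unit white noise)
    C = solve_continuous_lyapunov(A, -np.eye(N))
    off_diag = C[~np.eye(N, dtype=bool)]
    return off_diag.std()
```

Inverting this relationship, that is, reading g off the measured spread of pairwise covariances, is what allows subsampled recordings to reveal whether the underlying network operates in a strongly recurrent regime.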
Hippocampal network dynamics during impaired working memory in epileptic mice
Memory impairment is a common cognitive deficit in temporal lobe epilepsy (TLE). The hippocampus is severely altered in TLE, exhibiting multiple anatomical changes that lead to a hyperexcitable network capable of generating frequent epileptic discharges and seizures. In this study, we investigated whether hippocampal involvement in epileptic activity drives working memory deficits, using bilateral LFP recordings from CA1 during task performance. We discovered that epileptic mice experienced focal rhythmic discharges (FRDs) while they performed the spatial working memory task. Spatial correlation analysis revealed that FRDs were often spatially stable on the maze and were most common around reward zones (25%) and delay zones (50%). Memory performance was correlated with the stability of FRDs, suggesting that spatially unstable FRDs interfere with working memory codes in real time.
How do visual abilities relate to each other?
In vision, there is, surprisingly, very little evidence of common factors. Most studies have found only weak correlations between performance in different visual tests, meaning that a participant who performs better in one test is not much more likely to also perform better in another. Likewise, in ageing, cross-sectional studies have repeatedly shown that older adults show deteriorated performance in most visual tests compared to young adults. However, within the older population, there is no evidence for a common factor underlying visual abilities. To further investigate the decline of visual abilities, we performed a longitudinal study, administering a battery of nine visual tasks three times, with re-tests after about 4 and 7 years. Most visual abilities were rather stable across the 7 years, with the notable exception of visual acuity. I will discuss possible causes of these paradoxical outcomes.
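One way to quantify "evidence for a common factor" is the variance share of the first principal component of the between-task correlation matrix; with only weak inter-task correlations, this share stays near the 1/n chance level. An illustrative computation on synthetic scores (not the study's actual factor analysis):

```python
import numpy as np

def first_factor_share(scores):
    """Fraction of total variance captured by the first principal component
    of the between-task correlation matrix (subjects x tasks input);
    a value near 1/n_tasks indicates no common factor."""
    R = np.corrcoef(scores, rowvar=False)
    eigvals = np.linalg.eigvalsh(R)
    return eigvals[-1] / eigvals.sum()
```

With nine independent tasks the share hovers near 1/9; injecting a shared latent ability raises it well above that baseline, which is the pattern that is conspicuously absent in visual test batteries.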
Odd dynamics of living chiral crystals
The emergent dynamics exhibited by collections of living organisms often shows signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
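The non-linear accumulation that trades off stability against flexibility has a standard normative form for two-state environments with a known change-point hazard rate (a Glaze-style update; the formulation below is a generic sketch, not necessarily the exact model used in the talk):

```python
import numpy as np

def adaptive_accumulation(llr_samples, hazard=0.1):
    """Normative evidence accumulation in a two-state environment that can
    switch with probability `hazard` per sample: the prior belief is
    discounted non-linearly before each new log-likelihood ratio is added."""
    k = (1.0 - hazard) / hazard
    L = 0.0                        # log-posterior odds of state 1 vs state 2
    trajectory = []
    for x in llr_samples:          # x: log-likelihood ratio of each sample
        prior = L + np.log(k + np.exp(-L)) - np.log(k + np.exp(L))
        L = prior + x
        trajectory.append(L)
    return np.array(trajectory)
```

Unlike a perfect (lossless) integrator, the accumulated belief saturates under consistent evidence (stability), yet can reverse within a few samples after a change-point (flexibility).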
Membrane mechanics meet minimal manifolds
Changes in the geometry and topology of self-assembled membranes underlie diverse processes across cellular biology and engineering. Similar to lipid bilayers, monolayer colloidal membranes studied by the Sharma (IISc Bangalore) and Dogic (UCSB) Labs have in-plane fluid-like dynamics and out-of-plane bending elasticity, but their open edges and micron length scale provide a tractable system to study the equilibrium energetics and dynamic pathways of membrane assembly and reconfiguration. First, we discuss how doping colloidal membranes with short miscible rods transforms disk-shaped membranes into saddle-shaped minimal surfaces with complex edge structures. Theoretical modeling demonstrates that their formation is driven by increasing positive Gaussian modulus, which in turn is controlled by the fraction of short rods. Further coalescence of saddle-shaped surfaces leads to exotic topologically distinct structures, including shapes similar to catenoids, tri-noids, four-noids, and higher order structures. We then mathematically explore the mechanics of these catenoid-like structures subject to an external axial force and elucidate their intimate connection to two problems whose solutions date back to Euler: the shape of an area-minimizing soap film and the buckling of a slender rod under compression. A perturbation theory argument directly relates the tensions of membranes to the stability properties of minimal surfaces. We also investigate the effects of including a Gaussian curvature modulus, which, for small enough membranes, causes the axial force to diverge as the ring separation approaches its maximal value.
Chemistry of the adaptive mind: lessons from dopamine
The human brain faces a variety of computational dilemmas, including the flexibility/stability, speed/accuracy, and labor/leisure tradeoffs. I will argue that striatal dopamine is particularly well suited to dynamically regulate these computational tradeoffs depending on constantly changing task demands. This working hypothesis is grounded in evidence from recent studies on learning, motivation and cognitive control in human volunteers, using chemical PET, psychopharmacology, and/or fMRI. These studies also begin to elucidate the mechanisms underlying the huge variability in catecholaminergic drug effects across different individuals and across different task contexts. For example, I will demonstrate how the effects of the most commonly used psychostimulant, methylphenidate, on learning, Pavlovian control, and effortful instrumental control depend on fluctuations in current environmental volatility, on individual differences in working memory capacity, and on opportunity cost, respectively.
Neural Representations of Social Homeostasis
How does our brain rapidly determine if something is good or bad? How do we know our place within a social group? How do we know how to behave appropriately in dynamic environments with ever-changing conditions? The Tye Lab is interested in understanding how neural circuits important for driving positive and negative motivational valence (seeking pleasure or avoiding punishment) are anatomically, genetically and functionally arranged. We study the neural mechanisms that underlie a wide range of behaviors ranging from learned to innate, including social, feeding, reward-seeking and anxiety-related behaviors. We have also become interested in “social homeostasis” -- how our brains establish a preferred set-point for social contact, and how this maintains stability within a social group. How are these circuits interconnected with one another, and how are competing mechanisms orchestrated on a neural population level? We employ optogenetic, electrophysiological, electrochemical, pharmacological and imaging approaches to probe these circuits during behavior.
Extrinsic control and autonomous computation in the hippocampal CA1 circuit
In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs: entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to prove this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on the firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflict is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of the unilateral and bilateral medial entorhinal cortex (mEC) and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high-spatial-resolution recording of local field potentials (LFP) in the CA1-dentate axis and simultaneously collected firing pattern data from thousands of single neurons. Each experimental animal had up to two of these manipulations performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map.
Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields, and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining stability of the CA1 map.
Untitled Seminar
The nature of the facial information that humans store in order to recognise large numbers of faces remains unclear despite decades of research. To complicate matters further, little is known about how representations may evolve as novel faces become familiar, and there are large individual differences in the ability to recognise faces. I will present a theory I am developing, which assumes that facial representations are cost-efficient. In this framework, individual facial representations would incorporate different diagnostic features in different faces, regardless of familiarity, and would evolve depending on the relative stability of appearance over time. Further, coarse information would be prioritised over fine details in order to decrease storage demands. This would create low-cost facial representations that are refined over time if appearance changes. Individual differences could partly rest on that ability to refine representations when needed. I will present data collected in the general population and in participants with developmental prosopagnosia. In support of the proposed view, typical observers and those with developmental prosopagnosia seem to rely on coarse peripheral features when they have no reason to expect that someone’s appearance will change in the future.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
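The central quantity here, the largest Lyapunov exponent, can be estimated for a rate network with the classic two-trajectory renormalization method (Benettin's algorithm). The network, common drive, and parameters below are illustrative assumptions for a toy demonstration:

```python
import numpy as np

def largest_lyapunov(g, drive_amp, N=100, T=2000, dt=0.1, seed=3):
    """Estimate the largest Lyapunov exponent of a random tanh rate network
    x' = -x + g J tanh(x) + drive, by tracking two nearby trajectories and
    renormalizing their separation after every Euler step."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, N)
    d0 = 1e-8
    y = x.copy()
    y[0] += d0                                       # tiny initial perturbation
    log_growth = 0.0
    for t in range(T):
        drive = drive_amp * np.sin(0.5 * t * dt)     # common time-varying input
        x = x + dt * (-x + g * J @ np.tanh(x) + drive)
        y = y + dt * (-y + g * J @ np.tanh(y) + drive)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (y - x) * (d0 / d)                   # renormalize the separation
    return log_growth / (T * dt)
```

In this toy network, a strong common drive reduces the exponent relative to the undriven chaotic network (g > 1), mirroring the input-mediated taming of chaos described above.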
Keeping your Brain in Balance: the Ups and Downs of Homeostatic Plasticity (virtual)
Our brains must generate and maintain stable activity patterns over decades of life, despite the dramatic changes in circuit connectivity and function induced by learning and experience-dependent plasticity. How do our brains achieve this balance between the opposing needs for plasticity and stability? Over the past two decades, we and others have uncovered a family of “homeostatic” negative feedback mechanisms that are theorized to stabilize overall brain activity while allowing specific connections to be reconfigured by experience. Here I discuss recent work in which we demonstrate that individual neocortical neurons in freely behaving animals indeed have a homeostatic activity set-point, to which they return in the face of perturbations. Intriguingly, this firing rate homeostasis is gated by sleep/wake states in a manner that depends on the direction of homeostatic regulation: upward firing rate homeostasis occurs selectively during periods of active wake, while downward firing rate homeostasis occurs selectively during periods of sleep, suggesting that an important function of sleep is to temporally segregate bidirectional plasticity. Finally, we show that firing rate homeostasis is compromised in an animal model of autism spectrum disorder. Together our findings suggest that loss of homeostatic plasticity in some neurological disorders may render central circuits unable to compensate for the normal perturbations induced by development and learning.
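The core negative-feedback idea can be captured in a few lines: multiplicative synaptic scaling slowly adjusts a neuron's gain so that its firing rate returns to a set point after a perturbation. This is an illustrative toy with made-up parameter values, not the lab's biophysical model:

```python
def firing_rate_homeostasis(drive, target=5.0, eta=0.05, steps=400):
    """Multiplicative synaptic scaling: the neuron's gain is slowly adjusted
    so that its firing rate (gain * drive) returns to a homeostatic set
    point, even after the input drive is perturbed."""
    gain = 1.0
    rates = []
    for _ in range(steps):
        rate = gain * drive
        gain += eta * gain * (target - rate) / target   # slow negative feedback
        rates.append(rate)
    return rates
```

Halving the drive transiently halves the rate, but the rate recovers to the set point as the gain scales up, which is the signature of a homeostatic activity set-point.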
Network mechanisms underlying representational drift in area CA1 of hippocampus
Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
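The model class described here can be sketched directly: a cell is active ("in the code") when the sum of a fixed, spatially tuned input and a drifting non-spatial input crosses threshold. All parameter values below are illustrative assumptions:

```python
import numpy as np

def active_subset_overlap(n_cells=5000, n_sessions=10, drift=0.3, seed=0):
    """Toy drift model: spatially tuned input is stable across sessions,
    while non-spatial input performs a stationary random walk; the overlap
    of the active subset with session 0 is tracked over sessions."""
    rng = np.random.default_rng(seed)
    spatial = rng.normal(size=n_cells)           # stable across sessions
    nonspatial = rng.normal(size=n_cells)        # plastic, drifts
    active = []
    for _ in range(n_sessions):
        active.append(spatial + nonspatial > 1.0)
        # stationary Gaussian random walk of the non-spatial inputs
        nonspatial = (np.sqrt(1 - drift**2) * nonspatial
                      + drift * rng.normal(size=n_cells))
    active = np.array(active)
    return [np.mean(active[0] & active[s]) / np.mean(active[0])
            for s in range(n_sessions)]
```

The overlap with the first session decays over sessions but plateaus above chance, because the stable spatial input keeps part of the assembly in the code indefinitely, reproducing the mix of transient and stable cells seen in the imaging data.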
Separable pupillary signatures of perception and action during perceptual multistability
The pupil provides a rich, non-invasive measure of the neural bases of perception and cognition, and has been of particular value in uncovering the role of arousal-linked neuromodulation, which alters cortical processing as well as pupil size. But pupil size is subject to a multitude of influences, which complicates unique interpretation. We measured the pupils of observers experiencing perceptual multistability: an ever-changing subjective percept in the face of unchanging but inconclusive sensory input. In separate conditions, the endogenously generated perceptual changes were either task-relevant or not, allowing a separation between perception-related and task-related pupil signals. Perceptual changes were marked by a complex pupil response that could be decomposed into two components: a dilation tied to task execution and plausibly indicative of an arousal-linked noradrenaline surge, and an overlapping constriction tied to the perceptual transient and plausibly a marker of altered visual cortical representation. Constriction, but not dilation, amplitude systematically depended on the time interval between perceptual changes, possibly providing an overt index of neural adaptation. These results show that the pupil provides a simultaneous reading on interacting but dissociable neural processes during perceptual multistability, and suggest that arousal-linked neuromodulation shapes action but not perception in these circumstances. This presentation covers work that was published in eLife.
Brain chart for the human lifespan
Over the past few decades, neuroimaging has become a ubiquitous tool in basic research and clinical studies of the human brain. However, no reference standards currently exist to quantify individual differences in neuroimaging metrics over time, in contrast to growth charts for anthropometric traits such as height and weight. Here, we built an interactive resource to benchmark brain morphology, www.brainchart.io, derived from any current or future sample of magnetic resonance imaging (MRI) data. With the goal of basing these reference charts on the largest and most inclusive dataset available, we aggregated 123,984 MRI scans from 101,457 participants aged from 115 days post-conception through 100 postnatal years, across more than 100 primary research studies. Cerebrum tissue volumes and other global or regional MRI metrics were quantified by centile scores, relative to non-linear trajectories of brain structural changes, and rates of change, over the lifespan. Brain charts identified previously unreported neurodevelopmental milestones; showed high stability of individual centile scores over longitudinal assessments; and demonstrated robustness to technical and methodological differences between primary studies. Centile scores showed increased heritability compared to non-centiled MRI phenotypes, and provided a standardised measure of atypical brain structure that revealed patterns of neuroanatomical variation across neurological and psychiatric disorders. In sum, brain charts are an essential first step towards robust quantification of individual deviations from normative trajectories in multiple, commonly-used neuroimaging phenotypes. Our collaborative study proves the principle that brain charts are achievable on a global scale over the entire lifespan, and applicable to analysis of diverse developmental and clinical effects on human brain structure.
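The scoring step behind such charts can be illustrated with a deliberately simplified Gaussian reference (the published charts fit non-linear growth trajectories with generalized additive models; `mu` and `sigma` here are hypothetical age-specific parameters, and the example volume is made up):

```python
import math

def centile_score(x, mu, sigma):
    """Centile (0-100) of a measurement x against a Gaussian reference
    with age-specific mean mu and spread sigma. This is only the scoring
    step; real brain charts model mu and sigma non-linearly over age."""
    z = (x - mu) / sigma
    return 50.0 * (1.0 + math.erf(z / math.sqrt(2.0)))

# e.g. a (hypothetical) cerebrum volume one SD above the age norm
score = centile_score(1250.0, 1150.0, 100.0)   # ~84th centile
```

Because a centile is defined relative to the age-matched reference distribution, it factors out the population-level trajectory, which is one reason centiled phenotypes can show higher heritability and cross-study robustness than raw volumes.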
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule
Spike-timing-dependent plasticity (STDP) is thought to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which replicates these distinct kernel types as well as intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics across different neuron models, which allows for comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependency experiments and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may affect the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it can be combined with more complex, biophysical neuron models, yet is computationally efficient enough for network simulations. It therefore offers the possibility to explore the functional role of the different kernel types and electrophysiological differences in heterogeneous networks in future work.
NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network
The brain exhibits a rich repertoire of oscillatory patterns organized in space, time and frequency. However, despite increasingly detailed characterizations of spectrally-resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we build a reduced network model representing interactions between local neuronal populations (with damped oscillatory response at 40Hz) coupled in the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (or metastable) oscillatory modes, whose frequency inversely depends on the number of units, the strength of coupling and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, leading to an optimal fit of the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase-coupling observed in magneto- and electroencephalographic signals as signatures of resonant modes emerging in the space-time structure of the Connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
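The frequency-lowering effect of delays can already be seen in a minimal pair of delay-coupled phase oscillators (a toy stand-in for the damped 40 Hz units in the full model; all parameter values are illustrative). In synchrony, each unit locks to a delayed copy of its partner, so the collective frequency solves a self-consistency condition below the natural frequency:

```python
import numpy as np

f0, K, tau = 40.0, 40.0, 0.005   # natural frequency (Hz), coupling (1/s), delay (s)
dt, T = 1e-4, 2.0
omega = 2 * np.pi * f0
n, d = int(T / dt), int(tau / dt)

theta = np.zeros((n, 2))
theta[:d + 1] = omega * dt * np.arange(d + 1)[:, None]  # free-running history
for t in range(d, n - 1):
    delayed = theta[t - d, ::-1]                  # each unit sees the other's past
    theta[t + 1] = theta[t] + dt * (omega + K * np.sin(delayed - theta[t]))

# collective frequency over the last second: below the 40 Hz natural frequency
f_sync = (theta[-1, 0] - theta[n - int(1 / dt), 0]) / (2 * np.pi)
```

The synchronized pair settles at a frequency satisfying Omega = omega - K*sin(Omega*tau), roughly 34 Hz here, illustrating how conduction delays pull collective rhythms below the local resonance.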
NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony
A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. 
This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.
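The linearizing effect of heterogeneity on population input-output functions can be sketched with steep single-neuron sigmoids whose activation thresholds are either identical or dispersed (gain and threshold ranges are illustrative, not fits to the patch-clamp data):

```python
import numpy as np

I = np.linspace(-3.0, 3.0, 601)   # common input axis

def population_fI(thresholds, gain=8.0):
    """Mean activation of a population of steep sigmoidal units."""
    single = 1.0 / (1.0 + np.exp(-gain * (I[:, None] - thresholds[None, :])))
    return single.mean(axis=1)

homogeneous = population_fI(np.zeros(200))            # identical thresholds
heterogeneous = population_fI(np.linspace(-2, 2, 200))  # dispersed thresholds

def max_slope(y):
    return np.max(np.diff(y)) / (I[1] - I[0])
```

Averaging over dispersed thresholds smooths the steep single-cell nonlinearity into a near-linear population response, the mean-field effect invoked above to explain why heterogeneous circuits resist sudden transitions into synchrony.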
The dynamics of temporal attention
Selection is the hallmark of attention: processing improves for attended items but is relatively impaired for unattended items. It is well known that visual spatial attention changes sensory signals and perception in this selective fashion. In the work I will present, we asked whether and how attentional selection happens across time. First, our experiments revealed that voluntary temporal attention (attention to specific points in time) is selective, resulting in perceptual tradeoffs across time. Second, we measured small eye movements called microsaccades and found that directing voluntary temporal attention increases the stability of the eyes in anticipation of an attended stimulus. Third, we developed a computational model of dynamic attention, which proposes specific mechanisms underlying temporal attention and its selectivity. Lastly, I will mention how we are testing predictions of the model with MEG. Altogether, this research shows how precisely timed voluntary attention helps manage inherent limits in visual processing across short time intervals, advancing our understanding of attention as a dynamic process.
Wiring & Rewiring: Experience-Dependent Circuit Development and Plasticity in Sensory Cortices
To build an appropriate representation of the sensory world, neural circuits are wired according to both intrinsic factors and external sensory stimuli. Moreover, brain circuits have the capacity to rewire in response to an altered environment, both during early development and throughout life. In this talk, I will give an overview of my past research on the dynamic processes underlying functional maturation and plasticity in rodent sensory cortices. I will also present data about the current and future research in my lab – that is, the synaptic and circuit mechanisms that mature brain circuits employ to regulate the balance between stability and plasticity. By applying chronic two-photon calcium imaging and closed-loop visual exposure, we studied circuit changes at single-neuron resolution to show that running concurrent with a visual stimulus is required to drive neuroplasticity in the adult brain.
Noise-induced properties of active dendrites
Neuronal dendritic trees display a wide range of nonlinear input integrations due to their voltage-dependent active calcium channels. We reveal that in vivo-like fluctuating input substantially enhances nonlinearity in a single dendritic compartment and shifts the input-output relation toward nonmonotonic or bistable dynamics. In particular, with the slow activation of calcium dynamics, we analyze noise-induced bistability and its timescales. We show that bistability induces long-timescale fluctuations that can account for dendritic plateau potentials observed under in vivo conditions. In a multicompartmental model neuron with realistic synaptic input, we show that noise-induced bistability persists over a wide range of parameters. Using Fredholm's theory to calculate the spiking rate of multivariable neurons, we discuss how dendritic bistability shifts the spiking dynamics of single neurons and its implications for network phenomena in the processing of in vivo-like fluctuating input.
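A minimal caricature of noise-induced plateau-like dynamics (not the authors' calcium model) is an overdamped variable in a double-well potential driven by in vivo-like input noise: fluctuations cause switching between a resting and a plateau state on timescales far longer than the noise itself. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# Double-well "dendritic voltage": stable states near V=0 (rest) and V=1
# (plateau), barrier at V=a; sigma plays the role of fluctuating input.
a, sigma, dt, n = 0.45, 0.28, 1e-3, 400_000
noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
V = np.empty(n)
V[0] = 0.0
for t in range(n - 1):
    drift = -V[t] * (V[t] - a) * (V[t] - 1.0)   # cubic double-well drift
    V[t + 1] = V[t] + dt * drift + noise[t]

# long-timescale switching between the two states
up = V > 0.5
n_transitions = int(np.abs(np.diff(up.astype(int))).sum())
```

The dwell times follow Kramers-type escape statistics, so modest noise produces rare, long plateau episodes, the signature attributed above to slow calcium-mediated bistability.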
Visual Decisions in Natural Action
Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness with straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs on both the time scale of the next step and longer-range constraints.
Nonequilibrium self-assembly and time-irreversibility in living systems
Far-from-equilibrium processes constantly dissipate energy while converting a free-energy source to another form of energy. Living systems, for example, rely on an orchestra of molecular motors that consume chemical fuel to produce mechanical work. In this talk, I will describe two features of life, namely time-irreversibility and nonequilibrium self-assembly. Time irreversibility is the hallmark of nonequilibrium dissipative processes. Detecting dissipation is essential for our basic understanding of the underlying physical mechanism; however, it remains a challenge in the absence of observable directed motion, flows, or fluxes. Additional difficulty arises in complex systems where many internal degrees of freedom are inaccessible to an external observer. I will introduce a novel approach to detect time irreversibility and estimate the entropy production from time-series measurements, even in the absence of observable currents. This method can be implemented in scenarios where only partial information is available and thus provides a new tool for studying nonequilibrium phenomena. Further, I will explore the added benefits achieved by nonequilibrium driving for self-assembly, identify distinctive collective phenomena that emerge in a nonequilibrium self-assembly setting, and demonstrate the interplay between the assembly speed, kinetic stability, and relative population of dynamical attractors.
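The idea of reading irreversibility from a time series can be illustrated on a fully observed toy system (the estimator described in the talk handles partially observed settings, which this sketch does not): for a discrete Markov chain, the steady-state entropy production rate is the Kullback-Leibler divergence between forward and time-reversed pair statistics.

```python
import numpy as np

rng = np.random.default_rng(6)
# A 3-state Markov chain with biased cyclic transitions 0 -> 1 -> 2 -> 0,
# i.e. a nonequilibrium steady state violating detailed balance.
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
cum = P.cumsum(axis=1)
s, traj = 0, np.empty(100_000, dtype=int)
for i in range(traj.size):
    traj[i] = s
    s = int(np.searchsorted(cum[s], rng.random()))

# empirical joint distribution of consecutive states, forward vs reversed
C = np.zeros((3, 3))
np.add.at(C, (traj[:-1], traj[1:]), 1)
p = C / C.sum()
entropy_rate = float(np.sum(p * np.log(p / p.T)))   # nats per step
```

For a detailed-balanced chain, `p` is symmetric and the estimate vanishes; the cyclic bias here yields a strictly positive rate, flagging the hidden "current" directly from the time series.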
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods cluster into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than existing models. 
Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
Understanding the role of neural heterogeneity in learning
The brain is hugely diverse and heterogeneous, yet the exact role of heterogeneity has been relatively little explored, as most neural models tend to be largely homogeneous. We trained spiking neural networks with varying degrees of heterogeneity on complex real-world tasks and found that heterogeneity resulted in more stable and robust training and improved performance, especially for tasks with rich temporal structure. Moreover, the optimal distribution of parameters found by training was similar to experimental observations. These findings suggest that heterogeneity is not simply a result of noisy biological processes, but may play a crucial role in learning in complex, changing environments.
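One intuition for the benefit on temporally structured tasks: membrane time constants act as exponential filters, and a heterogeneous population spans many more temporal dimensions than a homogeneous one. A sketch (the gamma-distributed values are illustrative, not the trained parameters from the study):

```python
import numpy as np

rng = np.random.default_rng(2)

def temporal_rank(taus, dt=1e-3, T=0.5):
    """Numerical rank of the bank of exponential filters exp(-t/tau)."""
    t = np.arange(0.0, T, dt)
    kernels = np.exp(-t[None, :] / taus[:, None])
    return np.linalg.matrix_rank(kernels, tol=1e-6)

homogeneous = np.full(50, 20e-3)                       # identical 20 ms
heterogeneous = rng.gamma(3.0, 20e-3 / 3.0, size=50)   # gamma spread, same mean
```

A homogeneous bank collapses to a single filter (rank 1), whereas the heterogeneous bank provides a genuinely multi-dimensional temporal basis for downstream learning.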
Reverse engineering Hydra
Hydra is an extraordinary creature. Continuously replacing itself, it can live indefinitely, performing a stable repertoire of reasonably sophisticated behaviors. This remarkable stability under plasticity may be due to the uniform nature of its nervous system, which consists of two apparently noncommunicating nerve net layers. We use modeling to understand how active muscles and biomechanics interact with neural activity to shape Hydra behaviour. We will discuss our findings and thoughts on how this simple nervous system may self-organize to produce purposeful behavior.
The role of motion in localizing objects
Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location but neural delays lead to serious errors whenever targets or eyes are moving. Motion may be the problem here but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions where perceived location differs radically from physical location. These help understand how and where position is coded. We first look at the effects of a target’s simple forward motion on its perceived location. Second, we look at perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprising, fMRI shows that the shifted percept does not emerge in the visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it. Participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability where we see a steady world despite massive displacements during saccades. These motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.
Internal structure of honey bee swarms for mechanical stability and division of labor
The western honey bee (Apis mellifera) is a domesticated pollinator famous for living in highly social colonies. In the spring, thousands of worker bees and a queen fly from their hive in search of a new home. They self-assemble into a swarm that hangs from a tree branch for several days. We reconstruct the non-isotropic arrangement of worker bees inside swarms made up of 3000–8000 bees using X-ray computed tomography. Some bees are stationary and hang from the attachment board or link their bodies into hanging chains to support the swarm structure. The remaining bees use the chains as pathways to walk around the swarm, potentially to feed the queen or communicate with one another. The top layers of bees bear more weight per bee than the remainder of the swarm, suggesting that bees are optimizing for additional factors besides weight distribution. Despite not having a clear leader, honey bees are able to organize into a swarm that protects the queen and remains stable until scout bees locate a new hive.
Sleepless in Vienna - how to rescue folding-deficient dopamine transporters by pharmacochaperoning
Diseases that arise from misfolding of an individual protein are rare. However, collectively, these folding diseases represent a large proportion of hereditary and acquired disorders. In fact, the term "Molecular Medicine" was coined by Linus Pauling in conjunction with the study of a folding disease, i.e. sickle cell anemia. In the past decade, we have witnessed an exponential growth in the number of mutations identified in genes encoding solute carriers (SLC). A sizable fraction - presumably the majority - of these mutations result in misfolding of the encoded protein. While studying the export of the GABA transporter (SLC6A1) and of the serotonin transporter (SLC6A4) from the endoplasmic reticulum (ER), we discovered by serendipity that some ligands can correct the folding defect imparted by point mutations. These ligands bind to the inward-facing state. The most effective compound is noribogaine, the metabolite of ibogaine (an alkaloid first isolated from the shrub Tabernanthe iboga). There are 13 mutations in the human dopamine transporter (DAT, SLC6A3) which give rise to a syndrome of infantile Parkinsonism and dystonia. We capitalized on our insights to explore whether the disease-relevant mutant proteins were amenable to pharmacological correction. Drosophila melanogaster mutants lacking the dopamine transporter are hyperactive and sleepless (fumin in Japanese). Thus, mutated human DAT variants can be introduced into fumin flies. This allows for examining the effect of pharmacochaperones on delivery of DAT to the axonal territory and on restoring sleep. We explored the chemical space populated by variations of the ibogaine structure to identify an analogue (referred to as compound 9b) which was highly effective: compound 9b also restored folding in DAT variants that were not amenable to rescue by noribogaine.
Deficiencies in the human creatine transporter-1 (CrT1, SLC6A8) give rise to a syndrome of intellectual disability and seizures and account for 5% of genetically based intellectual disabilities in boys. Point mutations occur, in part, at positions homologous to those of folding-deficient DAT variants. CrT1 lacks the rich pharmacology of monoamine transporters. Nevertheless, our insights are also applicable to rescuing some disease-related variants of CrT1. Finally, the question arises how one can address the folding problem. We propose a two-pronged approach: (i) analyzing the effect of mutations on the transport cycle by electrophysiological recordings, which allows for extracting information on the rates of conformational transitions; the underlying assumption posits that - even when remedied by pharmacochaperoning - folding-deficient mutants must differ in the conformational transitions associated with the transport cycle; and (ii) analyzing the effect of mutations on the two components of protein stability, i.e. thermodynamic and kinetic stability. This is expected to provide a glimpse of the energy landscape which governs the folding trajectory.
Combining two mechanisms to produce neural firing rate homeostasis
The typical goal of homeostatic mechanisms is to ensure a system operates at or in the vicinity of a stable set point, where a particular measure is relatively constant and stable. Neural firing rate homeostasis is unusual in that a set point of fixed firing rate is at odds with the goal of a neuron to convey information, or produce timed motor responses, which require temporal variations in firing rate. Therefore, for a neuron, a range of firing rates is required for optimal function, which could, for example, be set by a dual system that controls both mean and variance of firing rate. We explore, both via simulations and analysis, how two experimentally measured mechanisms for firing rate homeostasis can cooperate to improve information processing and avoid the pitfall of pulling in different directions when their set points do not appear to match.
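As a cartoon of two cooperating controllers (not the specific experimentally measured mechanisms), one can let a threshold slowly track a target mean rate while a gain tracks a target rate variance; all set points and learning rates below are assumed, illustrative values.

```python
import numpy as np

rng = np.random.default_rng(3)
target_mean, target_var = 5.0, 4.0
g, b = 1.0, 0.0                     # gain and threshold of a rectified unit
eta_b, eta_g = 0.01, 0.005          # slow homeostatic rates (illustrative)

for step in range(2000):
    x = rng.normal(2.0, 3.0, size=500)        # fluctuating drive
    r = g * np.maximum(x - b, 0.0)            # firing rate
    b += eta_b * (r.mean() - target_mean)     # threshold regulates the mean
    g += eta_g * (target_var - r.var())       # gain regulates the variance

# after adaptation the rate distribution matches both set points
x = rng.normal(2.0, 3.0, size=100_000)
r = g * np.maximum(x - b, 0.0)
```

Although each controller perturbs the other's variable (raising the gain also raises the mean), the two slow integral loops settle jointly at a rate distribution with the target mean and variance, rather than pulling the cell to a single fixed rate.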
Co-tuned, balanced excitation and inhibition in olfactory memory networks
Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. 
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
Modularity of attractors in inhibition-dominated TLNs
Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
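A minimal instance, using a commonly cited CTLN-style parametrization (the specific eps, delta, b values are illustrative): the directed 3-cycle is a core motif whose full-support fixed point is unstable, and the dynamics instead converge to a limit cycle in which activity flows around the cycle.

```python
import numpy as np

# Threshold-linear network from a directed 3-cycle graph: weight -1 + eps
# where j -> i is an edge, -1 - delta otherwise (inhibition-dominated).
eps, delta, b = 0.25, 0.5, 1.0
n = 3
W = np.full((n, n), -1.0 - delta)
np.fill_diagonal(W, 0.0)
for j in range(n):
    W[(j + 1) % n, j] = -1.0 + eps    # edge j -> j+1 (mod 3)

dt, steps = 1e-3, 50_000
x = np.array([0.5, 0.1, 0.0])
traj = np.empty((steps, n))
for t in range(steps):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))   # dx/dt = -x + [Wx + b]_+
    traj[t] = x
```

Because all weights are non-positive, activity stays bounded by the external drive b, yet never settles: each unit periodically peaks and hands activity to its successor, the kind of dynamic attractor associated with a core motif.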
Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models
The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.
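The directionality question can be made concrete with a toy lagged model (GIMME estimates structural VAR models with group- and individual-level paths; this VAR(1) caricature only illustrates why lag strength matters): the stronger the autoregressive and cross-lagged effects, the more precisely the 1 -> 2 direction is recovered.

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_var1(a_auto, a_cross, T=500):
    """Simulate x_t = A x_{t-1} + noise with a directed 1 -> 2 path,
    then re-estimate A by ordinary least squares."""
    A = np.array([[a_auto, 0.0],
                  [a_cross, a_auto]])
    x = np.zeros((T, 2))
    for t in range(1, T):
        x[t] = A @ x[t - 1] + rng.standard_normal(2)
    A_hat, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
    return A_hat.T

A_hat = estimate_var1(0.5, 0.4)   # illustrative autoregressive / cross-lag strengths
```

With strong lagged relations, the estimated cross-lag from node 1 to node 2 is large while the reverse path stays near zero, so the inferred direction is unambiguous; as the true coefficients shrink toward zero (as in some task fMRI settings), both estimates drown in noise.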
Stability-Flexibility Dilemma in Cognitive Control: A Dynamical System Perspective
Constraints on control-dependent processing have become a fundamental concept in general theories of cognition that explain human behavior in terms of rational adaptations to these constraints. However, these theories lack a rationale for why such constraints would exist in the first place. Recent work suggests that constraints on the allocation of control facilitate flexible task switching at the expense of the stability needed to support goal-directed behavior in the face of distraction. We formulate this problem in a dynamical system, in which control signals are represented as attractors and in which constraints on control allocation limit the depth of these attractors. We derive formal expressions of the stability-flexibility tradeoff, showing that constraints on control allocation improve cognitive flexibility but impair cognitive stability. We provide evidence that human participants adopt higher constraints on the allocation of control as the demand for flexibility increases, but that participants deviate from optimal constraints. In continuing work, we are investigating how the collaborative performance of a group of individuals can benefit from individual differences defined in terms of the balance between cognitive stability and flexibility.
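A one-unit sketch of the tradeoff, assuming a simple tanh attractor dynamic in which the self-excitation gain g plays the role of attractor depth (all values are illustrative, not the paper's model parameters): deeper attractors resist distraction but slow down task switching.

```python
import numpy as np

def settle(g, inp, x0, steps=20_000, dt=0.01):
    """Relax dx/dt = -x + tanh(g*x + inp) from x0 to its attractor."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + np.tanh(g * x + inp))
    return x

def switch_time(g, drive=0.5, dt=0.01):
    """Time to travel from the 'old task' attractor to the new one
    once a drive favoring the new task is applied."""
    target = settle(g, drive, 1.0)
    x = settle(g, 0.0, -1.0)
    for t in range(200_000):
        x += dt * (-x + np.tanh(g * x + drive))
        if x > 0.9 * target:
            return t * dt
    return float("inf")
```

With these numbers, the deeper attractor (g = 1.8) survives a distracting input that dislodges the shallower one (g = 1.2), but takes longer to switch when the task demand actually changes, the stability-flexibility dilemma in miniature.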
Early constipation predicts faster dementia onset in Parkinson’s disease
Constipation is a common but not a universal feature in early PD, suggesting that gut involvement is heterogeneous and may be part of a distinct PD subtype with prognostic implications. We analysed data from the Parkinson’s Incidence Cohorts Collaboration, composed of incident community-based cohorts of PD patients assessed longitudinally over 8 years. Constipation was assessed with the MDS-UPDRS constipation item or a comparable categorical scale. Primary PD outcomes of interest were dementia, postural instability and death. PD patients were stratified according to constipation severity at diagnosis: none (n=313, 67.3%), minor (n=97, 20.9%) and major (n=55, 11.8%). Clinical progression to all 3 outcomes was more rapid in those with more severe constipation at baseline (Kaplan Meier survival analysis). Cox regression analysis, adjusting for relevant confounders, confirmed a significant relationship between constipation severity and progression to dementia, but not postural instability or death. Early constipation may predict an accelerated progression of neurodegenerative pathology.
Restless engrams: the origin of continually reconfiguring neural representations
During learning, populations of neurons alter their connectivity and activity patterns, enabling the brain to construct a model of the external world. Conventional wisdom holds that the durability of such a model is reflected in the stability of neural responses and of the synaptic connections that form memory engrams. However, recent experimental findings have challenged this idea, revealing that neural population activity in circuits involved in sensory perception, motor planning and spatial memory changes continually during familiar behavioural tasks. This continual change suggests significant redundancy in neural representations, with many circuit configurations providing equivalent function. I will describe recent work that explores the consequences of such redundancy for learning and for task representation. Despite large changes in neural activity, we find that cortical responses in sensorimotor tasks admit a relatively stable readout at the population level. Furthermore, we find that redundancy in circuit connectivity can make a task easier to learn and can compensate for deficiencies in biological learning rules. Finally, if neuronal connections are subject to an unavoidable level of turnover, the level of plasticity required to optimally maintain a memory is generally lower than the total change due to turnover itself, predicting continual reconfiguration of an engram.
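The claim that large activity changes can coexist with a stable readout has a simple geometric core: if representational drift is confined to the null space of a fixed linear readout, downstream decoding is untouched. A minimal illustration (a hypothetical toy, not the speaker's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                # fixed linear readout vector

x = rng.standard_normal(n)            # population activity on "day 0"
x0 = x.copy()
readouts, drift = [], []
for day in range(100):
    step = rng.standard_normal(n)
    step -= w * (w @ step)            # confine drift to the readout's null space
    x += 0.3 * step
    readouts.append(w @ x)
    drift.append(np.linalg.norm(x - x0))

# The activity vector wanders far from day 0, yet the readout never moves.
```

In an n-dimensional population, an (n-1)-dimensional subspace of configurations yields the identical readout, which is exactly the redundancy the abstract describes.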
Physics of epithelial lumen stability: How do cells build negative space?
Firing Homeostasis in Neural Circuits: From Basic Principles to Malfunctions
Neural circuit function is stabilized by homeostatic mechanisms that operate over long timescales in response to changes in experience and learning. However, we still do not know which specific physiological variables are being stabilized, nor which cellular or network components comprise the homeostatic machinery. At this point, most evidence suggests that the distribution of firing rates among neurons in a brain circuit is the key variable, maintained around a circuit-specific set-point value in a process called firing rate homeostasis. Here, I will discuss our recent findings implicating mitochondria as a central player in mediating firing rate homeostasis and its impairments. While mitochondria are known to regulate neuronal variables such as synaptic vesicle release and intracellular calcium concentration, we searched for the mitochondrial signaling pathways that are essential for homeostatic regulation of firing rates. We use basic concepts from control theory to build a framework for classifying possible components of the homeostatic machinery in neural networks. This framework may facilitate the identification of new homeostatic pathways whose malfunction drives instability of neural circuits in distinct brain disorders.
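The control-theory framing can be made concrete with the simplest possible sketch: an integral-feedback controller that slowly adjusts a gain so the firing rate returns to a set point regardless of the drive. This is a generic illustration of set-point regulation, not the mitochondrial pathway discussed in the talk; all names and numbers are invented.

```python
import numpy as np

def run_homeostasis(target=5.0, tau_h=200.0, drive=2.0, dt=0.1, t_max=2000.0):
    """Toy set-point regulation: firing rate r = g * drive; a slow
    integral controller nudges the gain g until r sits at `target`."""
    g = 0.1
    rates = []
    for _ in range(int(t_max / dt)):
        r = g * drive                    # instantaneous firing rate
        g += (target - r) / tau_h * dt   # slow integral feedback on the gain
        rates.append(r)
    return np.array(rates)

# The same set point is reached regardless of the drive the circuit receives.
weak, strong = run_homeostasis(drive=2.0), run_homeostasis(drive=4.0)
```

The hallmark of integral control, and of firing rate homeostasis, is visible here: the steady-state rate depends only on the set point, while the controller variable (the gain) absorbs whatever perturbation the circuit receives.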
Algorithmic advances in face matching: Stability of tests in atypical groups
Face matching tests have traditionally been developed to assess human face perception in the neurotypical range, but the methods underlying their development often make these measures difficult to apply in atypical populations (developmental prosopagnosics, super-recognizers) because their difficulty is not adjusted. We recently presented the Oxford Face Matching Test (OFMT), a measure that bases individual item difficulty on the algorithmically derived similarity of the presented stimuli. The measure can be administered online or in the laboratory, and it shows good discriminability and high test-retest reliability in neurotypical groups. In addition, it has good validity in separating atypical groups at either end of the spectrum. In this talk, I examine the stability of the OFMT and other traditionally used measures in atypical groups. Beyond the theoretical significance of determining whether test reliability is equivalent in atypical populations, this is an important question because of the practical concerns of retesting the same participants across different lab groups. Theoretical and practical implications for further test development and data sharing are discussed.
Glassy phase in dynamically balanced networks
We study the dynamics of (inhibitory) balanced networks while varying (i) the level of symmetry in the synaptic connectivity and (ii) the variance of the synaptic efficacies (synaptic gain). We find three regimes of activity. For sufficiently low synaptic gain, regardless of the level of symmetry, there exists a unique stable fixed point. Using a cavity-like approach, we develop a quantitative theory that describes the statistics of the activity at this unique fixed point and the conditions for its stability. As the synaptic gain increases, the unique fixed point destabilizes, and the network exhibits chaotic activity for zero or negative levels of symmetry (i.e., random or antisymmetric connectivity). For positive levels of symmetry, by contrast, there is multi-stability among a large number of marginally stable fixed points. In this regime, ergodicity is broken and the network exhibits non-exponential relaxational dynamics. We discuss the potential relevance of such a “glassy” phase for explaining some features of cortical activity.
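The transition from a unique stable fixed point at low synaptic gain to fluctuating activity at high gain is easy to demonstrate numerically in a standard random rate network (a generic Sompolinsky-style model used here only for illustration, not the inhibitory balanced network analyzed in the talk):

```python
import numpy as np

def settle_variance(gain, n=200, t_max=200.0, dt=0.1, seed=1):
    """Rate network dx/dt = -x + J @ tanh(x), with J_ij of std gain/sqrt(n).
    Returns the temporal variance of one unit after a transient:
    ~0 at a stable fixed point, clearly positive for chaotic activity."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) * gain / np.sqrt(n)
    x = rng.standard_normal(n)
    trace = []
    for i in range(int(t_max / dt)):
        x += (-x + J @ np.tanh(x)) * dt
        if i * dt > t_max / 2:           # discard the transient
            trace.append(x[0])
    return np.var(trace)

v_low = settle_variance(gain=0.5)   # activity decays to a fixed point
v_high = settle_variance(gain=2.0)  # activity keeps fluctuating
```

In this asymmetric model the high-gain regime is chaotic; the abstract's point is that adding positive symmetry replaces this chaos with a glassy multitude of marginally stable fixed points.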
Deciphering the Dynamics of the Unconscious Brain Under General Anesthesia
General anesthesia is a drug-induced, reversible condition comprising five behavioral states: unconsciousness, amnesia (loss of memory), antinociception (loss of pain sensation), akinesia (immobility), and hemodynamic stability with control of the stress response. Our work shows that a primary mechanism through which anesthetics create these altered states of arousal is by initiating and maintaining highly structured oscillations that impair communication among brain regions. We illustrate this effect with findings from our human studies of general anesthesia using high-density EEG and intracranial recordings. These studies have allowed us to give a detailed characterization of the neurophysiology of loss and recovery of consciousness due to propofol, and to show how these dynamics change systematically with different anesthetic classes and with age. As a consequence, we have developed a principled, neuroscience-based paradigm for using the EEG to monitor the brain states of patients receiving general anesthesia. We demonstrate that the state of general anesthesia can be rapidly reversed by activating specific brain circuits, and that it can be controlled using closed-loop feedback control systems. The success of our research has depended critically on the tight coupling of experiments, signal-processing research and mathematical modeling.
Neural heterogeneity promotes robust learning
The brain has a hugely diverse, heterogeneous structure. By contrast, many functional neural models are homogeneous. We compared the performance of spiking neural networks trained to carry out difficult tasks, with varying degrees of heterogeneity. Introducing heterogeneity in membrane and synapse time constants substantially improved task performance, and made learning more stable and robust across multiple training methods, particularly for tasks with a rich temporal structure. In addition, the distribution of time constants in the trained networks closely matches those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, but rather may serve an active and important role in allowing animals to learn in changing environments.
What is serially-dependent perception good for?
Perception can be strongly serially dependent (i.e., biased toward previously seen stimuli). Recently, serial dependencies in perception were proposed as a mechanism for perceptual stability, increasing the apparent continuity of the complex environments we experience in everyday life. For example, stable scene perception can be actively achieved by the visual system through global serial dependencies, a special kind of serial dependence between summary statistical representations. Serial dependence also occurs between emotional expressions, but it is highly selective for the same identity. Overall, these results further support the notion of serial dependence as a global, highly specialized, and purposeful mechanism. However, serial dependence can also be deleterious in unnatural or unpredictable situations, such as visual search in radiological scans, biasing current judgments toward previous ones even when accurate and unbiased perception is needed. For example, observers make consistent perceptual errors when classifying a tumor-like shape on the current trial, seeing it as more similar to the shape presented on the previous trial. In a separate localization test, observers make consistent errors when reporting the perceived position of an object on the current trial, mislocalizing it toward its position on the preceding trial. Taken together, these results show two opposite sides of serial dependence: it can be a beneficial mechanism that promotes perceptual stability, but at the same time a deleterious mechanism that impairs our percept when fine recognition is needed.
Collective Ecophysiology and Physics of Social Insects
The collective behavior of organisms creates environmental micro-niches that buffer them from environmental fluctuations (e.g., temperature, humidity, mechanical perturbations), thus coupling organismal physiology, environmental physics, and population ecology. This talk will focus on a combination of biological experiments, theory, and computation to understand how a collective of bees can integrate physical and behavioral cues to attain a non-equilibrium steady state that allows them to resist and respond to environmental fluctuations of forces and flows. We analyze how bee clusters change their shape and connectivity and gain stability by spread-eagling themselves in response to mechanical perturbations. Similarly, we study how bees in a colony respond to thermal perturbations by deploying a fanning strategy at the hive entrance, creating a forced ventilation stream that allows the bees to collectively maintain a constant hive temperature. Combining experiments with quantitative analysis and computation in both systems, we show how swarms sense environmental cues (acceleration, temperature, flow) and convert them into behavioral outputs that achieve a dynamic homeostasis.
Imposed flow in active liquid crystals
Inspired by ongoing experiments on three dimensional active gels composed of sliding microtubule bundles, we study a few idealized problems in a minimal hydrodynamic model for active liquid crystals. Our aim is to use flow to determine the value of the coefficient of activity in a continuum theory. We consider the case of apolar active particles that form a disordered phase in the absence of flow, and study how activity affects the swimming speed of a prescribed swimmer, as well as the stability of a fluid interface. We also consider flows of active matter in channels or past immersed objects.
Targeting the synapse in Alzheimer’s Disease
Alzheimer’s Disease is characterised by the accumulation of misfolded proteins, namely amyloid and tau; however, it is synapse loss that leads to the cognitive impairments associated with the disease. Many studies have focussed on single time points to determine the effects of pathology on synapses, but this does not inform on the plasticity of the synapses, that is, how they behave in vivo as the pathology progresses. Here we used in vivo two-photon microscopy to assess the temporal dynamics of axonal boutons and dendritic spines in mouse models of tauopathy (rTg4510) and amyloidopathy (J20). This revealed that pre- and post-synaptic components are differentially affected in both AD models in response to pathology. In the rTg4510 model, differences in the stability and turnover of axonal boutons and dendritic spines were revealed immediately prior to neurite degeneration. Moreover, the dystrophic neurites could be partially rescued by transgene suppression. Understanding the imbalance in the response of pre- and post-synaptic components is crucial for drug discovery studies targeting the synapse in Alzheimer’s Disease. To investigate how sub-types of synapses are affected in human tissue, the Multi-‘omics Atlas Project, a UK DRI initiative to comprehensively map the pathology of human AD, will determine synaptome changes using imaging and synaptic proteomics in human post-mortem AD tissue. The use of multiple brain regions and multiple stages of disease will enable a pseudotemporal profile of pathology and the associated synapse alterations to be determined. These data will be compared with data from preclinical models to determine the functional implications of the human findings, to better inform preclinical drug discovery studies and to develop a therapeutic strategy targeting synapses in Alzheimer’s Disease.
Multistable structures - from deployable structures to robots
Multistable structures can reversibly change between multiple stable configurations when a sufficient energetic input is provided. While the field originally focused on understanding what governs snapping, it has more recently been shown that these systems also provide a powerful platform for designing a wide range of smart structures. In this talk, I will first show that pressure-deployable origami structures characterized by two stable configurations provide opportunities for a new generation of large-scale inflatable structures that lock in place after deployment and provide a robust enclosure through their rigid faces. Then, I will demonstrate that the propagation of transition waves in a bistable one-dimensional linkage can be exploited as a robust mechanism to realize structures that can be quickly deployed. Finally, while the first two examples harness multistability to realize deployable architectures, I will demonstrate that bistable building blocks can also be exploited to design crawling and jumping robots. Unlike previously proposed robots that require complex control of multiple actuators, a simple, slow input signal suffices to make our system move, as all features required for locomotion are embedded in the architecture of the building blocks.
A robust neural integrator based on the interactions of three time scales
Neural integrators are circuits that can code analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. Classic models based on recurrent excitation require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings from animals. I show that the mechanism can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways, showing, for example, that a spatially distributed network can code location and amplitude simultaneously. I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
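The Mathieu-equation mechanism mentioned above can be probed numerically with a standard Floquet test: integrate two fundamental solutions over one period of the parametric forcing and check whether the trace of the monodromy matrix exceeds 2 in magnitude. A minimal sketch of this generic stability test (not the speaker's circuit model):

```python
import numpy as np

def mathieu_unstable(a, q, n_steps=20000):
    """Floquet stability test for x'' + (a - 2*q*cos(2*t)) * x = 0.
    Integrates two fundamental solutions over one period T = pi;
    the trivial solution is unstable iff the trace of the monodromy
    matrix exceeds 2 in magnitude."""
    dt = np.pi / n_steps

    def accel(t, x):
        return -(a - 2.0 * q * np.cos(2.0 * t)) * x

    def integrate(x, v):
        for i in range(n_steps):
            t = i * dt
            # midpoint (RK2) step for the second-order ODE
            k1x, k1v = v, accel(t, x)
            k2x = v + 0.5 * dt * k1v
            k2v = accel(t + 0.5 * dt, x + 0.5 * dt * k1x)
            x += dt * k2x
            v += dt * k2v
        return x, v

    x1, v1 = integrate(1.0, 0.0)       # solution with x(0)=1, x'(0)=0
    x2, v2 = integrate(0.0, 1.0)       # solution with x(0)=0, x'(0)=1
    return abs(x1 + v2) > 2.0          # trace of the monodromy matrix

# a = 1 lies inside the first instability tongue for modest q,
# while a = 2 sits between tongues and remains stable.
```

The instability "tongues" emanating from a = 1, 4, 9, ... in the (a, q) plane are the regions the talk exploits: parameters inside a tongue give exponentially growing (attracting-state-forming) modes, and parameters outside give bounded oscillation.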
A balancing act: goal-oriented control of stability reflexes by visual feedback
During an animal’s interaction with its environment, activity within central neural circuits is exquisitely orchestrated to structure goal-oriented movement. During walking, for example, the head, body and limbs are coordinated in distinctive ways that are guided by the task at hand, as well as by posture and balance requirements. Hence, the overall performance of goal-oriented walking depends on the interplay between task-specific motor plans and stability reflexes. Copies of motor plans, typically described by the term efference copy, modulate stability reflexes in a predictive manner. However, the highly uncertain nature of natural environments means that efference copy alone is insufficient for movement control; additional mechanisms must exist to regulate stability reflexes and coordinate motor programs flexibly under unpredictable conditions. In this talk, I will discuss our recent work examining how self-generated visual signals orchestrate the interplay between task-specific motor plans and stability reflexes during a self-paced, goal-oriented walking behavior.
Neurotoxicity is a major health problem in Africa: focus on Parkinson's / Parkinsonism
Parkinson's disease (PD) is the second most prevalent neurodegenerative disease in the world after Alzheimer's. It is due to the progressive and irreversible loss of dopaminergic neurons of the substantia nigra pars compacta. Alpha-synuclein deposits and the appearance of Lewy bodies are systematically associated with it. PD is characterized by four cardinal motor symptoms: bradykinesia/akinesia, rigidity, postural instability and resting tremor. These symptoms appear once 80% of the dopaminergic endings in the striatum have disappeared. According to Braak's theory, non-motor symptoms appear much earlier; this is particularly the case for anxiety, depression, anhedonia, and sleep disturbances. In 90 to 95% of cases, the causes of the disease remain unknown, but polluting toxic molecules are increasingly implicated. In Africa, neurodegenerative diseases of the Parkinson's type are increasingly present, and a parallel seems to exist between the increase in cases and the presence of toxic and polluting products such as metals. My web conference will focus on this aspect, i.e., presenting experimental arguments that reinforce the hypothesis that these pollutants contribute to the incidence of Parkinson's disease and/or Parkinsonism. Among the lines of research that we have developed in my laboratory in Rabat, Morocco, I have chosen this one knowing that many of our PhD students and IBRO alumni are working on, or trying to develop, scientific research on neurotoxicity in correlation with brain pathologies.
Theory of gating in recurrent neural networks
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data and in neuroscience for understanding the emergent properties of networks of real neurons. Prior theoretical work on the properties of RNNs has focused on models with additive interactions. However, real neurons can have gating, i.e., multiplicative interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors – important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to provide a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, in which the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in parameter space are indicated in phase diagrams, providing a map for principled parameter choices for ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for using powerful field-theoretic techniques to study training and gradients in large RNNs.
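The maximum Lyapunov exponent that the DMFT characterizes analytically can be estimated numerically for a toy gated network by tracking the divergence of two nearby trajectories with periodic renormalization. This sketch uses frozen random gates purely for illustration; the gated RNN actually studied in the talk is richer:

```python
import numpy as np

def max_lyapunov(gain, n=100, t_max=200.0, dt=0.05, seed=0):
    """Estimate the maximum Lyapunov exponent of a toy gated rate network
    dx/dt = -x + z * (J @ tanh(x)), with frozen random gates z in (0, 1),
    by tracking the divergence of two nearby trajectories, renormalizing
    their separation after every step."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) * gain / np.sqrt(n)
    z = 1.0 / (1.0 + np.exp(-rng.standard_normal(n)))  # static gates
    x = rng.standard_normal(n)
    delta = rng.standard_normal(n)
    d0 = 1e-8
    delta *= d0 / np.linalg.norm(delta)
    y = x + delta
    log_growth = 0.0
    for _ in range(int(t_max / dt)):
        x += (-x + z * (J @ np.tanh(x))) * dt
        y += (-y + z * (J @ np.tanh(y))) * dt
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (y - x) * (d0 / d)     # renormalize the separation
    return log_growth / t_max

lam_chaotic = max_lyapunov(gain=4.0)   # strong coupling: exponent > 0
lam_stable = max_lyapunov(gain=1.0)    # weak coupling: exponent < 0
```

Because the gates sit in (0, 1), they shrink the effective coupling radius; with these parameters the stronger coupling yields a positive exponent (chaos) and the weaker one a negative exponent (a stable fixed point), the two phases whose boundary the DMFT locates analytically.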
Bimodal multistability during perceptual detection in the ventral premotor cortex
Bernstein Conference 2024
Bistability at the cellular level promotes robust and tunable criticality at the circuit level
Bernstein Conference 2024
The space of high-dimensional Dalean, amplifying networks and the trade-off between stability and amplification
Bernstein Conference 2024
Paradoxical effect of exercise on the long-term stability of hippocampal place code
COSYNE 2022
Revisiting the flexibility-stability dilemma in recurrent networks using a multiplicative plasticity rule
COSYNE 2022
Differential Stability of Task Variable Representations in Retrosplenial Cortex
COSYNE 2023
Inhibitory control of plasticity promotes stability and competitive learning in recurrent networks
COSYNE 2023
Maturing neurons and dual structural plasticity enable flexibility and stability of olfactory memory
COSYNE 2023
Compartment-specific stability in CA3 pyramidal neuron dendrites revealed by automatic segmentation
COSYNE 2025
Invariant synaptic density across species links functional stability and wiring optimization principles
COSYNE 2025
Memory as a byproduct of stability through hysteresis: Distilling meta-learned plasticity rules
COSYNE 2025
Network Gain Regulates Stability and Flexibility in a Ring Attractor Network
COSYNE 2025
Stability of spatial maps in CA3 axons under affective contextual changes
COSYNE 2025
Amplification abounds, but not without a toll: The trade-off between stability and amplification in Dalean networks
FENS Forum 2024
Assessment of students' quality of life and attentional stability in emergency situations
FENS Forum 2024
Brain-wide microstrokes affect the stability of memory circuits in the hippocampus
FENS Forum 2024
Breakdown of bistability in cortical synchronization dynamics characterizes early stages of Alzheimer’s disease
FENS Forum 2024
Characterising the role of Gadd45α in mRNA stability in the context of synaptic plasticity
FENS Forum 2024
Comparative proteomic profiling to identify mechanisms governing nervous system stability in neurodegenerative disease
FENS Forum 2024
Functional stability and recurrent STDP in rhythmogenesis
FENS Forum 2024
Instability of orientation coding in mouse primary visual cortex during a visual oddball task
FENS Forum 2024
Stability of task representations in mouse mPFC across different behaviors
FENS Forum 2024
Spatial and temporal stability of neuronal representations in a rat model of OCD
FENS Forum 2024
Stability of hypothalamic neural population activity during sleep-wake states
FENS Forum 2024
Stability of social bonds over time: Measured in mice tested under semi-naturalistic conditions
FENS Forum 2024
VEGFD signaling balances stability and activity-dependent structural plasticity of dendrites
FENS Forum 2024
Vigilance-state instability precedes state-specific LFP changes in acute sepsis
FENS Forum 2024
Whole brain mapping of engram distribution and stability
FENS Forum 2024