Cluster
Neural mechanisms of optimal performance
When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or at high arousal (when anxious), but optimal at intermediate arousal. This inverted-U relationship between arousal and performance, the celebrated Yerkes-Dodson law, describes what is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, as measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the functional clusters observed in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance was optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.
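The multi-attractor to single-attractor transition can be illustrated with a toy two-cluster rate model (a deliberately simplified caricature, not the spiking network from the talk): as a common background drive standing in for arousal increases, the number of stable states collapses to one.

```python
import numpy as np

def count_attractors(I, w_self=2.0, w_cross=1.0, theta=1.0, beta=0.1,
                     n_init=50, dt=0.1, steps=3000, seed=0):
    """Count distinct stable states of two mutually inhibiting clusters
    under a common background drive I (a stand-in for arousal)."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - theta) / beta))  # sigmoidal gain
    rng = np.random.default_rng(seed)
    found = set()
    for _ in range(n_init):
        r = rng.uniform(0.0, 1.0, 2)              # random initial rates
        for _ in range(steps):
            drive = w_self * r - w_cross * r[::-1] + I
            r = r + dt * (-r + f(drive))
        found.add(tuple(np.round(r, 1)))          # bin the converged state
    return len(found)

n_low = count_attractors(I=0.5)    # low drive: multiple attractors
n_high = count_attractors(I=2.0)   # high drive: a single attractor
```

In the talk's model, performance peaks near the boundary between these two regimes.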
Learning and Memory
This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.
The Role of Cognitive Appraisal in the Relationship between Personality and Emotional Reactivity
Emotion is defined as a rapid psychological process involving experiential, expressive, and physiological responses. These emerge following an appraisal process, a set of cognitive evaluations of the environment assessing its relevance, implications, coping potential, and normative significance. It has been suggested that changes in appraisal processes lead to changes in the nature of the resulting emotion. At the same time, personality has been shown to act as a predisposition to feel certain emotions more frequently, yet the personality-appraisal-emotional response chain is rarely investigated in full. The present project therefore sought to determine the extent to which personality traits influence specific appraisals, which in turn shape the subsequent emotional reactions, via a systematic analysis of the links between personality traits from different current models, specific appraisals, and emotional response patterns at the experiential, expressive, and physiological levels. Major results include the coherent clustering of emotion components; the centrality, in context, of the pleasantness, coping potential, and consequences appraisals; and the differentiated mediating role of cognitive appraisal in the relation between personality and the intensity, duration, and autonomic arousal of an emotional state (e.g., Extraversion-pleasantness-experience and Neuroticism-powerlessness-arousal). Elucidating these relationships deepens our understanding of individual differences in emotional reactivity and identifies routes of action on appraisal processes to mitigate upcoming adverse emotional responses, with a broader societal impact on clinical and non-clinical populations.
Maintaining Plasticity in Neural Networks
Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
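One of the phenomena listed above, implicit under-parameterization and rank collapse, is commonly diagnosed by tracking an effective rank of the network's feature matrix over training. A minimal sketch of such a diagnostic follows; the exact definition varies across papers, so this is one illustrative convention rather than a canonical one.

```python
import numpy as np

def effective_rank(features, thresh=0.99):
    """Smallest number of singular values capturing `thresh` of the
    (unsquared) singular-value mass -- one diagnostic of representational
    collapse used in the plasticity-loss literature."""
    s = np.linalg.svd(features, compute_uv=False)
    cum = np.cumsum(s) / s.sum()
    return int(np.searchsorted(cum, thresh) + 1)

rng = np.random.default_rng(0)
# Healthy features span many directions; collapsed features live in a
# 3-dimensional subspace despite having 64 output units.
healthy = rng.normal(size=(256, 64))
collapsed = rng.normal(size=(256, 3)) @ rng.normal(size=(3, 64))
```

A falling effective rank during training is one warning sign that the network is losing the capacity to fit new targets.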
Are integrative, multidisciplinary, and pragmatic models possible? The #PsychMapping experience
This presentation delves into the necessity for simplified models in the field of psychological sciences to cater to a diverse audience of practitioners. We introduce the #PsychMapping model, evaluate its merits and limitations, and discuss its place in contemporary scientific culture. The #PsychMapping model is the product of an extensive literature review, initially within the realm of sport and exercise psychology and subsequently encompassing a broader spectrum of psychological sciences. This model synthesizes the progress made in psychological sciences by categorizing variables into a framework that distinguishes between traits (e.g., body structure and personality) and states (e.g., heart rate and emotions). Furthermore, it delineates internal traits and states from the externalized self, which encompasses behaviour and performance. All three components—traits, states, and the externalized self—are in a continuous interplay with external physical, social, and circumstantial factors. Two core processes elucidate the interactions among these four primary clusters: external perception, encompassing the mechanism through which external stimuli transition into internal events, and self-regulation, which empowers individuals to become autonomous agents capable of exerting control over themselves and their actions. While the model inherently oversimplifies intricate processes, the central question remains: does its pragmatic utility outweigh its limitations, and can it serve as a valuable tool for comprehending human behaviour?
Current and future trends in neuroimaging
With the advent of several different fMRI analysis tools and packages outside of the established ones (i.e., SPM, AFNI, and FSL), today's researcher may wonder what the best practices are for fMRI analysis. This talk will discuss some of the recent trends in neuroimaging, including design optimization and power analysis, standardized analysis pipelines such as fMRIPrep, and an overview of current recommendations for how to present neuroimaging results. Along the way we will discuss the balance between Type I and Type II errors with different correction mechanisms (e.g., Threshold-Free Cluster Enhancement and Equitable Thresholding and Clustering), as well as considerations for working with large open-access databases.
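As a concrete illustration of one correction mechanism mentioned above, here is a minimal 1-D implementation of Threshold-Free Cluster Enhancement, which integrates cluster extent and height over all thresholds instead of fixing a single cluster-forming threshold (the exponents E=0.5, H=2 are the commonly used defaults; real implementations operate on 3-D images with proper connectivity):

```python
import numpy as np

def tfce_1d(stat, dh=0.1, E=0.5, H=2.0):
    """Threshold-Free Cluster Enhancement for a 1-D statistic map:
    accumulate extent**E * height**H * dh across thresholds."""
    scores = np.zeros_like(stat, dtype=float)
    for h in np.arange(dh, stat.max() + dh, dh):
        above = stat >= h
        i, n = 0, len(stat)
        while i < n:                     # label contiguous supra-threshold runs
            if above[i]:
                j = i
                while j < n and above[j]:
                    j += 1
                scores[i:j] += (j - i)**E * h**H * dh
                i = j
            else:
                i += 1
    return scores

# A broad 10-voxel cluster vs. an isolated voxel of equal height:
stat = np.zeros(30)
stat[5:15] = 3.0
stat[25] = 3.0
tfce = tfce_1d(stat)
```

The voxel embedded in the broad cluster receives a much higher enhanced score than the isolated voxel, which is exactly the spatial-support reward TFCE is designed to provide.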
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics) and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold while still showing sub-threshold results, with opacity proportional to the strength of the effect. They use this approach to illustrate how there may in fact be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that the majority of effects are in the same location and in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
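The highlighting idea reduces to a simple per-voxel mapping: sub-threshold voxels are not masked out but drawn with opacity proportional to effect strength, while supra-threshold voxels are additionally marked (e.g., outlined). The values below are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical z-statistic map (illustrative values):
z = np.array([0.5, 1.2, 2.1, 3.4, 4.0, 1.8, -2.5, -3.6])
z_thresh = 3.1   # the significance threshold is marked, not used to hide

# Opacity proportional to effect strength, saturating at the threshold:
alpha = np.clip(np.abs(z) / z_thresh, 0.0, 1.0)
# Supra-threshold voxels would additionally get an outline in the figure:
significant = np.abs(z) >= z_thresh
```

In a plotting library this `alpha` array would feed directly into the colormap's transparency channel, so near-threshold effects remain visible instead of vanishing at an arbitrary cutoff.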
Diverse applications of artificial intelligence and mathematical approaches in ophthalmology
Ophthalmology is ideally placed to benefit from recent advances in artificial intelligence. It is a highly image-based specialty and provides unique access to the microvascular circulation and the central nervous system. This talk will demonstrate diverse applications of machine learning and deep learning techniques in ophthalmology, including in age-related macular degeneration (AMD), the leading cause of blindness in industrialized countries, and cataract, the leading cause of blindness worldwide. This will include deep learning approaches to automated diagnosis, quantitative severity classification, and prognostic prediction of disease progression, both from images alone and accompanied by demographic and genetic information. The approaches discussed will include deep feature extraction, label transfer, and multi-modal, multi-task training. Cluster analysis, an unsupervised machine learning approach to data classification, will be demonstrated by its application to geographic atrophy in AMD, including exploration of genotype-phenotype relationships. Finally, mediation analysis will be discussed, with the aim of dissecting complex relationships between AMD disease features, genotype, and progression.
Distinct contributions of different anterior frontal regions to rule-guided decision-making in primates: complementary evidence from lesions, electrophysiology, and neurostimulation
Different prefrontal areas contribute in distinct ways to rule-guided behaviour in the context of a Wisconsin Card Sorting Test (WCST) analog for macaques. For example, causal evidence from circumscribed lesions in NHPs reveals that dorsolateral prefrontal cortex (dlPFC) is necessary to maintain a reinforced abstract rule in working memory, orbitofrontal cortex (OFC) is needed to rapidly update representations of rule value, and the anterior cingulate cortex (ACC) plays a key role in cognitive control and in integrating information about correct and incorrect outcomes over recent trials. Moreover, recent lesion studies of frontopolar cortex (FPC) suggest it contributes to representing the relative value of unchosen alternatives, including rules. Yet we do not understand how these functional specializations relate to intrinsic neuronal activities, nor the extent to which these activities differ between prefrontal regions. After reviewing the aforementioned causal evidence, I will present our new data from studies using multi-area multi-electrode recording techniques in NHPs to simultaneously record from four different prefrontal regions implicated in rule-guided behaviour. Multi-electrode micro-arrays (‘Utah arrays’) were chronically implanted in dlPFC, vlPFC, OFC, and FPC of two macaques, allowing us to simultaneously record single and multiunit activity, and local field potential (LFP), from all regions while the monkeys performed the WCST analog. Rule-related neuronal activity was widespread in all areas recorded, but differed in degree and in timing between areas. I will also present preliminary results from decoding analyses applied to rule-related neuronal activities, both from individual clusters and from population measures. These results confirm and help quantify dynamic task-related activities that differ between prefrontal regions. We also found task-related modulation of LFPs within beta and gamma bands in FPC.
By combining these correlational recording methods with trial-specific causal interventions (electrical microstimulation of FPC), we could significantly enhance or impair the animals' performance in distinct task epochs in functionally relevant ways, further consistent with an emerging picture of regional functional specialization within a distributed framework of interacting and interconnected cortical regions.
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) With vision-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) With language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) With paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.
Shallow networks run deep: How peripheral preprocessing facilitates odor classification
Drosophila olfactory sensory hairs ("sensilla") typically house two olfactory receptor neurons (ORNs) which can laterally inhibit each other via electrical ("ephaptic") coupling. ORN pairing is highly stereotyped and genetically determined. Thus, olfactory signals arriving in the Antennal Lobe (AL) have been pre-processed by a fixed and shallow network at the periphery. To uncover the functional significance of this organization, we developed a nonlinear phenomenological model of asymmetrically coupled ORNs responding to odor mixture stimuli. We derived an analytical solution to the ORNs’ dynamics, which shows that the peripheral network can extract the valence of specific odor mixtures via transient amplification. Our model predicts that for efficient read-out of the amplified valence signal there must exist specific patterns of downstream connectivity that reflect the organization at the periphery. Analysis of AL→Lateral Horn (LH) fly connectomic data reveals evidence directly supporting this prediction. We further studied the effect of ephaptic coupling on olfactory processing in the AL→Mushroom Body (MB) pathway. We show that stereotyped ephaptic interactions between ORNs lead to a clustered odor representation of glomerular responses. Such clustering in the AL is an essential assumption of theoretical studies on odor recognition in the MB. Together our work shows that preprocessing of olfactory stimuli by a fixed and shallow network increases sensitivity to specific odor mixtures, and aids in the learning of novel olfactory stimuli. Work led by Palka Puri, in collaboration with Chih-Ying Su and Shiuan-Tze Wu.
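The transient-amplification idea can be illustrated with a linear toy model (my own caricature, not the phenomenological model from the talk): if ORN B inhibits ORN A but not vice versa, the coupling matrix is non-normal, and a brief deflection of B is transiently amplified before decaying, even though both eigenvalues are stable at -1.

```python
import numpy as np

k = 3.0                         # strength of the asymmetric inhibition
A = np.array([[-1.0, -k],       # ORN A: inhibited by B
              [ 0.0, -1.0]])    # ORN B: no inhibition from A

dt, T = 1e-3, 6.0
x = np.array([0.0, 1.0])        # a brief input deflects ORN B only
norms = []
for _ in range(int(T / dt)):    # forward-Euler integration of dx/dt = A x
    x = x + dt * (A @ x)
    norms.append(np.linalg.norm(x))
peak, final = max(norms), norms[-1]
```

The response norm overshoots its initial value (here by roughly 17%) before decaying to zero: a transient, not a steady state, carries the amplified signal, which is why a fast downstream readout matched to the peripheral wiring would be needed to exploit it.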
A multi-level account of hippocampal function in concept learning from behavior to neurons
A complete neuroscience requires multi-level theories that address phenomena ranging from higher-level cognitive behaviors to activities within a cell. Unfortunately, we do not have cognitive models of behavior whose components can be decomposed into the underlying neural dynamics, leaving an explanatory gap. Here, we decompose SUSTAIN, a clustering model of concept learning, into neuron-like units (SUSTAIN-d; decomposed). Instead of abstract constructs (clusters), SUSTAIN-d has a pool of neuron-like units. With millions of units, a key challenge is how to bridge from abstract constructs such as clusters to neurons, whilst retaining high-level behavior. How does the brain coordinate neural activity during learning? Inspired by algorithms that capture flocking behavior in birds, we introduce a neural flocking learning rule that coordinates units so that they collectively form higher-level mental constructs ("virtual clusters") and neural representations (concept, place, and grid cell-like assemblies), paralleling recurrent hippocampal activity. The decomposed model shows how brain-scale neural populations coordinate to form assemblies encoding concept and spatial representations, and why many neurons are required for robust performance. Our account provides a multi-level explanation for how cognition and symbol-like representations are supported by coordinated neural assemblies formed through learning.
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
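The simplest dynamical motif discussed above, a memory attractor, can be reduced to a single self-exciting unit: with feedback gain above one, dx/dt = -x + w·tanh(x) is bistable, and the sign of a transient input is stored indefinitely. A minimal sketch (a pedagogical reduction, not the trained multitask RNNs from the talk):

```python
import numpy as np

def settle(x0, w=2.0, dt=0.01, steps=2000):
    """Integrate the one-unit positive-feedback motif dx/dt = -x + w*tanh(x).
    For w > 1 the origin is unstable and two symmetric attractors appear."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * np.tanh(x))
    return x

up = settle(0.1)     # a small positive nudge is amplified and stored
down = settle(-0.1)  # a small negative nudge converges to the mirror state
```

Lesioning or reconfiguring such a motif affects only the computation it implements, which is the modularity property the talk exploits for transfer learning.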
How do protein-RNA condensates form and contribute to disease?
In recent years, it has become clear that intrinsically disordered regions (IDRs) of RNA-binding proteins (RBPs), and the structure of RNAs, often contribute to the condensation of RNPs. To understand the transcriptomic features of such RNP condensates, we’ve used an improved individual nucleotide resolution CLIP protocol (iiCLIP), which produces highly sensitive and specific data, and thus enables quantitative comparisons of interactions across conditions (Lee et al., 2021). This showed how the IDR-dependent condensation properties of TDP-43 specify its RNA binding and regulatory repertoire (Hallegger et al., 2021). Moreover, we developed software for discovery and visualisation of RNA binding motifs that uncovered common binding patterns of RBPs on long multivalent RNA regions composed of dispersed motif clusters (Kuret et al., 2021). Finally, we used hybrid iCLIP (hiCLIP) to characterise the RNA structures mediating the assembly of Staufen RNPs across mammalian brain development, which demonstrated the roles of long-range RNA duplexes in the compaction of long 3’UTRs. I will present how the combined analysis of the characteristics of IDRs in RBPs, multivalent RNA regions and RNA structures is required to understand the formation and functions of RNP condensates, and how they change in diseases.
A transcriptomic axis predicts state modulation of cortical interneurons
Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses: stimuli suppressed cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states had less axon in layer 1, narrower spikes, lower input resistance, and weaker adaptation, as determined in vitro, and expressed more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.
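The core analysis, projecting each cell onto the first transcriptomic principal component and using that score to predict its state modulation, can be sketched on synthetic data (all numbers below are illustrative; the real study used 72 measured genes and imaged modulation indices):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_genes = 200, 72
latent = rng.normal(size=n_cells)          # hidden transcriptomic axis
loadings = rng.normal(size=n_genes)        # how each gene loads on the axis
expr = np.outer(latent, loadings) + 0.5 * rng.normal(size=(n_cells, n_genes))
state_mod = latent + 0.3 * rng.normal(size=n_cells)  # per-cell state modulation

# PC1 score via SVD of the centered expression matrix:
X = expr - expr.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = X @ Vt[0]

r = np.corrcoef(pc1, state_mod)[0, 1]      # PC1 predicts state modulation
```

Because the sign of a principal component is arbitrary, only the magnitude of the correlation is meaningful.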
Learning binds novel inputs into functional synaptic clusters via spinogenesis
Learning is known to induce the formation of new dendritic spines, but despite decades of effort, the functional properties of new spines in vivo remain unknown. Here, using a combination of longitudinal in vivo 2-photon imaging of the glutamate reporter, iGluSnFR, and correlated electron microscopy (CLEM) of dendritic spines on the apical dendrites of L2/3 excitatory neurons in the motor cortex during motor learning, we describe a framework of new spines' formation, survival, and resulting function. Specifically, our data indicate that the potentiation of a subset of clustered, pre-existing spines showing task-related activity in early sessions of learning creates a micro-environment of plasticity within dendrites, wherein multiple filopodia sample the nearby neuropil, form connections with pre-existing boutons connected to allodendritic spines, and are then selected for survival based on co-activity with nearby task-related spines. Thus, the formation and survival of new spines is determined by the functional micro-environment of dendrites. After formation, new spines show preferential co-activation with nearby task-related spines. This synchronous activity is more specific to movements than activation of the individual spines in isolation, and further, is coincident with movements that are more similar to the learned pattern. Thus, new spines functionally engage with their parent clusters to signal the learned movement. Finally, by reconstructing the axons associated with new spines, we found that they synapse with axons previously unrepresented in these dendritic domains, suggesting that the strong local co-activity structure exhibited by new spines is likely not due to axon sharing. Thus, learning involves the binding of new information streams into functional synaptic clusters to subserve the learned behavior.
Towards a Theory of Microbial Ecosystems
A major unresolved question in microbiome research is whether the complex ecological patterns observed in surveys of natural communities can be explained and predicted by fundamental, quantitative principles. Bridging theory and experiment is hampered by the multiplicity of ecological processes that simultaneously affect community assembly and by a lack of theoretical tools for modeling diverse ecosystems. Here, I will present a simple ecological model of microbial communities that reproduces large-scale ecological patterns observed across multiple natural and experimental settings, including compositional gradients, clustering by environment, diversity/harshness correlations, and nestedness. Surprisingly, our model works despite assuming “random metabolisms” and “random consumer preferences”. This raises the natural question of why random ecosystems can describe real-world experimental data. In the second, more theoretical part of the talk, I will answer this question by showing that when a community becomes diverse enough, it will always self-organize into a stable state whose properties are well captured by a “typical” random ecosystem.
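A minimal MacArthur-style consumer-resource simulation with random uptake rates (a generic sketch in the spirit of the talk, not the speaker's exact model) already shows this self-organization: starting from many species, the community settles into a state where no more species survive than there are resources.

```python
import numpy as np

rng = np.random.default_rng(1)
S, M = 10, 5                        # species, resources
c = rng.uniform(0.0, 1.0, (S, M))   # random consumer preference matrix
m = np.ones(S)                      # maintenance costs
K = np.ones(M)                      # resource carrying capacities

N = np.full(S, 0.1)                 # initial species abundances
R = np.full(M, 1.0)                 # initial resource levels
dt = 0.01
for _ in range(50_000):
    dN = N * (c @ R - m)                      # growth minus maintenance
    dR = R * (K - R) - R * (c.T @ N)          # logistic supply minus consumption
    N = np.clip(N + dt * dN, 0.0, None)
    R = np.clip(R + dt * dR, 0.0, None)

survivors = int((N > 1e-3).sum())
```

Competitive exclusion caps coexistence at the number of resources, so most of the initial species are driven extinct even though every parameter was drawn at random.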
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
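The mechanics of such a rule can be illustrated with a toy simulation (not the authors' fitted model; all parameters below are made up for illustration): calcium decays exponentially, jumps on pre- and postsynaptic spikes, receives a nonlinear boost when the two coincide, and drives depression or potentiation when it crosses two thresholds, in the spirit of calcium-threshold plasticity models.

```python
import numpy as np

def simulate_calcium_plasticity(pre_times, post_times, T=1000.0, dt=0.1,
                                c_pre=0.5, c_post=0.5, c_nl=1.0, tau_ca=20.0,
                                theta_d=1.0, theta_p=1.6,
                                gamma_d=0.01, gamma_p=0.05, w0=0.5):
    """Toy calcium-based rule (times in ms). Calcium decays with tau_ca,
    jumps on pre/post spikes, and gets an extra nonlinear boost when the
    two coincide; LTD acts above theta_d, LTP above theta_p."""
    n = int(T / dt)
    pre = np.zeros(n); post = np.zeros(n)
    pre[(np.asarray(pre_times, dtype=float) / dt).astype(int)] = 1
    post[(np.asarray(post_times, dtype=float) / dt).astype(int)] = 1
    ca, w = 0.0, w0
    for i in range(n):
        ca -= dt * ca / tau_ca                  # exponential calcium decay
        ca += c_pre * pre[i] + c_post * post[i]
        if pre[i] and post[i]:                  # nonlinear coincidence term
            ca += c_nl
        if ca > theta_p:                        # potentiation at high calcium
            w += dt * gamma_p * (1.0 - w)
        elif ca > theta_d:                      # depression at moderate calcium
            w -= dt * gamma_d * w
    return w
```

With these illustrative parameters, only coincident pre/post activity pushes calcium past the potentiation threshold, so isolated presynaptic firing leaves the weight untouched.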
CaImAn: large-scale batch and online analysis of calcium imaging data
Advances in fluorescence microscopy enable monitoring of larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods for problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does so while requiring minimal user intervention, and scales well on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis of streaming data. To benchmark the performance of CaImAn, we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting the locations of active neurons.
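The demixing step at the heart of such pipelines can be sketched, independently of CaImAn's actual API, as a nonnegative matrix factorization of the movie into spatial footprints and temporal traces (a minimal sketch on synthetic data; CaImAn's constrained NMF adds sparsity constraints, deconvolution, and spatial localization on top of this idea).

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic movie: two spatially compact "neurons" with independent activity.
T, H, W = 200, 20, 20
footprints = np.zeros((2, H, W))
footprints[0, 4:8, 4:8] = 1.0
footprints[1, 12:16, 12:16] = 1.0
traces = rng.random((T, 2)) * (rng.random((T, 2)) > 0.8)  # sparse transients
movie = traces @ footprints.reshape(2, -1)                 # (T, H*W) matrix
movie += 0.01 * rng.random(movie.shape)                    # noise floor

# Factorize: movie ~ C @ A, with C the traces (T,2) and A the footprints (2,H*W)
model = NMF(n_components=2, init="nndsvda", max_iter=500)
C = model.fit_transform(movie)
A = model.components_
```

Each recovered row of `A` should match one ground-truth footprint up to scale, which is the sense in which the factorization "identifies" neurons.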
NMC4 Short Talk: The complete connectome of an insect brain
Brains must integrate complex sensory information and compare it with past events to generate appropriate behavioral responses. The neural circuit basis of these computations is unclear, and the underlying structure has been unknown. Here, we mapped the comprehensive synaptic wiring diagram of the fruit fly larva brain, which contains 3,013 neurons and 544K synaptic sites. It is the most complete insect connectome to date: 1) both brain hemispheres are reconstructed, allowing investigation of neural pathways that include contralateral axons, which we found in 37% of brain neurons; 2) all sensory neurons and descending neurons are reconstructed, allowing one to follow signals in an uninterrupted chain from the sensory periphery, through the brain, to motor neurons in the nerve cord. We developed novel computational tools that allowed us to cluster the brain and investigate how information flows through it. We discovered that feedforward pathways from sensory to descending neurons are multilayered and highly multimodal. Robust feedback was observed at almost all levels of the brain, including among descending neurons. We investigated how the brain hemispheres communicate with each other and with the nerve cord, leading to the identification of novel circuit motifs. This work provides the complete blueprint of a brain and a strong foundation for studying the structure-function relationship of neural circuits.
NMC4 Short Talk: Untangling Contributions of Distinct Features of Images to Object Processing in Inferotemporal Cortex
How do humans perceive everyday objects with varied features and form these seemingly intuitive and effortless categorical representations? Prior literature focusing on the role of the inferotemporal (IT) region has revealed object-category clustering consistent with a predefined semantic structure (superordinate, ordinate, subordinate). However, it has been debated whether neural signals in IT reflect such a categorical hierarchy [Wen et al., 2018; Bracci et al., 2017]: visual attributes of images that correlate with semantic and category dimensions may have confounded these prior results. Our study aimed to address this debate by building and comparing models using the DNN AlexNet to explain the variance in the representational dissimilarity matrix (RDM) of neural signals in the IT region. We found that mid- and high-level perceptual attributes of the DNN model contribute the most to neural RDMs in IT. Semantic categories, as in the predefined structure, were moderately correlated with mid-to-high DNN layers (r = 0.24-0.36). Variance partitioning analysis also showed that IT neural representations were mostly explained by DNN layers, while semantic categorical RDMs added little information. In light of these results, we propose that future work should focus on the specific role IT plays in extracting and coding the visual features from which categorical conceptualizations emerge.
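The comparison logic here (representational similarity analysis) can be sketched as follows, with synthetic stand-ins for the DNN-layer features and IT patterns; all variable names and parameters are illustrative, not the study's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed RDM: one correlation distance (1 - Pearson r) per stimulus pair."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(1)
# 30 stimuli from 3 hypothetical categories, in a 100-d "DNN layer" feature space
means = rng.standard_normal((3, 100))
labels = np.repeat(np.arange(3), 10)
dnn = means[labels] + 0.3 * rng.standard_normal((30, 100))
# "IT" patterns: a noisy linear readout of the DNN features
it = dnn @ rng.standard_normal((100, 50)) + 0.1 * rng.standard_normal((30, 50))

# How well does the model RDM predict the neural RDM?
rho, _ = spearmanr(rdm(dnn), rdm(it))
```

Because the synthetic "IT" patterns are driven by the "DNN" features, the two RDMs share rank structure and `rho` comes out high; in the study, such correlations (and their unique variance) are compared across DNN layers and semantic category models.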
NMC4 Short Talk: Synchronization in the Connectome: Metastable oscillatory modes emerge from interactions in the brain spacetime network
The brain exhibits a rich repertoire of oscillatory patterns organized in space, time, and frequency. However, despite ever more detailed characterizations of spectrally resolved network patterns, the principles governing oscillatory activity at the system level remain unclear. Here, we propose that the transient emergence of spatially organized brain rhythms is a signature of weakly stable synchronization between subsets of brain areas, naturally occurring at reduced collective frequencies due to the presence of time delays. To test this mechanism, we built a reduced network model representing interactions between local neuronal populations (with damped oscillatory responses at 40 Hz) coupled through the human neuroanatomical network. Following theoretical predictions, weakly stable cluster synchronization drives a rich repertoire of short-lived (metastable) oscillatory modes, whose frequency depends inversely on the number of units, the strength of coupling, and the propagation times. Despite the significant degree of reduction, we find a range of model parameters where the frequencies of collective oscillations fall in the range of typical brain rhythms, yielding an optimal fit to the power spectra of magnetoencephalographic signals from 89 healthy individuals. These findings provide a mechanistic scenario for the spontaneous emergence of frequency-specific long-range phase coupling observed in magneto- and electroencephalographic signals as signatures of resonant modes emerging in the space-time structure of the connectome, reinforcing the importance of incorporating realistic time delays in network models of oscillatory brain activity.
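The core effect, a collective frequency below the 40 Hz natural frequency caused by delays, can be caricatured with a single phase oscillator under delayed self-coupling, which is what the in-phase synchronized state of a delay-coupled network of identical units reduces to. The coupling strength `K` and delay `tau` below are hypothetical, not fitted values from the talk.

```python
import numpy as np

def collective_frequency(f0=40.0, K=50.0, tau=0.005, dt=1e-4, T=2.0):
    """Phase oscillator with delayed self-coupling:
        dtheta/dt = 2*pi*f0 + K*sin(theta(t - tau) - theta(t)).
    Returns the oscillation frequency (Hz) from the late-time phase slope."""
    n = int(T / dt)
    d = int(tau / dt)
    omega0 = 2 * np.pi * f0
    theta = np.zeros(n)
    theta[:d + 1] = omega0 * dt * np.arange(d + 1)  # free-running history
    for i in range(d, n - 1):
        theta[i + 1] = theta[i] + dt * (omega0 + K * np.sin(theta[i - d] - theta[i]))
    # average phase velocity over the second half, converted to Hz
    return (theta[-1] - theta[n // 2]) / ((n - 1 - n // 2) * dt) / (2 * np.pi)
```

At these parameters the fixed-point condition Ω = ω0 − K·sin(Ωτ) gives a collective frequency near 33 Hz, below the 40 Hz natural frequency, which the simulation reproduces.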
Reflex Regulation of Innate Immunity
Reflex circuits in the nervous system integrate changes in the environment with physiology. Compact clusters of neuronal cell bodies in the brain, termed nuclei, are essential for receiving sensory input and for transmitting motor outputs to the body. These nuclei are critical relay stations that process incoming information and convert these signals into outgoing action potentials which regulate immune system functions. Thus, reflex neural circuits maintain the parameters of immunological physiology within a narrow range optimal for health. Advances in neuroscience and immunology using optogenetics, pharmacogenetics, and functional mapping offer a new understanding of the neural circuitry underlying immunity, and open direct paths to new therapies.
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms (genetic, structural, functional) are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps, so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties, such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency, and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological "peak selection" process. Further, this model generates detailed predictions about the sequence of adjacent period ratios and explains existing grid cell data better than previous models.
Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
Imaging neuronal morphology and activity pattern in developing cerebral cortex layer 4
Establishment of precise neuronal connectivity in the neocortex relies on activity-dependent circuit reorganization during postnatal development. In the mouse somatosensory cortex layer 4, barrels are arranged in one-to-one correspondence with whiskers on the face. Thalamocortical axon termini are clustered in the center of each barrel. The layer 4 spiny stellate neurons are located around the barrel edge, extend their dendrites primarily toward the barrel center, and make synapses with thalamocortical axons corresponding to a single whisker. These organized circuits are established during the first postnatal week through activity-dependent refinement processes. However, the activity patterns regulating this circuit formation remain elusive. Using two-photon calcium imaging in living neonatal mice, we found that layer 4 neurons within the same barrel fire synchronously in the absence of peripheral stimulation, creating a "patchwork" pattern of spontaneous activity corresponding to the barrel map. We also found that disruption of GluN1, an obligatory subunit of the N-methyl-D-aspartate (NMDA) receptor, in a sparse population of layer 4 neurons reduced activity correlations between pairs of GluN1-knockout neurons within a barrel. Our results provide evidence for the involvement of layer 4 neuron NMDA receptors in the spatial organization of spontaneous firing activity in the neonatal barrel cortex. In the talk, I will introduce our strategy for analyzing the role of NMDA receptor-dependent correlated activity in layer 4 circuit formation.
On the implicit bias of SGD in deep learning
Tali's work emphasized the tradeoff between compression and information preservation. In this talk I will explore this theme in the context of deep learning. Artificial neural networks have recently revolutionized the field of machine learning. However, we still do not have sufficient theoretical understanding of how such models can be successfully learned. Two specific questions in this context are: how can neural nets be learned despite the non-convexity of the learning problem, and how can they generalize well despite often having more parameters than training data? I will describe our recent work showing that gradient-descent optimization indeed leads to 'simpler' models, where simplicity is captured by lower weight norm and, in some cases, clustering of weight vectors. We demonstrate this for several teacher and student architectures, including learning linear teachers with ReLU networks, learning Boolean functions, and learning convolutional pattern-detection architectures.
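A standard textbook illustration of this implicit bias (not the talk's specific settings) is gradient descent on an overparameterized least-squares problem: started from zero, the iterates stay in the row space of the data matrix and converge to the minimum-norm interpolating solution.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 20                        # more parameters than data points
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Gradient descent on 0.5*||Xw - y||^2, starting from w = 0.
w = np.zeros(d)
lr = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L, with L the top eigenvalue of X^T X
for _ in range(20000):
    w -= lr * X.T @ (X @ w - y)

# The least-norm interpolator, for comparison.
w_min_norm = np.linalg.pinv(X) @ y
```

Every gradient is a combination of the rows of `X`, so starting from zero the iterates never leave the row space; the unique interpolator in that subspace is exactly the pseudoinverse (minimum-norm) solution.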
Cluster Headache: Improving Therapy for the Worst Pain Experienced by Humans
Cluster headache is a brain disorder dominated clinically by dreadful episodes of excruciating pain with a circadian pattern, most often focused in bouts with circannual periodicity. As we have come to understand its neurobiology, new therapies, including those directed at calcitonin gene-related peptide, are helping to improve the lives of sufferers.
Using Human Stem Cells to Uncover Genetic Epilepsy Mechanisms
Reprogramming somatic cells to a pluripotent state via the induced pluripotent stem cell (iPSC) method offers an increasingly utilized approach for neurological disease modeling with patient-derived cells. Several groups, including ours, have applied the iPSC approach to model severe genetic developmental and epileptic encephalopathies (DEEs) with patient-derived cells. Although most studies to date involve 2D cultures of patient-derived neurons, brain organoids are increasingly being employed to explore genetic DEE mechanisms. We are applying this approach to understand PMSE (Polyhydramnios, Megalencephaly and Symptomatic Epilepsy) syndrome, Rett syndrome (in collaboration with Ben Novitch at UCLA), and Protocadherin-19 Clustering Epilepsy (PCE). I will describe our findings of robust structural phenotypes in PMSE and PCE patient-derived brain organoid models, as well as functional abnormalities identified in fusion organoid models of Rett syndrome. In addition to showing epilepsy-relevant phenotypes, both 2D and brain organoid cultures offer platforms to identify novel therapies. We will also discuss challenges and recent advances in the brain organoid field, including a new single-rosette brain organoid model that we have developed. The field is advancing rapidly, and our findings suggest that brain organoid approaches offer great promise for modeling genetic neurodevelopmental epilepsies and identifying precision therapies.
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emerging evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity, and to evaluate which elements of the dendritic computational repertoire these models capture. We show that many canonical elements can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping the affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite.
Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight-rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open-source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automatises the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular- and network-level neuroscience.
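The flavor of such least-squares fits can be sketched on a minimal passive circuit (a hypothetical two-compartment example, not NEAT's algorithm): choose the conductances of the reduced model so that its steady-state impedance matrix matches "measured" values.

```python
import numpy as np
from scipy.optimize import least_squares

def impedance(params):
    """Steady-state impedance matrix of a passive two-compartment circuit
    with leak conductances g1, g2 and a coupling conductance gc."""
    g1, g2, gc = params
    G = np.array([[g1 + gc, -gc],
                  [-gc, g2 + gc]])
    return np.linalg.inv(G)

# "Measured" impedance matrix of a hypothetical full model (made-up values).
true_params = np.array([0.05, 0.02, 0.01])
Z_target = impedance(true_params)

# Least-squares fit of the reduced model's conductances to the target.
fit = least_squares(lambda p: (impedance(p) - Z_target).ravel(),
                    x0=[0.1, 0.1, 0.1], bounds=(1e-6, 1.0))
```

With two fitting sites, a symmetric 2x2 impedance matrix has three independent entries, so the three conductances are uniquely determined; the actual method fits far richer response properties at the chosen dendro-somatic locations.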
A fresh look at the bird retina
I am working on the vertebrate retina, with a main focus on the mouse and bird retina. Currently my work is focused on three major topics: (1) functional and molecular analysis of electrical synapses in the retina; (2) circuitry and functional role of retinal interneurons (horizontal cells); and (3) circuitry for light-dependent magnetoreception in the bird retina. Electrical synapses: Electrical synapses (gap junctions) permit fast transmission of electrical signals and passage of metabolites by means of channels which directly connect the cytoplasm of adjoining cells. A functional gap junction channel consists of two hemichannels (one provided by each of the cells), each comprised of a set of six protein subunits, termed connexins. These building blocks exist in a variety of different subtypes, and the connexin composition determines the permeability and gating properties of a gap junction channel, thereby enabling electrical synapses to meet a diversity of physiological requirements. In the retina, various connexins are expressed in different cell types. We study the cellular distribution of different connexins as well as the modulation induced by transmitter action or changes in ambient light levels, which leads to altered electrical coupling properties. We are also interested in exploiting them as a therapeutic avenue for retinal degeneration diseases. Horizontal cells: Horizontal cells receive excitatory input from photoreceptors and provide feedback inhibition to photoreceptors and feedforward inhibition to bipolar cells. Because of strong electrical coupling, horizontal cells integrate the photoreceptor input over a wide area and are thought to contribute to the antagonistic organization of bipolar cell and ganglion cell receptive fields and to tune the photoreceptor-bipolar cell synapse with respect to the ambient light conditions.
However, the extent to which this influence shapes retinal output is unclear, and we aim to elucidate the functional importance of horizontal cells for retinal signal processing by studying various transgenic mouse models. Retinal circuitry for light-dependent magnetoreception in the bird: We are studying which neuronal cell types and pathways in the bird retina are involved in the processing of magnetic signals. Magnetic information is likely detected in cryptochrome-expressing photoreceptors and leaves the retina through ganglion cell axons that project via the thalamofugal pathway to Cluster N, a part of the visual wulst essential for the avian magnetic compass. Thus, we aim to elucidate the synaptic connections and retinal signaling pathways from putatively magnetosensitive photoreceptors to thalamus-projecting ganglion cells in migratory birds using neuroanatomical and electrophysiological techniques.
Energy landscapes, order and disorder, and protein sequence coevolution: From proteins to chromosome structure
In vivo, the human genome folds into a characteristic ensemble of 3D structures, but the mechanism driving the folding process remains unknown. We present a theoretical model for chromatin (the minimal chromatin model) that explains the folding of interphase chromosomes and generates chromosome conformations consistent with experimental data. The energy landscape of the model was derived using the maximum entropy principle and relies on two experimentally derived inputs: a classification of loci into chromatin types and a catalog of the positions of chromatin loops. This model was then generalized by using a neural network to infer chromatin types from the epigenetic marks present at a locus, as assayed by ChIP-Seq. The ensemble of structures resulting from these simulations agrees closely with Hi-C data and exhibits unknotted chromosomes, phase separation of chromatin types, and a tendency for open chromatin to lie at the periphery of chromosome territories. Although this theoretical methodology was trained on one cell line, the human GM12878 lymphoblastoid line, it has successfully predicted the structural ensembles of multiple human cell lines. Finally, going beyond Hi-C, our predicted structures are also consistent with microscopy measurements. Analysis of structures from both simulation and microscopy reveals that short segments of chromatin make two-state transitions between closed conformations and open dumbbell conformations. For gene-active segments, the vast majority of genes appear clustered in the linker region of the chromatin segment, allowing us to speculate on possible mechanisms by which chromatin structure and dynamics may be involved in controlling gene expression. * Supported by the NSF
From genetics to neurobiology through transcriptomic data analysis
Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a major step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to aid the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes; one example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g., Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments have presented significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
Spatiotemporal patterns of neocortical activity around hippocampal sharp-wave ripples
Neocortical-hippocampal interactions during off-line periods such as slow-wave sleep are implicated in memory processing. In particular, recent memory traces are replayed in hippocampus during some sharp-wave ripple (SWR) events, and these replay events are positively correlated with neocortical memory trace reactivation. A prevalent model is that SWRs arise 'spontaneously' in CA3 and propagate recent memory 'indices' outward to the neocortex to enable memory consolidation there; however, the spatiotemporal distribution of neocortical activation relative to SWRs is incompletely understood. We used wide-field optical imaging to study voltage and glutamate release transients in dorsal neocortex in relation to CA1 multiunit activity (MUA) and SWRs in sleeping and urethane-anesthetized mice. Modulation of voltage and glutamate release signals in relation to SWRs varied across superficial neocortical regions and was largest in posteromedial regions surrounding retrosplenial cortex (RSC), which receives strong hippocampal output connections. Activity tended to spread sequentially from more medial toward more lateral regions. Contrary to the unidirectional hypothesis, activation exhibited a continuum of timing relative to SWRs, varying from neocortex leading to neocortex lagging the SWRs (± ~250 ms). The timing continuum was correlated with the skewness of peri-SWR hippocampal MUA and with a tendency for some SWRs to occur in clusters. Thus, contrary to the model in which SWRs arise spontaneously in hippocampus, neocortical activation often precedes SWRs and may thus constitute a trigger event in which neocortical information seeds associative reactivation of hippocampal 'indices'.
A metabolic function of the hippocampal sharp wave-ripple
The hippocampal formation has been implicated in both cognitive functions and the sensing and control of endocrine states. To identify a candidate activity pattern that may link such disparate functions, we simultaneously measured electrophysiological activity from the hippocampus and interstitial glucose concentrations in the body of freely behaving rats. We found that clusters of sharp wave-ripples (SPW-Rs) recorded from both dorsal and ventral hippocampus reliably predicted a decrease in peripheral glucose concentrations within ~10 minutes. This correlation was largely independent of circadian, ultradian, and meal-triggered fluctuations; it could be mimicked with optogenetically induced ripples, and was attenuated by pharmacogenetically suppressing activity of the lateral septum, the major conduit between the hippocampus and subcortical structures. Our findings demonstrate that a novel function of the SPW-R is to modulate peripheral glucose homeostasis, and offer a mechanism for the link between sleep disruption and the blood glucose dysregulation seen in type 2 diabetes and obesity.
The retrotrapezoid nucleus: an integrative and interoceptive hub in neural control of breathing
In this presentation, we will discuss the cellular and molecular properties of the retrotrapezoid nucleus (RTN), an integrative and interoceptive control node for the respiratory motor system. We will present the molecular profiling that has allowed definitive identification of a cluster of tonically active neurons that provide a requisite drive to the respiratory central pattern generator (CPG) and other pre-motor neurons. We will discuss the ionic basis for steady pacemaker-like firing, including a large subthreshold oscillation, and for neuromodulatory influences on RTN activity, including arousal state-dependent neurotransmitters and CO2/H+. The CO2/H+-dependent modulation of RTN excitability represents the sensory component of a homeostatic system by which the brain regulates breathing to maintain blood gases and tissue pH; it relies on two intrinsic molecular proton detectors, a proton-activated G protein-coupled receptor (GPR4) and a proton-inhibited background K+ channel (TASK-2). We will also discuss downstream neurotransmitter signaling to the respiratory CPG, focusing especially on a newly identified peptidergic modulation of the preBötzinger complex that becomes activated following birth and the initiation of air breathing. Finally, we will suggest how the cellular and molecular properties of RTN neurons identified in rodent models may contribute to understanding human respiratory disorders, such as congenital central hypoventilation syndrome (CCHS) and sudden infant death syndrome (SIDS).
Self-organization of chemically active colloids with non-reciprocal interactions
Cells and microorganisms produce and consume all sorts of chemicals, from nutrients to signalling molecules. The same happens at the nanoscale inside cells themselves, where enzymes catalyse the production and consumption of the chemicals needed for life. In this work, we have found a generic mechanism by which such chemically active particles, be they cells, enzymes, or engineered synthetic colloids, can "sense" each other and ultimately self-organize in a multitude of ways. A peculiarity of these chemically mediated interactions is that they break action-reaction symmetry: for example, one particle may be repelled from a second particle which is in turn attracted to the first, so that it ends up "chasing" it. Such chasing interactions allow for the formation of large clusters of particles that "swim" autonomously. Regarding enzymes, we find that they can spontaneously aggregate into clusters with precisely the right composition, so that the product of one enzyme is passed on, without lack or excess, to the next enzyme in the metabolic cascade.
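The broken action-reaction symmetry and the resulting "chasing" can be demonstrated in a minimal 1D sketch with a hypothetical 1/r² drift law (illustrative coefficients, not the paper's model):

```python
import numpy as np

def simulate_pair(c1, c2, x1=0.0, x2=1.0, dt=1e-3, steps=5000):
    """Overdamped pair on a line (x2 > x1 throughout). Each particle drifts
    in the chemical field of the other with a 1/r^2 law; c1 > 0 moves
    particle 1 toward particle 2 (+x), c2 > 0 moves particle 2 away (+x)."""
    for _ in range(steps):
        s = x2 - x1
        x1 += dt * c1 / s**2
        x2 += dt * c2 / s**2
    return x1, x2

# Non-reciprocal: 1 attracted to 2, 2 repelled from 1 -> the pair "chases" in +x.
x1, x2 = simulate_pair(0.01, 0.01)
# Reciprocal attraction (action-reaction respected): the centre of mass is fixed.
y1, y2 = simulate_pair(0.01, -0.01)
```

With equal coefficients the separation is exactly preserved while the center of mass drifts steadily, something impossible for reciprocal forces; this self-propelled pair is the simplest version of the autonomously "swimming" clusters described above.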
Exploring Memories of Scenes
State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely based on recollection of specific episodic details but also based on familiarity for an entire scene. I thus highlight current limitations in machine vision models emulating human recognition memory, with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We thus considered the functional role of eye-movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.
Inertial active soft matter
Active particles, which are self-propelled by converting energy into mechanical motion, represent an expanding research realm in physics and chemistry. For micron-sized particles moving in a liquid ("microswimmers"), most of the basic features have been described using the model of overdamped active Brownian motion [1]. However, for macroscopic particles or microparticles moving in a gas, inertial effects become relevant and the dynamics is underdamped. Therefore, active particles with inertia have recently been described by extending the active Brownian motion model to active Langevin dynamics, which includes inertia [2]. In this talk, recent developments for active particles with inertia ("microflyers", "hoppers" or "runners") are summarized, including: inertial delay effects between particle velocity and self-propulsion direction [3], tuning of the long-time self-diffusion by the moment of inertia [3], the influence of inertia on motility-induced phase separation and the cluster growth exponent [4], and the formation of active micelles ("rotelles") by using inertial active surfactants [5]. References: [1] C. Bechinger, R. di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, G. Volpe, Reviews of Modern Physics 88, 045006 (2016). [2] H. Löwen, Journal of Chemical Physics 152, 040901 (2020). [3] C. Scholz, S. Jahanshahi, A. Ldov, H. Löwen, Nature Communications 9, 5156 (2018). [4] S. Mandal, B. Liebchen, H. Löwen, Physical Review Letters 123, 228001 (2019). [5] C. Scholz, A. Ldov, T. Pöschel, M. Engel, H. Löwen, Surfactants and rotelles in active chiral fluids, to be published.
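The inertial delay of [3] can be seen in a noise-free caricature of active Langevin dynamics: with the propulsion direction rotating at a constant rate ω, the velocity settles into trailing it by arctan(mω/γ). All parameters below are hypothetical.

```python
import numpy as np

# Hypothetical parameters: mass m, friction gamma, propulsion speed v0, rotation rate omega
m, gamma, v0, omega = 1.0, 2.0, 1.0, 3.0
dt, steps = 1e-4, 100000

v = np.zeros(2)
for i in range(steps):
    t = i * dt
    n = np.array([np.cos(omega * t), np.sin(omega * t)])   # propulsion direction
    v += (dt / m) * (-gamma * v + gamma * v0 * n)          # active Langevin, no noise

# Predicted steady state: speed v0/sqrt(1+(m*omega/gamma)^2),
# with velocity trailing the propulsion direction by arctan(m*omega/gamma).
t_end = steps * dt
n_end = np.array([np.cos(omega * t_end), np.sin(omega * t_end)])
lag = np.arctan2(v[0] * n_end[1] - v[1] * n_end[0], v @ n_end)
```

In the overdamped limit (m → 0) the lag vanishes and velocity aligns instantly with the propulsion axis, recovering standard active Brownian motion.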
Finding the Fault Lines: Detecting Urban Social Boundaries using Social Data Science
In urban environments, social boundaries are the areas that emerge from processes of economic inequality and social segregation. These boundaries are important, as they serve both as areas of interaction and conflict. By applying geographical thinking to classic methods in data science, we can better understand where these boundaries emerge and how they delineate communities. In this talk, I’ll explain a bit about the basics of “boundary detection” in urban analytics. I’ll present a new method, the “geosilhouette,” that builds on previous methods of identifying the boundaries between clusters. And, finally, I’ll show how this can change our understanding of urban community.
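The geosilhouette itself is the speaker's new method, but the classic (aspatial) silhouette coefficient it builds on is easy to sketch on synthetic "neighbourhood" coordinates; low per-point silhouette values flag observations sitting near cluster boundaries.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

rng = np.random.default_rng(0)
# Two synthetic "neighbourhoods" in coordinate space
pts = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
                 rng.normal([3, 3], 0.3, (100, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pts)
score = silhouette_score(pts, labels)        # overall cluster separation
per_point = silhouette_samples(pts, labels)  # low values sit near boundaries
```

A spatially aware variant would additionally weight each point's neighbour relations by geographic adjacency, which is the direction the talk's method takes.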
A machine learning way to analyse white matter tractography streamlines / Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI
1. Embedding is all you need: A machine learning way to analyse white matter tractography streamlines - Dr Shenjun Zhong, Monash Biomedical Imaging Embedding white matter streamlines of varying lengths into fixed-length latent vectors enables users to analyse them with general data mining techniques. However, finding a good embedding scheme is still challenging, as existing methods based on spatial coordinates rely on manually engineered features and/or labelled datasets. In this webinar, Dr Shenjun Zhong will discuss his novel deep learning model that identifies a latent space and solves the problem of streamline clustering without needing labelled data. Dr Zhong is a Research Fellow and Informatics Officer at Monash Biomedical Imaging. His research interests are sequence modelling, reinforcement learning and federated learning in the general medical imaging domain. 2. Application of artificial intelligence in correcting motion artifacts and reducing scan time in MRI - Dr Kamlesh Pawar, Monash Biomedical Imaging Magnetic Resonance Imaging (MRI) is a widely used imaging modality in clinics and research. Although useful, MRI comes with the overhead of longer scan times compared to other medical imaging modalities. The longer scan times also make patients uncomfortable, and even subtle movements during the scan may result in severe motion artifacts in the images. In this seminar, Dr Kamlesh Pawar will discuss how artificial intelligence techniques can reduce scan time and correct motion artifacts. Dr Pawar is a Research Fellow at Monash Biomedical Imaging. His research interests include deep learning, MR physics, MR image reconstruction and computer vision.
Acoustically Levitated Granular Matter
Granular matter can serve as a prototype for exploring the rich physics of many-body systems driven far from equilibrium. This talk will outline a new direction for granular physics with macroscopic particles, where acoustic levitation compensates for gravity and eliminates frictional interactions with supporting surfaces in order to focus on particle interactions. Levitating small particles with intense ultrasound fields in air makes it possible to manipulate and control their positions and assemble them into larger aggregates. The small viscosity of air means that the regime of underdamped dynamics can be explored, where inertial effects are important, in contrast to typical colloids in a liquid, where inertia can be neglected. Sound scattered off individual levitated solid particles gives rise to controllable attractive forces with neighboring particles. I will discuss some of the key concepts underlying acoustic levitation, describe how detuning an acoustic cavity can introduce active fluctuations that control the assembly statistics of clusters of small levitated particles, and give examples of how interactions between neighboring levitated objects can be controlled by their shape.
Organization of Midbrain Serotonin System
The serotonin system is the neural system most frequently targeted pharmacologically for treating psychiatric disorders, including depression and anxiety. Serotonin neurons of the dorsal and median raphe nuclei (DR, MR) collectively innervate the entire forebrain and midbrain, modulating diverse physiology and behaviour. Using viral-genetic methods, we found that the DR serotonin system contains parallel sub-systems that differ in input and output connectivity, physiological response properties, and behavioural functions. To gain a fundamental understanding of the molecular heterogeneity of the DR and MR, we used single-cell RNA sequencing (scRNA-seq) to generate a comprehensive dataset comprising eleven transcriptomically distinct serotonin neuron clusters. We generated novel intersectional viral-genetic tools to access specific subpopulations. Whole-brain axonal projection mapping revealed that the molecular features of these distinct serotonin groups reflect their anatomical organization, and these tools enable future exploration of the full projection map of molecularly defined serotonin groups. The molecular architecture of the serotonin system lays the foundation for integrating its anatomical, neurochemical, physiological, and behavioural functions.
Interacting synapses stabilise both learning and neuronal dynamics in biological networks
Distinct synapses influence one another when they undergo changes, with unclear consequences for neuronal dynamics and function. Here we show that synapses can interact such that excitatory currents are naturally normalised and balanced by inhibitory inputs. This happens when classical spike-timing dependent synaptic plasticity rules are extended by additional mechanisms that incorporate the influence of neighbouring synaptic currents and regulate the amplitude of efficacy changes accordingly. The resulting control of excitatory plasticity by inhibitory activation, and vice versa, gives rise to quick and long-lasting memories as seen experimentally in receptive field plasticity paradigms. In models with additional dendritic structure, we observe experimentally reported clustering of co-active synapses that depends on initial connectivity and morphology. Finally, in recurrent neural networks, rich and stable dynamics with high input sensitivity emerge, providing transient activity that resembles recordings from the motor cortex. Our model provides a general framework for codependent plasticity that frames individual synaptic modifications in the context of population-wide changes, allowing us to connect micro-level physiology with behavioural phenomena.
An Adaptive-Gravity Model for Insect Swarms: Miniature Star-Clusters Buzzing Above Your Heads in the Park
Global AND Scale-Free? Spontaneous cortical dynamics between functional networks and cortico-hippocampal communication
Recent advancements in anatomical and functional imaging emphasize the presence of whole-brain networks organized according to functional and connectivity gradients, but how such structure shapes activity propagation and memory processes still lacks a satisfactory model. We analyse the fine-grained spatiotemporal dynamics of spontaneous activity in the entire dorsal cortex through simultaneous recordings of wide-field voltage-sensitive dye transients (VS), cortical ECoG, and hippocampal LFP in anesthetized mice. Both VS and ECoG show cortical avalanches. When measuring avalanches from the VS signal, we find a major deviation of the size scaling from the power-law distribution predicted by the criticality hypothesis and well approximated by the ECoG results. Breaking from scale-invariance, avalanches can thus be grouped into two regimes. Small avalanches consist of a limited number of co-activation modes involving a subset of cortical networks (related to the Default Mode Network), while larger avalanches involve a substantial portion of the cortical surface and can be clustered into two families: one immediately preceded by Retrosplenial Cortex activation and mostly involving medial-posterior networks, the other initiated by Somatosensory Cortex and extending preferentially along the lateral-anterior region. Rather than differing only in size, these two sets of events appear to be associated with markedly different brain-wide dynamical states: they are accompanied by a shift in the hippocampal LFP from the ripple band (smaller avalanches) to the gamma band (larger avalanches), and correspond to opposite directionality in the cortex-to-hippocampus causal relationship. These results provide a concrete description of global cortical dynamics, and show how the cortex in its entirety is involved in bi-directional communication with the hippocampus even in sleep-like states.
Collective Ecophysiology and Physics of Social Insects
Collective behavior of organisms creates environmental micro-niches that buffer them from environmental fluctuations, e.g., temperature, humidity, and mechanical perturbations, thus coupling organismal physiology, environmental physics, and population ecology. This talk will focus on a combination of biological experiments, theory, and computation to understand how a collective of bees can integrate physical and behavioral cues to attain a non-equilibrium steady state that allows them to resist and respond to environmental fluctuations of forces and flows. We analyze how bee clusters change their shape and connectivity and gain stability by spread-eagling themselves in response to mechanical perturbations. Similarly, we study how bees in a colony respond to environmental thermal perturbations by deploying a fanning strategy at the entrance, creating a forced ventilation stream that allows the bees to collectively maintain a constant hive temperature. Combining quantitative analysis and computation in both systems, we show how the swarms sense environmental cues (acceleration, temperature, flow) and convert them into behavioral outputs that achieve a dynamic homeostasis.
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed against robustness, and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked rather than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old and new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, linked by a common focus on the development of visual attentional processing: (1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and reconcile developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing. (2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a ‘soft sign’ of subtle cerebral dysfunction. (3) What do many neurodevelopmental disorders have in common?
Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams Syndromes, Fragile-X, autism, children born prematurely, and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called ‘dorsal stream vulnerability’. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas for decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across different disorders and between individual children. Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.
When spontaneous waves meet angiogenesis: a case study from the neonatal retina
By continuously producing electrical signals, neurones are amongst the most energy-demanding cells in the organism. Resting ionic levels are restored via metabolic pumps that receive the necessary energy from oxygen supplied by blood vessels. Intense spontaneous neural activity is omnipresent in the developing CNS. It occurs during short, well-defined periods that coincide precisely with the timing of angiogenesis. Such coincidence cannot be random; there must be a universal mechanism triggering spontaneous activity concurrently with blood vessels invading neural territories for the first time. However, surprisingly little is known about the role of neural activity per se in guiding angiogenesis, partly because it is challenging to study developing neurovascular networks in three-dimensional space in the brain. We investigate these questions in the neonatal mouse retina, where blood vessels are much easier to visualise because they initially grow in a plane, while waves of spontaneous neural activity (spreading via cholinergic starburst amacrine cells) sweep across the retinal ganglion cell layer, in close juxtaposition with the growing vasculature. Blood vessels reach the periphery by postnatal day (P) 7-8, shortly before the cholinergic waves disappear (at P10). We discovered transient clusters of auto-fluorescent cells that form an annulus around the optic disc, gradually expanding to the periphery, which they reach at the same time as the growing blood vessels. Remarkably, these cells appear locked to the frontline of the growing vasculature. Moreover, by recording waves with a large-scale multielectrode array that enables us to visualise them at pan-retinal level, we found that their initiation points are not random; they follow a developmental centre-to-periphery pattern similar to the clusters and blood vessels. The density of growing blood vessels is higher in cluster areas than between clusters at matching eccentricity.
The cluster cells appear to be phagocytosed by microglia. Blocking Pannexin1 (PANX1) hemichannel activity with probenecid completely blocks the spontaneous waves and results in the disappearance of the fluorescent cell clusters. We suggest that these transient cells are specialised, hyperactive neurones that form spontaneous activity hotspots, thereby triggering retinal waves through the release of ATP via PANX1 hemichannels. These activity hotspots attract new blood vessels to enhance local oxygen supply. Signalling through PANX1 attracts microglia that establish contact with these cells, eventually eliminating them once blood vessels have reached their vicinity. The auto-fluorescence that characterises the cell clusters may develop only once the process of microglial phagocytosis is initiated.
Cell Surface Topography: The Role of Protein Size at Cell-Cell Interfaces
Membrane interfaces formed at junctions between cells are often associated with characteristic patterns of membrane protein organization, such as in epithelial tissues and between cells of the immune system. While this organization can be influenced by receptor clustering, lipid domain formation, and cytoskeletal dynamics, this talk will describe how cell surface molecular height can directly contribute to the spatial arrangement of membrane proteins and downstream signaling. Using a new optical method for characterizing molecular height, together with experiments in in vitro giant vesicle systems and live immune cells, we are investigating how cell surface molecular heights can be key contributors to cell-cell communication.
Local and global organization of synaptic inputs on cortical dendrites
Synaptic inputs on cortical dendrites are organized with remarkable subcellular precision at the micron level. This organization emerges during early postnatal development through patterned spontaneous activity and manifests both locally where synapses with similar functional properties are clustered, and globally along the axis from dendrite to soma. Recent experiments reveal species-specific differences in the local and global synaptic organization in mouse, ferret and macaque visual cortex. I will present a computational framework that implements functional and structural plasticity from spontaneous activity patterns to generate these different types of organization across species and scales. Within this framework, a single anatomical factor - the size of the visual cortex and the resulting magnification of visual space - can explain the observed differences. This allows us to make predictions about the organization of synapses also in other species and indicates that the proximal-distal axis of a dendrite might be central in endowing a neuron with powerful computational capabilities.
Physiological importance of phase separation: a case study in synapse formation
Synapse formation during neuronal development is critical to establish neural circuits and a nervous system. Every presynapse builds a core active zone structure where ion channels are clustered and synaptic vesicles are released. While the composition of active zones is well characterized, how active zone proteins assemble together and recruit synaptic release machinery during development is not clear. Here, we find that the core active zone scaffold proteins SYD-2/Liprin-α and ELKS-1 phase separate during an early stage of synapse development, and later mature into a solid structure. We directly test the in vivo function of phase separation with mutants specifically lacking this activity. These mutant SYD-2 and ELKS-1 proteins remain enriched at synapses, but are defective in active zone assembly and synapse function. The defects are rescued by introducing a phase separation motif from an unrelated protein. In vitro, we reconstitute the SYD-2 and ELKS-1 liquid-phase scaffold and find it is competent to bind and incorporate downstream active zone components. The fluidity of SYD-2 and ELKS-1 condensates is critical for efficient mixing and incorporation of active zone components. These data reveal that a developmental liquid phase of scaffold molecules is essential for synaptic active zone assembly before maturation into a stable final structure.
Genetic dissection of the Fgf5 enhancer cluster
Delineating Reward/Avoidance Decision Process in the Impulsive-compulsive Spectrum Disorders through a Probabilistic Reversal Learning Task
Impulsivity and compulsivity are behavioural traits that underlie many aspects of decision-making and form the characteristic symptoms of Obsessive Compulsive Disorder (OCD) and Gambling Disorder (GD). The neural underpinnings of reward and avoidance learning under the expression of these traits and symptoms are only partially understood. The present study combined behavioural modelling and neuroimaging to examine brain activity associated with critical phases of reward and loss processing in OCD and GD. Forty-two healthy controls (HC), forty OCD and twenty-three GD participants were recruited to complete a two-session reinforcement learning (RL) task featuring a “probability switch (PS)” while undergoing imaging. In the final sample, 39 HC (20F/19M, 34 ± 9.47 yrs), 28 OCD (14F/14M, 32.11 ± 9.53 yrs) and 16 GD (4F/12M, 35.53 ± 12.20 yrs) had both behavioural and imaging data available. Functional imaging was conducted using a 3.0-T SIEMENS MAGNETOM Skyra (syngo MR D13C) at Monash Biomedical Imaging. Each volume comprised 34 coronal slices of 3 mm thickness with a 2000 ms TR and 30 ms TE; a total of 479 volumes were acquired per participant per session in an interleaved-ascending manner. The standard Q-learning model was fitted to the observed behavioural data, with Bayesian parameter estimation. Imaging analysis was conducted using SPM12 (Wellcome Department of Imaging Neuroscience, London, United Kingdom) in the Matlab (R2015b) environment. Pre-processing comprised slice timing correction, realignment, normalization to MNI space according to the T1-weighted image, and smoothing with an 8 mm Gaussian kernel.
The frontostriatal circuit, including the putamen and medial orbitofrontal cortex (mOFC), was significantly more active in response to receiving reward and avoiding punishment than to receiving an aversive outcome and missing reward (p < 0.001, FWE-corrected at cluster level), while the right insula showed greater activation in response to missing rewards and receiving punishment. Compared to healthy participants, GD patients showed significantly lower activation in the left superior frontal cortex and posterior cingulum (p < 0.001) for gain omission. The reward prediction error (PE) signal correlated positively with activation in several clusters spanning cortical and subcortical regions, including the striatum, cingulate, bilateral insula, thalamus and superior frontal cortex (p < 0.001, FWE-corrected at cluster level). GD patients showed a trend towards a decreased reward PE response in the right precentral gyrus extending to the left posterior cingulate compared to controls (p < 0.05, FWE-corrected). The aversive PE signal correlated negatively with brain activity in regions including the bilateral thalamus, hippocampus, insula and striatum (p < 0.001, FWE-corrected). Compared with controls, the GD group showed increased aversive PE activation in a cluster encompassing the right thalamus and right hippocampus, and in the right middle frontal gyrus extending to the right anterior cingulum (p < 0.005, FWE-corrected). Through this reversal learning task, the study provides further support for dissociable brain circuits underlying distinct phases of reward and avoidance learning, and shows that OCD and GD are characterised by aberrant patterns of reward and avoidance processing.
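The "standard Q-learning model" mentioned in the abstract typically takes the delta-rule form below; the trial-by-trial prediction error delta is what serves as the parametric fMRI regressor. This is a generic sketch with illustrative parameter values, not the study's actual fitted model.

```python
import math
import random

def q_learning_session(rewards_A, rewards_B, alpha=0.2, beta=3.0, seed=0):
    """Standard Q-learning for a two-option probabilistic task with
    softmax choice. Returns the choices (0 = A, 1 = B) and the
    trial-by-trial reward prediction errors delta = r - Q[chosen]."""
    rng = random.Random(seed)
    Q = [0.0, 0.0]
    choices, rpes = [], []
    for rA, rB in zip(rewards_A, rewards_B):
        # softmax choice rule with inverse temperature beta
        pA = 1.0 / (1.0 + math.exp(-beta * (Q[0] - Q[1])))
        c = 0 if rng.random() < pA else 1
        r = (rA, rB)[c]
        delta = r - Q[c]        # reward prediction error
        Q[c] += alpha * delta   # delta-rule value update
        choices.append(c)
        rpes.append(delta)
    return choices, rpes
```

In a probability-switch design, the reward contingencies of the two options reverse partway through the session, and the learning rate alpha governs how quickly the fitted values, and hence the prediction errors, track the reversal.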
Untangling the web of behaviours used to produce spider orb webs
Many innate behaviours are the result of multiple sensorimotor programs that are dynamically coordinated to produce higher-order behaviours such as courtship or architecture construction. Extended phenotypes such as architecture are especially useful for ethological study because the structure itself is a physical record of behavioural intent. A particularly elegant and easily quantifiable structure is the spider orb-web. The geometric symmetry and regularity of these webs have long generated interest in their behavioural origin. However, quantitative analyses of this behaviour have been sparse due to the difficulty of recording web-making in real time. To address this, we have developed a novel assay enabling real-time, high-resolution tracking of limb movements and web structure produced by the hackled orb-weaver Uloborus diversus. With its small brain of approximately 100,000 neurons, U. diversus offers a tractable model organism for the study of complex behaviours. Using deep learning frameworks for limb tracking and unsupervised behavioural clustering methods, we have developed an atlas of stereotyped movement motifs and are investigating the behavioural state transitions of which the geometry of the web is an emergent property. In addition to tracking limb movements, we have developed algorithms to track the web’s dynamic graph structure. We aim to model the relationship between the spider’s sensory experience on the web and its motor decisions, thereby identifying the sensory and internal states contributing to this sensorimotor transformation. Parallel efforts in our group are establishing 2-photon in vivo calcium imaging protocols in this spider, eventually facilitating a search for neural correlates underlying the internal and sensory state variables identified by our behavioural models.
In addition, we have assembled a genome, and are developing genetic perturbation methods to investigate the genetic underpinnings of orb-weaving behaviour. Together, we aim to understand how complex innate behaviours are coordinated by underlying neuronal and genetic mechanisms.
Wiring up direction selective circuits in the retina
The development of neural circuits is profoundly impacted by both spontaneous activity and sensory experience. This is perhaps best studied in the visual system, where disruption of early spontaneous activity (retinal waves) prior to eye opening, or visual deprivation after eye opening, leads to alterations in response properties and connectivity in several visual centers of the brain. We address this question in the retina, which comprises multiple circuits that encode different features of the visual scene, culminating in over 40 different types of retinal ganglion cells. Direction-selective ganglion cells respond strongly to an image moving in the preferred direction and weakly to an image moving in the opposite, or null, direction. Moreover, as recently described (Sabbah et al., 2017), the preferred directions of direction-selective ganglion cells cluster along four directions that align with two optic flow axes, causing variation in the relative orientation of preferred directions along the retinal surface. I will present recent progress in the lab addressing the role of visual experience and spontaneous retinal waves in the establishment of direction-selective tuning and direction selectivity maps in the retina.
Dendritic nonlinearities and synapse type-specific input clustering enable the development of input selectivity in a wide range of settings
Bernstein Conference 2024
The role of gap junctions and clustered connectivity in emergent synchronisation patterns of spiking inhibitory neuronal networks
Bernstein Conference 2024
Unsupervised clustering of burst shapes reveals the increasing complexity of developing networks in vitro
Bernstein Conference 2024
Clustered recurrent connectivity promotes the development of E/I co-tuning via synaptic plasticity
COSYNE 2022
Weighted clustering motifs and small-worldness in connectomes
COSYNE 2022
Circuit-based framework for fine spatial scale clustering of orientation tuning in mouse V1
COSYNE 2023
Clustering Inductive Biases with Unrolled Networks
COSYNE 2023
Clustered representation of vocalizations in the auditory midbrain of the echolocating bat
COSYNE 2023
Place Cells are Clustered by Field Location in CA1 Hippocampus
COSYNE 2023
Synaptic-type-specific clustering optimizes the computational capabilities of balanced recurrent networks
COSYNE 2023
Dynamics of clustered spiking networks via the CTLN model
COSYNE 2025
Activity-dependent regulation of clustered gamma protocadherins (c-γPCDH) expression during circuit assembly in Purkinje cells
FENS Forum 2024
Automatized curation of spike sorting clusters
FENS Forum 2024
Characterization of dendritic spine morphology through a segmentation-clusterization approach
FENS Forum 2024
Clustering visual sensory neurons based on their invariances
FENS Forum 2024
Impact of barrel cortex lesions and sensory deprivation on perceptual decision-making: Insights from computer vision and time series clustering of freely moving behavioral strategies
FENS Forum 2024
Redox-dependent synaptic clustering of gephyrin
FENS Forum 2024
Synapsin 2a tetramerisation selectively controls the nanoscale clustering of reserve synaptic vesicles at the hippocampal presynapse
FENS Forum 2024
Targeting clusterin for therapeutic intervention in gliomas
FENS Forum 2024
Three-dimensional organization of connexin clusters along axonal cisternal organelle networks in human pyramidal neurons
FENS Forum 2024