Gating
gating
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Boris Gutkin
A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop and validate computational models of cardiac and somatosensory cortical circuit dynamics in order to determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal, ...) best explain experimental data. The postdoctoral fellow will be based at the Group for Neural Theory at LNC2, in Boris Gutkin’s team, with strong interactions with Catherine Tallon-Baudry’s team. LNC2 is located in the center of Paris within the Cognitive Science Department at Ecole Normale Supérieure, with numerous opportunities to interact with the Paris scientific community at large, in a stimulating and supportive work environment. The Group for Neural Theory provides a rich environment and local community for theoretical neuroscience. Lab life is in English; speaking French is not a requirement. Salary is set according to experience and French rules. The starting date is in the first semester of 2024.
Boris Gutkin, Catherine Tallon-Baudry
A three-year post-doctoral position in theoretical neuroscience is open to explore the mechanisms of interaction between interoceptive cardiac and exteroceptive tactile inputs at the cortical level. We aim to develop data-based computational models of cardiac and somatosensory cortical circuit dynamics. Building on these models, we will determine the conditions under which interactions between exteroceptive and interoceptive inputs occur and which underlying mechanisms (e.g., phase-resetting, gating, phasic arousal, ...) best explain experimental data.
N/A
A post-doctoral position in theoretical neuroscience is open to explore the impact of cardiac inputs on cortical dynamics. Understanding the role of internal states in human cognition has become a hot topic, with a wealth of experimental results but limited attempts at analyzing the computations that underlie the link between bodily organs and the brain. Our particular focus is on elucidating how the different mechanisms for heart-to-cortex coupling (e.g., phase-resetting, gating, phasic arousal, ...) can account for human behavioral and neural data, from somatosensory detection to higher-level concepts such as self-relevance, using data-based dynamical models.
Investigating the Neurobiology and Neurophysiology of Psilocybin Using Drosophila melanogaster as a Model System
Digital Minds: Brain Development in the Age of Technology
Digital Minds: Brain Development in the Age of Technology examines how our increasingly connected world shapes mental and cognitive health. From screen time and social media to virtual interactions, this seminar delves into the latest research on how technology influences brain development, relationships, and emotional well-being. Join us to explore strategies for harnessing technology's benefits while mitigating its potential challenges, empowering you to thrive in a digital age.
On finding what you’re (not) looking for: prospects and challenges for AI-driven discovery
Recent high-profile scientific achievements by machine learning (ML) and especially deep learning (DL) systems have reinvigorated interest in ML for automated scientific discovery (e.g., Wang et al. 2023). Much of this work is motivated by the thought that DL methods might enable the discovery of phenomena, hypotheses, or even models or theories more efficiently than traditional, theory-driven approaches to discovery. This talk considers some of the more specific obstacles to automated, DL-driven discovery in frontier science, focusing on gravitational-wave astrophysics (GWA) as a representative case study. In the first part of the talk, we argue that, despite these efforts, prospects for DL-driven discovery in GWA remain uncertain. In the second part, we advocate a shift in focus towards the ways DL can be used to augment or enhance existing discovery methods, and the epistemic virtues and vices associated with these uses. We argue that the primary epistemic virtue of many such uses is to decrease the opportunity costs associated with investigating puzzling or anomalous signals, and that the right framework for evaluating these uses comes from philosophical work on pursuitworthiness.
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 μm to 2 mm across a 325-square-centimeter area for extended durations—ranging from hours to days—at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space and that supports wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, whereby the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal, abstract representational space of meaning.
Investigating dynamiCa++l mechanisms underlying cortical development and disease
Investigating activity-dependent processes during cortical neuronal assembly in development and disease
Where Cognitive Neuroscience Meets Industry: Navigating the Intersections of Academia and Industry
In this talk, Mirta will share her journey from her education at a mathematically focused high school to her current, unconventional career in London, emphasizing the evolution from a local education in Croatia to international experiences in the US and UK. We will explore the concept of interdisciplinary careers in the modern world, viewing them through the framework of increasing demand, flexibility, and dynamism in the current workplace. We will underscore the significance of interdisciplinary research for launching careers outside of academia and bolstering those within it. I will challenge the conventional norm of working either in academia or industry, and encourage discussion about the opportunities for combining the two across a myriad of career paths. I’ll use examples from my own and others’ research to highlight opportunities for early career researchers to extend their work into practical applications. Such an approach leverages the strengths of both sectors, fostering innovation and practical applications of research findings. I hope these insights can offer valuable perspectives for those looking to navigate the evolving demands of the global job market, illustrating the advantages of a versatile skill set that spans multiple disciplines and allows extensions into exciting career options.
Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment
Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajandran and Schupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
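To make the central quantity concrete, here is a minimal sketch of how the instantaneous frequency of a narrow-band component can be extracted via the analytic signal. This is illustrative only and not the published ERFA pipeline; the sampling rate, filter band, and toy signal are assumptions.

```python
# Minimal sketch: instantaneous frequency of a narrow-band component via
# the analytic signal (Hilbert transform). Illustrative only; the actual
# ERFA pipeline (Rosso et al., 2023) may differ in its details.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 20, 1 / fs)    # 20 s of data
# toy "entrained component": a 2 Hz oscillation that speeds up mid-trial
freq = np.where(t < 10, 2.0, 2.2)
component = np.sin(2 * np.pi * np.cumsum(freq) / fs) + 0.1 * np.random.randn(t.size)

# band-pass around the stimulation rate (1-4 Hz here, assumed)
b, a = butter(4, [1.0, 4.0], btype="band", fs=fs)
narrow = filtfilt(b, a, component)

# instantaneous phase and frequency from the analytic signal
phase = np.unwrap(np.angle(hilbert(narrow)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # in Hz

# mean instantaneous frequency before vs. after the tempo change
print(inst_freq[: int(5 * fs)].mean(), inst_freq[-int(5 * fs):].mean())
```

In a real analysis the "component" would be a source-separated EEG signal rather than a synthetic sinusoid, and the frequency changes would be time-locked to the perturbations to form the event-related average.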
Investigating face processing impairments in Developmental Prosopagnosia: Insights from behavioural tasks and lived experience
The defining characteristic of developmental prosopagnosia (DP) is severe difficulty recognising familiar faces in everyday life. Numerous studies have reported that the condition is highly heterogeneous in terms of both presentation and severity, with many mixed findings in the literature. I will present behavioural data from a large face processing test battery (n = 24 DPs) as well as some early findings from a larger survey of the lived experience of individuals with DP, and discuss how insights from individuals' real-world experience can help us understand and interpret lab-based data.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
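To make the contrast between routing and diffusion conceptions of connectome signalling concrete, the following small sketch compares the two on a random toy network. It is not code or data from the Review; the network, its density, and the two models shown are assumptions chosen purely for illustration.

```python
# Minimal sketch: two contrasting communication models on a toy
# "connectome" -- routing via shortest paths vs. diffusion
# (communicability). Illustrative only; not from the Review.
import numpy as np
import networkx as nx
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.2).astype(float)   # random binary network (assumed)
A = np.triu(A, 1)
A = A + A.T                                      # undirected, no self-loops
G = nx.from_numpy_array(A)

# Model 1: routing -- signals travel only along shortest paths (hop count)
sp = dict(nx.all_pairs_shortest_path_length(G))
shortest = np.array([[sp[i].get(j, np.inf) for j in G] for i in G])

# Model 2: diffusion -- communicability sums over all walks between two
# nodes, discounting walks of length k by 1/k! (matrix exponential of A)
comm = expm(A)

print(shortest[0, 5], comm[0, 5])
```

Richer models in the taxonomy (e.g., navigation, diffusion with biased random walks, or parameterized compromises between routing and diffusion) interpolate between these two extremes.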
Representational Connectivity Analysis (RCA): a Method for Investigating Flow of Content-Specific Information in the Brain
Representational Connectivity Analysis (RCA) has gained mounting interest in the past few years. This is because, rather than conventionally tracking signals, RCA allows information to be tracked across the brain. It can also provide insights into the content and potential transformations of the transferred information. This presentation explains several variations of the method in terms of implementation and how it can be adapted for different modalities (E/MEG and fMRI). I will also present caveats and nuances of the method which should be considered when using RCA.
Studies on the role of relevance appraisal in affect elicitation
A fundamental question in the affective sciences is how the human mind decides whether, and at what intensity, to elicit an affective response. Appraisal theories assume that, preceding the affective response, there is an evaluation stage in which dimensions of an event are appraised. Common to most appraisal theories is the assumption that the evaluation phase involves assessing the stimulus' relevance to the perceiver's well-being. In this talk, I first discuss conceptual and methodological challenges in investigating relevance appraisal. Next, I present two lines of experiments that ask how the human mind uses information about objective and subjective probabilities in deciding the intensity of the emotional response, and how these decisions are affected by the valence of the event. The potential contribution of the results to appraisal theory is discussed.
Computational models of spinal locomotor circuitry
To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
The Effects of Movement Parameters on Time Perception
Mobile organisms must be capable of deciding both where and when to move in order to keep up with a changing environment; a strong sense of time is therefore necessary, without which many of our movement goals would fail. Despite this intrinsic link between movement and timing, only recently has research begun to investigate the interaction. Two primary effects have been observed: movements biasing time estimates (i.e., affecting accuracy) and making time estimates more precise. The goal of this presentation is to review this literature, discuss a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also in motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in the precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (e.g., fMRI, TMS, EEG) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point for better understanding the intrinsic connection between time and movement.
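For reference, the reliability-weighted cue combination rule at the heart of this framework can be written out as follows. This is the standard textbook formulation; the labels for a movement-based and a sensory duration estimate are assumptions for illustration, not necessarily the exact model fitted in these experiments.

```latex
% Standard reliability-weighted (Bayesian) combination of two duration
% estimates: a movement-based estimate \hat{t}_m and a sensory estimate
% \hat{t}_s, with variances \sigma_m^2 and \sigma_s^2.
\hat{t} = w_m \hat{t}_m + w_s \hat{t}_s, \qquad
w_m = \frac{1/\sigma_m^2}{1/\sigma_m^2 + 1/\sigma_s^2}, \qquad
w_s = 1 - w_m
% The combined estimate is never less precise than the better single cue:
\sigma^2_{\mathrm{combined}} = \frac{\sigma_m^2\,\sigma_s^2}{\sigma_m^2 + \sigma_s^2}
\le \min\!\left(\sigma_m^2, \sigma_s^2\right)
% As movement becomes noisy (\sigma_m^2 \to \infty), w_m \to 0 and the
% precision benefit vanishes, consistent with the pattern in Exp. 3A & 3B.
```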
Internal representation of musical rhythm: transformation from sound to periodic beat
When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.
Immunosuppression for Parkinson's disease - a new therapeutic strategy?
Caroline Williams-Gray is a Principal Research Associate in the Department of Clinical Neurosciences, University of Cambridge, and an honorary consultant neurologist specializing in Parkinson’s disease (PD) and movement disorders. She leads a translational research group investigating the clinical and biological heterogeneity of PD, with the ultimate goal of developing more targeted therapies for different Parkinson’s subtypes. Her recent work has focused on the theory that the immune system plays a significant role in mediating the heterogeneity of PD and its progression. Her lab is investigating this using blood- and CSF-based immune markers, PET neuroimaging, and neuropathology in stratified PD cohorts, and she is leading the first randomized controlled trial repurposing a peripheral immunosuppressive drug (azathioprine) to slow the progression of PD.
Estimating repetitive spatiotemporal patterns from resting-state brain activity data
Repetitive spatiotemporal patterns in resting-state brain activity have been widely observed across species and regions, such as the rat and cat visual cortices. Since they resemble activity patterns evoked during preceding tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activity may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how this information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activity reflecting a process that integrates information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a large dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
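To give a flavour of what "detecting recurrences of a spatiotemporal pattern" means in practice, here is a toy sketch based on sliding-window correlation against a fixed template. The published method additionally estimates the templates themselves and is considerably more involved; all shapes, amplitudes, and the detection rule below are assumptions for illustration only.

```python
# Toy illustration: detect recurrences of a fixed spatiotemporal template
# in multichannel "resting-state" data by sliding correlation. This is a
# simplified stand-in, not the estimation algorithm of Takeda et al.
import numpy as np

rng = np.random.default_rng(1)
n_ch, n_t, win = 32, 5000, 100                 # channels, samples, window length
data = rng.standard_normal((n_ch, n_t))

template = rng.standard_normal((n_ch, win))    # stand-in for an estimated pattern
for onset in (1000, 3000):                     # embed two noisy copies to find
    data[:, onset:onset + win] += 2.0 * template

def sliding_corr(data, template, win):
    """Pearson correlation between the template and each window of data."""
    w = template.ravel() - template.mean()
    scores = np.empty(data.shape[1] - win + 1)
    for s in range(scores.size):
        x = data[:, s:s + win].ravel()
        x = x - x.mean()
        scores[s] = (w @ x) / (np.linalg.norm(w) * np.linalg.norm(x))
    return scores

scores = sliding_corr(data, template, win)
print(np.sort(np.argsort(scores)[-2:]))        # should recover onsets near 1000, 3000
```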
The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures
What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
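As a rough intuition for how gating makes an otherwise linear network nonlinear, here is a minimal sketch of a two-pathway gated linear network. The shapes, the gating rule, and the context variable are illustrative assumptions, not the exact formulation used in the talk.

```python
# Minimal sketch of a gated deep linear network: each pathway is purely
# linear, but binary gates driven by a context variable switch pathways
# on and off, so the overall map is nonlinear in (input, context).
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 4, 8, 3

# two pathways: separate hidden blocks and readouts (weights assumed)
W_in = [rng.standard_normal((d_hid, d_in)) * 0.1 for _ in range(2)]
W_out = [rng.standard_normal((d_out, d_hid)) * 0.1 for _ in range(2)]

def forward(x, context):
    """context in {0, 1} gates which pathway carries the signal."""
    gates = np.array([context == 0, context == 1], dtype=float)
    h = [g * (W @ x) for g, W in zip(gates, W_in)]     # gated hidden layers
    return sum(V @ hi for V, hi in zip(W_out, h))      # linear readout

x = rng.standard_normal(d_in)
print(forward(x, 0), forward(x, 1))   # same input, different gated pathway
```

Within each gating configuration the network is linear, which is what makes exact reductions of the learning dynamics tractable.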
Understanding and Mitigating Bias in Human & Machine Face Recognition
With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems not only perform accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.
Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach
The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1) and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window onto the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure mapping theory by Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, and not without merit, in recent years semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been re-focusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even when as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm – the delta band (around 2-3 Hz) – suggesting its highly integrative nature. Finally, we have developed a new task – Graph Mapping – which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and it is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Microglial efferocytosis: Diving into the Alzheimer's Disease gene pool
Genome-wide association studies and functional genomics studies have linked specific cell types, genes, and pathways to Alzheimer’s disease (AD) risk. In particular, AD risk alleles primarily affect the abundance or structure, and thus the activity, of genes expressed in macrophages, strongly implicating microglia (the brain-resident macrophages) in the etiology of AD. These genes converge on pathways (endocytosis/phagocytosis, cholesterol metabolism, and immune response) with critical roles in core macrophage functions such as efferocytosis. Here, we review these pathways, highlighting relevant genes identified in the latest AD genetics and genomics studies, and describe how they may contribute to AD pathogenesis. Investigating the functional impact of AD-associated variants and genes in microglia is essential for elucidating disease risk mechanisms and developing effective therapeutic approaches. https://doi.org/10.1016/j.neuron.2022.10.015
Protocols for the social transfer of pain and analgesia in mice
We provide protocols for the social transfer of pain and analgesia in mice. We describe the steps to induce pain or analgesia (pain relief) in bystander mice with a 1-h social interaction with a partner injected with CFA (complete Freund’s adjuvant) or CFA and morphine, respectively. We detail behavioral tests to assess pain or analgesia in the untreated bystander mice. This protocol has been validated in mice and rats and can be used for investigating mechanisms of empathy.
Highlights
• A protocol for the rapid social transfer of pain in rodents
• Detailed requirements for handling and housing conditions
• Procedures for habituation, social interaction, and pain induction and assessment
• Adaptable for social transfer of analgesia and may be used to study empathy in rodents
https://doi.org/10.1016/j.xpro.2022.101756
Flexible selection of task-relevant features through population gating
Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
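As a cartoon of the kind of model analysed here, the sketch below runs a rank-one RNN in which a context input shifts which population is active and thereby changes the readout of a stimulus. All parameters and the gating rule are assumptions for illustration; this is not the trained network from the study.

```python
# Cartoon of context-dependent population gating in a low-rank RNN:
# connectivity is rank-one (J = m n^T / N), and a context input changes
# which units dominate, altering the effective stimulus readout.
import numpy as np

rng = np.random.default_rng(0)
N, dt, tau = 200, 0.01, 0.1
m, n = rng.standard_normal(N), rng.standard_normal(N)
J = np.outer(m, n) / N                       # rank-one recurrent connectivity
I_stim = rng.standard_normal(N)              # stimulus input direction (assumed)
I_ctx = np.sign(n)                           # context input targeting one population

def run(context_gain, T=200):
    """Integrate the rate dynamics and read out the overlap along n."""
    x = np.zeros(N)
    for _ in range(T):
        inp = I_stim + context_gain * I_ctx
        x += dt / tau * (-x + J @ np.tanh(x) + inp)
    return n @ np.tanh(x) / N

# the same stimulus is read out differently depending on the context input
print(run(context_gain=+1.0), run(context_gain=-1.0))
```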
Training Dynamic Spiking Neural Network via Forward Propagation Through Time
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with Forward Propagation Through Time (FPTT) and demonstrate that the resulting LTC-SNNs outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, such as the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexities that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, the training of large-scale, complex spiking neural network architectures online and on long temporal sequences.
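To illustrate the general idea of an input-dependent ("liquid") time constant acting like a gate on integration speed, here is a toy leaky integrate-and-fire neuron. This is not the published LTC-SN model; the modulation function, threshold, and parameters are assumptions chosen only to show the mechanism.

```python
# Toy leaky integrate-and-fire neuron whose effective membrane time
# constant is modulated by its input, loosely in the spirit of liquid
# time constants (stronger input -> faster integration, akin to a gate).
# NOT the published LTC-SN formulation; purely illustrative.
import numpy as np

def ltc_like_lif(inputs, dt=1e-3, tau0=0.02, v_th=1.0):
    v, spikes = 0.0, []
    for i in inputs:
        tau_eff = tau0 / (1.0 + abs(i))      # input shrinks the time constant
        v += dt / tau_eff * (-v + i)         # leaky integration toward the input
        if v >= v_th:
            spikes.append(1)
            v = 0.0                          # spike and reset
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
print(ltc_like_lif(2.0 * rng.random(1000)).sum(), "spikes")
```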
Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity
A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
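The core intuition of top-down gated, one-shot-like plasticity can be sketched in a few lines. The update rule, constants, and target weight below are a rate-based caricature introduced for illustration; they are assumptions, not the exact BTSP rule used in this work.

```python
# Schematic of a BTSP-like update: a top-down "plateau"/gating signal
# opens a brief window in which active presynaptic inputs are driven
# strongly toward a maximal weight (one-shot-like potentiation).
import numpy as np

def btsp_update(w, pre, gate, w_max=1.0, eta=0.8):
    """Large, gated weight change toward w_max for active inputs."""
    return w + gate * eta * pre * (w_max - w)

rng = np.random.default_rng(0)
w = 0.1 * rng.random(10)                       # initial weights onto one neuron
pre = (rng.random(10) < 0.5).astype(float)     # presynaptic activity pattern

w_no_gate = btsp_update(w, pre, gate=0.0)      # no dendritic plateau: no change
w_gated = btsp_update(w, pre, gate=1.0)        # plateau: large change after one event
print(np.allclose(w, w_no_gate), w_gated - w)
```

In the network setting described in the talk, the gate would itself be set by top-down feedback, which is what allows plasticity to be coordinated across layers.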
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference, using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogical, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis by measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to the parietal cortex to the hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single-neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
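As background for the capacity result mentioned above, the classical point-capacity case can be stated compactly. This is the textbook result for unstructured points (Cover, 1965); the manifold theory discussed in the talk generalizes it by replacing points with object manifolds whose capacity depends on their effective radius and dimension.

```latex
% Number of linearly separable dichotomies of P points in general
% position in R^N (Cover's function-counting theorem):
C(P, N) = 2 \sum_{k=0}^{N-1} \binom{P-1}{k}
% With the load \alpha = P/N held fixed, the fraction of separable
% dichotomies exhibits a sharp transition at the capacity \alpha_c = 2:
\lim_{N \to \infty} \frac{C(\alpha N, N)}{2^{\alpha N}} =
\begin{cases} 1, & \alpha < 2 \\ 0, & \alpha > 2 \end{cases}
```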
An open-source miniature two-photon microscope for large-scale calcium imaging in freely moving mice
Because benchtop imaging is unsuitable for tasks that require unrestrained movement, investigators have tried for almost two decades to develop miniature 2P microscopes (2P miniscopes) that can be carried on the head of freely moving animals. In this talk, I will first briefly review the development history of this technique, and then report our latest progress on developing the new generation of 2P miniscopes, MINI2P, which overcomes the limits of previous versions by both meeting the requirements for fatigue-free exploratory behavior during extended recording periods and satisfying demands for further increasing the cell yield by an order of magnitude, to thousands of neurons. The performance and reliability of MINI2P are validated by recordings of spatially tuned neurons in three brain regions and in three behavioral assays. All information about MINI2P is open access, with instruction videos, code, and manuals on public repositories, and workshops will be organized to help new users get started. MINI2P permits large-scale and high-resolution calcium imaging in freely moving mice, and opens the door to investigating brain functions during unconstrained natural behaviors.
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting in which one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. At the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena such as blocking, overshadowing, saliency effects, extinction, and interstimulus interval effects. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
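To make the single-neuron version of this idea concrete, the sketch below shows a two-compartment caricature in which a plastic "dendritic" input stream learns, with a purely local rule, to predict the activity imposed by a second "teacher" stream. The rates, the learning rule, and all parameters are illustrative assumptions, not the specific models discussed in the talk.

```python
# Minimal sketch of self-supervised predictive learning in a
# compartmentalized neuron: dendritic weights are adjusted by a local
# prediction error so that one input stream comes to predict the somatic
# response driven by another stream.
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_teach = rng.standard_normal(d) / np.sqrt(d)   # fixed pathway driving the soma
w_dend = np.zeros(d)                            # plastic dendritic weights
eta = 0.05

for _ in range(2000):
    x = rng.standard_normal(d)          # shared stimulus seen by both streams
    soma = np.tanh(w_teach @ x)         # somatic activity set by the teacher stream
    dend = np.tanh(w_dend @ x)          # dendritic prediction
    w_dend += eta * (soma - dend) * x   # local rule: reduce the prediction error

x = rng.standard_normal(d)
# after learning, the dendritic stream alone approximates the teacher-driven response
print(np.tanh(w_teach @ x), np.tanh(w_dend @ x))
```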
Linking GWAS to pharmacological treatments for psychiatric disorders
Genome-wide association studies (GWAS) have identified multiple disease-associated genetic variations across different psychiatric disorders, raising the question of how these genetic variants relate to the corresponding pharmacological treatments. In this talk, I will outline our work investigating whether functional information from a range of open bioinformatics datasets, such as protein-protein interaction (PPI) networks, brain eQTL, and gene expression patterns across the brain, can uncover the relationship between GWAS-identified genetic variation and the genes targeted by current drugs for psychiatric disorders. Focusing on four psychiatric disorders---ADHD, bipolar disorder, schizophrenia, and major depressive disorder---we assess relationships between the gene targets of drug treatments and GWAS hits. We show that, while incorporating information derived from functional bioinformatics data such as the PPI network and spatial gene expression can reveal links for bipolar disorder, the overall correspondence between treatment targets and GWAS-implicated genes in psychiatric disorders rarely exceeds null expectations. This relatively low degree of correspondence across modalities suggests that the genetic mechanisms driving the risk for psychiatric disorders may be distinct from the pathophysiological mechanisms through which pharmacological treatments target symptom manifestations, and that novel approaches for understanding and treating psychiatric disorders may be required.
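For intuition about what "exceeds null expectations" means here, the following generic sketch tests the overlap between two gene sets against a permutation null. It is a simplified stand-in for the study's analyses, which additionally weight genes using PPI, eQTL, and expression data; the gene counts and sets below are arbitrary assumptions.

```python
# Generic permutation test for overlap between two gene sets, e.g.
# drug-target genes and GWAS-implicated genes. Illustrative only; not
# the pipeline used in the study.
import numpy as np

rng = np.random.default_rng(0)
background = np.arange(18000)                              # background gene universe (assumed)
targets = rng.choice(background, 120, replace=False)       # "drug-target" genes (assumed)
gwas = rng.choice(background, 300, replace=False)          # "GWAS-implicated" genes (assumed)

observed = np.intersect1d(targets, gwas).size

# null distribution: overlap of random gene sets of the same size
null = np.array([
    np.intersect1d(rng.choice(background, targets.size, replace=False), gwas).size
    for _ in range(5000)
])
p_value = (null >= observed).mean()
print(observed, null.mean(), p_value)
```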
Investigating activity-dependent processes in cerebral cortex development and disease
The cerebral cortex contains an extraordinary diversity of excitatory projection neurons (PNs) and inhibitory interneurons (INs), wired together to form complex circuits. The spatiotemporally coordinated execution of intrinsic molecular programs by PNs and INs, together with activity-dependent processes, contributes to cortical development and the formation of cortical microcircuits. Alterations of these delicate processes have often been associated with neurological and neurodevelopmental disorders. However, despite the groundbreaking discovery that spontaneous activity in the embryonic brain can shape the regional identities of distinct cortical territories, it is still unclear whether this early activity contributes to defining subtype-specific neuronal fates as well as circuit assembly. In this study, we combined in utero genetic perturbations via the CRISPR/Cas9 system and pharmacological inhibition of selected ion channels with RNA sequencing and live imaging technologies to identify the activity-regulated processes controlling the development of different cortical PN classes, their wiring, and the acquisition of subtype-specific features. Moreover, we generated human induced pluripotent stem cells (iPSCs) from patients affected by a severe, rare, and untreatable form of developmental epileptic encephalopathy. By differentiating cortical organoids from patient-derived iPSCs, we create human models of early electrical alterations for studying the molecular, structural, and functional consequences of the genetic mutations during cortical development. Our ultimate goal is to define the activity-conditioned processes that physiologically occur during the development of cortical circuits, in order to identify novel therapeutic paths to address the pathological consequences of neonatal epilepsies.
The glymphatic system in motor neurone disease
Neurodegenerative diseases are chronic and inexorable conditions characterised by the presence of insoluble aggregates of abnormally ubiquitinated and phosphorylated proteins. Recent evidence also suggests that protein misfolding can propagate throughout the body in a prion-like fashion via the interstitial fluid or cerebrospinal fluid (CSF). As protein aggregation occurs well before the onset of brain damage and symptoms, new biomarkers sensitive to early pathology, together with therapeutic strategies that include eliminating seed proteins and blocking cell-to-cell spread, are of vital importance. The glymphatic system, which facilitates the continuous exchange of CSF and interstitial fluid to clear the brain of waste, presents as a potential biomarker of disease severity, therapeutic target, and drug delivery system. In this webinar, Associate Professor David Wright from the Department of Neuroscience, Monash University, will outline recent advances in using MRI to investigate the glymphatic system. He will also present some of his lab’s recent work investigating glymphatic clearance in preclinical models of motor neurone disease. Associate Professor David Wright is an NHMRC Emerging Leadership Fellow and the Director of Preclinical Imaging in the Department of Neuroscience, Monash University and the Alfred Research Alliance, Alfred Health. His research encompasses the development, application and analysis of advanced magnetic resonance imaging techniques for the study of disease, with a particular emphasis on neurodegenerative disorders. Although less than three years post-PhD, he has published over 60 peer-reviewed journal articles in leading neuroscience journals such as Nature Medicine, Brain, and Cerebral Cortex.
Light-induced moderations in vitality and sleep in the field
Retinal light exposure is modulated by our behavior, and light exposure patterns show strong variation within and between persons. Yet most laboratory studies have investigated the influence of constant lighting settings on human daytime functioning and sleep. In this presentation, I will discuss a series of studies investigating light-induced moderations in sleepiness, vitality and sleep, with a strong focus on the temporal dynamics of these effects and the bi-directional relation between persons' light profiles and their behavior.
Western diet consumption and memory impairment: what, when, and how?
Habitual consumption of a “Western diet”, containing higher than recommended levels of simple sugars and saturated fatty acids, is associated with cognitive impairments in humans and in various experimental animal models. Emerging findings reveal that the specific mnemonic processes that are disrupted by Western diet consumption are those that rely on the hippocampus, a brain region classically linked with memory control and more recently with the higher-order control of food intake. Our laboratory has established rat models in which excessive consumption of different components of a Western diet during the juvenile and adolescent periods of development yields long-term impairments in hippocampal-dependent memory function without concomitant increases in total caloric intake, body weight, or adiposity. Our ongoing work is investigating alterations in the gut microbiome as a potential underlying neurobiological mechanism linking early life unhealthy dietary factors to adverse neurocognitive outcomes.
Exploring mechanisms of human brain expansion in cerebral organoids
The human brain sets us apart as a species, with its size being one of its most striking features. Brain size is largely determined during development as vast numbers of neurons and supportive glia are generated. In an effort to better understand the events that determine the human brain’s cellular makeup, and its size, we use a human model system in a dish, called cerebral organoids. These 3D tissues are generated from pluripotent stem cells through neural differentiation and a supportive 3D microenvironment to generate organoids with the same tissue architecture as the early human fetal brain. Such organoids are allowing us to tackle questions previously impossible with more traditional approaches. Indeed, our recent findings provide insight into regulation of brain size and neuron number across ape species, identifying key stages of early neural stem cell expansion that set up a larger starting cell number to enable the production of increased numbers of neurons. We are also investigating the role of extrinsic regulators in determining numbers and types of neurons produced in the human cerebral cortex. Overall, our findings are pointing to key, human-specific aspects of brain development and function, that have important implications for neurological disease.
Genetic-based brain machine interfaces for visual restoration
Visual restoration is certainly the greatest challenge for brain-machine interfaces, given the high pixel count and high refresh rate it requires. In recent years, we have brought retinal prostheses and optogenetic therapy to successful clinical trials. Concerning visual restoration at the cortical level, prostheses have shown efficacy only for limited periods of time and limited pixel numbers. We are investigating the potential of sonogenetics to develop a non-contact brain-machine interface allowing long-lasting activation of the visual cortex. The presentation will introduce our genetic-based brain-machine interfaces for visual restoration at the retinal and cortical levels.
MBI Webinar on preclinical research into brain tumours and neurodegenerative disorders
WEBINAR 1: Breaking the barrier: Using focused ultrasound for the development of targeted therapies for brain tumours, presented by Dr Ekaterina (Caty) Salimova, Monash Biomedical Imaging. Glioblastoma multiforme (GBM) - brain cancer - is aggressive and difficult to treat, as systemic therapies are hindered by the blood-brain barrier (BBB). Focused ultrasound (FUS) - a non-invasive technique that can induce targeted, temporary disruption of the BBB - is a promising tool to improve GBM treatments. In this webinar, Dr Ekaterina Salimova will discuss the MRI-guided FUS modality at MBI and her research to develop novel targeted therapies for brain tumours. Dr Ekaterina (Caty) Salimova is a Research Fellow in the Preclinical Team at Monash Biomedical Imaging. Her research interests include imaging cardiovascular disease and MRI-guided focused ultrasound for investigating new therapeutic targets in neuro-oncology.
WEBINAR 2: Disposition of the Kv1.3 inhibitory peptide HsTX1[R14A], a novel attenuator of neuroinflammation, presented by Sanjeevini Babu Reddiar, Monash Institute of Pharmaceutical Sciences. The voltage-gated potassium channel Kv1.3 in microglia regulates membrane potential and pro-inflammatory functions, and non-selective blockade of Kv1.3 has shown anti-inflammatory effects and disease improvement in animal models of Alzheimer’s and Parkinson’s diseases. Therefore, specific inhibitors of pro-inflammatory microglial processes with CNS bioavailability are urgently needed, as disease-modifying treatments for neurodegenerative disorders are lacking. In this webinar, PhD candidate Ms Sanju Reddiar will discuss the synthesis and biodistribution of a Kv1.3-inhibitory peptide using a [64Cu]Cu-DOTA labelled conjugate. Sanjeevini Babu Reddiar is a PhD student at the Monash Institute of Pharmaceutical Sciences. She is working on a project identifying the factors governing the brain disposition and blood-brain barrier permeability of a Kv1.3-blocking peptide.
Intrinsic Rhythms in a Giant Single-Celled Organism and the Interplay with Time-Dependent Drive, Explored via Self-Organized Macroscopic Waves
Living Systems often seem to follow, in addition to external constraints and interactions, an intrinsic predictive model of the world, a defining trait of Anticipatory Systems. Here we study rhythmic behaviour in Caulerpa, a marine green alga, which appears to predict the day/night light cycle. Caulerpa consists of differentiated organs resembling leaves, stems and roots. While an individual can exceed a meter in size, it is a single multinucleated giant cell. Active transport has been hypothesized to play a key role in organismal development. It has been an open question in the literature whether rhythmic transport phenomena in this organism are of an autonomous circadian nature. Using Raspberry-Pi cameras, we track the morphogenesis of tens of samples concurrently over weeks, while tracing variations in green coverage at a resolution of tens of seconds. The latter reveals waves propagating over centimeters within a few hours, which we attribute to chloroplast redistribution at the whole-organism scale. Our observations of algal segments regenerating under 12-hour light/dark cycles indicate that the initiation of the waves precedes the external light change. Using time-frequency analysis, we find that the temporal spectrum of these green pulses contains a circadian period. The latter persists over days even under constant illumination, indicative of its autonomous nature. We further explore the system under non-circadian periods, to reveal how the spectral content changes in response. Time-keeping and synchronization are recurring themes in biological research at various levels of description, from subcellular components to ecological systems. We present a seemingly primitive living system that exhibits apparent anticipatory behaviour. This research offers quantitative constraints for theoretical frameworks of such systems.
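To make the spectral claim concrete, here is a minimal sketch (not the study's pipeline) of how a circadian component can be pulled out of a coverage time series: a synthetic signal sampled every 30 seconds is detrended and its periodogram inspected for a peak near a 24-hour period. The sampling rate, noise level and detrending step are illustrative assumptions; the actual analysis uses time-frequency methods to follow how the spectrum evolves across days.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(6)

# Synthetic "green coverage" time series (assumed): samples every 30 s for 6 days,
# a ~24 h oscillation plus slow drift and noise, standing in for the real recordings.
dt_s   = 30.0
t      = np.arange(0, 6 * 24 * 3600, dt_s)
signal = (0.5 * np.sin(2 * np.pi * t / (24 * 3600))       # circadian component
          + 0.001 * t / 3600                              # slow growth trend
          + 0.2 * rng.standard_normal(t.size))

# Remove the linear trend before spectral analysis, then compute the periodogram.
signal = signal - np.polyval(np.polyfit(t, signal, 1), t)
freqs, power = periodogram(signal, fs=1.0 / dt_s)

peak_freq = freqs[1:][np.argmax(power[1:])]                # skip the zero-frequency bin
print("dominant period: %.1f hours" % (1.0 / peak_freq / 3600))
```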
The neuroscience of lifestyle interventions for mental health: the BrainPark approach
Our everyday behaviours, such as physical activity, sleep, diet, meditation, and social connections, have a potent impact on our mental health and the health of our brain. BrainPark is working to harness this power by developing lifestyle-based interventions for mental health and investigating how they do and don’t change the brain, and for whom they are most effective. In this webinar, Dr Rebecca Segrave and Dr Chao Suo will discuss BrainPark’s approach to developing lifestyle-based interventions to help people get better control of compulsive behaviours, and the multi-modality neuroimaging approaches they take to investigating outcomes. The webinar will explore two current BrainPark trials: 1. Conquering Compulsions - investigating the capacity of physical exercise and meditation to alter reward processing and help people get better control of a wide range of unhelpful habits, from drinking to eating to cleaning. 2. The Brain Exercise Addiction Trial (BEAT) - an NHMRC-funded investigation into the capacity of physical exercise to reverse the brain harms caused by long-term heavy cannabis use. Dr Rebecca Segrave is Deputy Director and Head of Interventions Research at BrainPark, the David Winston Turner Senior Research Fellow within the Turner Institute for Brain and Mental Health, and an AHPRA-registered Clinical Neuropsychologist. Dr Chao Suo is Head of Technology and Neuroimaging at BrainPark and a Research Fellow within the Turner Institute for Brain and Mental Health.
Leadership Support and Workplace Psychosocial Stressors
Research evidence indicates that psychosocial stressors such as work-life stress serve as a negative occupational exposure related to poor health behaviors, including smoking, poor food choices, low levels of exercise, and decreased sleep time, as well as to a number of chronic health outcomes. The association between work-life stress and adverse health behaviors and chronic health outcomes suggests that Occupational Health Psychology (OHP) interventions such as leadership support trainings may be helpful in mitigating the effects of work-life stress and improving health, consistent with the Total Worker Health approach. This presentation will review workplace psychosocial stressors and leadership training approaches to reduce stress and improve health, highlighting a randomized controlled trial, the Military Employee Sleep and Health study.
Modulation of oligodendrocyte development and myelination by voltage-gated Ca++ channels
The oligodendrocyte generates CNS myelin, which is essential for normal nervous system function. Thus, investigating the regulatory and signaling mechanisms that control its differentiation and the production of myelin is relevant to our understanding of brain development and of adult pathologies such as multiple sclerosis. We have recently established that the activity of voltage-gated Ca++ channels is crucial for the adequate migration, proliferation and maturation of oligodendrocyte progenitor cells (OPCs). Furthermore, we have found that voltage-gated Ca++ channels that function in synaptic communication between neurons also mediate synaptic signaling between neurons and OPCs. Thus, we hypothesize that voltage-gated Ca++ channels are central components of OPC-neuronal synapses and are the principal ion channels mediating activity-dependent myelination.
How bilingualism modulates the neural mechanisms of selective attention
Learning and using multiple languages places considerable demands on our cognitive system, and has been shown to modulate the mechanisms of selective attention in both children and adults. Yet the nature of these adaptive changes is still not entirely clear. One possibility is that bilingualism boosts the capacity for selective attention; another is that it leads to a different distribution of this finite resource, aimed at supporting optimal performance under the increased processing demands. I will present a series of studies investigating the nature of modifications of selective attention in bilingualism. Using behavioural and neuroimaging techniques, our data confirm that bilingualism modifies the neural mechanisms of selective attention even in the absence of behavioural differences between monolinguals and bilinguals. They further suggest that, instead of enhanced attentional capacity, these neuroadaptive modifications appear to reflect its redistribution, arguably aimed at economising the available resources to support optimal behavioural performance.
Neural correlates of temporal processing in humans
Estimating intervals is essential for adaptive behavior and decision-making. Although several theoretical models have been proposed to explain how the brain keeps track of time, there is still no conclusive evidence favoring any single one. It is often hard to compare different models because their behavioral predictions overlap. For this reason, several studies have looked for neural signatures of temporal processing using methods such as electrophysiological recordings (EEG). However, for this strategy to work, it is essential to have consistent EEG markers of temporal processing. In this talk, I'll present results from several studies investigating how temporal information is encoded in the EEG signal. Specifically, across different experiments, we have investigated whether different neural signatures of temporal processing (such as the CNV, the LPC, and early ERPs): 1. Depend on the task to be executed (whether or not it is a temporal task, or which type of temporal task it is); 2. Encode the physical duration of an interval or how much longer/shorter an interval is relative to a reference. Lastly, I will discuss how these results are consistent with recent proposals that approximate temporal processing with decisional models.
Neural Codes for Natural Behaviors in Flying Bats
This talk will focus on the importance of using natural behaviors in neuroscience research – the “Natural Neuroscience” approach. I will illustrate this point by describing studies of neural codes for spatial behaviors and social behaviors in flying bats – using wireless neurophysiology methods that we developed – and will highlight new neuronal representations that we discovered in animals navigating through 3D spaces, or in very large-scale environments, or engaged in social interactions. In particular, I will discuss: (1) A multi-scale neural code for very large environments, which we discovered in bats flying in a 200-meter-long tunnel. This new type of neural code is fundamentally different from spatial codes reported in small environments – and we show theoretically that it is superior for representing very large spaces. (2) Rapid modulation of position × distance coding in the hippocampus during collision-avoidance behavior between two flying bats. This result provides a dramatic illustration of the extreme dynamism of the neural code. (3) Local-but-not-global order in 3D grid cells – a surprising experimental finding, which can be explained by a simple physics-inspired model that successfully describes both 3D and 2D grids. These results strongly argue against many of the classical, geometrically-based models of grid cells. (4) I will also briefly describe new results on the social representation of other individuals in the hippocampus, in a highly social multi-animal setting. The lecture will propose that neuroscience experiments – in bats, rodents, monkeys or humans – should be conducted under ever more naturalistic conditions.
Sex, drugs, and bad choices: using rodent models to understand decision making
Nearly every aspect of life involves decisions between options that differ in both their expected rewards and the potential costs (such as delay to reward delivery or risk of harm) that accompany those rewards. The ability to choose adaptively when faced with such decisions is critical for well-being and overall quality of life. In neuropsychiatric conditions such as substance use disorders, however, decision making is often compromised, which can prolong and exacerbate their severity and co-morbidities. In this seminar, Dr. Setlow will discuss research in rodent models investigating behavioral and biological mechanisms of cost-benefit decision making. In particular, he will focus on factors (including sex) that contribute to differences in cost-benefit decision making across the population, how variability in decision making is related to substance use, and how substance use can produce long-lasting changes in decision preference.
Adaptive Deep Brain Stimulation: Investigational System Development at the Edge of Clinical Brain Computer Interfacing
Over the last few decades, the use of deep brain stimulation (DBS) to improve the treatment of those with neurological movement disorders has been a critical success story in the development of invasive neurotechnology and a demonstration of the promise of brain-computer interfaces (BCI) to improve the lives of those suffering from incurable neurological disorders. In the last decade, investigational devices capable of recording and streaming neural activity from chronically implanted therapeutic electrodes have supercharged research into clinical applications of BCI, enabling in-human studies investigating the use of adaptive stimulation algorithms to further enhance therapeutic outcomes and improve future device performance. In this talk, Dr. Herron will review ongoing clinical research efforts in the field of adaptive DBS systems and algorithms. This will include an overview of DBS in current clinical practice, the development of bidirectional clinical-use research platforms, ongoing algorithm evaluation efforts, and a discussion of current adoption barriers to be addressed in future work.
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP, measured in vitro, apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured in physiological conditions. Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules that rely on the statistics of coincidences, so we expect that our formalism will be useful for studying other learning processes beyond the calcium-based plasticity rule.
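For readers unfamiliar with calcium-threshold plasticity rules, the sketch below illustrates the general idea in a few lines: calcium decays passively, jumps on pre- and postsynaptic spikes, receives a supralinear boost on near-coincident spikes, and the weight drifts up or down whenever calcium exceeds a potentiation or depression threshold. All parameter values, the coincidence probability p_sync, and the specific functional form are illustrative assumptions, not the rule fitted in the work above.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumed, not from the talk) ---
dt      = 1e-3    # s
T       = 200.0   # s of simulated activity
tau_ca  = 0.02    # s, calcium decay time constant
c_pre   = 0.3     # calcium jump per presynaptic spike alone
c_post  = 0.3     # calcium jump per postsynaptic spike alone
c_nl    = 1.5     # extra (supralinear) jump for near-coincident pre+post spikes
theta_d = 1.0     # depression threshold on calcium
theta_p = 1.6     # potentiation threshold on calcium
gamma_d = 0.1     # depression rate while calcium > theta_d
gamma_p = 0.3     # potentiation rate while calcium > theta_p

def simulate_weight(rate_pre, rate_post, p_sync=0.0, w0=0.5):
    """Evolve one synaptic weight under a calcium-threshold plasticity rule.

    p_sync is the probability that a presynaptic spike is accompanied by a
    synchronous postsynaptic spike (a crude stand-in for temporal correlations).
    """
    n = int(T / dt)
    ca, w = 0.0, w0
    for _ in range(n):
        pre  = rng.random() < rate_pre * dt
        post = rng.random() < rate_post * dt
        sync = pre and (rng.random() < p_sync)   # correlated coincidence
        ca  *= np.exp(-dt / tau_ca)              # passive calcium decay
        ca  += c_pre * pre + c_post * (post or sync)
        if pre and (post or sync):               # supralinear boost on coincidence
            ca += c_nl
        # threshold rule: calcium above theta_p potentiates, above theta_d depresses
        dw = gamma_p * (ca > theta_p) - gamma_d * (ca > theta_d)
        w  = np.clip(w + dw * dt, 0.0, 1.0)
    return w

# low rates with correlations vs. higher uncorrelated rates
print("correlated, 5 Hz   :", simulate_weight(5.0, 5.0, p_sync=0.3))
print("uncorrelated, 20 Hz:", simulate_weight(20.0, 20.0, p_sync=0.0))
```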
Investigating genetic risk for psychiatric diseases in human neural cells
NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics
Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.
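As a point of reference for the simulation side of this problem, the sketch below sets up, under assumed parameters and a generic adaptive integrate-and-fire model rather than the authors' formulation, a population with two adaptation timescales driven by a common time-varying mean plus independent noise, and records the instantaneous population rate that a derived rate equation would need to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- illustrative parameters for an adaptive LIF population (assumed) ---
N, dt, T  = 2000, 1e-4, 2.0          # neurons, time step (s), duration (s)
tau_m     = 0.02                     # membrane time constant (s)
tau_a     = (0.3, 3.0)               # two adaptation timescales (s)
b         = (0.5, 0.2)               # adaptation increments per spike
v_th, v_r = 1.0, 0.0                 # threshold and reset
sigma     = 0.4                      # std of independent input noise per neuron

steps = int(T / dt)
t     = np.arange(steps) * dt
mu    = 1.2 + 0.4 * np.sin(2 * np.pi * 2.0 * t)   # common time-varying mean drive

v = rng.random(N) * v_th              # random initial voltages
a = np.zeros((2, N))                  # two adaptation variables per neuron
pop_rate = np.zeros(steps)

for k in range(steps):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(N)   # independent inputs
    drive = mu[k] - a.sum(axis=0)                          # adaptation subtracts from drive
    v += dt / tau_m * (-v + drive) + noise
    for i in range(2):                                     # multi-timescale adaptation decay
        a[i] -= dt / tau_a[i] * a[i]
    spiked = v >= v_th
    v[spiked] = v_r
    for i in range(2):
        a[i, spiked] += b[i]                               # increment adaptation on spikes
    pop_rate[k] = spiked.mean() / dt                       # instantaneous population rate (Hz)

# the (smoothed) pop_rate trace is the quantity a derived rate equation should reproduce
print("mean population rate: %.1f Hz" % pop_rate.mean())
```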
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from training data. This suggests future possibilities for brain-machine interfaces with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
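The central manipulation, substituting a model layer's activity with brain-derived activity and letting the remainder of the network produce the decision, can be illustrated with a toy linear stand-in. Everything in the sketch below (the two-stage "model", the simulated voxel responses, the ridge mapping) is an assumption for illustration and not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- toy stand-ins (all assumed): a two-stage "model" and simulated brain data ---
n_stim, n_vox, n_feat, n_cls = 200, 50, 20, 4

W1 = rng.standard_normal((n_feat, 8))        # stage 1: inputs -> layer features
W2 = rng.standard_normal((n_cls, n_feat))    # stage 2: layer features -> class scores

stimuli = rng.standard_normal((n_stim, 8))
labels  = np.argmax(stimuli @ W1.T @ W2.T, axis=1)        # the intact model's decisions

layer_act = stimuli @ W1.T                                 # the layer we want to substitute
brain     = layer_act @ rng.standard_normal((n_feat, n_vox)) \
            + 0.5 * rng.standard_normal((n_stim, n_vox))   # noisy "voxel" responses

# fit a ridge mapping from brain activity to the model layer on half the stimuli
train = slice(0, n_stim // 2)
test  = slice(n_stim // 2, n_stim)
lam   = 1.0
B = np.linalg.solve(brain[train].T @ brain[train] + lam * np.eye(n_vox),
                    brain[train].T @ layer_act[train])     # (n_vox, n_feat)

# substitute: drive the rest of the model with brain-predicted layer activity
pred_layer = brain[test] @ B
decisions  = np.argmax(pred_layer @ W2.T, axis=1)
print("decision agreement with the intact model: %.2f"
      % np.mean(decisions == labels[test]))
```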
Investigating the functional single-cell biology of SynGAP1 pathways
Untitled Seminar
Laura Fenlon (Australia): Time shapes all brains: timing of a conserved transcriptional network underlies divergent cortical connectivity routes in mammalian brain development and evolution; Laurent Nguyen (Belgium): Regulation of cerebral cortex morphogenesis by migrating cells; Carol Ann Mason (USA): Wiring the eye to brain for binocular vision: lessons from the albino visual system; Thomas Perlmann (Sweden): Interrogating dopamine neuron development at the single-cell level
Age-related dedifferentiation across representational levels and their relation to memory performance
Episodic memory performance decreases with advancing age. According to theoretical models, such memory decline might be a consequence of age-related reductions in the ability to form distinct neural representations of our past. In this talk, I will present our new age-comparative fMRI study investigating age-related neural dedifferentiation across different representational levels. By combining univariate analyses and searchlight pattern similarity analyses, we found that older adults show reduced category-selective processing in higher visual areas, less specific item representations in occipital regions, and less stable item representations. Dedifferentiation at all of these representational levels was related to memory performance, with item specificity being the strongest contributor. Overall, our results emphasize that age-related dedifferentiation can be observed across the entire cortical hierarchy and may selectively impair memory performance depending on the memory task.
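As a concrete reference for how item specificity is commonly quantified in pattern similarity analyses (the study's exact searchlight pipeline may differ), the sketch below correlates simulated activity patterns for repeated items and contrasts within-item with between-item similarity; the pattern sizes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated patterns: n_items presented twice each, n_vox voxels per searchlight (assumed)
n_items, n_vox = 30, 100
item_signal = rng.standard_normal((n_items, n_vox))
rep1 = item_signal + 0.8 * rng.standard_normal((n_items, n_vox))
rep2 = item_signal + 0.8 * rng.standard_normal((n_items, n_vox))

# correlate every repetition-1 pattern with every repetition-2 pattern
z1 = (rep1 - rep1.mean(1, keepdims=True)) / rep1.std(1, keepdims=True)
z2 = (rep2 - rep2.mean(1, keepdims=True)) / rep2.std(1, keepdims=True)
corr = z1 @ z2.T / n_vox                       # (n_items, n_items) correlation matrix

within  = np.diag(corr).mean()                 # same item across repetitions
between = corr[~np.eye(n_items, dtype=bool)].mean()
print("item specificity (within - between): %.3f" % (within - between))
```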
Microbiota in the health of the nervous system and the response to stress
Microbes have shaped the evolution of eukaryotes and contribute significantly to the physiology and behavior of animals. Some of these traits are inherited by the progeny. Despite the vast importance of microbe-host communication, we still do not know how bacteria change short-term traits or long-term decisions in individuals or communities. In this seminar I will present our work on how commensal and pathogenic bacteria impact specific neuronal phenotypes and decision making. The traits we specifically study are the degeneration and regeneration of neurons and survival behaviors in animals. We use the nematode Caenorhabditis elegans and its dietary bacteria as model organisms. Both the nematode and the bacteria are genetically tractable, simplifying the detection of specific molecules and their effects on measurable characteristics. To identify these molecules we analyze their genomes, transcriptomes and metabolomes, followed by functional in vivo validation. We found that specific bacterial RNAs and bacterially produced neurotransmitters are key to triggering a survival behavior and neuronal protection, respectively. While the RNAs cause responses that last for many generations, we are still investigating whether bacterial metabolites are capable of inducing long-lasting phenotypic changes.
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present here a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
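The hcga workflow can be conveyed with a deliberately small stand-in: compute a handful of interpretable graph features and feed them to a standard classifier. The sketch below does not use hcga's own API (which computes thousands of features); the toy graph classes, feature set and classifier settings are assumptions chosen for illustration.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def graph_features(G):
    """A tiny, interpretable feature vector (hcga computes thousands of these)."""
    degrees = [d for _, d in G.degree()]
    return [
        nx.density(G),
        nx.average_clustering(G),
        nx.transitivity(G),
        nx.degree_assortativity_coefficient(G),
        np.mean(degrees),
        np.max(degrees),
    ]

# toy dataset: two graph classes of similar size but different wiring rules
graphs, labels = [], []
for _ in range(60):
    graphs.append(nx.erdos_renyi_graph(50, 0.08, seed=int(rng.integers(1_000_000))))
    labels.append(0)
    graphs.append(nx.barabasi_albert_graph(50, 2, seed=int(rng.integers(1_000_000))))
    labels.append(1)

X = np.array([graph_features(G) for G in graphs])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```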
“Wasn’t there food around here?”: An Agent-based Model for Local Search in Drosophila
The ability to keep track of one’s location in space is critical for animals navigating to and from a salient location, and its computational basis is now beginning to be unraveled. Here, we tracked flies in a ring-shaped channel as they executed bouts of search triggered by optogenetic activation of sugar receptors. Unlike experiments in open-field arenas, which produce highly tortuous search trajectories, our geometrically constrained paradigm enabled us to monitor flies’ decisions to move toward or away from the fictive food. Our results suggest that flies use path integration to remember the location of a food site even after it has disappeared, and that they retain this memory even after walking around the arena one or more times. To determine the behavioral algorithms underlying Drosophila search, we developed multiple state transition models and found that flies likely accomplish path integration by combining odometry and compass navigation to keep track of their position relative to the fictive food. Our results indicate that whereas flies re-zero their path integrator at food when only one feeding site is present, they adjust their path integrator to a central location between sites when experiencing food at two or more locations. Together, this work provides a simple experimental paradigm and theoretical framework to advance investigations of the neural basis of path integration.
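A minimal sketch of the core bookkeeping behind such a path integrator, accumulating odometry signed by a compass-like heading into a home vector relative to the fictive food, is given below. The ring geometry, step size, turning policy and search bound are illustrative assumptions, and the sketch omits the multi-site re-zeroing and the full state-transition structure of the models described above.

```python
import numpy as np

rng = np.random.default_rng(5)

# --- a minimal ring-arena path integrator (all parameters assumed) ---
circumference = 100.0      # arena circumference in arbitrary units
food_pos      = 0.0        # fictive food location (integrator is zeroed here)
step          = 0.5        # distance moved per time step (odometry signal)
p_turn        = 0.02       # baseline probability of a spontaneous turn
search_bound  = 15.0       # turn back when this far from the remembered food site

pos, heading = food_pos, +1        # heading is +1 or -1 (compass signal)
integrator   = 0.0                 # signed distance from food: odometry x heading

trajectory = []
for t in range(2000):
    pos         = (pos + heading * step) % circumference
    integrator += heading * step                 # path integration
    # behavioural policy: reverse when the integrator says we are "too far" from food
    if abs(integrator) > search_bound or rng.random() < p_turn:
        heading = -heading
    trajectory.append(pos)

# the agent keeps revisiting the fictive food even though nothing marks it in the arena
diff   = (np.array(trajectory) - food_pos + circumference / 2) % circumference - circumference / 2
visits = np.sum(np.abs(diff) < 1.0)
print("time steps spent within 1 unit of the (invisible) food site:", visits)
```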
Investigating the role of recurrent connectivity in connectome-constrained and task-optimized models of the fruit fly’s motion pathway
Bernstein Conference 2024
Navigating through the Latent Spaces in Generative Models
Bernstein Conference 2024
Explainable Machine Learning Approach to Investigating Neural Bases of Brain State Classification
COSYNE 2022
Investigating effort and time sensitivities in rodents performing a treadmill-based foraging task
COSYNE 2022
Calcium imaging-based brain-computer interface for investigating long-term neuronal code dynamics
COSYNE 2023
Dynamic gating of perceptual flexibility by non-classically responsive cortical neurons
COSYNE 2023
Inhibitory gating of non-linear dendrites enables stable learning of assemblies without forgetting
COSYNE 2023
An optofluidic platform for interrogating chemosensory behavior and brainwide neural representation
COSYNE 2023
Behavioral mechanisms of cognitive control in jackdaws (Corvus monedula): Investigating attention and working memory
FENS Forum 2024
Is bigger always more? – Investigating developmental changes in non-symbolic number comparison
FENS Forum 2024
BrainTrawler Lite: Navigating through a multi-scale multi-modal gene transcriptomics data resource through a lightweight user interface
FENS Forum 2024
The calcium link uncovered: Investigating TRPC3 and SKCa channels interaction
FENS Forum 2024
Decoding the developmental vulnerability to psychiatric disorders: Investigating the sexual dimorphism and role of perineuronal nets in habenulo-interpeduncular-system-mediated susceptibility to anxiety
FENS Forum 2024
A deep-learning approach to quasi-instantaneous Markov modeling of ion channel gating
FENS Forum 2024
FreiControl: A cost-efficient, open-source system for investigating individual strategies in decision making of rodents
FENS Forum 2024
IL-13Rα1 and Parkinson's disease: Investigating pathological connections
FENS Forum 2024
Investigating mitochondrial stress signalling in single neurons
FENS Forum 2024
Investigating and modulating κ-opioid receptor G-protein signaling
FENS Forum 2024
Interrogating CDKL5 deficiency disorder using human iPSCs-derived cerebral organoids
FENS Forum 2024
Interrogating modulatory effects of CA2 on the persistence of CA1 plasticity in mice hippocampus
FENS Forum 2024
Investigating the acute impact of sweeteners sucralose and Ace-K on ATP production and mitochondrial respiration in the hypothalamic GT1-7 cell line challenged with increased glucose
FENS Forum 2024
Investigating a new tau aggregation inhibitor “RE01” in PS19 transgenic mice
FENS Forum 2024
Investigating the AMPK-MFF pathway to modulate mitochondrial dynamics as a target for neuroprotection
FENS Forum 2024
Investigating the association between the novel GAP-43 concentration with diffusion tensor imaging indices in Alzheimer's dementia continuum
FENS Forum 2024
Investigating behavioral variables underlying helping behavior in mice
FENS Forum 2024
Investigating behavioural and neural alterations in zebrafish seizure and epilepsy models
FENS Forum 2024
Investigating the cellular and molecular mechanisms of MAST1 mutations in cortical malformation
FENS Forum 2024
Investigating central BDNF expression in an anorexia nervosa-like mouse model: Implications for diagnosis and prognosis
FENS Forum 2024
Investigating changes in interneurons and perineuronal nets in a rodent model of alpha-synucleinopathy
FENS Forum 2024
Investigating design parameters for improved tissue integration in brain-computer-interface technology
FENS Forum 2024
Investigating the development of the GABAergic system using human brain organoids
FENS Forum 2024
Investigating the distribution of cocaine- and amphetamine-regulated transcript (CART) in the human spinal cord
FENS Forum 2024
Investigating the effect of meropenem on neurogenesis in adult rats
FENS Forum 2024
Investigating the effects of chronic aerobic exercise on cognitive deficits and inflammatory markers in the sub-chronic phencyclidine mouse model for schizophrenia research
FENS Forum 2024
Investigating the effects of GSK-3β inhibition on cognitive deficits in the sub-chronic phencyclidine model for cognitive impairment associated with schizophrenia
FENS Forum 2024
Investigating the efficiency of a mitochondria booster to improve anxiety-related behaviors: Accumbal metabolic and neurobiological mechanisms
FENS Forum 2024
Investigating functional consequences of astrocyte ingestion of synapses in an ex vivo model of Alzheimer’s disease-like synapse loss
FENS Forum 2024
Investigating the glucose transporter 2 positive cells in the medial prefrontal cortex and their association with posttraumatic stress disorder in a mice model
FENS Forum 2024
Investigating hippocampal synaptic plasticity in Schizophrenia: a computational and experimental approach using MEA recordings
Bernstein Conference 2024