Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision in real-world settings. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
How the presynapse forms and functions
Nervous system function relies on the polarized architecture of neurons, established by directional transport of pre- and postsynaptic cargoes. While delivery of postsynaptic components depends on the secretory pathway, the identity of the membrane compartment(s) that supply presynaptic active zone (AZ) and synaptic vesicle (SV) proteins is largely unknown. I will discuss recent advances in our understanding of how key components of the presynaptic machinery for neurotransmitter release are transported and assembled, focusing on our studies in genome-engineered human induced pluripotent stem cell-derived neurons. Specifically, I will focus on the composition and cell biological identity of the axonal transport vesicles that shuttle key components of neurotransmission to nascent synapses, and on the machinery for axonal transport and its control by signaling lipids. Our studies identify a crucial mechanism mediating the delivery of SV and active zone proteins to developing synapses and reveal connections to neurological disorders. In the second part of my talk, I will discuss how exocytosis and endocytosis are coupled to maintain presynaptic membrane homeostasis. I will present unpublished data on the role of membrane tension in the coupling of exocytosis and endocytosis at synapses. We have identified an endocytic BAR domain protein that senses alterations in membrane tension caused by the exocytotic fusion of SVs and initiates compensatory endocytosis to restore plasma membrane area. Interference with this mechanism results in defects in the coupling of presynaptic exocytosis and SV recycling at human synapses.
Functional Plasticity in the Language Network – Evidence from Neuroimaging and Neurostimulation
Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala
This study by Marios Abatis et al. demonstrates how fear conditioning strengthens synaptic connections between engram cells in the lateral amygdala, revealed through optogenetic identification of neuronal ensembles and electrophysiological measurements. The work provides crucial insights into memory formation mechanisms at the synaptic level, with implications for understanding anxiety disorders and developing targeted interventions. Presented by Dr. Kenneth Hayworth, this journal club will explore the paper's methodology linking engram cell reactivation with synaptic plasticity measurements, and discuss implications for memory decoding research.
Shaping connections through remote gene regulation
In the third of this year’s Brain Prize webinars, Oscar Marin (King's College London, UK), Leslie Griffith (Brandeis University, USA), and Kelsey Martin (Simons Foundation, USA) will present their work on shaping connections through remote gene regulation. Each speaker will present for 25 minutes, and the webinar will conclude with an open discussion. The webinar will be moderated by the winners of the 2023 Brain Prize, Michael Greenberg, Erin Schuman and Christine Holt.
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To preserve these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of WM orthogonalization learning and the underlying mechanisms by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of the neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both the sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data.
Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
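The orthogonalization described above is commonly quantified via principal angles between coding subspaces. The sketch below is illustrative only (not the authors' analysis code; the population size and subspace dimensions are arbitrary choices) and shows how near-identical and unrelated subspaces separate under this measure:

```python
import numpy as np

rng = np.random.default_rng(0)

def principal_angles(A, B):
    """Principal angles (radians) between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(s, 0.0, 1.0))

# Hypothetical 2D coding subspaces in an N-neuron population.
N = 50
sample = rng.standard_normal((N, 2))                 # "sample odor" subspace
noisy = sample + 0.1 * rng.standard_normal((N, 2))   # nearly the same subspace
other = rng.standard_normal((N, 2))                  # unrelated subspace

print("angles to noisy copy (deg):", np.degrees(principal_angles(sample, noisy)).round(1))
print("angles to unrelated (deg): ", np.degrees(principal_angles(sample, other)).round(1))
```

Early-training activity would resemble the small angles of the noisy copy; late-training activity confined to the sample subspace would approach the near-orthogonal case.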
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to appropriately respond to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brains learn to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings for supporting newly sighted children as they transition back into society and school, given that their needs and possibilities change significantly upon the introduction of vision into their lives.
Assigning credit through the “other” connectome
Learning in neural networks requires assigning the right values to thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. This work has suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed in ever greater fidelity by ongoing molecular and genomics studies: the set of modulatory pathways local to a given circuit, which forms an intriguing second type of connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, rearing zebrafish in darkness revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory-guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical work has shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/√K. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
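The strong-coupling scaling in the abstract can be illustrated numerically: with efficacies of order 1/√K, input fluctuations stay O(1) as K grows, while any unbalanced mismatch between excitatory and inhibitory mean drives grows as √K, which is why a dynamic balance is required for irregular firing. A minimal sketch (the 1/√K scaling is the only ingredient from the abstract; rates and all other numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def net_input_stats(K, trials=500):
    """Mean and std of the summed input from K excitatory and K inhibitory
    Poisson sources, with synaptic efficacies scaled as 1/sqrt(K)."""
    J = 1.0 / np.sqrt(K)
    rate_e, rate_i = 10.0, 10.5   # slight E/I rate mismatch (arbitrary values)
    e = rng.poisson(rate_e, size=(trials, K)).sum(axis=1)
    i = rng.poisson(rate_i, size=(trials, K)).sum(axis=1)
    net = J * (e - i)
    return net.mean(), net.std()

for K in (100, 1000, 10000):
    m, s = net_input_stats(K)
    print(f"K={K:6d}  mean={m:7.2f}  std={s:5.2f}")
```

The printed std stays roughly constant across K while the unbalanced mean grows, so mean excitation and inhibition must cancel dynamically for the O(1) fluctuations to drive irregular firing.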
Bridging the gap between artificial models and cortical circuits
Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach: by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct excitatory and inhibitory neuron populations. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that allow access to individual neurons and the broader population dynamics. We compare these dynamics to two-population artificial networks of single-compartment neurons with random sparse connections and show that they produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to use the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare neural dynamics of disease and control. We found, for example, that cultures derived from epilepsy patients developed progressively more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care.
This work starts bridging an important theoretical-experimental neuroscience gap for advancing our understanding of mammalian neuron dynamics.
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
Nonlinear computations in spiking neural networks through multiplicative synapses
The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as it requires additional nonlinearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term multiplicative spike coding networks (mSCNs). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how higher-order polynomials can be implemented precisely by coupled networks that use only pairwise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs, such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
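The target class — polynomial dynamical systems whose nonlinearities are pairwise products — includes classic chaotic systems. As a purely illustrative sketch (this simulates the target dynamics directly; it is not the spiking mSCN implementation), the Lorenz system uses only linear terms and the pairwise products x·z and x·y:

```python
import numpy as np

def lorenz_step(x, dt=1e-3, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system: a polynomial vector field built
    from linear terms and pairwise products only (x*z and x*y)."""
    dx = sigma * (x[1] - x[0])
    dy = x[0] * (rho - x[2]) - x[1]   # pairwise product x*z
    dz = x[0] * x[1] - beta * x[2]    # pairwise product x*y
    return x + dt * np.array([dx, dy, dz])

# Higher-order terms (e.g. x^3) can be reduced to chains of pairwise
# products via auxiliary variables, mirroring the coupled-network approach.
x = np.array([1.0, 1.0, 1.0])
for _ in range(5000):
    x = lorenz_step(x)
print("state after 5 time units:", x.round(3))
```

An mSCN tracking these dynamics would replace the explicit products with pairwise multiplicative synapses whose weights are derived, not trained.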
Lateral entorhinal cortex directly influences medial entorhinal cortex through synaptic connections in layer 1
Standard models of episodic memory suggest that lateral (LEC) and medial entorhinal cortex (MEC) send independent inputs to the hippocampus, each carrying different types of information. Here, we describe a pathway by which information is integrated between LEC and MEC prior to reaching hippocampus. We demonstrate that LEC sends strong projections to MEC arising from neurons that receive neocortical inputs. Activation of LEC inputs drives excitation of hippocampal-projecting neurons in MEC layer 2, typically followed by inhibition that is accounted for by parallel activation of local inhibitory neurons. We therefore propose that local circuits in MEC may support integration of ‘what’ and ‘where’ information.
Neural Circuit Mechanisms of Pattern Separation in the Dentate Gyrus
The ability to discriminate different sensory patterns by disentangling their neural representations is an important property of neural networks. While a variety of learning rules are known to be highly effective at fine-tuning synapses to achieve this, less is known about how different cell types in the brain can facilitate this process by providing architectural priors that bias the network towards sparse, selective, and discriminable representations. We studied this by simulating a neuronal network modelled on the dentate gyrus—an area characterised by sparse activity associated with pattern separation in spatial memory tasks. To test the contribution of different cell types to these functions, we presented the model with a wide dynamic range of input patterns and systematically added or removed different circuit elements. We found that recruiting feedback inhibition indirectly via recurrent excitatory neurons proved particularly helpful in disentangling patterns, and show that simple alignment principles for excitatory and inhibitory connections are a highly effective strategy.
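The architectural prior at work here — divergent expansion followed by strong inhibition that sparsifies activity — can be sketched in a few lines. The snippet below is a toy stand-in (a random projection plus winner-take-all thresholding in place of explicit interneuron types), not the authors' dentate gyrus model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify(x, frac=0.05):
    """Keep only the top `frac` of units active: a crude stand-in for
    strong feedback inhibition enforcing sparse codes."""
    k = max(1, int(frac * x.size))
    out = np.zeros_like(x)
    out[np.argsort(x)[-k:]] = 1.0
    return out

def similarity(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

n_in, n_out = 200, 1000
W = rng.standard_normal((n_out, n_in))   # divergent random projection

p1 = rng.random(n_in)
p2 = 0.7 * p1 + 0.3 * rng.random(n_in)   # highly overlapping input pattern

in_sim = similarity(p1, p2)
out_sim = similarity(sparsify(W @ p1), sparsify(W @ p2))
print(f"input similarity {in_sim:.3f} -> output similarity {out_sim:.3f}")
```

Expansion plus sparsification pushes similar inputs toward more distinct output codes; in the full model, recruiting feedback inhibition via recurrent excitation strengthens this separation further.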
Molecular Logic of Synapse Organization and Plasticity
Connections between nerve cells called synapses are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant adjustment of the strength of synaptic communication by neuronal activity, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab research program comprises three major themes. One, to assess how biochemical switches control the activity of synapse organizing proteins, how these switches act through their binding partners and how these processes are regulated to correct impaired synaptic function in disease. Two, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Three, to address how synapses are formed in the developing brain and maintained in the mature brain and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and enabled us to identify targets for therapeutic intervention.
Malignant synaptic plasticity in pediatric high-grade gliomas
Pediatric high-grade gliomas (pHGG) are a devastating group of diseases that urgently require novel therapeutic options. We have previously demonstrated that pHGGs directly synapse onto neurons and that the subsequent tumor cell depolarization, mediated by calcium-permeable AMPA channels, promotes their proliferation. The regulatory mechanisms governing these postsynaptic connections are unknown. Here, we investigated the role of BDNF-TrkB signaling in modulating the plasticity of the malignant synapse. BDNF ligand activation of its canonical receptor, TrkB (encoded by the gene NTRK2), has been shown to be one important modulator of synaptic regulation in the normal setting. Electrophysiological recordings of glioma cell membrane properties, in response to acute neurotransmitter stimulation, demonstrate an inward current resembling AMPA receptor (AMPAR)-mediated excitatory neurotransmission. Extracellular BDNF increases the amplitude of this glutamate-induced tumor cell depolarization, and this effect is abrogated in NTRK2 knockout glioma cells. Upon examining tumor cell excitability using in situ calcium imaging, we found that BDNF increases the intensity of glutamate-evoked calcium transients in GCaMP6s-expressing glioma cells. Western blot analysis indicates that the tumor's AMPAR properties are altered downstream of BDNF-induced TrkB activation in glioma. Cell membrane protein capture (via biotinylation) and live imaging of pH-sensitive GFP-tagged AMPAR subunits demonstrate an increase in calcium-permeable channels at the tumor's postsynaptic membrane in response to BDNF. We find that BDNF-TrkB signaling promotes neuron-to-glioma synaptogenesis, as measured by high-resolution confocal and electron microscopy in culture and tumor xenografts. Our analysis of published pHGG transcriptomic datasets, together with brain slice conditioned medium experiments in culture, indicates the tumor microenvironment as the chief source of BDNF ligand.
Disruption of the BDNF-TrkB pathway in patient-derived orthotopic glioma xenograft models, both genetically and pharmacologically, results in increased overall survival and a reduced tumor proliferation rate. These findings suggest that gliomas leverage normal mechanisms of plasticity to modulate the excitatory channels involved in synaptic neurotransmission, and they reveal the potential to target the regulatory components of glioma circuit dynamics as a therapeutic strategy for these lethal cancers.
Optogenetic dissection of local and long-range connections in prefrontal circuits
How are nervous systems remodeled in complex metazoans?
Early in development the nervous system is constructed with far too many neurons that make an excessive number of synaptic connections. Later, a wave of neuronal remodeling radically reshapes nervous system wiring and cell numbers through the selective elimination of excess synapses, axons and dendrites, and even whole neurons. This remodeling is widespread across the nervous system, extensive in terms of how much individual brain regions can change (e.g. in some cases 50% of neurons integrated into a brain circuit are eliminated), and thought to be essential for optimizing nervous system function. Perturbations of neuronal remodeling are thought to underlie devastating neurodevelopmental disorders including autism spectrum disorder, schizophrenia, and epilepsy. This seminar will discuss our efforts to use the relatively simple nervous system of Drosophila to understand the mechanistic basis by which cells, or parts of cells, are specified for removal and eliminated from the nervous system.
Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without the need for supervision or implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete, testable predictions.
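For readers unfamiliar with the BCM theory mentioned above, its core ingredient is a Hebbian rule whose sign flips at a sliding threshold that tracks recent postsynaptic activity. The sketch below is a generic BCM-style rule with arbitrary parameters, not the self-supervised rule derived in the talk:

```python
import numpy as np

rng = np.random.default_rng(3)

n_in, eta, tau_theta = 20, 1e-3, 100.0
w = 0.2 * rng.random(n_in)    # initial weights (arbitrary scale)
theta = 0.5                   # sliding modification threshold

for _ in range(20000):
    x = rng.random(n_in)                  # presynaptic activity sample
    y = max(0.0, float(w @ x))            # rectified postsynaptic rate
    w += eta * y * (y - theta) * x        # BCM: LTD below theta, LTP above
    w = np.clip(w, 0.0, None)             # keep weights non-negative
    theta += (y**2 - theta) / tau_theta   # threshold tracks <y^2>

print(f"final threshold {theta:.2f}, mean weight {w.mean():.3f}")
```

The sliding threshold stabilizes the otherwise runaway Hebbian term; this is the structural property the talk's self-supervised derivation shares with BCM.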
From a by-stander to an influencer: How microglia adapt to altered environments and influence neuronal activity
Microglia, traditionally classified as immune-responsive cells, adjust synaptic connections during development and disease. In the adult nervous system, however, their role has largely been reduced to that of an observer. In my research group, we are interested in how microglia help establish and maintain accurate neuronal circuit function in the retina and in the visual cortex. In my talk, I will introduce our strategies for deciphering the functional identity of microglia, and how this information guided us to microglia-enabled extracellular matrix remodeling and the reinstatement of juvenile-like plasticity in the adult brain.
Learning binds novel inputs into functional synaptic clusters via spinogenesis
Learning is known to induce the formation of new dendritic spines, but despite decades of effort, the functional properties of new spines in vivo remain unknown. Here, using a combination of longitudinal in vivo 2-photon imaging of the glutamate reporter, iGluSnFR, and correlated electron microscopy (CLEM) of dendritic spines on the apical dendrites of L2/3 excitatory neurons in the motor cortex during motor learning, we describe a framework of new spines' formation, survival, and resulting function. Specifically, our data indicate that the potentiation of a subset of clustered, pre-existing spines showing task-related activity in early sessions of learning creates a micro-environment of plasticity within dendrites, wherein multiple filopodia sample the nearby neuropil, form connections with pre-existing boutons connected to allodendritic spines, and are then selected for survival based on co-activity with nearby task-related spines. Thus, the formation and survival of new spines is determined by the functional micro-environment of dendrites. After formation, new spines show preferential co-activation with nearby task-related spines. This synchronous activity is more specific to movements than activation of the individual spines in isolation, and further, is coincident with movements that are more similar to the learned pattern. Thus, new spines functionally engage with their parent clusters to signal the learned movement. Finally, by reconstructing the axons associated with new spines, we found that they synapse with axons previously unrepresented in these dendritic domains, suggesting that the strong local co-activity structure exhibited by new spines is likely not due to axon sharing. Thus, learning involves the binding of new information streams into functional synaptic clusters to subserve the learned behavior.
The neuroscience of lifestyle interventions for mental health: the BrainPark approach
Our everyday behaviours, such as physical activity, sleep, diet, meditation, and social connections, have a potent impact on our mental health and the health of our brain. BrainPark is working to harness this power by developing lifestyle-based interventions for mental health and investigating how they do and don’t change the brain, and for whom they are most effective. In this webinar, Dr Rebecca Segrave and Dr Chao Suo will discuss BrainPark’s approach to developing lifestyle-based interventions to help people get better control of compulsive behaviours, and the multi-modality neuroimaging approaches they take to investigating outcomes. The webinar will explore two current BrainPark trials: 1. Conquering Compulsions - investigating the capacity of physical exercise and meditation to alter reward processing and help people get better control of a wide range of unhelpful habits, from drinking to eating to cleaning. 2. The Brain Exercise Addiction Trial (BEAT) - an NHMRC-funded investigation into the capacity of physical exercise to reverse the brain harms caused by long-term heavy cannabis use. Dr Rebecca Segrave is Deputy Director and Head of Interventions Research at BrainPark, the David Winston Turner Senior Research Fellow within the Turner Institute for Brain and Mental Health, and an AHPRA-registered Clinical Neuropsychologist. Dr Chao Suo is Head of Technology and Neuroimaging at BrainPark and a Research Fellow within the Turner Institute for Brain and Mental Health.
Flexible motor sequence generation by thalamic control of cortical dynamics through low-rank connectivity perturbations
One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales to efficiently shape structured behaviors. I will present a model that clarifies how these complex computations could be performed in the mammalian brain, with an emphasis on the learning of an extendable library of autonomous motor motifs and the flexible stringing of these motifs in motor sequences. To build this model, we took advantage of the fact that the anatomy of the circuits involved is well known. Our results show how these architectural constraints lead to a principled understanding of how strategically positioned plastic connections located within motif-specific thalamocortical loops can interact with cortical dynamics that are shared across motifs to create an efficient form of modularity. This occurs because the cortical dynamics can be controlled by the activation of as few as one thalamic unit, which induces a low-rank perturbation of the cortical connectivity, and significantly expands the range of outputs that the network can produce. Finally, our results show that transitions between any motifs can be facilitated by a specific thalamic population that participates in preparing cortex for the execution of the next motif. Taken together, our model sheds light on the neural network mechanisms that can generate flexible sequencing of varied motor motifs.
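The core mechanism described above, a single thalamic unit inducing a low-rank change in effective cortical connectivity, can be sketched in a few lines. This is a toy illustration with made-up loop weights `u` and `v`, not the authors' trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # cortical units

# Baseline recurrent cortical connectivity (random, for illustration only).
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))

# Hypothetical thalamocortical loop weights for one thalamic unit:
# v reads cortex out to the thalamic unit, u projects back into cortex.
u = rng.normal(0.0, 1.0 / np.sqrt(N), (N, 1))
v = rng.normal(0.0, 1.0 / np.sqrt(N), (N, 1))

# While the thalamic unit is active, the effective cortical connectivity
# gains an outer-product term: J_eff = J + u v^T, a rank-1 perturbation.
J_eff = J + u @ v.T
```

Activating a different thalamic unit swaps in a different (u, v) pair, so each motif corresponds to a cheap, switchable modification of the same shared cortical dynamics.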
Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics
Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter.
Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
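As a point of reference for the dynamic synapses discussed above, the phenomenological Tsodyks-Markram model assigns each spike an efficacy that depends on the history of the spike train. A minimal sketch follows, using illustrative parameter values and one common discrete update ordering:

```python
import numpy as np

def tm_efficacies(spike_times, U=0.5, tau_d=0.2, tau_f=1.5):
    """Per-spike efficacies under the Tsodyks-Markram model.
    u: utilization (facilitation variable), x: resources (depression
    variable). Parameter values here are illustrative, not fitted."""
    u, x, t_prev = 0.0, 1.0, None
    eff = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resources recover
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays
        u = u + U * (1.0 - u)   # utilization jumps at each spike
        eff.append(u * x)       # efficacy released by this spike
        x = x * (1.0 - u)       # resources consumed by release
        t_prev = t
    return eff

# With these parameters, a rapid burst is depressing: each successive
# spike in the burst transmits less than the one before it.
eff = tm_efficacies([0.0, 0.01, 0.02, 0.03])
```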
Keeping your Brain in Balance: the Ups and Downs of Homeostatic Plasticity (virtual)
Our brains must generate and maintain stable activity patterns over decades of life, despite the dramatic changes in circuit connectivity and function induced by learning and experience-dependent plasticity. How do our brains achieve this balance between the opposing needs for plasticity and stability? Over the past two decades, we and others have uncovered a family of “homeostatic” negative feedback mechanisms that are theorized to stabilize overall brain activity while allowing specific connections to be reconfigured by experience. Here I discuss recent work in which we demonstrate that individual neocortical neurons in freely behaving animals indeed have a homeostatic activity set-point, to which they return in the face of perturbations. Intriguingly, this firing rate homeostasis is gated by sleep/wake states in a manner that depends on the direction of homeostatic regulation: upward firing rate homeostasis occurs selectively during periods of active wake, while downward firing rate homeostasis occurs selectively during periods of sleep, suggesting that an important function of sleep is to temporally segregate bidirectional plasticity. Finally, we show that firing rate homeostasis is compromised in an animal model of autism spectrum disorder. Together our findings suggest that loss of homeostatic plasticity in some neurological disorders may render central circuits unable to compensate for the normal perturbations induced by development and learning.
Frontal circuit specialisations for information search and decision making
During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to sample information about their environment, evaluate that information, plan, and decide between different courses of action. What quantities do these circuits compute as information is sampled and a decision is made? And how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
Input and target-selective plasticity in sensory neocortex during learning
Behavioral experience shapes neural circuits, adding and subtracting connections between neurons that will ultimately control sensation and perception. We are using natural sensory experience to uncover basic principles of information processing in the cerebral cortex, with a focus on how sensory learning can selectively alter synaptic strength. I will discuss recent findings that differentiate reinforcement learning from sensory experience, showing rapid and selective plasticity of thalamic and inhibitory synapses within primary sensory cortex.
A novel form of retinotopy in area V2 highlights location-dependent feature selectivity in the visual system
Topographic maps are a prominent feature of brain organization, reflecting local and large-scale representation of the sensory surface. Traditionally, such representations in early visual areas are conceived as retinotopic maps preserving ego-centric retinal spatial location while ensuring that other features of visual input are uniformly represented for every location in space. I will discuss our recent findings of a striking departure from this simple mapping in the secondary visual area (V2) of the tree shrew that is best described as a sinusoidal transformation of the visual field. This sinusoidal topography is ideal for achieving uniform coverage in an elongated area like V2, as predicted by mathematical models designed for wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and functional response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for large-scale cortical organization. Furthermore, our work challenges the framework of relatively independent encoding of location and features in the visual system, showing instead location-dependent feature sensitivity produced by specialized processing of different features in different spatial locations. In the second part of the talk, I will propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual input, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. The relevant papers: V2 retinotopy (Sedigh-Sarvestani et al., Neuron, 2021); location-dependent feature sensitivity (Sedigh-Sarvestani et al., under review, 2022).
Why would we need Cognitive Science to develop better Collaborative Robots and AI Systems?
While classical industrial robots are mostly designed for repetitive tasks, assistive robots will be challenged by a variety of different tasks in close contact with humans. Here, learning through direct interaction with humans provides a potentially powerful tool for an assistive robot to acquire new skills and to incorporate prior human knowledge during the exploration of novel tasks. Moreover, an intuitive interactive teaching process may allow non-programming experts to contribute to robotic skill learning and may help to increase acceptance of robotic systems in shared workspaces and everyday life. In this talk, I will discuss my recent research on interactive robot skill learning and the remaining challenges on the route to human-centered teaching of assistive robots. In particular, I will also discuss potential connections and overlap with cognitive science. The presented work covers learning a library of probabilistic movement primitives from human demonstrations, intention-aware adaptation of learned skills in shared workspaces, and multi-channel interactive reinforcement learning for sequential tasks.
Wiring Minimization of Deep Neural Networks Reveals Conditions in which Multiple Visuotopic Areas Emerge
The visual system is characterized by multiple mirrored visuotopic maps, with each repetition corresponding to a different visual area. In this work we explore whether such visuotopic organization can emerge as a result of minimizing the total wire length between neurons connected in a deep hierarchical network. Our results show that networks with purely feedforward connectivity typically result in a single visuotopic map, and in certain cases no visuotopic map emerges. However, when we modify the network by introducing lateral connections, with sufficient lateral connectivity among neurons within layers, multiple visuotopic maps emerge, where some connectivity motifs yield mirrored alternations of visuotopic maps, a signature of biological visual system areas. These results demonstrate that different connectivity profiles have different emergent organizations under the minimum total wire length hypothesis, and highlight that characterizing the large-scale spatial organization of tuning properties in a biological system might also provide insights into the underlying connectivity.
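The objective in question can be made concrete with a toy example: for fixed connectivity, total wire length is the summed distance between connected units, and a topographic (retinotopic) layout keeps it low. A minimal sketch with a hypothetical nearest-neighbour feedforward layer:

```python
import numpy as np

rng = np.random.default_rng(1)

def total_wire_length(pos_a, pos_b, W):
    """Summed 1-D distance over all connected pairs (W[i, j] != 0)."""
    i, j = np.nonzero(W)
    return np.abs(pos_a[i] - pos_b[j]).sum()

n = 20
# Hypothetical feedforward layer: unit i connects to retinotopic
# neighbours i-1, i, i+1 in the layer below.
W = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - 1), min(n, i + 2)):
        W[i, j] = 1.0

topographic = np.arange(n, dtype=float)   # layout preserving input order
scrambled = rng.permutation(topographic)  # random spatial arrangement

L_topo = total_wire_length(topographic, topographic, W)
L_rand = total_wire_length(topographic, scrambled, W)
```

Under this cost, the topographic layout beats the scrambled one; the interesting question the abstract addresses is which layouts minimize the same cost once lateral and hierarchical connections are added.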
NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks
Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
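The intuition that prediction saves energy can be illustrated with a deliberately simple toy, not the paper's trained recurrent network: if a unit only transmits the residual between input and prediction, and cost scales with activity, then predicting the predictable part of the input cuts the cost.

```python
import numpy as np

# Toy illustration: a unit that transmits the residual between signal
# and prediction pays an "energy" cost proportional to its activity,
# so accurate predictions reduce energy consumption.
def energy(signal, prediction, lam=1.0):
    residual = signal - prediction       # what still has to be transmitted
    return lam * np.abs(residual).sum()  # activity-proportional cost

s = np.array([1.0, 1.0, 1.0, 5.0])       # mostly predictable, one surprise

e_no_pred = energy(s, np.zeros_like(s))              # transmit everything
e_pred = energy(s, np.array([1.0, 1.0, 1.0, 1.0]))   # transmit only surprise
```

A network trained to minimise such a cost has an incentive to build internal predictions, which is the mechanism the abstract argues gives rise to prediction and error units.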
NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation
A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream with layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: If a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuronal firing rates) we found that in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held-out from training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
Entering the Loop: Strong and specific connections between retina and midbrain revealed by large-scale paired recordings
Networking—the key to success… especially in the brain
In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to elicit efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular scale) brain networks inferred from spontaneous activity show similar complex topology to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulation of model parameters as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for treatment of neurodevelopmental disorders like Rett Syndrome.
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
What is the function of auditory cortex when it develops in the absence of acoustic input?
Cortical plasticity is the neural mechanism by which the cerebrum adapts itself to its environment, while at the same time making it vulnerable to impoverished sensory or developmental experiences. Like the visual system, auditory development passes through a series of sensitive periods in which circuits and connections are established and then refined by experience. Current research is expanding our understanding of cerebral processing and organization in the deaf. In the congenitally deaf, higher-order areas of "deaf" auditory cortex demonstrate significant crossmodal plasticity with neurons responding to visual and somatosensory stimuli. This crucial cerebral function results in compensatory plasticity. Not only can the remaining inputs reorganize to substitute for those lost, but this additional circuitry also confers enhanced abilities to the remaining systems. In this presentation we will review our present understanding of the structure and function of “deaf” auditory cortex using psychophysical, electrophysiological, and connectional anatomy approaches and consider how this knowledge informs our expectations of the capabilities of cochlear implants in the developing brain.
Bidirectionally connected cores in a mouse connectome: Towards extracting the brain subnetworks essential for consciousness
Where in the brain consciousness resides remains unclear. It has been suggested that the subnetworks supporting consciousness should be bidirectionally (recurrently) connected because both feed-forward and feedback processing are necessary for conscious experience. Accordingly, evaluating which subnetworks are bidirectionally connected and the strength of these connections would likely aid the identification of regions essential to consciousness. Here, we propose a method for hierarchically decomposing a network into cores with different strengths of bidirectional connection, as a means of revealing the structure of the complex brain network. We applied the method to a whole-brain mouse connectome. We found that cores with strong bidirectional connections consisted of regions presumably essential to consciousness (e.g., the isocortical and thalamic regions, and claustrum) and did not include regions presumably irrelevant to consciousness (e.g., cerebellum). By contrast, we could not find such correspondence between cores and consciousness when we applied other simple methods that ignored bidirectionality. These findings suggest that our method provides a novel insight into the relation between bidirectional brain network structures and consciousness. Our recent preprint on this work is here: https://doi.org/10.1101/2021.07.12.452022.
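A simple proxy for the proposed decomposition: score each edge by the strength of its weaker direction, threshold that score, and take connected components of the surviving graph. This sketch is in the spirit of the abstract, not the authors' exact hierarchical method:

```python
import numpy as np

def bidirectional_cores(W, threshold):
    """Connected components of the graph whose edge strength is
    min(W[i, j], W[j, i]), i.e. the weaker of the two directions.
    Singleton components are dropped."""
    S = np.minimum(W, W.T)   # bidirectional strength of each pair
    A = S >= threshold       # keep sufficiently reciprocal edges
    n = len(W)
    seen, cores = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]   # depth-first component search
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(int(u) for u in np.nonzero(A[v])[0] if u not in comp)
        seen |= comp
        if len(comp) > 1:
            cores.append(sorted(comp))
    return cores

# Toy network: nodes 0 and 1 are reciprocally coupled; node 2 receives
# only a feed-forward projection from node 0, so it joins no core.
W = np.array([[0.0, 0.9, 0.8],
              [0.9, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
```

Sweeping the threshold from high to low yields a nested hierarchy of cores, which is the kind of structure the method exposes in the mouse connectome.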
Gap Junction Coupling between Photoreceptors
Simply put, the goal of my research is to describe the neuronal circuitry of the retina. The organization of the mammalian retina is certainly complex but it is not chaotic. Although there are many cell types, most adhere to a relatively constant morphology and they are distributed in non-random mosaics. Furthermore, each cell type ramifies at a characteristic depth in the retina and makes a stereotyped set of synaptic connections. In other words, these neurons form a series of local circuits across the retina. The next step is to identify the simplest and commonest of these repeating neural circuits. They are the building blocks of retinal function. If we think of it in this way, the retina is a fabulous model for the rest of the CNS. We are interested in identifying specific circuits and cell types that support the different functions of the retina. For example, there appear to be specific pathways for rod and cone mediated vision. Rods are used under low light conditions and rod circuitry is specialized for high sensitivity when photons are scarce (for example, starlight when you’re out camping). The hallmark of the rod-mediated system is monochromatic vision. In contrast, the cone circuits are specialized for high acuity and color vision under relatively bright or daylight conditions. Individual neurons may be filled with fluorescent dyes under visual control. This is achieved by impaling the cell with a glass microelectrode using a 3D micromanipulator. We are also interested in the diffusion of dye through coupled neuronal networks in the retina. The dye filled cells are also combined with antibody labeling to reveal neuronal connections and circuits. This triple-labeled material may be viewed and reconstructed in 3 dimensions by multi-channel confocal microscopy. We have our own confocal microscope facility in the department and timeslots are available to students in my lab.
The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI
In this talk I outline the technical challenges and current solutions to layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, time efficiency as well as, perhaps most importantly, vascular specificity. Novel applications from our group, including mapping feedforward and feedback connections to M1 during task and sensory input modulation and to S1 during a sensory prediction task, are shown. Layer specific activity in dorsal lateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I’ll show preliminary work on mapping whole brain layer-specific resting state connectivity and hierarchy.
Neuro-Immune Coupling: How the Immune System Sculpts Brain Circuitry
In this lecture, Dr Stevens will discuss recent work that implicates brain immune cells, called microglia, in sculpting of synaptic connections during development and their relevance to autism, schizophrenia and other brain disorders. Her recent work revealed a key role for microglia and a group of immune related molecules called complement in normal developmental synaptic pruning, a normal process required to establish precise brain wiring. Emerging evidence suggests aberrant regulation of this pruning pathway may contribute to synaptic and cognitive dysfunction in a host of brain disorders, including schizophrenia. Recent research has revealed that a person’s risk of schizophrenia is increased if they inherit specific variants of complement component 4 (C4), a gene that plays a well-known role in the immune system but also helps sculpt developing synapses in the mouse visual system (Sekar et al., 2016). Together these findings may help explain known features of schizophrenia, including reduced numbers of synapses in key cortical regions and an adolescent age of onset that corresponds with developmentally timed waves of synaptic pruning in these regions. Stevens will discuss this and ongoing work to understand the mechanisms by which complement and microglia prune specific synapses in the brain. A deeper understanding of how these immune mechanisms mediate synaptic pruning may provide novel insight into how to protect synapses in autism and other brain disorders, including Alzheimer’s and Huntington’s Disease.
The role of the complement pathway in post-traumatic sleep disruption and epilepsy
While traumatic brain injury (TBI) acutely disrupts the cortex, most TBI-related disabilities reflect secondary injuries that accrue over time. The thalamus is a likely site of secondary damage because of its reciprocal connections with the cortex. Using a mouse model of mild cortical injury that does not directly damage subcortical structures (mTBI), we found a chronic increase in C1q expression specifically in the corticothalamic circuit. Increased C1q expression co-localized with neuron loss and chronic inflammation, and correlated with disruption in sleep spindles and emergence of epileptic activities. Blocking C1q counteracted these outcomes, suggesting that C1q is a disease modifier in mTBI. Single-nucleus RNA sequencing demonstrated that microglia are the source of thalamic C1q. Since the corticothalamic circuit is important for cognition and sleep, which can be impaired by TBI, this circuit could be a new target for treating TBI-related disabilities.
Generative models of the human connectome
The human brain is a complex network of neuronal connections. The precise arrangement of these connections, otherwise known as the topology of the network, is crucial to its functioning. Recent efforts to understand how the complex topology of the brain has emerged have used generative mathematical models, which grow synthetic networks according to specific wiring rules. Evidence suggests that a wiring rule which emulates a trade-off between connection costs and functional benefits can produce networks that capture essential topological properties of brain networks. In this webinar, Professor Alex Fornito and Dr Stuart Oldham will discuss these previous findings, as well as their own efforts in creating more physiologically constrained generative models. Professor Alex Fornito is Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. His research focuses on developing new imaging techniques for mapping human brain connectivity and applying these methods to shed light on brain function in health and disease. Dr Stuart Oldham is a Research Fellow at the Turner Institute for Brain and Mental Health and a Research Officer at the Murdoch Children’s Research Institute. He is interested in characterising the organisation of human brain networks, with particular focus on how this organisation develops, using neuroimaging and computational tools.
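The cost-value trade-off wiring rule can be sketched as a toy growth process in which the probability of adding an edge falls with wiring cost (distance) and rises with a topological value term, here simply the endpoints' current degrees. The parameters `eta` and `gamma` are illustrative; published generative models use more refined value terms such as matching indices:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_network(coords, n_edges, eta=2.0, gamma=1.0):
    """Toy cost-value generative model: at each step, sample one new
    undirected edge with probability proportional to
    distance**(-eta) * (degree_i + degree_j + 1)**gamma."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # forbid self-connections
    A = np.zeros((n, n), dtype=bool)
    for _ in range(n_edges):
        degree = A.sum(1)
        cost = d ** -eta                     # short edges are cheap
        value = (degree[:, None] + degree[None, :] + 1.0) ** gamma
        p = np.where(A, 0.0, cost * value)   # never re-add an edge
        p = np.triu(p, 1).ravel()            # each pair counted once
        idx = rng.choice(n * n, p=p / p.sum())
        i, j = divmod(idx, n)
        A[i, j] = A[j, i] = True
    return A

# Grow 60 edges among 30 randomly placed nodes.
A = grow_network(rng.random((30, 2)), n_edges=60)
```

Fitting eta and gamma so that the synthetic network matches empirical degree, clustering, and edge-length distributions is the essence of the model-comparison work described above.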
Advances in Computational Psychiatry: Understanding (cognitive) control as a network process
The human brain is a complex organ characterized by heterogeneous patterns of interconnections. Non-invasive imaging techniques now allow for these patterns to be carefully and comprehensively mapped in individual humans, paving the way for a better understanding of how wiring supports cognitive processes. While a large body of work now focuses on descriptive statistics to characterize these wiring patterns, a critical open question lies in how the organization of these networks constrains the potential repertoire of brain dynamics. In this talk, I will describe an approach for understanding how perturbations to brain dynamics propagate through complex wiring patterns, driving the brain into new states of activity. Drawing on a range of disciplinary tools – from graph theory to network control theory and optimization – I will identify control points in brain networks and characterize trajectories of brain activity states following perturbation to those points. Finally, I will describe how these computational tools and approaches can be used to better understand the brain's intrinsic control mechanisms and their alterations in psychiatric conditions.
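One standard quantity from network control theory used in this kind of analysis is the trace of the controllability Gramian, which scores how strongly input injected at one node can drive the whole linear system. A textbook sketch, not the speaker's exact pipeline:

```python
import numpy as np

def average_controllability(A, node, horizon=50):
    """Trace of the finite-horizon controllability Gramian for input
    injected at a single node of x[t+1] = A x[t] + B u[t]."""
    n = len(A)
    B = np.zeros((n, 1))
    B[node] = 1.0
    G = np.zeros((n, n))
    Ak = np.eye(n)                    # A**k, starting at k = 0
    for _ in range(horizon):
        G += Ak @ B @ B.T @ Ak.T      # accumulate A^k B B^T (A^T)^k
        Ak = Ak @ A
    return float(np.trace(G))

# Toy "connectome", scaled so the linear dynamics are stable.
rng = np.random.default_rng(0)
W = rng.random((10, 10))
A = W / (1.0 + np.abs(np.linalg.eigvals(W)).max())

# One score per region: which nodes most easily push the network
# into many different activity states?
scores = [average_controllability(A, i) for i in range(10)]
```

Ranking regions by such scores, or computing optimal control inputs that steer the system between activity states, is how control points and state trajectories can be identified in empirical structural networks.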
A fresh look at the bird retina
I am working on the vertebrate retina, with a main focus on the mouse and bird retina. Currently my work is focused on three major topics: (1) functional and molecular analysis of electrical synapses in the retina; (2) circuitry and functional role of retinal interneurons (horizontal cells); and (3) circuitry for light-dependent magnetoreception in the bird retina. Electrical synapses: Electrical synapses (gap junctions) permit fast transmission of electrical signals and passage of metabolites by means of channels which directly connect the cytoplasm of adjoining cells. A functional gap junction channel consists of two hemichannels (one provided by each of the cells), each comprised of a set of six protein subunits termed connexins. These building blocks exist in a variety of different subtypes, and the connexin composition determines the permeability and gating properties of a gap junction channel, thereby enabling electrical synapses to meet a diversity of physiological requirements. In the retina, various connexins are expressed in different cell types. We study the cellular distribution of different connexins as well as the modulation induced by transmitter action or changes in ambient light levels, which lead to altered electrical coupling properties. We are also interested in exploiting them as a therapeutic avenue for retinal degeneration diseases. Horizontal cells: Horizontal cells receive excitatory input from photoreceptors and provide feedback inhibition to photoreceptors and feedforward inhibition to bipolar cells. Because of strong electrical coupling, horizontal cells integrate the photoreceptor input over a wide area and are thought to contribute to the antagonistic organization of bipolar cell and ganglion cell receptive fields and to tune the photoreceptor–bipolar cell synapse with respect to the ambient light conditions.
However, the extent to which this influence shapes retinal output is unclear, and we aim to elucidate the functional importance of horizontal cells for retinal signal processing by studying various transgenic mouse models. Retinal circuitry for light-dependent magnetoreception in the bird We are studying which neuronal cell types and pathways in the bird retina are involved in the processing of magnetic signals. Likely, magnetic information is detected in cryptochrome-expressing photoreceptors and leaves the retina through ganglion cell axons that project via the thalamofugal pathway to Cluster N, a part of the visual wulst essential for the avian magnetic compass. Thus, we aim to elucidate the synaptic connections and retinal signaling pathways from putatively magnetosensitive photoreceptors to thalamus-projecting ganglion cells in migratory birds using neuroanatomical and electrophysiological techniques.
Thalamocortical circuits: from neuroanatomy to mental representations
In highly volatile environments, performing actions that address current needs and desires is an ongoing challenge for living organisms. For example, the predictive value of environmental signals needs to be updated when predicted and actual outcomes differ. Furthermore, organisms also need to gain control over the environment through actions that are expected to produce specific outcomes. The data to be presented will show that these processes are highly reliant on thalamocortical circuits wherein thalamic nuclei make a critical contribution to adaptive decision-making, challenging the view that the thalamus acts merely as a relay station on the way to the cortex. Over the past few years, our work has highlighted the specific contribution of multiple thalamic nuclei to the ability to update the predictive link between events or the causal link between actions and their outcomes, using targeted thalamic interventions (lesion, chemogenetics, disconnections) combined with behavioral procedures rooted in experimental psychology. We argue that several features of thalamocortical architecture are consistent with a prominent role for thalamic nuclei in shaping mental representations.
Frontal circuit specialisations for decision making
During primate evolution, prefrontal cortex (PFC) expanded substantially relative to other cortical areas. The expansion of PFC circuits likely supported the increased cognitive abilities of humans and anthropoids to plan, evaluate, and decide between different courses of action. But what do these circuits compute as a decision is being made, and how can they be related to anatomical specialisations within and across PFC? To address this, we recorded PFC activity during value-based decision making using single unit recording in non-human primates and magnetoencephalography in humans. At a macrocircuit level, we found that value correlates differ substantially across PFC subregions. They are heavily shaped by each subregion’s anatomical connections and by the decision-maker’s current locus of attention. At a microcircuit level, we found that the temporal evolution of value correlates can be predicted using cortical recurrent network models that temporally integrate incoming decision evidence. These models reflect the fact that PFC circuits are highly recurrent in nature and have synaptic properties that support persistent activity across temporally extended cognitive tasks. Our findings build upon recent work describing economic decision making as a process of attention-weighted evidence integration across time.
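The recurrent network models referred to above are typically full cortical rate or spiking models; as a hedged illustration only, the core idea of temporally integrating competing decision evidence can be sketched with two mutually inhibiting accumulators racing to a threshold. All parameters and the specific dynamics here are assumptions for the sketch, not the speaker's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def race(evidence_a, evidence_b, dt=0.01, tau=0.1, inhibition=0.6,
         threshold=1.0, noise=0.3, max_steps=5000):
    """Two recurrently coupled accumulators: each leakily integrates its
    evidence, suppresses the other, and the first to reach threshold
    determines the choice and the decision time."""
    a = b = 0.0
    for step in range(max_steps):
        da = (-a + evidence_a - inhibition * b) * dt / tau \
             + noise * np.sqrt(dt) * rng.normal()
        db = (-b + evidence_b - inhibition * a) * dt / tau \
             + noise * np.sqrt(dt) * rng.normal()
        a = max(a + da, 0.0)  # rectified activity
        b = max(b + db, 0.0)
        if a >= threshold or b >= threshold:
            return ('A' if a >= b else 'B'), step * dt
    return None, max_steps * dt

# The option with stronger evidence should win on most (but not all) trials.
choices = [race(1.5, 0.5)[0] for _ in range(50)]
```

Mutual inhibition gives the winner-take-all competition, and the noise term produces trial-to-trial variability in choices and reaction times, qualitatively matching attention-weighted evidence-integration accounts.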
Neural mechanisms of navigation behavior
The regions of the insect brain devoted to spatial navigation are beautifully orderly, with a remarkably precise pattern of synaptic connections. Thus, we can learn much about the neural mechanisms of spatial navigation by targeting identifiable neurons in these networks for in vivo patch clamp recording and calcium imaging. Our lab has recently discovered that the "compass system" in the Drosophila brain is anchored to not only visual landmarks, but also the prevailing wind direction. Moreover, we found that the compass system can re-learn the relationship between these external sensory cues and internal self-motion cues, via rapid associative synaptic plasticity. Postsynaptic to compass neurons, we found neurons that conjunctively encode heading direction and body-centric translational velocity. We then showed how this representation of travel velocity is transformed from body- to world-centric coordinates at the subsequent layer of the network, two synapses downstream from compass neurons. By integrating this world-centric vector-velocity representation over time, it should be possible for the brain to form a stored representation of the body's path through the environment.
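The body- to world-centric transformation described above is, at its core, a rotation of the travel-velocity vector by the current heading angle; integrating the rotated vector over time yields a stored representation of the path. A minimal sketch under that assumption (purely illustrative, not the lab's circuit model):

```python
import numpy as np

def body_to_world(v_body, heading):
    """Rotate a body-centric (egocentric) velocity vector into world-centric
    (allocentric) coordinates using the heading angle in radians."""
    c, s = np.cos(heading), np.sin(heading)
    rotation = np.array([[c, -s],
                         [s,  c]])
    return rotation @ v_body

# Path integration: accumulate the world-centric velocity over time while
# the animal turns smoothly from heading 0 to 90 degrees.
dt = 0.01
position = np.zeros(2)
for heading in np.linspace(0.0, np.pi / 2, 100):
    forward = np.array([1.0, 0.0])  # translating straight ahead in body frame
    position += body_to_world(forward, heading) * dt
```

Because rotation preserves vector length, translational speed is unchanged by the transformation; only the direction is re-expressed in world coordinates before integration.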
A theory for Hebbian learning in recurrent E-I networks
The Stabilized Supralinear Network is a model of recurrently connected excitatory (E) and inhibitory (I) neurons with a supralinear input-output relation. It can explain cortical computations such as response normalization and inhibitory stabilization. However, the network's connectivity is designed by hand, based on experimental measurements. How the recurrent synaptic weights can be learned from the sensory input statistics in a biologically plausible way is unknown. Earlier theoretical work on plasticity focused on single neurons and the balance of excitation and inhibition but did not consider the simultaneous plasticity of recurrent synapses and the formation of receptive fields. Here we present a recurrent E-I network model where all synaptic connections are simultaneously plastic, and E neurons self-stabilize by recruiting co-tuned inhibition. Motivated by experimental results, we employ a local Hebbian plasticity rule with multiplicative normalization for E and I synapses. We develop a theoretical framework that explains how plasticity enables inhibition-balanced excitatory receptive fields that match experimental results. We show analytically that sufficiently strong inhibition allows neurons' receptive fields to decorrelate and distribute themselves across the stimulus space. For strong recurrent excitation, the network becomes stabilized by inhibition, which prevents unconstrained self-excitation. In this regime, external inputs integrate sublinearly. As in the Stabilized Supralinear Network, this results in response normalization and winner-takes-all dynamics: when two competing stimuli are presented, the network response is dominated by the stronger stimulus while the weaker stimulus is suppressed. In summary, we present a biologically plausible theoretical framework to model plasticity in fully plastic recurrent E-I networks. Although the connectivity is derived solely from the sensory input statistics, the circuit performs meaningful computations.
Our work provides a mathematical framework for plasticity in recurrent networks, a problem previously studied only numerically, and can serve as the basis for a new generation of brain-inspired unsupervised machine learning algorithms.
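As an illustrative sketch only (not the authors' model), the combination of a local Hebbian rule with multiplicative normalization described above can be written in a few lines: weights grow with the pre-post activity correlation, and each neuron's incoming weights are then rescaled to a fixed total, which prevents runaway growth. The squared-ReLU response and all parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_exc = 10, 4
W = rng.random((n_exc, n_inputs)) * 0.1  # feedforward weights onto E neurons

def hebbian_step(W, x, eta=0.05):
    """One Hebbian update followed by multiplicative normalization of each
    neuron's incoming weights (fixed L1 norm per row)."""
    y = np.maximum(W @ x, 0.0) ** 2        # supralinear (squared-ReLU) response
    W = W + eta * np.outer(y, x)           # Hebbian: post-activity * pre-activity
    W = W / W.sum(axis=1, keepdims=True)   # multiplicative normalization
    return W

for _ in range(200):
    x = np.maximum(rng.normal(size=n_inputs), 0.0)  # nonnegative toy input
    W = hebbian_step(W, x)
```

Under this rule, learning redistributes weight across inputs rather than growing it without bound, which is the role normalization plays in the self-stabilizing dynamics the abstract describes.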
In-vivo dynamical effects of structural white matter disconnections
Bernstein Conference 2024
Short-Distance Connections Enhance Neural Network Dynamics
Bernstein Conference 2024
Identifying key structural connections from functional response data: theory & applications
COSYNE 2022
Large-scale paired recordings reveal strong and specific connections between retina and midbrain
COSYNE 2022
Recurrent suppression in visual cortex explained by a balanced network with sparse synaptic connections
COSYNE 2022
Distributing task-related neural activity across a cortical network through task-independent connections
COSYNE 2023
A biologically-plausible learning rule using reciprocal feedback connections
COSYNE 2025
Distinct claustrum-cortex connections are involved in cognitive control performance and habitual sleep in humans
FENS Forum 2024
Dynamical complexity in engineered biological neuronal networks with directional and modular connections
FENS Forum 2024
The effects of associative learning on neuronal activity and functional connections in the mouse brain resting state networks
FENS Forum 2024
An electrodiffusive network model with multicompartmental neurons and synaptic connections
FENS Forum 2024
Harmonic oscillator RNNs: Single node dynamics, resonance and the role of feedback connections
FENS Forum 2024
IL-13Rα1 and Parkinson's disease: Investigating pathological connections
FENS Forum 2024
Large-scale cortical reorganization in the premotor-parietal connections of a macaque model with primary motor cortical lesion and recovery
FENS Forum 2024
Mapping the neural circuitry: LC-NE neurons and their connections to VTA and Raphe nuclei
FENS Forum 2024
Maternally activated connections of the ventral lateral septum reveal input from the posterior intralaminar thalamus
FENS Forum 2024
MK-801 effect on Negr1-deficient mouse metabolomics and potential connections to the kynurenine pathway
FENS Forum 2024
Modeling the non-linear spatial integration in V1 supragranular layers through asymmetric horizontal connections
FENS Forum 2024
Novel nanoscale cellular connections between vascular endothelial cells and perivascular glia and between neurons and glia in the developing brain revealed by 3D-EM
FENS Forum 2024
Optical recording of unitary synaptic connections between CA3 pyramidal cells using Voltron imaging
FENS Forum 2024
Orexin knockout mice have compromised orientation discrimination and display reduced AMPAR-mediated excitation in L4-2/3 connections in the primary visual cortex
FENS Forum 2024
The role of local and long-range mPFC connections in the consolidation of memories
FENS Forum 2024
Specialized corticothalamic connections between the layer 5 of the frontal cortex and the thalamus
FENS Forum 2024
Top-down connections from ACC to V1 contribute to mismatch negativity
FENS Forum 2024
Untangling the connections between the heart and brain in larval zebrafish
FENS Forum 2024