OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
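As a schematic illustration of the kind of scripted tuning the Python-based API enables, the sketch below grid-searches PID gains against a toy first-order plant. The `SimulatedZStage` class and the search loop are our own stand-ins for illustration; they are not the actual OpenSPM API.

```python
class SimulatedZStage:
    """Toy first-order plant standing in for an AFM z-piezo response
    (time constant 10 ms); not a real instrument model."""
    def __init__(self):
        self.position = 0.0

    def step(self, drive, dt=0.001):
        # first-order lag toward the drive signal
        self.position += (drive - self.position) * dt / 0.01
        return self.position

def run_pid(kp, ki, setpoint=1.0, steps=2000, dt=0.001):
    """Run a PI feedback loop and return the mean absolute tracking error."""
    plant = SimulatedZStage()
    integral, total_err = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - plant.position
        integral += error * dt
        plant.step(kp * error + ki * integral, dt)
        total_err += abs(error)
    return total_err / steps

# naive grid search over gains: the kind of automated PID optimization
# a fully scriptable controller makes possible
best = min(((kp, ki) for kp in (0.5, 1.0, 2.0) for ki in (10, 50, 100)),
           key=lambda g: run_pid(*g))
print("best (kp, ki):", best)
```

In practice such a loop would call the controller's real setpoint/feedback methods instead of a simulated plant, and the grid search could be replaced by any optimizer.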
Recent views on pre-registration
A discussion on some recent perspectives on pre-registration, which has become a growing trend in the past few years. This is not just limited to neuroimaging, and it applies to most scientific fields. We will start with this overview editorial by Simmons et al. (2021): https://faculty.wharton.upenn.edu/wp-content/uploads/2016/11/34-Simmons-Nelson-Simonsohn-2021a.pdf, and also talk about a more critical perspective by Pham & Oh (2021): https://www.researchgate.net/profile/Michel-Pham/publication/349545600_Preregistration_Is_Neither_Sufficient_nor_Necessary_for_Good_Science/links/60fb311e2bf3553b29096aa7/Preregistration-Is-Neither-Sufficient-nor-Necessary-for-Good-Science.pdf. I would like us to discuss the pros and cons of pre-registration, and if we have time, I may do a demonstration of how to perform a pre-registration through the Open Science Framework.
Neurosurgery & Consciousness: Bridging Science and Philosophy in the Age of AI
Overview of the neurosurgery specialty and its interplay with neurology and psychiatry. Discussion of the benefits and disadvantages of classifications. Presentation of sub-specialties: trauma, oncology, functional, pediatric, vascular, and spine. What does an ordinary day of a neurosurgeon look like: outpatient clinic, emergencies, pre/intra/postoperative patient care. An ordinary operation. Myth-busting and practical insights from everyday practice. Hints for research on clinical problems to be solved. The coming ethical frontiers of neuroprosthetics. In part two we will explore the explanatory gap and its significance. We will review the more than 200 theories of the hard problem of consciousness, from the prevailing to the unconventional. Finally, we will reflect on AI advancements and the claims of LLMs becoming conscious.
Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging
Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.
Enhancing Real-World Event Memory
Memory is essential for shaping how we interpret the world, plan for the future, and understand ourselves, yet effective cognitive interventions for real-world episodic memory loss remain scarce. This talk introduces HippoCamera, a smartphone-based intervention inspired by how the brain supports memory, designed to enhance real-world episodic recollection by replaying high-fidelity autobiographical cues. It will showcase how our approach improves memory, mood, and hippocampal activity while uncovering links between memory distinctiveness, well-being, and the perception of time.
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
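As a toy illustration of the automatic information extraction described above (e.g. pulling sample sizes out of participant descriptions), here is a minimal regex-based sketch. The pattern is our own illustration, not the pubextract implementation:

```python
import re

# Heuristic: match either "N = 24" style statements or a count followed
# by a participant noun ("24 healthy participants", "12 patients").
PATTERN = re.compile(
    r"\b[Nn]\s*=\s*(\d+)|\b(\d+)\s+(?:healthy\s+)?(?:participants|subjects|patients)\b"
)

def extract_sample_sizes(text):
    """Return all candidate sample sizes mentioned in a passage."""
    sizes = []
    for m in PATTERN.finditer(text):
        sizes.append(int(m.group(1) or m.group(2)))
    return sizes

abstract = ("We recruited 24 healthy participants (N = 24) and "
            "12 patients with aphasia.")
print(extract_sample_sizes(abstract))  # [24, 24, 12]
```

Real extraction (as in the GPT-assisted participant-extraction project mentioned above) must of course handle far messier phrasing; the point is that once articles are downloaded and labeled, this step becomes a reproducible script rather than manual work.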
Llama 3.1 Paper: The Llama Family of Models
Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.
Trends in NeuroAI - Brain-like topography in transformers (Topoformer)
Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Improving Language Understanding by Generative Pre-Training
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
Analogy and Law
Abstracts: https://sites.google.com/site/analogylist/analogical-minds-seminar/analogy-and-law-symposium
Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)
Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
Trends in NeuroAI - Brain-optimized inference improves fMRI reconstructions
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
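The iterative refinement loop described in the abstract can be sketched numerically. Here the encoding model is a toy linear map and the conditioned diffusion model is replaced by Gaussian jitter around the current seed, so this illustrates only the structure of the loop, not the authors' actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
dim_img, dim_vox = 32, 64
W = rng.normal(size=(dim_vox, dim_img))     # toy linear encoding model
encode = lambda img: W @ img                # image features -> predicted activity

true_image = rng.normal(size=dim_img)
measured = encode(true_image)               # stand-in "measured brain activity"

seed = rng.normal(size=dim_img)             # seed reconstruction
initial_err = np.linalg.norm(encode(seed) - measured)

width = 1.0                                 # stochasticity of the image library
for _ in range(20):
    # sample a small library conditioned on the seed (seed itself included,
    # so the mismatch can never increase)
    library = np.vstack([seed, seed + width * rng.normal(size=(16, dim_img))])
    # keep the candidate whose predicted activity best matches the measurement
    errors = np.linalg.norm(library @ W.T - measured, axis=1)
    seed = library[np.argmin(errors)]
    width *= 0.8                            # reduce stochasticity each iteration

final_err = np.linalg.norm(encode(seed) - measured)
print(final_err < initial_err)              # refinement reduced the mismatch
```

The shrinking `width` plays the role of the paper's stopping criterion on the width of the image distribution; in the real method the library comes from a diffusion model guided by the previous seed.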
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
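To make the image-retrieval evaluation concrete, the sketch below scores a toy linear "decoder" by cosine-similarity retrieval against a bank of image embeddings. The random embeddings and least-squares decoder are simplified stand-ins for the paper's pretrained image features (e.g. DINOv2) and trained MEG module:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d_img, d_meg = 100, 64, 32

image_emb = rng.normal(size=(n, d_img))                 # "pretrained" image embeddings
A = rng.normal(size=(d_img, d_meg)) * 0.1               # toy brain response map
meg = image_emb @ A + 0.1 * rng.normal(size=(n, d_meg)) # simulated MEG features

# toy decoder: map MEG features back toward image-embedding space
W, *_ = np.linalg.lstsq(meg, image_emb, rcond=None)
decoded = meg @ W

def top1_accuracy(decoded, bank):
    """Fraction of trials whose decoded embedding is closest (cosine) to
    the embedding of the image actually seen."""
    d = decoded / np.linalg.norm(decoded, axis=1, keepdims=True)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return float(np.mean(np.argmax(d @ b.T, axis=1) == np.arange(len(bank))))

print("top-1 retrieval accuracy:", top1_accuracy(decoded, image_emb))
```

The paper's "7X improvement over classic linear decoders" refers to exactly this kind of retrieval score, computed with far richer nonlinear MEG modules and held-out data.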
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks
NII Methods (journal club): Highlighting results instead of hiding them
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold, but still highlighting sub-threshold results, with opacity proportional to the strength of the effect. They use this to illustrate how there in fact may be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that the majority of effects are in the same location and that the effect size is in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
BrainLM Journal Club
Connor Lane will lead a journal club on the recent BrainLM preprint, a foundation model for fMRI trained using self-supervised masked autoencoder training. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19
Sex hormone regulation of neural gene expression
Gonadal steroid hormones are the principal drivers of sex-variable biology in vertebrates. In the brain, estrogen (17β-estradiol) establishes neural sex differences in many species and modulates mood, behavior, and energy balance in adulthood. To understand the diverse effects of estradiol on the brain, we profiled the genomic binding of estrogen receptor alpha (ERα), providing the first picture of the neural actions of any gonadal hormone receptor. To relate ERα target genes to brain sex differences we assessed gene expression and chromatin accessibility in the posterior bed nucleus of the stria terminalis (BNSTp), a sexually dimorphic node in limbic circuitry that underlies sex-differential social behaviors such as aggression and parenting. In adult animals we observe that levels of ERα are predictive of the extent of sex-variable gene expression, and that these sex differences are a dynamic readout of acute hormonal state. In neonates we find that transient ERα recruitment at birth leads to persistent chromatin opening and male-biased gene expression, demonstrating a true epigenetic mechanism for brain sexual differentiation. Collectively, our findings demonstrate that sex differences in gene expression in the brain are a readout of state-dependent hormone receptor actions, rather than other factors such as sex chromosomes. We anticipate that the ERα targets we have found will contribute to established sex differences in the incidence and etiology of neurological and psychiatric disorders.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss this paper on Neuroquery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.
Adaptive deep brain stimulation to treat gait disorders in Parkinson's disease; Personalized chronic adaptive deep brain stimulation outperforms conventional stimulation in Parkinson's disease
On Friday, August 31st we will host Stephanie Cernera & Doris Wang! Stephanie Cernera, PhD, is a postdoctoral research fellow in the Starr lab at University of California San Francisco. She will tell us about “Personalized chronic adaptive deep brain stimulation outperforms conventional stimulation in Parkinson’s Disease”. Doris Wang, MD, PhD, is a neurosurgeon and assistant professor at the University of California San Francisco. Apart from her scientific presentation about “Adaptive Deep Brain Stimulation to Treat Gait Disorders in Parkinson’s Disease”, she will give us a glimpse at the “Person behind the science”. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!
Sleep deprivation and the human brain: from brain physiology to cognition
Sleep strongly affects synaptic strength, making it critical for cognition, especially learning and memory formation. Whether and how sleep deprivation modulates human brain physiology and cognition is poorly understood. Here we examined how overnight sleep deprivation versus overnight sufficient sleep affects (a) cortical excitability, measured by transcranial magnetic stimulation, (b) inducibility of long-term potentiation (LTP)- and long-term depression (LTD)-like plasticity via transcranial direct current stimulation (tDCS), and (c) learning, memory, and attention. We found that sleep deprivation increases cortical excitability due to enhanced glutamate-related cortical facilitation and decreases and/or reverses GABAergic cortical inhibition. Furthermore, under sleep deprivation, tDCS-induced LTP-like plasticity (anodal) is abolished, while the inhibitory LTD-like plasticity (cathodal) converts to excitatory LTP-like plasticity. This is associated with increased EEG theta oscillations due to sleep pressure. Motor learning, a behavioral counterpart of plasticity, as well as working memory and attention, which rely on cortical excitability, are also impaired during sleep deprivation. Our study indicates that upscaled brain excitability and altered plasticity, due to sleep deprivation, are associated with impaired cognitive performance. Besides showing how brain physiology and cognition undergo changes (from neurophysiology to higher-order cognition) under sleep pressure, the findings have implications for variability and optimal application of noninvasive brain stimulation.
Algonauts 2023 winning paper journal club (fMRI encoding models)
Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their approach. This journal club meeting will involve open discussion of the paper with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper also from Huze that we can discuss: https://arxiv.org/pdf/2307.14021.pdf
1.8 billion regressions to predict fMRI (journal club)
Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.
OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging
Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
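The chromophore-extraction step mentioned above (recovering molar concentrations from multi-wavelength absorption) reduces to solving a small Beer-Lambert linear system, mu_a(lambda) = sum_i epsilon_i(lambda) * c_i, at each pixel. A sketch, with placeholder extinction coefficients rather than real tabulated values:

```python
import numpy as np

# Placeholder extinction matrix: rows are wavelengths, columns are
# [HbO2, Hb]. These numbers are illustrative only, not tabulated spectra.
E = np.array([
    [0.8, 3.2],   # red-ish wavelength: deoxyhemoglobin absorbs more
    [1.1, 1.1],   # isosbestic-like point
    [2.7, 1.8],   # NIR-ish wavelength: oxyhemoglobin absorbs more
])
c_true = np.array([60.0, 40.0])   # "true" concentrations (arbitrary units)
mu_a = E @ c_true                 # absorption SFDI would measure at one pixel

# least-squares unmixing of the (overdetermined) Beer-Lambert system
c_est, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
oxygen_saturation = c_est[0] / c_est.sum()
print("concentrations:", np.round(c_est, 1), "StO2:", round(oxygen_saturation, 2))
```

With real measurements this solve is run per pixel (with measured extinction spectra and noise), yielding wide-field maps of oxy-/deoxyhemoglobin and oxygen saturation.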
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When this ability is compromised, e.g. in stroke or in neurodegenerative and developmental disorders, individuals’ study, work, and day-to-day living are affected, at high societal cost. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of the serial elements of the upcoming sequence, rather than preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower- and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.
NOTE: DUE TO A CYBER ATTACK OUR UNIVERSITY WEB SYSTEM IS SHUT DOWN - TALK WILL BE RESCHEDULED
The size and structure of the dendritic arbor play important roles in determining how synaptic inputs of neurons are converted to action potential output and how neurons are integrated into the surrounding neuronal network. Accordingly, neurons with aberrant morphology have been associated with neurological disorders. Dysmorphic, enlarged neurons are, for example, a hallmark of focal epileptogenic lesions like focal cortical dysplasia (FCDIIb) and gangliogliomas (GG). However, the regulatory mechanisms governing the development of dendrites are insufficiently understood. The evolutionarily conserved Ste20/Hippo kinase pathway has been proposed to play an important role in regulating the formation and maintenance of dendritic architecture. A key element of this pathway, Ste20-like kinase (SLK), regulates cytoskeletal dynamics in non-neuronal cells and is strongly expressed throughout neuronal development. Nevertheless, its function in neurons is unknown. We found that during development of mouse cortical neurons, SLK has a surprisingly specific role in the proper elaboration of higher-order (≥ 3rd order) dendrites, both in cultured neurons and in living mice. Moreover, SLK is required to maintain the excitation-inhibition balance. Specifically, SLK knockdown causes a selective loss of inhibitory synapses and functional inhibition after postnatal day 15, while excitatory neurotransmission is unaffected. This mechanism may be relevant for human disease, as dysmorphic neurons within human cortical malformations exhibit significant loss of SLK expression. To uncover the signaling cascades underlying the action of SLK, we combined phosphoproteomics, protein interaction screens, and single-cell RNA-seq. Overall, our data identify SLK as a key regulator of both dendritic complexity during development and inhibitory synapse maintenance.
The role of sub-population structure in computations through neural dynamics
Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
Feedback control in the nervous system: from cells and circuits to behaviour
The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
Precise spatio-temporal spike patterns in cortex and model
The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant ms-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed for STPs with the SPADE method (Stella et al., 2019, Biosystems), which detects, counts and evaluates spike patterns for their significance using surrogates (Stella et al., 2022, eNeuro). As a result, we find STPs in 19/20 data sets (each of 15 min) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To account for the different behavioral states during the task, we analyzed the data in a quasi-time-resolved manner by dividing them into behaviorally relevant time epochs. The STPs that occur in the various epochs are specific to the behavioral context, in terms of both the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that the STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, our interpretation is that the neurons multiplex their cell assembly membership. In a related study, we model these findings with networks containing embedded synfire chains (Kleinjohann et al., 2022, bioRxiv 2022.08.02.502431).
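The surrogate-based significance logic used here can be illustrated with a toy sketch (this is not the SPADE implementation; the spike trains, lags, coincidence window, and dither width are all invented for illustration): count occurrences of a candidate spatio-temporal pattern, then compare the count against surrogates in which spikes are dithered, destroying ms-precise timing while roughly preserving firing rates.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_pattern(trains, lags, window=0.003):
    """Count occurrences of a spatio-temporal pattern: a spike in trains[0],
    followed by spikes in trains[i] at lags[i-1] (within +/- window)."""
    count = 0
    for t0 in trains[0]:
        if all(np.any(np.abs(tr - (t0 + lag)) < window)
               for tr, lag in zip(trains[1:], lags)):
            count += 1
    return count

def dither(train, jitter=0.02):
    """Surrogate: independently displace each spike by a uniform offset."""
    return np.sort(train + rng.uniform(-jitter, jitter, size=train.size))

# Toy data: 3 neurons repeating a pattern with lags 0, 5 ms, 10 ms,
# embedded in background spiking over 10 s.
pattern_times = np.arange(0.5, 10.0, 0.5)
trains = [np.sort(np.concatenate([pattern_times + lag,
                                  rng.uniform(0, 10, 50)]))
          for lag in (0.0, 0.005, 0.010)]

observed = count_pattern(trains, lags=[0.005, 0.010])
surrogate_counts = [count_pattern([dither(tr) for tr in trains],
                                  lags=[0.005, 0.010])
                    for _ in range(200)]
p_value = np.mean([c >= observed for c in surrogate_counts])
```

Uniform dithering is only the simplest surrogate; the cited SPADE work evaluates surrogate techniques that better preserve spike-train statistics.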
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-reared zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned, without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory-guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
Integration of 3D human stem cell models derived from post-mortem tissue and statistical genomics to guide schizophrenia therapeutic development
Schizophrenia is a neuropsychiatric disorder characterized by positive symptoms (such as hallucinations and delusions), negative symptoms (such as avolition and withdrawal) and cognitive dysfunction. Schizophrenia is highly heritable, and genetic studies are playing a pivotal role in identifying potential biomarkers and causal disease mechanisms with the hope of informing new treatments. Genome-wide association studies (GWAS) have identified nearly 270 loci with a high statistical association with schizophrenia risk; however, each locus confers only a small increase in risk, making it difficult to translate these findings into an understanding of disease biology that can lead to treatments. Induced pluripotent stem cell (iPSC) models are a tractable system in which to translate genetic findings and interrogate mechanisms of pathogenesis. Mounting research with patient-derived iPSCs has proposed several neurodevelopmental pathways altered in SCZ, such as neural progenitor cell (NPC) proliferation and imbalanced differentiation of excitatory and inhibitory cortical neurons. However, it is unclear what exactly these iPSC models recapitulate, how potential perturbations of early brain development translate into illness in adults, and how iPSC models that represent fetal stages can be utilized to further drug development efforts to treat adult illness. I will present the largest transcriptome analysis of post-mortem caudate nucleus in schizophrenia, in which we discovered that decreased presynaptic DRD2 autoregulation is the causal dopamine risk factor for schizophrenia (Benjamin et al., Nature Neuroscience 2022, https://doi.org/10.1038/s41593-022-01182-7). We developed stem cell models from a subset of the postmortem cohort to better understand the molecular underpinnings of human psychiatric disorders (Sawada et al., Stem Cell Research 2020). We established a method for the differentiation of iPS cells into ventral forebrain organoids and performed single-cell RNA-seq and cellular phenotyping.
To our knowledge, this is the first study to evaluate iPSC models of SCZ derived from the same individuals as postmortem tissue. Our study establishes that striatal neurons in patients with SCZ carry abnormalities that originated during early brain development. Differentiation of inhibitory neurons is accelerated whereas excitatory neuronal development is delayed, implicating an excitation-inhibition (E-I) imbalance during early brain development in SCZ. We found a significant overlap between genes upregulated in the inhibitory neurons of SCZ organoids and genes upregulated in postmortem caudate tissue from patients with SCZ compared with control individuals, including the donors of our iPS cell cohort. Altogether, we demonstrate that ventral forebrain organoids derived from postmortem tissue of individuals with schizophrenia recapitulate the perturbed striatal gene expression dynamics of the donors’ brains (Sawada et al., bioRxiv 2022, https://doi.org/10.1101/2022.05.26.493589).
The speaker identification ability of blind and sighted listeners
Previous studies have shown that blind individuals outperform sighted controls in a variety of auditory tasks; however, only a few studies have investigated blind listeners’ speaker identification abilities, and the existing studies in the area show conflicting results. The presented empirical investigation with 153 blind (74 of them congenitally blind) and 153 sighted listeners is the first of its kind and scale in which long-term memory effects on blind listeners’ speaker identification abilities are examined. For the investigation, all listeners were evenly assigned to one of nine subgroups (3 x 3 design) in order to investigate the influence of two parameters, each with three levels, on blind and sighted listeners’ speaker identification performance. The parameters were a) time interval, i.e. a time interval of 1, 3 or 6 weeks between the first exposure to the voice to be recognised (familiarisation) and the speaker identification task (voice lineup); and b) signal quality, i.e. voice recordings presented in studio quality, in mobile-phone quality, or as recordings of whispered speech. Half of the presented voice lineups were target-present lineups in which the previously heard target voice was included; the other half were target-absent lineups containing solely distractor voices. Blind individuals outperformed sighted listeners only under studio-quality conditions. Furthermore, for blind and sighted listeners no significant performance differences were found with regard to the three investigated time intervals of 1, 3 and 6 weeks. Blind as well as sighted listeners were significantly better at picking the target voice from target-present lineups than at indicating that the target voice was absent from target-absent lineups. Within the blind group, no significant correlations were found between identification performance and onset or duration of blindness. Implications for the field of forensic phonetics are discussed.
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. In contrast with those species, however, rodent V1 displays a very different type of spatial organization: instead of orientation maps, neurons are organized in a “salt and pepper” pattern, in which adjacent neurons can have completely different preferred orientations. This structure has motivated both experimental and theoretical research aimed at determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the thalamic neurons that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but that the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse-correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
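The central claim — that purely distance-dependent random pooling of thalamic receptive fields yields orientation-selective aggregate input — can be illustrated with a toy simulation (all parameters here are invented; this is not the authors' analytical model): randomly placed center-surround thalamic receptive fields are pooled with a probability that decays with distance, and the orientation tuning of the summed field is then measured at a single spatial frequency.

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid of visual space (degrees); the cortical cell sits at the origin.
x = np.linspace(-3, 3, 121)
X, Y = np.meshgrid(x, x)

def dog(cx, cy, sc=0.3, ss=0.6):
    """Center-surround (difference-of-Gaussians) thalamic receptive field."""
    d2 = (X - cx) ** 2 + (Y - cy) ** 2
    return (np.exp(-d2 / (2 * sc**2)) / sc**2
            - 0.8 * np.exp(-d2 / (2 * ss**2)) / ss**2)

# Random thalamic RF centers, connected with a distance-dependent probability.
centers = rng.uniform(-2, 2, size=(300, 2))
dist = np.hypot(centers[:, 0], centers[:, 1])
connected = rng.random(300) < np.exp(-dist**2 / 2)
agg_rf = sum(dog(cx, cy) for (cx, cy), c in zip(centers, connected) if c)

# Orientation tuning at one spatial frequency, taking the best spatial phase.
freq = 0.5  # cycles/degree
thetas = np.linspace(0, np.pi, 36, endpoint=False)
resp = []
for th in thetas:
    proj = X * np.cos(th) + Y * np.sin(th)
    c = np.sum(agg_rf * np.cos(2 * np.pi * freq * proj))
    s = np.sum(agg_rf * np.sin(2 * np.pi * freq * proj))
    resp.append(np.hypot(c, s))
resp = np.array(resp)
osi = (resp.max() - resp.min()) / (resp.max() + resp.min())
```

Because the pooled field is a random sum, it is generically anisotropic, so the tuning curve `resp` is non-flat and the selectivity index `osi` is positive — the qualitative effect described in the abstract, without any orientation structure built into the wiring rule.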
Programmed axon death: from animal models into human disease
Programmed axon death is a widespread and completely preventable mechanism in injury and disease. Mouse and Drosophila studies define a molecular pathway involving activation of the SARM1 NADase and its prevention by the NAD-synthesising enzyme NMNAT2. Loss of axonal NMNAT2 causes its substrate, NMN, to accumulate and activate SARM1, driving loss of NAD and changes in ATP, ROS and calcium. Animal models caused by genetic mutation, toxins, viruses or metabolic defects can be alleviated by blocking programmed axon death, for example models of CMT1B, chemotherapy-induced peripheral neuropathy (CIPN), rabies and diabetic peripheral neuropathy (DPN). The perinatal lethality of NMNAT2 null mice is completely rescued, restoring a normal, healthy lifespan. Animal models lack the genetic and environmental diversity present in human populations, and this is problematic for modelling gene-environment combinations, for example in CIPN and DPN, and for identifying rare, pathogenic mutations. Instead, by testing human gene variants in WGS datasets for loss- and gain-of-function, we identified enrichment of rare SARM1 gain-of-function variants in sporadic ALS, despite previous negative findings in SOD1 transgenic mice. We have shown in mice that heterozygous SARM1 loss-of-function is protective against a range of axonal stresses and that naturally occurring SARM1 loss-of-function alleles are present in human populations. This enables new approaches to identify disorders where blocking SARM1 may be therapeutically useful, and the existence of two dominant-negative human variants in healthy adults is some of the best evidence available that drugs blocking SARM1 are likely to be safe. Further loss- and gain-of-function variants in SARM1 and NMNAT2 are being identified and used to extend and strengthen the evidence of association with neurological disorders. We aim to identify diseases, and specific patients, in whom SARM1-blocking drugs are most likely to be effective.
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of rates. Theoretical works have shown that this feature emerges dynamically in network models if coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/\sqrt{K}. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/\log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections which has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence of moderate coupling and loose balance in the mouse cortex.
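The strong-coupling scaling argument can be made concrete with a few lines of toy code (a sketch with invented rates and weights, not the models analyzed in the talk): with K excitatory and K inhibitory inputs of efficacy J = 1/sqrt(K), each pathway's mean drive grows like sqrt(K), but the fluctuations of the summed input — which drive irregular firing once the means approximately cancel — stay of order one as K grows.

```python
import numpy as np

rng = np.random.default_rng(2)

def input_stats(K, trials=2000, rate=0.1, g=1.1):
    """Summed synaptic input from K excitatory and K inhibitory sources
    firing independently with probability `rate` per bin, with efficacy
    J = 1/sqrt(K); inhibition is slightly stronger (factor g)."""
    J = 1.0 / np.sqrt(K)
    exc = rng.binomial(1, rate, size=(trials, K)).sum(axis=1)
    inh = rng.binomial(1, rate, size=(trials, K)).sum(axis=1)
    total = J * exc - g * J * inh
    return total.mean(), total.std()

for K in (100, 400, 1600):
    mu, sigma = input_stats(K)
    print(f"K={K:5d}  mean={mu:+.3f}  std={sigma:.3f}")
```

The printout shows the standard deviation hovering near a constant (about 0.45 for these parameters) while the net mean scales with sqrt(K); in a self-consistent balanced network the rates adjust so that this large mean cancels to order one, leaving the O(1) fluctuations to drive irregular spiking.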
Meningeal macrophages protect against viral neuroinfection
https://doi.org/10.1016/j.immuni.2022.10.005
Humoral immunity at the brain borders in homeostasis and a scRNA-seq atlas of immune cells at the CNS borders
https://www.cnsbordercellatlas.org/
Microglia states and nomenclature: A field at its crossroads
https://doi.org/10.1016/j.neuron.2022.10.020
Microglial efferocytosis: Diving into the Alzheimer's Disease gene pool
Genome-wide association studies and functional genomics studies have linked specific cell types, genes, and pathways to Alzheimer’s disease (AD) risk. In particular, AD risk alleles primarily affect the abundance or structure, and thus the activity, of genes expressed in macrophages, strongly implicating microglia (the brain-resident macrophages) in the etiology of AD. These genes converge on pathways (endocytosis/phagocytosis, cholesterol metabolism, and immune response) with critical roles in core macrophage functions such as efferocytosis. Here, we review these pathways, highlighting relevant genes identified in the latest AD genetics and genomics studies, and describe how they may contribute to AD pathogenesis. Investigating the functional impact of AD-associated variants and genes in microglia is essential for elucidating disease risk mechanisms and developing effective therapeutic approaches. https://doi.org/10.1016/j.neuron.2022.10.015
Protective microglial signaling in Alzheimer's Disease
Recent studies have begun to reveal critical roles for the brain’s professional phagocytes, microglia, and their receptors in the control of neurotoxic amyloid beta (Aβ) and myelin debris accumulation in neurodegenerative disease. However, the critical intracellular molecules that orchestrate neuroprotective functions of microglia remain poorly understood. In our studies, we find that targeted deletion of SYK in microglia leads to exacerbated Aβ deposition, aggravated neuropathology, and cognitive defects in the 5xFAD mouse model of Alzheimer’s disease (AD). Disruption of SYK signaling in this AD model was further shown to impede the development of disease-associated microglia (DAM), alter AKT/GSK3β-signaling, and restrict Aβ phagocytosis by microglia. Conversely, receptor-mediated activation of SYK limits Aβ load. We also found that SYK critically regulates microglial phagocytosis and DAM acquisition in demyelinating disease. Collectively, these results broaden our understanding of the key innate immune signaling molecules that instruct beneficial microglial functions in response to neurotoxic material. https://doi.org/10.1016/j.cell.2022.09.030
Cholesterol and matrisome pathways dysregulated in Alzheimer’s disease brain astrocytes and microglia
The impact of apolipoprotein E ε4 (APOE4), the strongest genetic risk factor for Alzheimer’s disease (AD), on human brain cellular function remains unclear. Here, we investigated the effects of APOE4 on brain cell types derived from population and isogenic human induced pluripotent stem cells, post-mortem brain, and APOE targeted replacement mice. Population and isogenic models demonstrate that APOE4 local haplotype, rather than a single risk allele, contributes to risk. Global transcriptomic analyses reveal human-specific, APOE4-driven lipid metabolic dysregulation in astrocytes and microglia. APOE4 enhances de novo cholesterol synthesis despite elevated intracellular cholesterol due to lysosomal cholesterol sequestration in astrocytes. Further, matrisome dysregulation is associated with upregulated chemotaxis, glial activation, and lipid biosynthesis in astrocytes co-cultured with neurons, which recapitulates altered astrocyte matrisome signaling in human brain. Thus, APOE4 initiates glia-specific cell and non-cell autonomous dysregulation that may contribute to increased AD risk. https://doi.org/10.1016/j.cell.2022.05.017
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network satisfying Dale’s law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
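The objects in question — fixed points of a recurrent network obeying Dale's law, and the combinatorial code words given by their supports — can be computed numerically in a small example (a toy threshold-linear network with invented weights, not the construction from the talk): each excitatory neuron has a non-negative outgoing column, each inhibitory neuron a non-positive one, and the support of the stable fixed point is read off as a code word.

```python
import numpy as np

rng = np.random.default_rng(3)

# 6 excitatory + 2 inhibitory neurons; Dale's law is enforced column-wise.
n_e, n_i = 6, 2
W = np.zeros((n_e + n_i, n_e + n_i))
W[:, :n_e] = rng.uniform(0, 0.2, size=(n_e + n_i, n_e))   # excitatory columns
W[:, n_e:] = -rng.uniform(0, 1.0, size=(n_e + n_i, n_i))  # inhibitory columns
np.fill_diagonal(W, 0.0)
b = rng.uniform(-0.5, 1.0, size=n_e + n_i)                # external drive

# Threshold-linear dynamics: dx/dt = -x + [W x + b]_+  (Euler integration)
x = np.zeros(n_e + n_i)
for _ in range(5000):
    x += 0.05 * (-x + np.maximum(W @ x + b, 0.0))

support = tuple(np.flatnonzero(x > 1e-6))   # the combinatorial code word
residual = np.max(np.abs(x - np.maximum(W @ x + b, 0.0)))
```

With the weights kept moderate, the dynamics converge (small `residual`), and `support` records which neurons are active at the fixed point; enumerating such supports across inputs is what produces the combinatorial code whose convexity the talk characterizes.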
Versatile treadmill system for measuring locomotion and neural activity in head-fixed mice
Here, we present a protocol for using a versatile treadmill system to measure locomotion and neural activity at high temporal resolution in head-fixed mice. We first describe the assembly of the treadmill system. We then detail surgical implantation of the headplate on the mouse skull, followed by habituation of mice to locomotion on the treadmill system. The system is compact, movable, and simple to synchronize with other data streams, making it ideal for monitoring brain activity in diverse behavioral frameworks. https://dx.doi.org/10.1016/j.xpro.2022.101701
Protocols for the social transfer of pain and analgesia in mice
We provide protocols for the social transfer of pain and analgesia in mice. We describe the steps to induce pain or analgesia (pain relief) in bystander mice via a 1-h social interaction with a partner injected with CFA (complete Freund’s adjuvant) or with CFA and morphine, respectively. We detail behavioral tests to assess pain or analgesia in the untreated bystander mice. This protocol has been validated in mice and rats and can be used for investigating mechanisms of empathy.
Highlights:
• A protocol for the rapid social transfer of pain in rodents
• Detailed requirements for handling and housing conditions
• Procedures for habituation, social interaction, and pain induction and assessment
• Adaptable for social transfer of analgesia; may be used to study empathy in rodents
https://doi.org/10.1016/j.xpro.2022.101756
The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there
Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). By contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to develop an open-source platform for the storage, sharing, synthesis and meta-analysis of human clinical data in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it neurocausal (https://neurocausal.github.io).
Training Dynamic Spiking Neural Network via Forward Propagation Through Time
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), with functionality similar to the gating operation in LSTMs. We integrate these neurons into SNNs trained with Forward Propagation Through Time (FPTT) and demonstrate that the resulting LTC-SNNs outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks whose sequence length can easily be varied, such as the Add Task and the DVS-Gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate a new state of the art for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE), and we show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) the state-of-the-art performance obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs end-to-end at network sizes and complexity that were previously infeasible: we demonstrate this by training, in an end-to-end fashion, the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale, complex spiking neural network architectures online and on long temporal sequences.
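The gating idea behind an LTC-style spiking neuron can be sketched in a forward simulation (a heavily simplified toy: the parameter values, gating function, and reset rule are invented here, and the FPTT training procedure is omitted entirely): the membrane time constant is itself a function of the input, analogous to an LSTM gate, so the neuron adapts how quickly it integrates.

```python
import numpy as np

rng = np.random.default_rng(4)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ltc_sn_forward(inputs, w_in, w_tau, v_th=1.0, dt=1.0):
    """Forward pass of a toy liquid-time-constant spiking neuron: the
    membrane decay is gated by the current input (cf. LSTM gating), and a
    spike resets the membrane potential to zero."""
    v, spikes, vs = 0.0, [], []
    for x in inputs:
        tau = 2.0 + 18.0 * sigmoid(w_tau * x)   # input-dependent tau in [2, 20]
        alpha = np.exp(-dt / tau)               # leak factor for this step
        v = alpha * v + (1 - alpha) * w_in * x  # gated integration
        s = float(v >= v_th)
        v = v * (1.0 - s)                       # reset on spike
        spikes.append(s)
        vs.append(v)
    return np.array(spikes), np.array(vs)

x_seq = rng.uniform(0, 2, size=200)
spikes, vs = ltc_sn_forward(x_seq, w_in=1.5, w_tau=1.0)
```

In the actual work, parameters like `w_tau` are learned, and the hard threshold is handled with a surrogate gradient so the network can be trained online with FPTT; this sketch only shows the state-dependent time constant that gives the neuron its LSTM-like gating behavior.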
Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity
Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, and demonstrate superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving performance comparable to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attains a near-optimal strategy in a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built on this principle.
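The core idea — one-shot storage of associations through local Hebbian updates rather than gradient descent — can be illustrated with a classical outer-product associative memory (a textbook sketch, not the SNN architecture proposed in the talk):

```python
import numpy as np

rng = np.random.default_rng(5)

# Three key-value pairs of +/-1 patterns in a 64-dimensional space.
n, pairs = 64, 3
keys = rng.choice([-1.0, 1.0], size=(pairs, n))
values = rng.choice([-1.0, 1.0], size=(pairs, n))

# Hebbian one-shot storage: weights are a sum of outer products.
# Each pair is written in a single local update, with no gradient descent.
W = sum(np.outer(v, k) for k, v in zip(keys, values)) / n

# Recall: presenting a stored key retrieves its associated value.
recalled = np.sign(W @ keys[0])
```

With few patterns relative to the dimension, the crosstalk between stored pairs is small and recall is essentially exact; the talk's contribution is embedding this kind of local, one-shot plasticity inside spiking networks and showing it supports tasks far beyond simple pattern completion.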
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
Inter-tissue signals modify food-seeking behavior in C. elegans
Animals modify their behavioral outputs in response to changes in external and internal environments. We use the nematode C. elegans to probe the pathways linking changes in internal states, like hunger, with behavior. We find that acute food deprivation alters the localization of two transcription factors, likely releasing an insulin-like peptide from the intestine, which in turn modifies chemosensory neurons and alters behavior. These results present a model for how inter-tissue, gut-brain signals generate flexible behaviors.
Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implants
Results: The main areas modified based on participatory feedback were the variety of immersive scenarios (to cover a range of ages and interests), the number of levels of complexity (to ensure that small improvements were measured), the feedback and reward schemes (to ensure positive reinforcement), and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. We also added login options for other family members and, based on patient feedback, improved the accompanying reward schemes. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial. Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and should ultimately lead to patients taking control of their own management, reducing reliance upon outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.
From Machine Learning to Autonomous Intelligence
How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf
Chandelier cells shine a light on the emergence of GABAergic circuits in the cortex
GABAergic interneurons are chiefly responsible for controlling the activity of local circuits in the cortex. Chandelier cells (ChCs) are a type of GABAergic interneuron that control the output of hundreds of neighbouring pyramidal cells through axo-axonic synapses which target the axon initial segment (AIS). Despite their importance in modulating circuit activity, the development and function of axo-axonic synapses remain poorly understood. We have investigated the emergence and plasticity of axo-axonic synapses in layer 2/3 of the somatosensory cortex (S1) and found that ChCs follow what appear to be homeostatic rules when forming synapses with pyramidal neurons. We are currently implementing in vivo techniques to image the process of axo-axonic synapse formation during development and uncover the dynamics of synaptogenesis and pruning at the AIS. In addition, we are using an all-optical approach to both activate and measure the activity of chandelier cells and their postsynaptic partners in the primary visual cortex (V1) and somatosensory cortex (S1) in mice, also during development. We aim to provide a structural and functional description of the emergence and plasticity of a GABAergic synapse type in the cortex.
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, and also primate retinal ganglion responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and is a basis for modelling the vision of humans and other animals.
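The band nonlinearity the abstract describes (a lower threshold set by the contrast sensitivity function, a dynamic range that is a fixed multiple of that threshold, and saturation above it) can be sketched as a toy response function. The threshold value and dynamic-range multiple below are illustrative assumptions, not the model's fitted parameters:

```python
import numpy as np

def band_response(contrast, threshold, dyn_range=30.0):
    """Toy response of one spatial-frequency band.

    Below `threshold` (set by the contrast sensitivity function) the band
    does not respond; the response then grows linearly until contrast
    reaches a fixed multiple of the threshold, above which it saturates.
    `threshold` and `dyn_range` are illustrative values only.
    """
    c = np.asarray(contrast, dtype=float)
    ceiling = threshold * dyn_range  # saturation point of the band
    # normalise to [0, 1]: 0 at threshold, 1 at the saturation ceiling
    r = (c - threshold) / (ceiling - threshold)
    return np.clip(r, 0.0, 1.0)

# Sub-threshold, mid-range, and saturated contrasts for one band
print(band_response([0.001, 0.01, 0.5], threshold=0.005))
```

In the full model, one such band would exist per octave of spatial frequency, with outputs reweighted to equalise power across channels for natural images.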
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induce neuron exchange. The exchange can be described analytically by reduced, random walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
The 15th David Smith Lecture in Anatomical Neuropharmacology: Professor Tim Bliss, "Memories of long term potentiation"
The David Smith Lectures in Anatomical Neuropharmacology, part of the 'Pharmacology, Anatomical Neuropharmacology and Drug Discovery Seminars Series', Department of Pharmacology, University of Oxford. The 15th David Smith Award Lecture in Anatomical Neuropharmacology will be delivered by Professor Tim Bliss, Visiting Professor at UCL and the Frontier Institutes of Science and Technology, Xi'an Jiaotong University, China, and is hosted by Professor Nigel Emptage. This award lecture was set up to celebrate the vision of Professor A. David Smith, namely, that explanations of the action of drugs on the brain require the definition of neuronal circuits and of the location and interactions of molecules. Tim Bliss gained his PhD at McGill University in Canada. He joined the MRC National Institute for Medical Research in Mill Hill, London, in 1967, where he remained throughout his career. His work with Terje Lømo in the late 1960s established the phenomenon of long-term potentiation (LTP) as the dominant synaptic model of how the mammalian brain stores memories. He was elected a Fellow of the Royal Society in 1994 and is a founding fellow of the Academy of Medical Sciences. He shared the Bristol-Myers Squibb Award for Neuroscience with Eric Kandel in 1991, and the Ipsen Prize for Neural Plasticity with Richard Morris and Yadin Dudai in 2013. In May 2012 he gave the annual Croonian Lecture at the Royal Society on 'The Mechanics of Memory'. In 2016 Tim, with Graham Collingridge and Richard Morris, shared the Brain Prize, one of the world's most coveted science prizes. Abstract: In 1966 there appeared in Acta Physiologica Scandinavica an abstract of a talk given by Terje Lømo, a PhD student in Per Andersen's laboratory at the University of Oslo. In it Lømo described the long-lasting potentiation of synaptic responses in the dentate gyrus of the anaesthetised rabbit that followed repeated episodes of 10–20 Hz stimulation of the perforant path.
Thus, heralded and almost entirely unnoticed, one of the most consequential discoveries of 20th century neuroscience was ushered into the world. Two years later I arrived in Oslo as a visiting post-doc from the National Institute for Medical Research in Mill Hill, London. In this talk I recall the events that led us to embark on a systematic reinvestigation of the phenomenon now known as long-term potentiation (LTP) and will then go on to describe the discoveries and controversies that enlivened the early decades of research into synaptic plasticity in the mammalian brain. I will end with an observer’s view of the current state of research in the field, and what we might expect from it in the future.
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of light as the natural input to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image-forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina's structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable 'function', and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
Heterogeneity and non-random connectivity in reservoir computing
Reservoir computing is a promising framework to study cortical computation, as it is based on continuous, online processing and the requirements and operating principles are compatible with cortical circuit dynamics. However, the framework has issues that limit its scope as a generic model for cortical processing. The most obvious of these is that, in traditional models, learning is restricted to the output projections and takes place in a fully supervised manner. If such an output layer is interpreted at face value as downstream computation, this is biologically questionable. If it is interpreted merely as a demonstration that the network can accurately represent the information, this immediately raises the question of what would be biologically plausible mechanisms for transmitting the information represented by a reservoir and incorporating it in downstream computations. Another major issue is that we have as yet only modest insight into how the structural and dynamical features of a network influence its computational capacity, which is necessary not only for gaining an understanding of those features in biological brains, but also for exploiting reservoir computing as a neuromorphic application. In this talk, I will first demonstrate a method for quantifying the representational capacity of reservoirs without training them on tasks. Based on this technique, which allows systematic comparison of systems, I then present our recent work towards understanding the roles of heterogeneity and connectivity patterns in enhancing both the computational properties of a network and its ability to reliably transmit to downstream networks. Finally, I will give a brief taster of our current efforts to apply the reservoir computing framework to magnetic systems as an approach to neuromorphic computing.
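The traditional setup the abstract refers to, in which the recurrent network is fixed and learning is restricted to the output projections, can be illustrated with a minimal echo state network. The reservoir size, spectral radius, and one-step-ahead prediction task below are illustrative choices, not the speaker's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network: a fixed random recurrent reservoir whose
# linear readout is the only trained component (fit by ridge regression).
N = 100                                    # reservoir size (illustrative)
T = 500                                    # number of time steps
u = np.sin(np.arange(T + 1) * 0.2)         # input signal
target = u[1:]                             # task: one-step-ahead prediction

W_in = rng.uniform(-0.5, 0.5, size=N)      # fixed input weights
W = rng.normal(0.0, 1.0, size=(N, N))      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])       # reservoir state update
    states[t] = x

# Ridge-regression readout: the supervised output projection
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(N), states.T @ target)
pred = states @ W_out

mse = np.mean((pred[100:] - target[100:]) ** 2)  # skip the washout period
print(mse)
```

Only `W_out` is learned; the fixed random reservoir merely provides a rich nonlinear expansion of the input history, which is exactly the arrangement whose biological plausibility the talk questions.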
Correcting cortical output: a distributed learning framework for motor adaptation
Bernstein Conference 2024
Electrogenic Na+/K+-ATPases constrain excitable cell activity and pose additional evolutionary pressure
Bernstein Conference 2024
Neuronal bursting from an interplay of fast voltage and slow concentration dynamics mediated by the Na+/K+-ATPase
Bernstein Conference 2024
Functional inter-subject alignment of MEG data outperforms anatomical alignment
Bernstein Conference 2024
A homeostatic mechanism or statistics can maintain input-output relations of multilayer drifting assemblies
Bernstein Conference 2024
A latent model of calcium activity outperforms alternatives at removing behavioral artifacts in two-channel calcium imaging
COSYNE 2022
Reduction of entropy specific to cortical outputs during anesthetic-induced loss of consciousness
COSYNE 2022
Synaptic modulation outperforms somatic modulation for rapid adaptation in cortical nets
COSYNE 2025
VAME outperforms conventional assessment of behavioral changes and treatment efficacy in Alzheimer’s mouse models
COSYNE 2025
The ability of ellagic acid loaded in nanovesicles to rescue the Aβ-induced LTP impairment
FENS Forum 2024
Activation of L2/3 neurons in the primary somatosensory cortex during motor output
FENS Forum 2024
Altered cell membrane ganglioside composition affects enzyme activity, expression, and submembrane localization of Na+,K+-ATPase in mouse brain
FENS Forum 2024
ATP6V1A is required for synaptic rearrangement and plasticity in murine hippocampal neurons
FENS Forum 2024
ATP8A2 controls phosphatidylserine externalisation, structural integrity, and survival in neurons
FENS Forum 2024
Cellular and molecular footprint of aging in a defined neuronal network encoding associative memory
FENS Forum 2024
Cerebellar neurodegeneration in phospholipid flippases ATP8A1/ATP8A2 double knock-out mice can be ameliorated by inactivating a microglial PS receptor
FENS Forum 2024
Changes in neurotransmitter ATP/adenosine dynamics in the pathogenesis of metabolic liver diseases
FENS Forum 2024
Clinical and molecular characterization of ATP1A1-related Charcot-Marie-Tooth disease
FENS Forum 2024
Differences in the frequency-dependency of LTP and LTD at lateral and medial perforant path synapses in rodent dentate gyrus reflect distinct roles in information encoding
FENS Forum 2024
Differential metabolism of serine enantiomers in the striatum of MPTP-lesioned monkeys and mice correlates with the severity of dopaminergic midbrain degeneration
FENS Forum 2024
Dopamine increases the protein synthesis rate in the hippocampus enabling dopamine-dependent LTP
FENS Forum 2024
The effect of altered ganglioside composition on leptin receptor and Na⁺,K⁺-ATPase in mouse thalamus
FENS Forum 2024
LTP at excitatory synapses onto inhibitory interneurons in the hippocampus depends on AMPA receptor surface mobility
FENS Forum 2024
Experience-dependent modulation of sensory inputs in the postpartum hypothalamus for infant-directed motor actions
FENS Forum 2024
Exploring the neuroprotective effects of nicotine against MPTP-induced neuronal damage in mice: Insights into antioxidant system
FENS Forum 2024
LTP and IEG expression at hippocampal synapses are not induced by cell-autonomous cAMP signalling
FENS Forum 2024
Fragile-X-messenger ribonucleoprotein mediates BDNF-induced upregulation of GluN2B-containing NMDA receptors: Role in LTP of CA1 synapses
FENS Forum 2024
From systems biology to drug targets: ATP synthase subunit upregulation causes mitochondrial dysfunction in Shank3Δ4-22 mouse model of autism
FENS Forum 2024
GABA attenuates MPTP-induced Parkinson's disease by targeting the MAPK signaling pathway
FENS Forum 2024
Histaminergic circadian modulation of mouse retinal output in vivo
FENS Forum 2024
The impact of gestational stress and antibiotics on postpartum maternal behavior
FENS Forum 2024
Input and output connectivity matrix of Chx10-PPN neurons
FENS Forum 2024
Investigating the acute impact of sweeteners sucralose and Ace-K on ATP production and mitochondrial respiration in the hypothalamic GT1-7 cell line challenged with increased glucose
FENS Forum 2024
Investigating input-output computations of Purkinje neuron dendrites in vivo
FENS Forum 2024
Involvement of peptidergic Edinger-Westphal nucleus in the neurobiology of migraine
FENS Forum 2024
Long-term potentiation (LTP) requires astrocytes and D-serine at entorhinal cortex LIII – CA1 synapses
FENS Forum 2024
Mathematical modelling of ATP-induced Ca2+ transients in Deiters cells considering the tonotopic axis
FENS Forum 2024
Micro-circuitry and output connectivity of the striosome network
FENS Forum 2024