Divergence
Astrocytes: From Metabolism to Cognition
Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, free from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.
Enhancing Qualitative Coding with Large Language Models: Potential and Challenges
Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. It requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance this process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, focusing specifically on data derived from centenarians' experiences during the COVID-19 pandemic, as well as on a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders in hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it with an LLM that can be stored and run locally (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on whether and how to integrate these models into this domain.
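As a rough illustration of the kind of pipeline described above, the sketch below sends one interview excerpt and a small codebook to the GPT-3.5 Turbo chat-completions API and asks for the applicable codes. The codebook entries, prompt wording, and the code_excerpt helper are hypothetical stand-ins, not the SWISS100 implementation.

```python
# Minimal sketch of LLM-assisted thematic coding (illustrative, not the SWISS100 pipeline).
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY environment variable.
import json
from openai import OpenAI

client = OpenAI()

CODEBOOK = ["social isolation", "resilience", "health concerns", "family contact"]  # hypothetical codes

def code_excerpt(excerpt: str) -> list[str]:
    """Ask the model which codebook themes apply to one interview excerpt."""
    prompt = (
        "You are assisting with qualitative coding. "
        f"Codebook: {CODEBOOK}. "
        "Return a JSON list of the codes that apply to the excerpt below, and nothing else.\n\n"
        f"Excerpt: {excerpt}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic output eases comparison with human coders
    )
    # Assumes the model follows the instruction and returns a bare JSON list.
    return json.loads(response.choices[0].message.content)

# Example: codes = code_excerpt("I could not see my grandchildren for months, but I kept busy.")
```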
Use of brain imaging data to improve prescriptions of psychotropic drugs - Examples of ketamine in depression and antipsychotics in schizophrenia
The use of molecular imaging, particularly PET and SPECT, has significantly transformed the treatment of schizophrenia with antipsychotic drugs since the late 1980s. It has offered insights into the links between drug target engagement, clinical effects, and side effects. A therapeutic window for receptor occupancy has been established for antipsychotics, yet opinions diverge regarding the importance of blood levels, with many downplaying their significance. As a result, the role of therapeutic drug monitoring (TDM) as a personalized therapy tool is often underrated. Because molecular imaging of antipsychotics has focused almost entirely on D2-like dopamine receptors and their potential to control positive symptoms, negative symptoms and cognitive deficits have hardly been investigated, if at all. Alternative methods have therefore been introduced, namely investigating the correlation between receptor occupancies approximated from blood levels and cognitive measures. Within the domain of antidepressants, and specifically regarding ketamine's efficacy in depression treatment, the association between plasma concentrations and target engagement remains poorly understood. The measurement of AMPA receptors in the human brain has added a new level of comprehension regarding ketamine's antidepressant effects. To ensure precise prescription of psychotropic drugs, it is vital to have a nuanced understanding of how molecular and clinical effects interact. Clinician scientists are tasked with integrating these indispensable pharmacological insights into practice, thereby ensuring a rational and effective approach to the treatment of mental health disorders and signaling a new era of personalized drug therapy targeting mechanisms that promote neuronal plasticity not only under pathological conditions but also in the healthy aging brain.
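For concreteness, one common way such occupancies are approximated from blood levels is a single-site hyperbolic (Emax) binding model. The sketch below illustrates that calculation with a placeholder EC50; it is not a validated constant for any particular antipsychotic.

```python
# Sketch of the kind of calculation behind "receptor occupancies approximated from blood levels":
# a single-site hyperbolic (Emax) model, Occ = 100 * C / (EC50 + C).
# The EC50 below is a placeholder, not a validated value for any specific drug.

def approximate_occupancy(plasma_conc_ng_ml: float, ec50_ng_ml: float) -> float:
    """Estimate % receptor occupancy from a plasma concentration."""
    return 100.0 * plasma_conc_ng_ml / (ec50_ng_ml + plasma_conc_ng_ml)

# Example: with a hypothetical EC50 of 10 ng/mL, a level of 30 ng/mL gives ~75% occupancy,
# near the upper end of the roughly 65-80% D2 occupancy window typically cited for antipsychotics.
print(approximate_occupancy(30.0, 10.0))  # 75.0
```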
The smart image compression algorithm in the retina: a theoretical study of recoding inputs in neural circuits
Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about the inputs. Past work has detailed how to optimize nonlinearities and circuit weights to maximize information, but we show that selective nonlinearities, acting together with divergent and convergent circuit structure, can improve information transmission over a purely linear circuit despite the suboptimality of these components individually. These nonlinearities recode the inputs in a manner that preserves the variance among converged inputs. Our results suggest that neural circuits may be doing better than expected without finely tuned weights.
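A toy simulation can make the intuition concrete. The sketch below compares a purely linear convergent circuit with a divergent ON/OFF rectified one, judged by how well a linear readout recovers the spread among two converged inputs; the input statistics, noise model, and readout metric are illustrative assumptions, not the study's model.

```python
# Toy illustration (not the study's exact model): two inputs converge onto parallel pathways.
# A purely linear circuit transmits only their sum, whereas ON/OFF rectification before
# convergence also carries information about the spread |x1 - x2| among the converged inputs.
import numpy as np

rng = np.random.default_rng(1)
n, noise_sd = 100_000, 0.1
x1, x2 = rng.normal(size=n), rng.normal(size=n)
spread = np.abs(x1 - x2)                      # per-trial spread among the converged inputs

def noisy(x):
    return x + noise_sd * rng.normal(size=x.shape)

# Linear circuit: both parallel pathways just sum the inputs (redundant outputs).
lin = np.column_stack([noisy(x1 + x2), noisy(x1 + x2)])

# Divergent ON/OFF circuit: rectify each input before convergence, one sign per pathway.
relu = lambda v: np.maximum(v, 0.0)
onoff = np.column_stack([noisy(relu(x1) + relu(x2)), noisy(relu(-x1) + relu(-x2))])

def readout_mse(features, target):
    """MSE of the best affine readout of `target` from the pathway outputs."""
    X = np.column_stack([features, np.ones(len(target))])
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((X @ w - target) ** 2)

print("linear pathways, MSE on spread:", readout_mse(lin, spread))    # ~var(spread): no information
print("ON/OFF pathways, MSE on spread:", readout_mse(onoff, spread))  # noticeably lower
```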
Functional Divergence at the Mouse Bipolar Cell Terminal
Research in our lab focuses on the circuit mechanisms underlying sensory computation. We use the mouse retina as a model system because it allows us to stimulate the circuit precisely with its natural input, patterns of light, and record its natural output, the spike trains of retinal ganglion cells. We harness the power of genetic manipulations and detailed information about cell types to uncover new circuits and discover their role in visual processing. Our methods include electrophysiology, computational modeling, and circuit tracing using a variety of imaging techniques.
Becoming what you smell: adaptive sensing in the olfactory system
I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors. In this view, the olfactory sensory repertoire employs a disordered code to compress a high dimensional olfactory space into a low dimensional receptor response space while preserving distance relations between odors. The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules. I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors. The resulting algorithm for 'estimation by elimination' can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain. Finally, I will discuss how diffuse feedback from the central brain to the bulb, followed by unstructured projections back to the cortex, can produce the convergence and divergence of the cortical representation of odors presented in shared or different contexts. Our theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans. It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb and cortex, as well as deficits that should arise in some disease states.
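The elimination step lends itself to a compact sketch: keep only the candidate odorants that bind to no silent receptor. The binary sensing matrix, binding probability, and mixture size below are illustrative assumptions rather than fitted parameters.

```python
# Sketch of 'estimation by elimination' as described: starting from all candidate odorants,
# discard any candidate that binds to a receptor that stayed silent.
import numpy as np

rng = np.random.default_rng(2)
n_receptors, n_odorants, mixture_size = 60, 500, 5
binds = rng.random((n_receptors, n_odorants)) < 0.1   # which odorant activates which receptor

# Ground-truth mixture and the resulting receptor activity (a receptor is active if any
# component of the mixture binds to it).
mixture = rng.choice(n_odorants, size=mixture_size, replace=False)
active = binds[:, mixture].any(axis=1)

# Elimination: any odorant that would have activated a silent receptor cannot be present.
silent = ~active
candidates = np.flatnonzero(~binds[silent, :].any(axis=0))

print("true mixture:        ", sorted(mixture))
print("surviving candidates:", candidates.tolist())   # a superset containing the true mixture
```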
Brain region evolution by duplication-and-divergence -- Lessons from the cerebellar nuclei
How Brain Circuits Function in Health and Disease: Understanding Brain-wide Current Flow
Dr. Rajan and her lab design neural network models based on experimental data and reverse-engineer them to figure out how brain circuits function in health and disease. They recently developed a powerful framework for tracing neural paths across multiple brain regions, called Current-Based Decomposition (CURBD). This new approach enables the computation of the excitatory and inhibitory input currents that drive a given neuron, aiding in the discovery of how entire populations of neurons behave across multiple interacting brain regions. Dr. Rajan's team has applied this method to studying the neural underpinnings of behavior. As an example, when CURBD was applied to data gathered from an animal model often used to study depression- and anxiety-like behaviors (i.e., learned helplessness), it revealed the underlying biology driving adaptive and maladaptive behaviors in the face of stress. With this framework, Dr. Rajan's team probes for mechanisms at work across brain regions that support both healthy and disease states, and identifies key divergences across multiple nervous systems, including those of zebrafish, mice, non-human primates, and humans.
Untangling brain-wide current flow using neural network models
The Rajan lab designs neural network models constrained by experimental data and reverse-engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by experimentally acquired time-series measurements such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. The different current sources that collectively give rise to the population dynamics observed experimentally can therefore be de-mixed, whether they arise within the same region or from other regions, potentially brain-wide. Source-de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both brain-wide directed interactions and inter-area currents during different types of behaviors. With this powerful framework based on data-constrained multi-region RNNs and CURBD, we ask whether there are conserved multi-region mechanisms across different species, as well as identify key divergences.
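The decomposition step described here reduces to partitioning the trained interaction matrix by source region. The sketch below illustrates that bookkeeping with random stand-ins for the trained weights and recorded activity and with hypothetical regions A, B, and C; it is a schematic of the idea rather than the published CURBD code.

```python
# Minimal sketch of the current-decomposition step: given a directed interaction matrix J
# from a trained, data-constrained RNN and the unit activity r(t), split the input current
# to each target region by source region (and by weight sign for excitation/inhibition).
import numpy as np

rng = np.random.default_rng(3)
n_units, n_time = 90, 200
J = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))  # stand-in for trained weights
r = np.tanh(rng.normal(size=(n_units, n_time)))                        # stand-in for unit activity

# Assign units to three hypothetical regions of 30 units each.
regions = {"A": slice(0, 30), "B": slice(30, 60), "C": slice(60, 90)}

def source_currents(J, r, source):
    """Current into every unit contributed only by units in `source`: J[:, source] @ r[source]."""
    return J[:, source] @ r[source, :]

# De-mixed currents, e.g. the drive that region B receives from region A over time,
# further split into excitatory and inhibitory parts by the sign of the weights.
curr_A_to_B = source_currents(J, r, regions["A"])[regions["B"], :]
exc_A_to_B = source_currents(np.clip(J, 0, None), r, regions["A"])[regions["B"], :]
inh_A_to_B = source_currents(np.clip(J, None, 0), r, regions["A"])[regions["B"], :]
print(curr_A_to_B.shape, exc_A_to_B.shape, inh_A_to_B.shape)  # (30, 200) each
```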
Inferring brain-wide current flow using data-constrained neural network models
The Rajan lab designs neural network models constrained by experimental data and reverse-engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by experimentally acquired time-series measurements such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute the excitatory and inhibitory input currents that drive a given neuron from all other neurons. The different current sources that collectively give rise to the population dynamics observed experimentally can therefore be de-mixed, whether they arise within the same region or from other regions, potentially brain-wide. Source-de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both brain-wide directed interactions and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURBD, we can ask whether there are conserved multi-region mechanisms across different species, as well as identify key divergences.
How brain evolutionary mechanisms could inspire AI structural designs
Across evolution, and in particular in brain evolutionary development, we can observe how diverse adaptive biological mechanisms emerge as solutions to environmental demands. In this talk, I will discuss some examples of emergent evolutionary developmental strategies that increase brain computational capacities, and how neurodevelopmental conservation, divergence, and convergence could inspire the optimization of AI systems.
Autism spectrum disorder: from gene discovery to functional insights
Autism spectrum disorder (ASD) is a neurodevelopmental disorder affecting up to 1% of the population. Over the past few years, large-scale genomic studies have identified hundreds of genetic loci associated with liability to ASD. It is now time to translate these genetic discoveries into functional studies that can help us understand convergences and divergences across risk genes and build pre-clinical cell and animal models. In this seminar, I will discuss some of the most recent findings on the genetic risk architecture of ASD. I will then expand on our work on biomarker discovery and neurodevelopmental analyses in two rare genetic conditions associated with ASD: ADNP syndrome and DDX3X syndrome.
Divergence of chromatic information in GABAergic amacrine cells in the retina
COSYNE 2022
Effort-based decision-making versus spontaneous foraging tasks reveal divergence in antidepressant effects on motivation in mice
FENS Forum 2024