Mapping
Chris Reinke
The internship aims to develop a controller for a social mobile robot to have a conversation with people using large language models (LLMs) such as ChatGPT. The internship is part of the European SPRING project, which aims to develop mobile robots for healthcare environments. The intern will develop a controller (Python, ROS) for ARI, the social robot. The controller will navigate towards a human (or group), have a conversation with them, and leave the conversation. The intern will use existing components from the SPRING project such as mapping and localization of the robot and humans, human-aware navigation, speech recognition, and a simple dialogue system based on ChatGPT. The intern will also investigate how to optimally use LLMs such as ChatGPT for natural and comfortable conversation with the robot, for example, by using prompt engineering. The intern will have the chance to develop and implement their own ideas to improve the conversation with the robot, for example, by investigating gaze, gestures, or emotions.
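The dialogue side of such a controller can be sketched in a few lines. The following is a minimal, hypothetical Python sketch (not SPRING project code): `query_llm` is a stand-in for a real LLM API call, and the system prompt illustrates the kind of prompt engineering the intern would explore.

```python
# Hypothetical sketch of one LLM-driven conversation turn for a social robot.
# `query_llm` stands in for a real LLM API (e.g., a ChatGPT client); the
# system prompt shapes the robot's persona and response style.

SYSTEM_PROMPT = (
    "You are ARI, a friendly social robot in a hospital waiting area. "
    "Keep replies under two sentences, speak plainly, and never give "
    "medical advice."
)

def build_messages(history, user_utterance):
    """Assemble a chat-style message list: persona first, then the dialogue."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages.extend(history)
    messages.append({"role": "user", "content": user_utterance})
    return messages

def converse_turn(history, user_utterance, query_llm):
    """Run one turn: build the prompt, query the model, record both utterances."""
    reply = query_llm(build_messages(history, user_utterance))
    history.append({"role": "user", "content": user_utterance})
    history.append({"role": "assistant", "content": reply})
    return reply

# Offline stub so the sketch runs without network access.
def echo_llm(messages):
    return f"(reply to: {messages[-1]['content']})"

history = []
print(converse_turn(history, "Hello, who are you?", echo_llm))
```

In the real controller, a loop like this would sit between the speech-recognition output and the robot's text-to-speech, with ROS topics carrying utterances in and out.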
Non-invasive human neuroimaging studies of motor plasticity have predominantly focused on the cerebral cortex due to the low signal-to-noise ratio of blood oxygen level-dependent (BOLD) signals in subcortical structures and the small effect sizes typically observed in plasticity paradigms. Precision functional mapping can help overcome these challenges and has revealed significant and reversible functional alterations in the cortico-subcortical motor circuit during arm immobilization.
Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury
Neuroscience is experiencing unprecedented growth in dataset size both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new training and technology approaches. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms, such as brainlife.io, that provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery, and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven.
Circuit Mechanisms of Remote Memory
Memories of emotionally-salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.
Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.
Mapping the neural dynamics of dominance and defeat
Social experiences can have lasting changes on behavior and affective state. In particular, repeated wins and losses during fighting can facilitate and suppress future aggressive behavior, leading to persistent high aggression or low aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
“Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller”
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost of Raman systems and their lack of modularity often limit exploratory research and hinder broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside an open-source acquisition software, ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
Mapping the Brain’s Visual Representations Using Deep Learning
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
How are the epileptogenesis clocks ticking?
The epileptogenesis process is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks, permanently altering excitability. About 80% of protein-coding genes are under the influence of circadian rhythms. These are 24-hour endogenous rhythms that determine a large number of daily changes in physiology and behavior in our bodies. In the brain, the master clock regulates a large number of pathways that are important during epileptogenesis and established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.
Are integrative, multidisciplinary, and pragmatic models possible? The #PsychMapping experience
This presentation delves into the necessity for simplified models in the field of psychological sciences to cater to a diverse audience of practitioners. We introduce the #PsychMapping model, evaluate its merits and limitations, and discuss its place in contemporary scientific culture. The #PsychMapping model is the product of an extensive literature review, initially within the realm of sport and exercise psychology and subsequently encompassing a broader spectrum of psychological sciences. This model synthesizes the progress made in psychological sciences by categorizing variables into a framework that distinguishes between traits (e.g., body structure and personality) and states (e.g., heart rate and emotions). Furthermore, it delineates internal traits and states from the externalized self, which encompasses behaviour and performance. All three components—traits, states, and the externalized self—are in a continuous interplay with external physical, social, and circumstantial factors. Two core processes elucidate the interactions among these four primary clusters: external perception, encompassing the mechanism through which external stimuli transition into internal events, and self-regulation, which empowers individuals to become autonomous agents capable of exerting control over themselves and their actions. While the model inherently oversimplifies intricate processes, the central question remains: does its pragmatic utility outweigh its limitations, and can it serve as a valuable tool for comprehending human behaviour?
Visual mechanisms for flexible behavior
Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating different areas on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold, but still highlighting sub-threshold results, with opacity proportional to the strength of the effect. They use this to illustrate how there in fact may be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that the majority of effects are in the same location and that the effect size is in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
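The "highlight, don't hide" idea translates directly into a visualization recipe. The toy sketch below (an illustration, not the paper's code) renders every voxel's statistic with opacity proportional to effect strength, and outlines only the supra-threshold region; the data, cluster location, and thresholds are invented for the example.

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted figure generation
import matplotlib.pyplot as plt

# Invented 2D "statistical map" with one strong cluster.
rng = np.random.default_rng(1)
stat = rng.normal(size=(32, 32))
stat[8:16, 8:16] += 3.0

# Opacity proportional to |effect|: sub-threshold voxels stay visible but faint.
alpha = np.clip(np.abs(stat) / 3.0, 0.0, 1.0)
rgba = plt.cm.RdBu_r((stat + 4.0) / 8.0)  # map stat values to colors
rgba[..., 3] = alpha                      # set per-pixel transparency

fig, ax = plt.subplots()
ax.imshow(rgba)
# Outline the region passing a hard threshold, rather than masking the rest.
ax.contour(np.abs(stat) > 2.0, levels=[0.5], colors="k")
fig.savefig("highlighted_map.png")
```

Compared with a binary threshold mask, the same figure now shows sub-threshold effects whose direction and location can be compared across analyses.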
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss this paper on NeuroQuery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations, which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.
The role of sub-population structure in computations through neural dynamics
Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.
Euclidean coordinates are the wrong prior for primate vision
The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the dropoff of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the inverted face effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
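The claim that scale and rotation become shifts is easy to verify numerically. A minimal sketch (an illustration of the log-polar transform itself, not the authors' model):

```python
import numpy as np

def to_log_polar(x, y):
    """Map Cartesian (x, y) to log-polar coordinates (log radius, angle)."""
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    return np.log(r), theta

# A point, then the same point scaled by 2x, then rotated by 30 degrees.
u0, v0 = to_log_polar(3.0, 4.0)
u_scaled, v_scaled = to_log_polar(6.0, 8.0)  # 2x uniform scaling
ang = np.deg2rad(30)
xr = 3.0 * np.cos(ang) - 4.0 * np.sin(ang)
yr = 3.0 * np.sin(ang) + 4.0 * np.cos(ang)
u_rot, v_rot = to_log_polar(xr, yr)

# Scaling is a pure shift along the log-radius axis ...
assert np.isclose(u_scaled - u0, np.log(2)) and np.isclose(v_scaled, v0)
# ... and rotation is a pure shift along the angle axis.
assert np.isclose(u_rot, u0) and np.isclose(v_rot - v0, ang)
```

Because both transformations reduce to translations in this domain, an ordinary shift-invariant convolution applied to the log-polar image is automatically scale- and rotation-invariant.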
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.
Causal Symptom Network Mapping Based on Lesions and Brain Stimulation; Converging Evidence about a Depression Circuit Using Causal Sources of Information
It’s our pleasure to announce that we will host Shan Siddiqi and Michael D. Fox on Thursday, March 30th at noon ET / 6PM CET. Shan Siddiqi, MD, is an Assistant Professor of Psychiatry at Harvard Medical School and the director of Psychiatric Neuromodulation Research at the Brigham and Women’s Hospital. Michael D. Fox, MD, PhD, is an Associate Professor of Neurology at Harvard Medical School and the founding director of the Center for Brain Circuit Therapeutics at the Brigham and Women’s Hospital. The talks will be followed by a shared discussion. You can register via talks.stimulatingbrains.org to receive the (free) Zoom link!
How Children Design by Analogy: The Role of Spatial Thinking
Analogical reasoning is a common reasoning tool for learning and problem-solving. Existing research has extensively studied children’s reasoning when comparing, or choosing from ready-made analogies. Relatively less is known about how children come up with analogies in authentic learning environments. Design education provides a suitable context to investigate how children generate analogies for creative learning purposes. Meanwhile, the frequent use of visual analogies in design provides an additional opportunity to understand the role of spatial reasoning in design-by-analogy. Spatial reasoning is one of the most studied human cognitive factors and is critical to the learning of science, technology, engineering, arts, and mathematics (STEAM). There is growing interest in exploring the interplay between analogical reasoning and spatial reasoning. In this talk, I will share qualitative findings from a case study, where a class of 11-to-12-year-olds in the Netherlands participated in a biomimicry design project. These findings illustrate (1) practical ways to support children’s analogical reasoning in the ideation process and (2) the potential role of spatial reasoning as seen in children mapping form-function relationships in nature analogically and adaptively to those in human designs.
Working memory tasks for functional mapping of the prefrontal cortex in common marmosets
Verb metaphors are processed as analogies
Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form An X is a Y (e.g., My job is a jail). Yet there is evidence that verb metaphor (e.g., I sailed through my exams) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account for verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.
Multimodal Blending
In this talk, I’ll consider how new ideas emerge from old ones via the process of conceptual blending. I’ll start by considering analogical reasoning in problem solving and the role conceptual blending plays in these problem-solving contexts. Then I’ll consider blending in multi-modal contexts, including timelines, memes (viz. image macros), and, if time allows, zoom meetings. I suggest mappings analogy researchers have traditionally considered superficial are often important for the development of novel abstractions. Likewise, the analogue portion of multimodal blends anchors their generative capacity. Overall, these observations underscore the extent to which meaning is a socially distributed process whose intermediate products are stored in cognitive artifacts such as text and digital images.
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure mapping theory by Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, not without merit, in recent years some semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been refocusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically-lean) and more concrete (semantically-rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) which strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. The effective mapping (4) is related to the slowest brain rhythm – the delta band (around 2-3 Hz) – suggesting its highly integrative nature. Finally, we have developed a new task – Graph Mapping – which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
Analogies between exemplars of schema-governed categories
Dominant theories of analogical thinking postulate that making an analogy consists in discovering that two superficially different situations share isomorphic systems of similar relations. According to this perspective, the comparison between the two situations may eventually lead to the construction of a schema, which retains the structural aspects they share and deletes their specific contents. We have developed a new approach to analogical thinking, whose purpose is to explain a particular type of analogies: those in which the analogs are exemplars of a schema-governed category (e.g., two instances of robbery). As compared to standard analogies, these comparisons are noteworthy in that a well-established schema (the schema-governed category) mediates each one of the subprocesses involved in analogical thinking. We argue that the category assignment approach is able to provide a better account of how the analogical subprocesses of retrieval, mapping, re-representation, evaluation and inference generation are carried out during the processing of this specific kind of analogies. The arguments presented are accompanied by brief descriptions of some of the studies that provided support for this approach.
Modelling metaphor comprehension as a form of analogizing
What do people do when they comprehend language in discourse? According to many psychologists, they build and maintain cognitive representations of utterances in four complementary mental models for discourse that interact with each other: the surface text, the text base, the situation model, and the context model. When people encounter metaphors in these utterances, they need to incorporate them into each of these mental representations for the discourse. Since influential metaphor theories define metaphor as a form of (figurative) analogy, involving cross-domain mapping of a smaller or greater extent, the general expectation has been that metaphor comprehension is also based on analogizing. This expectation, however, has been partly borne out by the data, but not completely. There is no one-to-one relationship between metaphor as (conceptual) structure (analogy) and metaphor as (psychological) process (analogizing). According to Deliberate Metaphor Theory (DMT), only some metaphors are handled by analogy. Instead, most metaphors are presumably handled by lexical disambiguation. This is a hypothesis that brings together most metaphor research in a provocatively new way: it means that most metaphors are not processed metaphorically, which produces a paradox of metaphor. In this talk I will sketch out how this paradox arises and how it can be resolved by a new version of DMT, which I have described in my forthcoming book Slowing metaphor down: Updating Deliberate Metaphor Theory (currently under review). In this theory, the distinction between, but also the relation between, analogy in metaphorical structure versus analogy in metaphorical process is of central importance.
Adaptation via innovation in the animal kingdom
Over the course of evolution, the human race has achieved a number of remarkable innovations that have enabled us to adapt to and benefit from the environment ever more effectively. The ongoing environmental threats and health disasters of our world have now made it crucial to understand the cognitive mechanisms behind innovative behaviours. In my talk, I will present two research projects with examples of innovation-based behavioural adaptation from the taxonomic kingdom of animals, serving as a comparative psychological model for mapping the evolution of innovation. The first project focuses on the challenge of overcoming physical disability. In this study, we investigated an injured kea (Nestor notabilis) that exhibits an efficient, intentional, and innovative tool-use behaviour to compensate for his disability, showing evidence for innovation-based adaptation to a physical disability in a non-human species. The second project focuses on the evolution of fire use from a cognitive perspective. Fire has been one of the most dominant ecological forces in human evolution; however, it is still unknown what capabilities and environmental factors could have led to the emergence of fire use. In the core study of this project, we investigated a captive population of Japanese macaques (Macaca fuscata) that has been regularly exposed to campfires during the cold winter months for over 60 years. Our results suggest that macaques are able to take advantage of the positive effects of fire while avoiding the dangers of flames and hot ashes, and exhibit calm behaviour around the bonfire. In addition, I will present a research proposal targeting the foraging behaviour of predatory birds in parts of Australia frequently affected by bushfires. Anecdotal reports suggest that some birds use burning sticks to spread the flames, a behaviour that has not been scientifically observed and evaluated.
In summary, these projects explore innovative behaviours across three different species groups, three different habitats, and three different ecological drivers, providing insights into the cognitive and behavioural mechanisms of adaptation through innovation.
Mapping learning and decision-making algorithms onto brain circuitry
In the first half of my talk, I will discuss our recent work on the midbrain dopamine system. The hypothesis that midbrain dopamine neurons broadcast an error signal for the prediction of reward is among the great successes of computational neuroscience. However, our recent results contradict a core aspect of this theory: that the neurons uniformly convey a scalar, global signal. I will review this work, as well as our new efforts to update models of the neural basis of reinforcement learning with our data. In the second half of my talk, I will discuss our recent findings of state-dependent decision-making mechanisms in the striatum.
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Based on the contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
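The lagged correlation matrices that such edge measures start from are cheap to compute. The toy sketch below does not reproduce the paper's PEM formulas or their corrections for confounding and reverse causation; it simulates a small linear autoregressive process with one true causal link (an invented example) and shows that the raw lagged correlation already points toward that link:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-node linear autoregressive process x_{t+1} = A x_t + noise,
# with a single true causal link 0 -> 1 (node 2 is independent).
A = np.array([[0.9, 0.0, 0.0],
              [0.4, 0.9, 0.0],
              [0.0, 0.0, 0.9]])
T, n = 20000, 3
x = np.zeros((T, n))
for t in range(T - 1):
    x[t + 1] = A @ x[t] + rng.normal(size=n)

def lagged_correlation(x, lag=1):
    """C[i, j] ~ corr(x_i(t), x_j(t + lag)): node i now vs. node j later."""
    a = (x - x.mean(0)) / x.std(0)
    return a[:-lag].T @ a[lag:] / (len(x) - lag)

C = lagged_correlation(x)

# Zero the diagonal (autocorrelations) and find the strongest directed edge.
off = C - np.diag(np.diag(C))
i, j = np.unravel_index(np.abs(off).argmax(), off.shape)
assert (i, j) == (0, 1)  # recovers the simulated causal link
```

In this easy case the largest lagged correlation identifies the true edge; the point of the proposed PEMs is to retain this computational cheapness while additionally correcting for confounders and reverse causation, where the raw matrix alone would mislead.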
The role of population structure in computations through neural dynamics
Neural computations are currently investigated using two separate approaches: sorting neurons into functional subpopulations or examining the low-dimensional dynamics of collective activity. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from networks trained on neuroscience tasks, here we show that the dimensionality of the dynamics and subpopulation structure play fundamentally complementary roles. Although various tasks can be implemented by increasing the dimensionality in networks with fully random population structure, flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple subpopulations. Our analyses revealed that such a subpopulation structure enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics. Our results lead to task-specific predictions for the structure of neural selectivity, for inactivation experiments and for the implication of different neurons in multi-tasking.
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogy, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice: the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks where they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples: First, in the context of animal navigation, where predictive learning can associate internal self-motion information always available to the animal with external visual landmark information, leading to accurate path-integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). 
Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
How Children Discover Mathematical Structure through Relational Mapping
A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.
CNStalk: Mapping brain function with ultra-high field MRI
Cell-type specific genomics and transcriptomics of HIV in the brain
Exploration of genome organization and function in the HIV infected brain is critical to aid in the understanding and development of treatments for HIV-associated neurocognitive disorder (HAND). Here, we applied a multiomic approach, including single nuclei transcriptomics, cell-type specific Hi-C 3D genome mapping, and viral integration site sequencing (IS-seq) to frontal lobe tissue from HIV-infected individuals with encephalitis (HIVE) and without encephalitis (HIV+). We observed reorganization of open/repressive (A/B) compartment structures in HIVE microglia encompassing 6.4% of the genome with enrichment for regions containing interferon (IFN) pathway genes. 3D genome remodeling was associated with transcriptomic reprogramming, including down-regulation of cell adhesion and synapse-related functions and robust activation of IFN signaling and cell migratory pathways, and was recapitulated by IFN-γ stimulation of cultured microglial cells. Microglia from HIV+ brains showed, to a lesser extent, similar transcriptional alterations. IS-seq recovered 1,221 integration sites in the brain that were enriched for chromosomal domains newly mobilized into a permissive chromatin environment in HIVE microglia. Viral transcription, which was detected in 0.003% of all nuclei in HIVE brain, occurred in a subset of highly activated microglia that drove differential expression in HIVE. Thus, we observed a dynamic interrelationship of interferon-associated 3D genome and transcriptome remodeling with HIV integration and transcription in the brain.
Multi-muscle TMS mapping assessment of the motor cortex reorganization after finger dexterity training
It is widely known that motor learning leads to reorganization in the motor cortex. Recently, we have shown that navigated transcranial magnetic stimulation (TMS) allows us to reliably trace interactions among motor cortical representations (MCRs) of different upper limb muscles. Using this approach, we investigated changes in the MCRs after fine finger movement training. Our preliminary results demonstrated that the representation areas of the abductor pollicis brevis (APB) and abductor digiti minimi (ADM), and their overlaps, tended to increase after finger independence training. Behaviorally, hand dexterity increased for both hands, but the amplitudes of voluntary contraction of the APB and ADM did not change significantly. These behavioral results are consistent with the previously described suggestion that hand strength and hand dexterity are not directly related; likewise, the increase in overlaps between the MCRs of the trained muscles supports the idea that voluntary muscle relaxation is an active physiological process.
Unchanging and changing: hardwired taste circuits and their top-down control
The taste system detects 5 major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry to decipher how taste signals are encoded to drive innate response and modulated to facilitate adaptive response. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled-lines?). Leveraging these genetic handles, I demonstrated one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines aptly adapted to the internal state and external environment? I studied the modulation of taste signals by conflicting taste qualities in the concurrence of sweet and bitter to understand how adaptive taste responses emerge from hardwired taste circuits. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation of NST taste signals by the taste cortex and amygdala. 
While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via amygdala onto the incoming sweet signal in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.
Learning in/about/from the basal ganglia
The basal ganglia are a collection of brain areas that are connected by a variety of synaptic pathways and are a site of significant reward-related dopamine release. These properties suggest a possible role for the basal ganglia in action selection, guided by reinforcement learning. In this talk, I will discuss a framework for how this function might be performed and computational results using an upward mapping to identify putative low-dimensional control ensembles that may be involved in tuning decision policy. I will also present some recent experimental results and theory -- related to effects of extracellular ion dynamics -- that run counter to the classical view of basal ganglia pathways and suggest a new interpretation of certain aspects of this framework. For those not so interested in the basal ganglia, I hope that the upward mapping approach and impact of extracellular ion dynamics will nonetheless be of interest!
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, something which is challenging to young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic, social, pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and further to gain insights about relational verb meanings. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory that show that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Reconstructing inhibitory circuits in a damaged brain
Inhibitory interneurons govern the sparse activation of principal cells that permits appropriate behaviors, but they are among the most vulnerable to brain damage. Our recent work has demonstrated important roles for inhibitory neurons in disorders of brain development, injury and epilepsy. These studies have motivated our ongoing efforts to understand how these cells operate at the synaptic, circuit and behavioral levels and in designing new technologies targeting specific populations of interneurons for therapy. I will discuss our recent efforts examining the role of interneurons in traumatic brain injury and in designing cell transplantation strategies - based on the generation of new inhibitory interneurons - that enable precise manipulation of inhibitory circuits in the injured brain. I will also discuss our ongoing efforts using monosynaptic virus tracing and whole-brain clearing methods to generate brain-wide maps of inhibitory circuits in the rodent brain. By comprehensively mapping the wiring of individual cell types on a global scale, we have uncovered a fundamental strategy to sustain and optimize inhibition following traumatic brain injury that involves spatial reorganization of local and long-range inputs to inhibitory neurons. These recent findings suggest that brain damage, even when focally restricted, likely has a far broader effect on brain-wide neural function than previously appreciated.
Brain and Mind: Who is the Puppet and who the Puppeteer?
If the mind controls the brain, then there is free will and its corollaries, dignity and responsibility. You are king in your skull-sized kingdom and the architect of your destiny. If, on the other hand, the brain controls the mind, an incendiary conclusion follows: There can be no free will, no praise, no punishment and no purgatory. In this webinar, Professor George Paxinos will discuss his highly respected work on the construction of human and experimental animal brain atlases. He has discovered 94 brain regions, 64 homologies and published 58 books. His first book, The Rat Brain in Stereotaxic Coordinates, is the most cited publication in neuroscience and, for three decades, the third most cited book in science. Professor Paxinos will also present his recently published novel, A River Divided, which was 21 years in the making. Neuroscience principles were used in the formation of characters, such as those related to the mind, soul, free will and consciousness. Environmental issues are at the heart of the novel, including the question of whether the brain is the right ‘size’ for survival. Professor Paxinos studied at Berkeley, McGill and Yale and is now Scientia Professor of Medical Sciences at Neuroscience Research Australia and The University of New South Wales in Sydney.
The Synaptome Architecture of the Brain: Lifespan, disease, evolution and behavior
The overall aim of my research is to understand how the organisation of the synapse, with particular reference to the postsynaptic proteome (PSP) of excitatory synapses in the brain, informs the fundamental mechanisms of learning, memory and behaviour and how these mechanisms go awry in neurological dysfunction. The PSP indeed bears a remarkable burden of disease, with components being disrupted in disorders (synaptopathies) including schizophrenia, depression, autism and intellectual disability. Our work has been fundamental in revealing and then characterising the unprecedented complexity (>1000 highly conserved proteins) of the PSP in terms of the subsynaptic architecture of postsynaptic proteins such as PSD95 and how these proteins assemble into complexes and supercomplexes in different neurons and regions of the brain. Characterising the PSPs in multiple species, including human and mouse, has revealed differences in key sets of functionally important proteins, correlates with brain imaging and connectome data, and a differential distribution of disease-relevant proteins and pathways. Such studies have also provided important insight into synapse evolution, establishing that vertebrate behavioural complexity is a product of the evolutionary expansion in synapse proteomes that occurred ~500 million years ago. My lab has identified many mutations causing cognitive impairments in mice before they were found to cause human disorders. Our proteomic studies revealed that >130 brain diseases are caused by mutations affecting postsynaptic proteins. We uncovered mechanisms that explain the polygenic basis and age of onset of schizophrenia, with postsynaptic proteins, including PSD95 supercomplexes, carrying much of the polygenic burden. We discovered the “Genetic Lifespan Calendar”, a genomic programme controlling when genes are regulated. We showed that this could explain how schizophrenia susceptibility genes are timed to exert their effects in young adults. 
The Genes to Cognition programme is the largest genetic study so far undertaken into the synaptic molecular mechanisms underlying behaviour and physiology. We made important conceptual advances that inform how the repertoire of both innate and learned behaviours is built from unique combinations of postsynaptic proteins that either amplify or attenuate the behavioural response. This constitutes a key advance in understanding how the brain decodes information inherent in patterns of nerve impulses, and provides insight into why the PSP has evolved to be so complex, and consequently why the phenotypes of synaptopathies are so diverse. Our most recent work has opened a new phase, and scale, in understanding synapses with the first synaptome maps of the brain. We have developed next-generation methods (SYNMAP) that enable single-synapse resolution molecular mapping across the whole mouse brain and extensive regions of the human brain, revealing the molecular and morphological features of a billion synapses. This has already uncovered unprecedented spatiotemporal synapse diversity organised into an architecture that correlates with the structural and functional connectomes, and shown how mutations that cause cognitive disorders reorganise these synaptome maps; for example, by detecting vulnerable synapse subtypes and synapse loss in Alzheimer’s disease. This innovative synaptome mapping technology has huge potential to help characterise how the brain changes during normal development, including in specific cell types, and with degeneration, facilitating novel pathways to diagnosis and therapy.
Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain
Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both transcriptome and 3D genome are extensively transformed in the first month of life. In neurons, 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. 
In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS), which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncovered an unknown dimension of neurodevelopment, and open up opportunities for new treatments for autism and other developmental disorders.
Mapping Individual Trajectories of Structural and Cognitive Decline in Mild Cognitive Impairment
The US has an aging population. For the first time in US history, the number of older adults is projected to outnumber that of children by 2034. This, combined with the fact that the prevalence of Alzheimer's Disease increases exponentially with age, makes for a worrying combination. Mild cognitive impairment (MCI) is an intermediate stage of cognitive decline between being cognitively normal and having full-blown dementia, with every third person with MCI progressing to dementia of the Alzheimer's Type (DAT). While there is no known way to reverse symptoms once they begin, early prediction of disease can help stall its progression and help with early financial planning. While grey matter volume loss in the hippocampus and entorhinal cortex (EC) are characteristic biomarkers of DAT, little is known about the rates of decrease of these volumes within individuals in the MCI state across time. We used longitudinal growth curve models to map individual trajectories of volume loss in subjects with MCI. We then looked at whether these rates of volume decrease could predict progression to DAT while patients are still in the MCI stage. Finally, we evaluated whether these rates of hippocampal and EC volume loss were correlated with individual rates of decline of episodic memory, visuospatial ability, and executive function.
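The core idea of mapping individual trajectories (each subject gets their own volume-versus-time slope, and group differences in those slopes become predictors of conversion) can be sketched with synthetic data. Everything below is hypothetical: the visit schedule, volumes, and decline rates are illustrative numbers, and per-subject least-squares fits stand in for the full mixed-effects growth curve models the study used.

```python
import numpy as np

rng = np.random.default_rng(7)
visits = np.arange(5.0)  # annual MRI visits, in years from baseline

def subject_slope(times, volumes):
    """Per-subject linear trajectory volume ~ b0 + b1 * time; returns b1,
    the individual annual rate of volume change."""
    b1, b0 = np.polyfit(times, volumes, 1)
    return b1

def simulate_group(n, mean_rate):
    """Hypothetical synthetic hippocampal volumes (mm^3): a shared baseline,
    a group-level decline rate, and per-visit measurement noise."""
    return [3500.0 + mean_rate * visits + rng.normal(0.0, 20.0, visits.size)
            for _ in range(n)]

progressors = simulate_group(30, mean_rate=-80.0)  # MCI-to-DAT converters
stable = simulate_group(30, mean_rate=-25.0)       # non-converters

rate_p = np.mean([subject_slope(visits, v) for v in progressors])
rate_s = np.mean([subject_slope(visits, v) for v in stable])
print(rate_p, rate_s)  # converters decline faster
```

A full growth curve (mixed-effects) model would estimate these individual slopes jointly with the group means, shrinking noisy per-subject fits toward the population trend, which matters when visits are few or irregular.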
Neural cartography: Mapping the brain with X-ray and electron microscopy
Implementing structure mapping as a prior in deep learning models for abstract reasoning
Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural net framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module net analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.
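Stripped of the neural implementation, the two-stage logic that NSM formalizes (extract relational structure from a source domain, then re-apply it compositionally in a target domain) can be shown with a purely symbolic toy. The additive-progression rule below is an illustrative assumption, far simpler than Raven's panels, and the function names are hypothetical.

```python
def extract_relation(source_row):
    """Stage 1 (toy stand-in for the visual relationship encoder): infer the
    relational structure of a source row -- here, a constant additive step,
    as in simple Raven's-style number progressions."""
    steps = {b - a for a, b in zip(source_row, source_row[1:])}
    return steps.pop() if len(steps) == 1 else None

def apply_relation(target_row, step):
    """Stage 2 (toy stand-in for the analogy inference engine): re-apply the
    inferred relation in the target domain to complete the matrix."""
    return target_row[-1] + step

step = extract_relation([1, 3, 5])   # source row encodes the relation "+2"
answer = apply_relation([4, 6], step)
print(answer)
```

The point of the separation is systematicity: because the relation is isolated from the content of the source row, it transfers to a target row with entirely different values, which is exactly where standard ML models tend to struggle.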
Dissecting the 3D regulatory landscape of the developing cerebral cortex with single-cell epigenomics
Understanding how different epigenetic layers are coordinated to facilitate robust lineage decisions during development is one of the fundamental questions in regulatory genomics. Using single-cell epigenomics coupled with cell-type specific high-throughput mapping of enhancer activity, DNA methylation and the 3D genome landscape in vivo, we dissected how the epigenome is rewired during cortical development. We identified and functionally validated key transcription factors such as Neurog2 which underlie regulatory dynamics and coordinate rewiring across multiple epigenetic layers to ensure robust lineage specification. This work showcases the power of high-throughput integrative genomics to dissect the molecular rules of cell fate decisions in the brain and more broadly, how to apply them to evolution and disease.
Cross-modality imaging of the neural systems that support executive functions
Executive functions refer to a collection of mental processes such as attention, planning and problem solving, supported by a distributed frontoparietal brain network. These functions are essential for everyday life, and in patients with brain tumours there is a particular need to preserve them to enable good quality of life. During surgery for the removal of a brain tumour, the aim is to remove as much of the tumour as possible while preventing damage to the areas around it, so as to preserve function. In many cases, functional mapping is conducted during an awake surgery in order to identify areas critical for certain functions and avoid their surgical resection. While mapping is routinely done for functions such as movement and language, mapping executive functions is more challenging. Despite growing recognition in recent years of the importance of these functions for patient well-being, only a handful of studies have addressed their intraoperative mapping. In the talk, I will present our new approach for mapping executive function areas using electrocorticography during awake brain surgery. These results will be complemented by neuroimaging data from healthy volunteers, directed at reliably localizing executive function regions in individuals using fMRI. I will also discuss more broadly the challenges of using neuroimaging for neurosurgical applications. We aim to advance cross-modality neuroimaging of cognitive function, which is pivotal to patient-tailored surgical interventions and will ultimately lead to improved clinical outcomes.
Mapping microglia states and function in health and disease
The pervasive role of visuospatial coding
Historically, retinotopic organisation (the spatial mapping of the retina across the cortical surface) was considered the purview of early visual cortex (V1-V4) alone, with anterior, more cognitively involved regions thought to abstract this information away. The contemporary view is quite different. With advancing technologies and analysis methods, we see that retinotopic information is not simply thrown away by these regions but rather is maintained, to the potential benefit of our broader cognition. This maintenance of visuospatial coding extends not only through visual cortex but is also present in parietal, frontal, medial and subcortical structures involved in coordinating movements, mind-wandering and even memory. In this talk, I will outline some of the key empirical findings from my own work and the work of others that shaped this contemporary perspective.
A novel form of retinotopy in area V2 highlights location-dependent feature selectivity in the visual system
Topographic maps are a prominent feature of brain organization, reflecting local and large-scale representation of the sensory surface. Traditionally, such representations in early visual areas are conceived as retinotopic maps preserving ego-centric retinal spatial location while ensuring that other features of visual input are uniformly represented for every location in space. I will discuss our recent findings of a striking departure from this simple mapping in the secondary visual area (V2) of the tree shrew that is best described as a sinusoidal transformation of the visual field. This sinusoidal topography is ideal for achieving uniform coverage in an elongated area like V2, as predicted by mathematical models designed for wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and functional response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for large-scale cortical organization. Furthermore, our work challenges the framework of relatively independent encoding of location and features in the visual system, showing instead location-dependent feature sensitivity produced by specialized processing of different features in different spatial locations. In the second part of the talk, I will propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual input, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. The relevant papers can be found here: V2 retinotopy (Sedigh-Sarvestani et al., Neuron, 2021); location-dependent feature sensitivity (Sedigh-Sarvestani et al., under review, 2022)
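The sinusoidal transformation described above can be caricatured in a few lines. This is a toy illustration only: the functional form, parameter names, and default values below are assumptions for exposition, not the fitted tree shrew model from the paper.

```python
import numpy as np

def sinusoidal_map(azimuth, elevation, length=10.0, amplitude=1.0, period=4.0):
    """Toy sinusoidal retinotopic transform for an elongated area:
    azimuth advances along the long cortical axis, while the elevation
    representation rides on a sinusoid of that axis, producing repeated
    stripe-like coverage of the visual field. All parameters are
    illustrative placeholders."""
    x = length * azimuth                              # long-axis position tracks azimuth
    y = elevation + amplitude * np.sin(2 * np.pi * x / period)
    return x, y
```

Because the sinusoid revisits each elevation band repeatedly along the long axis, a map like this covers an elongated sheet more uniformly than a simple conformal retinotopic map would.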
Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.
Mice identify subgoal locations through an action-driven mapping process
Mammals instinctively explore and form mental maps of their spatial environments. Models of cognitive mapping in neuroscience mostly depict map-learning as a process of random or biased diffusion. In practice, however, animals explore spaces using structured, purposeful, sensory-guided actions. We have used threat-evoked escape behavior in mice to probe the relationship between ethological exploratory behavior and abstract spatial cognition. First, we show that in arenas with obstacles and a shelter, mice spontaneously learn efficient multi-step escape routes by memorizing allocentric subgoal locations. Using closed-loop neural manipulations to interrupt running movements during exploration, we next found that blocking runs targeting an obstacle edge abolished subgoal learning. We conclude that mice use an action-driven learning process to identify subgoals, and these subgoals are then integrated into an allocentric map-like representation. We suggest a conceptual framework for spatial learning that is compatible with the successor representation from reinforcement learning and sensorimotor enactivism from cognitive science.
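The abstract notes that this framework is compatible with the successor representation (SR) from reinforcement learning. As background, a minimal sketch of the standard SR temporal-difference update (the textbook rule, not the authors' model; variable names are illustrative):

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update of a successor representation.
    M[s, j] estimates the discounted expected future occupancy of state j
    when starting from state s; experiencing the transition s -> s_next
    nudges row M[s] toward (onehot(s) + gamma * M[s_next])."""
    onehot = np.zeros(M.shape[0])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])
    return M
```

Under this view, structured exploratory actions (such as runs targeting an obstacle edge) determine which transitions are experienced, and hence which predictive structure the map acquires.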
Spatial alignment supports visual comparisons
Visual comparisons are ubiquitous, and they can also be an important source for learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how visual comparisons can be best supported to improve learning. In particular, the talk will present a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-mapping theory proposes that comparisons involve a process of finding correspondences between elements based on structured relationships. The Spatial Alignment Principle suggests that spatially arranging compared figures directly – to support correct correspondences and minimize interference from incorrect correspondences – will facilitate visual comparisons. We find that direct placement can facilitate visual comparison in educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the preponderance of visual comparisons in 7th grade science textbooks.
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model's activity with brain activity should successfully drive the model's object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as 'early' beyond 70ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
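The substitution test at the heart of this approach can be sketched abstractly. Here `W` stands for a fitted linear map from brain activity into a chosen layer's activation space, and `rest_of_model` stands for the layers above the substituted one; both are placeholders for exposition, not the authors' actual pipeline.

```python
import numpy as np

def substitute_and_decide(brain_acts, W, rest_of_model):
    """Sketch of the task-relevant correspondence test: project brain
    activity into a DCNN layer's activation space via a fitted linear
    map W, then let the remaining model layers produce the recognition
    decision. Correspondence is judged by whether that decision is
    correct, not merely by shared variance."""
    predicted_layer = brain_acts @ W       # brain -> model activation space
    return rest_of_model(predicted_layer)  # drive the decision with brain-derived activity
```

The key design choice is that success is scored on the model's downstream decision, which is a stricter criterion than explained variance between brain and layer activations.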
The bounded rationality of probability distortion
In decision-making under risk (DMR), participants' choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving judgments of relative frequency (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is: why do we systematically fail in our use of probability and relative frequency information? We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale; (2) boundedness: the range of representations of probability and relative frequency is bounded, and the bounds change dynamically with task; and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved to be superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants' data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants' choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
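Assumption (1) corresponds to the well-known linear-in-log-odds form of probability distortion. A minimal sketch of that mapping follows; the parameter values are illustrative, and the dynamic boundedness step that distinguishes BLO (assumption 2) is deliberately omitted for brevity.

```python
import numpy as np

def log_odds(p):
    """Map a probability in (0, 1) to the log-odds scale."""
    return np.log(p / (1 - p))

def llo_distortion(p, gamma=0.6, p0=0.4):
    """Linear-in-log-odds distortion: internal log-odds is a linear
    function of objective log-odds,
        Lo(pi(p)) = gamma * Lo(p) + (1 - gamma) * Lo(p0),
    then mapped back to a probability. gamma < 1 compresses the scale,
    overweighting small and underweighting large probabilities.
    gamma and p0 here are illustrative, not fitted values."""
    lo = gamma * log_odds(p) + (1 - gamma) * log_odds(p0)
    return 1.0 / (1.0 + np.exp(-lo))
```

The crossover probability `p0` is a fixed point of the distortion: below it probabilities are overweighted, above it underweighted, reproducing the characteristic inverse-S shape.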
Reflex Regulation of Innate Immunity
Reflex circuits in the nervous system integrate changes in the environment with physiology. Compact clusters of brain neuron cell bodies, termed nuclei, are essential for receiving sensory input and for transmitting motor outputs to the body. These nuclei are critical relay stations which process incoming information and convert these signals to outgoing action potentials that regulate immune system functions. Thus, reflex neural circuits maintain parameters of immunological physiology within a narrow range optimal for health. Advances in neuroscience and immunology using optogenetics, pharmacogenetics, and functional mapping offer a new understanding of the importance of neural circuitry underlying immunity, and offer direct paths to new therapies.
Population dynamics of the thalamic head direction system during drift and reorientation
The head direction (HD) system is classically modeled as a ring attractor network which ensures a stable representation of the animal's head direction. This unidimensional description popularized the view of the HD system as the brain's internal compass. However, unlike a globally consistent magnetic compass, the orientation of the HD system is dynamic, depends on local cues and exhibits remapping across familiar environments. Such a system requires mechanisms to remember and align to familiar landmarks, which may not be well described within the classic one-dimensional framework. To search for these mechanisms, we performed large population recordings of mouse thalamic HD cells using calcium imaging, during controlled manipulations of a visual landmark in a familiar environment. First, we find that realignment of the system was associated with a continuous rotation of the HD network representation. The speed and angular distance of this rotation were predicted by a second dimension of the ring attractor which we refer to as network gain, i.e. the instantaneous population firing rate. Moreover, the 360-degree azimuthal profile of network gain, during darkness, maintained a 'memory trace' of a previously displayed visual landmark. In a second experiment, brief presentations of a rotated landmark revealed an attraction of the network back to its initial orientation, suggesting a time-dependent mechanism underlying the formation of these network gain memory traces. Finally, in a third experiment, continuous rotation of a visual landmark induced a similar rotation of the HD representation which persisted following removal of the landmark, demonstrating that HD network orientation is subject to experience-dependent recalibration. Together, these results provide new mechanistic insights into how the neural compass flexibly adapts to environmental cues to maintain a reliable representation of the head direction.
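The two quantities tracked in this work, the represented angle and the network gain, can be read out from a population recording with a standard population-vector decoder. The sketch below is a generic decoding illustration under that assumption, not the authors' analysis pipeline.

```python
import numpy as np

def decode_hd_and_gain(rates, preferred_dirs):
    """Population-vector readout of an HD ring. The represented head
    direction is the angle of the rate-weighted sum of each cell's
    preferred-direction unit vector; 'network gain' is taken here as
    the instantaneous mean population rate (the second dimension
    described in the talk)."""
    unit_vecs = np.column_stack((np.cos(preferred_dirs), np.sin(preferred_dirs)))
    vec = np.sum(rates[:, None] * unit_vecs, axis=0)  # rate-weighted resultant vector
    hd = np.arctan2(vec[1], vec[0])                   # decoded angle in radians
    gain = rates.mean()                               # population-level gain
    return hd, gain
```

Tracking the decoded angle and the gain jointly is what reveals the rotation dynamics and the landmark 'memory trace' described above.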
Beyond the binding problem: From basic affordances to symbolic thought
Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.
The Challenge and Opportunities of Mapping Cortical Layer Activity and Connectivity with fMRI
In this talk I outline the technical challenges and current solutions to layer fMRI. Specifically, I describe our acquisition strategies for maximizing resolution, spatial coverage, time efficiency and, perhaps most importantly, vascular specificity. Novel applications from our group are shown, including mapping feedforward and feedback connections to M1 during task and sensory input modulation, and to S1 during a sensory prediction task. Layer-specific activity in dorsolateral prefrontal cortex during a working memory task is also demonstrated. Additionally, I'll show preliminary work on mapping whole-brain layer-specific resting state connectivity and hierarchy.
Goal-directed remapping of entorhinal cortex neural coding
COSYNE 2022
Mice identify subgoal locations through an action-driven mapping process
COSYNE 2022
Multiscale Hierarchical Modeling Framework For Fully Mapping a Social Interaction
COSYNE 2022
Optogenetic mapping of circuit connectivity in the motor cortex during goal-directed behavior
COSYNE 2022
An accessible hippocampal dataset for benchmarking models of cognitive mapping
COSYNE 2023
An attractive manifold of retinotopic map in a network model explains presaccadic receptive field remapping
COSYNE 2025
How internal states shape sensorimotor mapping in zebrafish larvae
COSYNE 2025
Mapping functional differences across cell types using a group embedding-enhanced transformer
COSYNE 2025
Mapping social perception to social behavior using artificial neural networks
COSYNE 2025
Mixtures of decoders for interpreting dynamic neural-behavioral mappings
COSYNE 2025
Simultaneous detection and mapping in the olfactory bulb
COSYNE 2025
State-dependent mapping of correlations of subthreshold to spiking activity is expansive in L1 inhibitory circuits
COSYNE 2025
A unifying theory of hippocampal remapping through the lens of contextual inference
COSYNE 2025
Unpredictable Rewards, Predictable Maps: Reward uncertainty induces hippocampal place cell remapping in parallel reference-frames
COSYNE 2025
All-optical mapping of feedback and sensory-evoked synaptic inputs to pyramidal neurons in the mouse primary somatosensory cortex
FENS Forum 2024
Astrocyte-originated connection mapping of presynaptic neurons using the rabies virus tracer
FENS Forum 2024
Brain-wide monosynaptic connectivity mapping with ROInet-seq
FENS Forum 2024
Mapping information flow between striatum and motor cortex during skill learning
FENS Forum 2024
Cortical activations associated with spatial remapping of finger touch using HR-EEG
FENS Forum 2024
A data-driven approach for the single-subject mapping of natural frequencies
FENS Forum 2024
ECoG-based functional mapping of the motor cortex in rhesus monkeys
FENS Forum 2024
Evaluation of CA3 place cell remapping in the APP/PS1 model mouse of Alzheimer’s disease
FENS Forum 2024
Fate mapping the earliest subset of neurons in the mouse cerebellar nuclei
FENS Forum 2024
fMRI mapping of brain circuits during simple sound perception by awake rats
FENS Forum 2024
Functional mapping of brain pathways involved in the gut microbial modulation of social behaviour
FENS Forum 2024
Functional ultrasound mapping of large-scale connectivity networks in the mouse brain
FENS Forum 2024
Global brain c-Fos mapping reveals differences in brain network engagement during navigation using different visual cue classes
FENS Forum 2024
High-throughput neural connectivity mapping in human brain organoids
FENS Forum 2024
High-resolution 3D mapping of cat visual cortex using functional ultrasound imaging
FENS Forum 2024
High sensitivity mapping of brain-wide functional networks in awake mice using simultaneous multi-slice fUS imaging
FENS Forum 2024
Identifiable attribution maps with contrastive learning for mapping neurons to neural latent dynamics
FENS Forum 2024
Large-scale histological 3D mapping of neurodegenerative and psychiatric disorders in the human brain
FENS Forum 2024
Mapping the cell state landscape of autism spectrum disorders
FENS Forum 2024
Mapping cerebellar motor and cognitive deficits after tumor resection in children
FENS Forum 2024
Mapping cortical input into the brainstem: The function of cortico-brainstem neurons in skilled motor control
FENS Forum 2024
Mapping functional neuronal ensembles in premotor cortex during complex, goal-directed behaviour
FENS Forum 2024