Spike train structure of cortical transcriptomic populations in vivo
The cortex comprises many neuronal types, which can be distinguished by their transcriptomes: the sets of genes they express. Little is known about the in vivo activity of these cell types, particularly as regards the structure of their spike trains, which might provide clues to cortical circuit function. To address this question, we used Neuropixels electrodes to record layer 5 excitatory populations in mouse V1, then transcriptomically identified the recorded cell types. To do so, we performed a subsequent recording of the same cells using 2-photon (2p) calcium imaging, matching neurons across the two recording modalities by fingerprinting their responses to a “zebra noise” stimulus and estimating the path of the electrode through the 2p stack with a probabilistic method. We then cut brain slices and performed in situ transcriptomics to localize ~300 genes using coppaFISH3d, a new open-source method, and aligned the transcriptomic data to the 2p stack. Analysis of the data is ongoing, and suggests substantial differences in spike-time coordination between ET and IT neurons, as well as between transcriptomic subtypes of both these excitatory types.
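To make the fingerprinting step concrete, here is a minimal sketch in Python (the synthetic data, variable names, and the use of an assignment solver are illustrative assumptions, not the authors' pipeline) that matches electrophysiology units to imaged ROIs by correlating their responses to a shared stimulus:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(0)
    n_cells, n_time = 20, 500
    perm = rng.permutation(n_cells)                 # unknown unit-to-ROI mapping

    # Synthetic trial-averaged responses to the shared "zebra noise" stimulus.
    ephys = rng.standard_normal((n_cells, n_time))
    imaging = ephys[perm] + 0.5 * rng.standard_normal((n_cells, n_time))

    # Correlation between every ephys unit and every imaged ROI.
    ze = (ephys - ephys.mean(1, keepdims=True)) / ephys.std(1, keepdims=True)
    zi = (imaging - imaging.mean(1, keepdims=True)) / imaging.std(1, keepdims=True)
    corr = ze @ zi.T / n_time

    # One-to-one assignment that maximizes total correlation.
    rows, cols = linear_sum_assignment(-corr)
    print("matching accuracy:", np.mean(cols == np.argsort(perm)))

With modest noise the assignment recovers the simulated unit-to-ROI mapping; the real pipeline must additionally handle unequal cell counts and the estimated geometry of the electrode path.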
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
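The gating logic of the proposed model can be caricatured in a few lines of Python (an illustrative sketch with invented names; a simple running average stands in for the integrative stream):

    import numpy as np

    rng = np.random.default_rng(1)

    class TwoStreamDG:
        """Toy two-stream model: one stream stores episodes one-shot, the other
        integrates across episodes into a generalized expectation (cognitive map)."""
        def __init__(self, dim, lr=0.1, threshold=1.5):
            self.expectation = np.zeros(dim)   # integrative stream (multi-shot)
            self.episodes = []                 # separation stream (one-shot store)
            self.lr, self.threshold = lr, threshold

        def observe(self, x):
            error = np.linalg.norm(x - self.expectation)    # prediction error
            self.expectation += self.lr * (x - self.expectation)
            if error > self.threshold:
                self.episodes.append(x)        # store poorly predicted episodes
            return error

    model = TwoStreamDG(dim=10)
    env = rng.standard_normal(10)              # regularities of one environment
    errors = [model.observe(env + 0.3 * rng.standard_normal(10)) for _ in range(50)]
    print(f"episodes stored: {len(model.episodes)}; "
          f"error first vs last: {errors[0]:.2f} -> {errors[-1]:.2f}")

Early observations are poorly predicted and get written to the episodic store; as the expectation improves, fewer episodes are stored, mirroring the proposed prediction-error gating.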
Neural markers of lapses in attention during sustained ‘real-world’ task performance
Lapses in attention are ubiquitous and, unfortunately, the cause of many tragic accidents. One potential solution may be to develop assistance systems which can use objective, physiological signals to monitor attention levels and predict a lapse in attention before it occurs. As it stands, it is unclear which physiological signals are the most reliable markers of inattention, and even less is known about how reliably they will work in a more naturalistic setting. My project aims to address these questions across two experiments: a lab-based experiment and a more ‘real-world’ experiment. In this talk I will present the findings from my lab experiment, in which we combined EEG and pupillometry to detect markers of inattention during two computerised sustained attention tasks. I will then present the methods for my second, more ‘naturalistic’ experiment in which we use the same methods (EEG and pupillometry) to examine whether these markers can still be extracted from noisier data.
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, underpinning wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans, performed in my lab, aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, whereby the neuronal machinery that primarily evolved, in lower-level animals, to represent relationships between spatial locations and to navigate space is reused in humans to encode relationships between concepts in an internal abstract representational space of meaning.
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
Distinctive features of experiential time: Duration, speed and event density
William James’s use of “time in passing” and “stream of thoughts” may be two sides of the same coin that emerge from the brain segmenting the continuous flow of information into discrete events. Starting from that idea, we investigated how the content of a realistic scene impacts two distinct temporal experiences: felt duration and the speed of the passage of time. I will present the results of an online study in which we used a well-established experimental paradigm, the temporal bisection task, which we extended to passage-of-time judgments. 164 participants classified seconds-long videos of naturalistic scenes as short or long (duration), or slow or fast (passage of time). Videos contained a varying number and type of events. We found that a large number of events lengthened subjective duration and accelerated the felt passage of time. Surprisingly, participants were also faster at estimating their felt passage of time than at estimating duration. The perception of duration depended heavily on objective duration, whereas the felt passage of time scaled with the rate of change. Altogether, our results support a possible dissociation of the mechanisms underlying the two temporal experiences.
Seizure control by electrical stimulation: parameters and mechanisms
Seizure suppression by deep brain stimulation (DBS) typically applies high-frequency stimulation (HFS) to grey matter to block seizures. In this presentation, I will present the results of a different method that employs low-frequency stimulation (LFS) (1 to 10 Hz) of white matter tracts to prevent seizures. For mesial temporal lobe seizures, this approach has been shown to be effective in the hippocampus by stimulating the ventral and dorsal hippocampal commissure in animal and human studies, respectively. A similar stimulation paradigm has been shown to be effective at controlling focal cortical seizures in rats with corpus callosum stimulation. This stimulation targets the axons of the corpus callosum innervating the focal zone at low frequencies (5 to 10 Hz) and has been shown to significantly reduce both seizure and spike frequency. The mechanisms of this suppression paradigm have been elucidated with in-vitro studies and involve the activation of two long-lasting inhibitory potentials, the GABAB potential and the sAHP. LFS mechanisms are similar in hippocampal and cortical brain slices. Additionally, the results show that LFS does not block seizures but rather decreases the excitability of the tissue to prevent them. Three methods of seizure suppression were compared directly in the same animal in an in-vivo epilepsy model: LFS applied to fiber tracts, HFS applied to the focal zone, and stimulation of the anterior nucleus of the thalamus (ANT). The results show that LFS generated a significantly higher level of suppression, indicating that LFS of white matter tracts could be a useful addition to the stimulation paradigms used to treat epilepsy.
The Role of Spatial and Contextual Relations of real world objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violations of semantic rules influence interval timing, with the duration of scenes containing such violations being overestimated compared to scenes with no violations. However, no study has yet investigated whether semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments examining whether time dilation would be obtained in the presence of either type of scene violation and what role attention plays in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by utilizing a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, that their effect on timing is sensitive to attentional manipulations such as target contrast.
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; no author of the paper will be joining us. Title: Brain decoding: toward real-time reconstruction of visual perception. Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: (i) pretrained embeddings obtained from the image, (ii) an MEG module trained end-to-end, and (iii) a pretrained image generator. Our results are threefold: First, our MEG decoder shows a 7-fold improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding, in real time, of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC). Paper link: https://arxiv.org/abs/2310.19812
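A toy sketch of the two training objectives named in the abstract (PyTorch; the shapes, equal loss weighting, and omission of the image-generator module are assumptions, not the paper's implementation):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    B, sensors, times, dim = 32, 272, 100, 768

    meg = torch.randn(B, sensors, times)        # a batch of MEG epochs
    image_embed = torch.randn(B, dim)           # frozen pretrained image embeddings
    meg_encoder = nn.Sequential(nn.Flatten(),   # stand-in for the MEG module
                                nn.Linear(sensors * times, dim))

    pred = meg_encoder(meg)
    z = F.normalize(pred, dim=-1)
    y = F.normalize(image_embed, dim=-1)

    logits = z @ y.T / 0.07                     # CLIP-style contrastive objective:
    labels = torch.arange(B)                    # each epoch should retrieve its image
    loss = F.cross_entropy(logits, labels) + F.mse_loss(pred, image_embed)
    loss.backward()                             # (the pretrained generator that maps
                                                # embeddings to pixels is omitted)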
Great ape interaction: Ladyginian but not Gricean
Non-human great apes inform one another in ways that can seem very humanlike. Especially in the gestural domain, their behavior exhibits many similarities with human communication, meeting widely used empirical criteria for intentionality. At the same time, there remain some manifest differences. How to account for these similarities and differences in a unified way remains a major challenge. This presentation will summarise the arguments developed in a recent paper with Christophe Heintz. We make a key distinction between the expression of intentions (Ladyginian) and the expression of specifically informative intentions (Gricean), and we situate this distinction within a ‘special case of’ framework for classifying different modes of attention manipulation. The paper also argues that the attested tendencies of great ape interaction—for instance, to be dyadic rather than triadic, and to be about the here-and-now rather than ‘displaced’—are products of its Ladyginian but not Gricean character. I will reinterpret video footage of great ape gesture as Ladyginian but not Gricean, and distinguish several varieties of meaning that are continuous with one another. We conclude that the evolutionary origins of linguistic meaning lie not in gradual changes to communication systems as such, but rather in social cognition, and specifically in what modes of attention manipulation are enabled by a species’ cognitive phenotype: first Ladyginian and in turn Gricean. The second of these shifts rendered humans, and only humans, ‘language ready’.
Movements and engagement during decision-making
When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially “uninstructed movements” that are spontaneously expressed. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely. We tested whether the magnitude or the timing of the movements was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movements (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
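The TIM partition can be illustrated with a minimal regression sketch in Python (synthetic data; the linear model and the toy engagement rule are assumptions, not the authors' analysis):

    import numpy as np

    rng = np.random.default_rng(2)
    n_trials = 200

    # Per-trial movement energy, partly driven by task events (stimulus, choice).
    task_events = rng.standard_normal((n_trials, 2))
    movement = task_events @ np.array([1.0, 0.5]) + rng.standard_normal(n_trials)

    beta, *_ = np.linalg.lstsq(task_events, movement, rcond=None)
    task_aligned = task_events @ beta          # movement predicted by task events
    tim = movement - task_aligned              # task-independent movement (TIM)

    # Toy engagement rule: trials with large TIM are more likely to be errors.
    correct = (rng.random(n_trials) < 1 / (1 + np.exp(tim))).astype(float)
    print("corr(TIM, accuracy):", np.corrcoef(tim, correct)[0, 1])

The residual from the task-event regression plays the role of TIM, and in this toy construction it correlates inversely with accuracy, mirroring the reported relationship.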
Identifying mechanisms of cognitive computations from spikes
Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.
NII Methods (journal club): Highlight results, don’t hide them – thresholding and reproducibility in human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold, but still highlighting sub-threshold results, with opacity proportional to the strength of the effect. They use this to illustrate how there in fact may be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that the majority of effects are in the same location and that the effect size is in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
Social and non-social learning: Common, or specialised, mechanisms? (BACN Early Career Prize Lecture 2022)
The last decade has seen a burgeoning interest in studying the neural and computational mechanisms that underpin social learning (learning from others). Many findings support the view that learning from other people is underpinned by the same, ‘domain-general’, mechanisms underpinning learning from non-social stimuli. Despite this, the idea that humans possess social-specific learning mechanisms - adaptive specializations moulded by natural selection to cope with the pressures of group living - persists. In this talk I explore the persistence of this idea. First, I present dissociations between social and non-social learning - patterns of data which are difficult to explain under the domain-general thesis and which therefore support the idea that we have evolved special mechanisms for social learning. Subsequently, I argue that most studies that have dissociated social and non-social learning have employed paradigms in which social information comprises a secondary, additional, source of information that can be used to supplement learning from non-social stimuli. Thus, in most extant paradigms, social and non-social learning differ both in terms of social nature (social or non-social) and status (primary or secondary). I conclude that status is an important driver of apparent differences between social and non-social learning. When we account for differences in status, we see that social and non-social learning share common (dopamine-mediated) mechanisms.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Despite her still poor visual acuity and minimal visual experience, a 2- to 3-month-old baby will reliably respond to facial expressions, smiling back at her caretaker or older sibling. But what if that same baby had been deprived of her early visual experience? Will she be able to respond appropriately to seemingly mundane interactions, such as a peer’s facial expression, if she begins seeing at the age of 10? My work is part of Project Prakash, a dual humanitarian/scientific mission to identify and treat curably blind children in India and then study how their brain learns to make sense of the visual world when their visual journey begins late in life. In my talk, I will give a brief overview of Project Prakash, and present findings from one of my primary lines of research: plasticity of face perception with late sight onset. Specifically, I will discuss a mixed-methods effort to probe and explain the differential windows of plasticity that we find across different aspects of distributed face recognition, from distinguishing a face from a nonface early in the developmental trajectory, to recognizing facial expressions, identifying individuals, and even identifying one’s own caretaker. I will draw connections between our empirical findings and our recent theoretical work hypothesizing that children with late sight onset may suffer persistent face identification difficulties because of the unusual acuity progression they experience relative to typically developing infants. Finally, time permitting, I will point to potential implications of our findings in supporting newly sighted children as they transition back into society and school, given that their needs and possibilities significantly change upon the introduction of vision into their lives.
The development of visual experience
Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, but we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents pretty much the same visual statistics to all perceivers. I will present an overview of our work on egocentric vision showing that this is not the case. The momentary image received at the eye is spatially selective, dependent on the location, posture and behavior of the perceiver. If a perceiver’s location, possible postures and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence on both low-level and higher-level visual statistics regarding developmental changes in the visual input over the first 18 months post-birth.
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making. Employing a range of new technologies (automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality for animals) in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’) even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
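A toy caricature of the bifurcation phenomenon (not the authors' neural ring model; the alignment-weighted steering rule and all parameters are assumptions) shows how an agent averaging over two options abruptly commits to one as the angular separation between them grows:

    import numpy as np

    rng = np.random.default_rng(8)
    targets = np.array([[-3.0, 6.0], [3.0, 6.0]])     # two options ahead of the agent

    def final_x(kappa=4.0, noise=0.2):
        pos, heading = np.zeros(2), np.array([0.0, 1.0])
        for _ in range(2000):
            dirs = targets - pos
            dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
            w = np.exp(kappa * dirs @ heading)        # options aligned with the current
            w = w / w.sum()                           # heading are weighted more strongly
            goal = w @ dirs + noise * rng.standard_normal(2)
            heading = goal / np.linalg.norm(goal)
            pos = pos + 0.05 * heading
            if pos[1] >= targets[0, 1]:
                break
        return pos[0]

    ends = np.array([final_x() for _ in range(200)])
    print("left/right split:", (ends < 0).mean(), (ends > 0).mean())
    print("mean |x| at arrival (commitment):", np.abs(ends).mean())

Trajectories initially head up the midline; once the angle subtended by the options widens, the alignment feedback destabilizes the averaged direction and the agent commits to one side.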
Off-policy learning in the basal ganglia
I will discuss work with Jack Lindsey modeling reinforcement learning for action selection in the basal ganglia. I will argue that the presence of multiple brain regions, in addition to the basal ganglia, that contribute to motor control motivates the need for an off-policy basal ganglia learning algorithm. I will then describe a biological implementation of such an algorithm that predicts tuning of dopamine neurons to a quantity we call "action surprise," in addition to reward prediction error. In the same model, an implementation of learning from a motor efference copy also predicts a novel solution to the problem of multiplexing feedforward and efference-related striatal activity. The solution exploits the difference between D1 and D2-expressing medium spiny neurons and leads to predictions about striatal dynamics.
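As a loose illustration of the off-policy setting (not the authors' algorithm: tabular Q-learning stands in for the off-policy rule, and the dopamine trace merely records the predicted two-component signal):

    import numpy as np

    rng = np.random.default_rng(3)
    n_states, n_actions = 5, 3
    q = np.zeros((n_states, n_actions))       # striatal action values
    alpha, gamma = 0.1, 0.9
    dopamine = []

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    s = 0
    for _ in range(2000):
        pi = softmax(q[s])                    # the basal ganglia's own policy
        a = rng.integers(n_actions)           # but the action comes from elsewhere
        r = float(a == s % n_actions)         # toy reward rule
        s_next = rng.integers(n_states)
        rpe = r + gamma * q[s_next].max() - q[s, a]   # off-policy (Q-learning) target
        q[s, a] += alpha * rpe
        # Predicted dopamine correlate: RPE plus "action surprise" under the BG policy.
        dopamine.append(rpe - np.log(pi[a]))
        s = s_next

    print("mean value of correct actions:",
          np.mean([q[i, i % n_actions] for i in range(n_states)]))

Because the executed actions are generated by other controllers, an on-policy rule would learn the wrong values here; the off-policy target still converges, which is the motivation the talk describes.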
Development of an open-source femtosecond fiber laser system for multiphoton microscopy
This talk will present a low-cost protocol for fabricating an easily constructed femtosecond (fs) fiber laser system suitable for routine multiphoton microscopy (1060–1080 nm, 1 W average power, 70 fs pulse duration, 30–70 MHz repetition rate). Concepts well known in the laser physics community that are essential to proper laser operation, but generally obscure to biophysicists and biomedical engineers, will be clarified. The parts list (~US$13K), the equipment list (~US$40K+), and the intellectual investment needed to build the laser will be described. A goal of the presentation will be to engage with the audience to discuss trade-offs associated with a custom-built fs fiber laser versus purchasing a commercial system. I will also touch on my research group’s plans to further develop this custom laser system for multiplexed cancer imaging, as well as recent developments in the field that promise even higher-performance fs fiber lasers for approximately the same cost and ease of construction.
A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time
Any sensory experience of the world, from the touch of a caress to the smile on a friend’s face, is embedded in time and is often associated with the perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. Some empirical evidence demonstrates this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles offers a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies showing how concurrent changes in visual stimulus duration, speed, visual contrast and numerosity shape and modulate the brain’s and pupil’s responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond that of modern Machine Learning methods. This is hypothesized to owe in part to the ability of animals to build structured representations of their environments, and modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
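The predictive-representation account sketched above has a standard concrete form, the successor representation (SR), with spectral compression as its entorhinal analogue; a minimal sketch on a ring of states (illustrative, not the talk's code):

    import numpy as np

    n, gamma = 20, 0.95
    T = np.zeros((n, n))                      # random walk on a ring of states
    for i in range(n):
        T[i, (i - 1) % n] = T[i, (i + 1) % n] = 0.5

    sr = np.linalg.inv(np.eye(n) - gamma * T)     # successor representation
    # Entorhinal-like compression: smooth, periodic eigenvectors of the SR.
    evals, evecs = np.linalg.eigh((sr + sr.T) / 2)
    low_freq_basis = evecs[:, -4:]                # top eigenvectors vary smoothly
    print(low_freq_basis.shape)                   # (20, 4)

Rows of the SR resemble skewed, predictive place fields, while the leading eigenvectors form a smooth basis over states that supports generalization among related locations.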
Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity
During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, dark-rearing zebrafish revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned and without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory-guided behaviors can be wired up innately by activity-independent developmental mechanisms.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including, during behavior, in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could be partially explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
Explaining an asymmetry in similarity and difference judgments
Explicit similarity judgments tend to emphasize relational information more than do difference judgments. In this talk, I propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same (i.e., as not-same). This proposal implies that processing difference is more cognitively demanding than processing similarity. Both for verbal comparisons between word pairs, and for visual comparisons between sets of geometric shapes, participants completed a triad task in which they selected which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard, either by virtue of featural similarity or by virtue of relational similarity. On ambiguous trials, one option was more featurally similar (but less relationally similar) to the standard, whereas the other was more relationally similar (but less featurally similar). Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. These results provide convergent evidence for a representational asymmetry between the relations same and different.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
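The core prediction, that place-like tuning falls out of compressing spatially correlated inputs, can be demonstrated with a tiny autoencoder (PyTorch; the architecture and input statistics are illustrative assumptions, not the authors' model):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_pos, n_features, n_hidden = 100, 50, 20

    # Sensory input varies smoothly with position, so nearby locations correlate.
    positions = torch.linspace(0, 1, n_pos).unsqueeze(1)
    centers = torch.rand(1, n_features)
    x = torch.exp(-((positions - centers) ** 2) / 0.02)

    model = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU(),
                          nn.Linear(n_hidden, n_features))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(2000):                      # compress correlated "memories"
        opt.zero_grad()
        loss = ((model(x) - x) ** 2).mean()
        loss.backward()
        opt.step()

    hidden = torch.relu(model[0](x)).detach()  # hidden activity along the track
    print(hidden.argmax(0)[:10])               # where each unit is most active

Because nearby positions produce correlated inputs, a compressed code that exploits those correlations tends to allocate hidden units to localized segments of the track.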
Integration of 3D human stem cell models derived from post-mortem tissue and statistical genomics to guide schizophrenia therapeutic development
Schizophrenia is a neuropsychiatric disorder characterized by positive symptoms (such as hallucinations and delusions), negative symptoms (such as avolition and withdrawal) and cognitive dysfunction. Schizophrenia is highly heritable, and genetic studies are playing a pivotal role in identifying potential biomarkers and causal disease mechanisms with the hope of informing new treatments. Genome-wide association studies (GWAS) have identified nearly 270 loci with a high statistical association with schizophrenia risk; however, each locus confers only a small increase in risk, making it difficult to translate these findings into an understanding of disease biology that can lead to treatments. Induced pluripotent stem cell (iPSC) models are a tractable system for translating genetic findings and interrogating mechanisms of pathogenesis. Mounting research with patient-derived iPSCs has proposed several neurodevelopmental pathways altered in schizophrenia (SCZ), such as neural progenitor cell (NPC) proliferation and imbalanced differentiation of excitatory and inhibitory cortical neurons. However, it is unclear what exactly these iPSC models recapitulate, how potential perturbations of early brain development translate into illness in adults, and how iPSC models that represent fetal stages can be utilized to further drug-development efforts to treat adult illness. I will present the largest transcriptome analysis of the post-mortem caudate nucleus in schizophrenia, in which we discovered that decreased presynaptic DRD2 autoregulation is the causal dopamine risk factor for schizophrenia (Benjamin et al., Nature Neuroscience, 2022; https://doi.org/10.1038/s41593-022-01182-7). We developed stem cell models from a subset of the postmortem cohort to better understand the molecular underpinnings of human psychiatric disorders (Sawada et al., Stem Cell Research, 2020). We established a method for the differentiation of iPSCs into ventral forebrain organoids and performed single-cell RNA-seq and cellular phenotyping. To our knowledge, this is the first study to evaluate iPSC models of SCZ from the same individuals as postmortem tissue. Our study establishes that striatal neurons in patients with SCZ carry abnormalities that originated during early brain development. Differentiation of inhibitory neurons is accelerated whereas excitatory neuronal development is delayed, implicating an excitation/inhibition (E/I) imbalance during early brain development in SCZ. We found a significant overlap of genes upregulated in the inhibitory neurons of SCZ organoids with genes upregulated in postmortem caudate tissue from patients with SCZ compared with control individuals, including the donors of our iPSC cohort. Altogether, we demonstrate that ventral forebrain organoids derived from postmortem tissue of individuals with schizophrenia recapitulate the perturbed striatal gene expression dynamics of the donors’ brains (Sawada et al., bioRxiv, 2022; https://doi.org/10.1101/2022.05.26.493589).
Verb metaphors are processed as analogies
Metaphor is a pervasive phenomenon in language and cognition. To date, the vast majority of psycholinguistic research on metaphor has focused on noun-noun metaphors of the form “An X is a Y” (e.g., “My job is a jail”). Yet there is evidence that verb metaphor (e.g., “I sailed through my exams”) is more common. Despite this, comparatively little work has examined how verb metaphors are processed. In this talk, I will propose a novel account of verb metaphor comprehension: verb metaphors are understood in the same way that analogies are—as comparisons processed via structure-mapping. I will discuss the predictions that arise from applying the analogical framework to verb metaphor and present a series of experiments showing that verb metaphoric extension is consistent with those predictions.
Integrative Neuromodulation: from biomarker identification to optimizing neuromodulation
Why do we make impulsive decisions, blinded in an emotionally rash moment? Or find ourselves caught in the same repetitive suboptimal loop, avoiding fears or rushing headlong towards illusory rewards? These cognitive constructs underlying self-control and compulsive behaviours, and their modulation by emotion or incentives, are relevant dimensionally across healthy individuals and hijacked in disorders of addiction, compulsivity and mood. My lab focuses on identifying theory-driven modifiable biomarkers of these cognitive constructs, with the ultimate goal of optimizing and developing novel means of neuromodulation. Here I will provide a few examples of my group’s recent work to illustrate this approach. I will describe a series of recent studies on intracranial physiology and acute stimulation focusing on risk taking and emotional processing. This talk highlights the subthalamic nucleus, a common deep brain stimulation target for Parkinson’s disease and obsessive-compulsive disorder. I will further describe recent translational work in non-invasive neuromodulation. Together these examples illustrate the lab’s approach of identifying modifiable biomarkers and optimizing neuromodulation.
Fidelity and Replication: Modelling the Impact of Protocol Deviations on Effect Size
Cognitive science and cognitive neuroscience researchers agree that the replication of findings is important for establishing which ideas (or theories) are integral to the study of cognition across the lifespan. Recently, high-profile papers have called into question findings that were once thought to be unassailable. Much attention has been paid to how p-hacking, publication bias, and sample size are responsible for failed replications. However, much less attention has been paid to the fidelity with which researchers enact study protocols. Researchers conducting education or clinical trials are aware of the importance of fidelity – the extent to which protocols are delivered in the same way across participants. Nevertheless, this idea has not been applied to cognitive contexts. This seminar discusses factors that impact the replicability of findings, alongside recent models suggesting that even small fidelity deviations have real impacts on the data collected.
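The kind of model the seminar describes can be sketched as a simple simulation (all numbers are illustrative assumptions): protocol deviations that effectively nullify the manipulation for a fraction of participants dilute the observed effect size:

    import numpy as np

    rng = np.random.default_rng(4)
    n, true_d = 40, 0.5           # participants per group; true standardized effect

    def observed_d(deviation_rate, n_sims=2000):
        ds = []
        for _ in range(n_sims):
            # Deviant sessions deliver no effective manipulation at all.
            effect = np.where(rng.random(n) < deviation_rate, 0.0, true_d)
            treatment = rng.standard_normal(n) + effect
            control = rng.standard_normal(n)
            pooled = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
            ds.append((treatment.mean() - control.mean()) / pooled)
        return np.mean(ds)

    for rate in (0.0, 0.1, 0.25, 0.5):
        print(f"deviation rate {rate:.2f} -> mean observed d = {observed_d(rate):.2f}")

Even modest deviation rates shrink the expected effect size, and with it the power of any replication attempt planned around the original estimate.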
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points are completely determined by two distinct conditions: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale’s law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
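A numerical companion to question (i) (illustrative only: the weights are scaled by hand for stability, and the supports are read off from simulated dynamics rather than from the talk's analytical conditions):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 6
    signs = np.array([1, 1, 1, -1, -1, -1])     # Dale's law: each column is E or I
    W = np.abs(rng.standard_normal((n, n))) * signs
    W[:, :3] *= 0.2                             # weak excitation, for stability
    np.fill_diagonal(W, 0)
    b = np.ones(n)

    def steady_state(x, dt=0.01, steps=20000):
        for _ in range(steps):                  # threshold-linear dynamics
            x = x + dt * (-x + np.maximum(0.0, W @ x + b))
        return x

    supports = set()
    for _ in range(50):                         # many random initial conditions
        x = steady_state(rng.random(n))
        supports.add(tuple(np.flatnonzero(x > 1e-3)))
    print("steady-state supports (combinatorial code):", supports)

The set of supports, i.e. which neurons remain active in each steady (or near-steady) state, is the combinatorial code that the talk's graph-plus-spectral conditions characterize analytically.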
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
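A caricature of the event-based idea in PyTorch (the published EGRU keeps a dense internal state and trains with surrogate gradients, so this sketch only conveys the shape of the mechanism):

    import torch
    import torch.nn as nn

    class ToyEventGRU(nn.Module):
        """Units whose activation stays below threshold emit nothing,
        so downstream computation is activity-sparse."""
        def __init__(self, dim_in, dim_hidden, threshold=0.5):
            super().__init__()
            self.cell = nn.GRUCell(dim_in, dim_hidden)
            self.threshold = threshold

        def forward(self, x, h):
            h = self.cell(x, h)                       # ordinary GRU update
            events = (h.abs() > self.threshold).float()
            return h * events                         # only events are propagated

    cell = ToyEventGRU(10, 32)
    h = torch.zeros(1, 32)
    for t in range(5):
        h = cell(torch.randn(1, 10), h)
        print(f"step {t}: {int((h != 0).sum())}/32 units active")

Only a fraction of units emit output at each step, which is the source of the efficiency gains on event-driven hardware.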
Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings
Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction, observing changes in performance across three levels of relational complexity: perceptual, analogical, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks, and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher-performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.
Root causes and possible solutions to academic bullying in higher education
Academic bullying is a serious issue that affects all disciplines and people of all levels of experience. To create a truly safe, productive, and vibrant environment in academia requires coordinated and collaborative input as well as the action of a variety of stakeholders, including scholarly communities, funding agencies, and institutions. This talk will focus on a framework of integrated responding, in which stakeholders as responsible and response-able parties could proactively collaborate and coordinate to reduce the incidence and consequences of academic bullying while at the same time building constructive academic cultures. The outcome of such a framework would be to create novel entities and actions that accelerate successful responses to academic bullying.
Time as its own representation? Exploring a link between timing of cognition and time perception
The way we represent and perceive time has crucial implications for studying temporality in conscious experience. Contrasting positions posit that temporal information is separately abstracted out like any other perceptual property, or that time is represented through representations that themselves have temporal properties. To add to this debate, we investigated alterations in felt time in conditions where only conscious visual experience is altered while a bistable figure remains physically unchanged. In this talk, I will discuss two studies we have done to address this question. In study 1, we investigated whether perceptual switches in fixed intervals altered felt time. In three experiments we showed that a break in visual experience (via a perceptual switch) also leads to a break in felt time. In study 2, we are currently looking at figure-ground perception in ambiguous displays. Here, in experiment 1, we show that differences in flicker frequencies in ambiguous regions can induce figure-ground segregation. To see whether a reverse complementarity exists for felt time, we ask participants to view ambiguous regions as figure or ground and show that they have different temporal resolutions for the same region depending on whether it is seen as figure or background. Overall, the two studies provide evidence for temporal mirroring and isomorphism in visual experience, arguing for a link between the timing of experience and time perception.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e., completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. These tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures arising from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to the parietal cortex to the hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single-neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
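The link between manifold geometry and classification capacity can be probed numerically (an illustrative experiment, not the authors' analytic theory): sample point-cloud “manifolds”, assign random labels per manifold, and ask how often a linear classifier separates them as the manifolds grow:

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(6)
    n_neurons, n_manifolds, n_pts = 50, 30, 10

    def separable_fraction(radius, n_dichotomies=50):
        centers = rng.standard_normal((n_manifolds, n_neurons))
        clouds = centers[:, None, :] + radius * rng.standard_normal(
            (n_manifolds, n_pts, n_neurons))
        X = clouds.reshape(-1, n_neurons)
        hits = 0
        for _ in range(n_dichotomies):
            labels = rng.integers(0, 2, n_manifolds)   # random manifold dichotomy
            if labels.min() == labels.max():
                hits += 1                              # trivially separable
                continue
            y = np.repeat(labels, n_pts)
            clf = LinearSVC(C=100.0, max_iter=20000).fit(X, y)
            hits += clf.score(X, y) == 1.0             # perfectly separable?
        return hits / n_dichotomies

    for r in (0.1, 0.5, 1.5):   # larger manifolds are harder to separate
        print(f"radius {r}: separable fraction = {separable_fraction(r):.2f}")

Shrinking the manifolds toward points recovers classic perceptron capacity, while inflating them degrades separability, the qualitative dependence on manifold structure that the analytic theory makes precise.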
Learning static and dynamic mappings with local self-supervised plasticity
Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting in which one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena such as blocking, overshadowing, saliency effects, extinction, and interstimulus interval effects. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
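A minimal sketch of the core ingredient, predictive learning with a local delta rule, under assumptions of my own (linear streams, Gaussian inputs; not the paper's network):

```python
import numpy as np

# Minimal sketch (construction mine): a compartmentalized neuron in which
# a "teacher" stream (e.g., visual landmark input) supervises the weights
# of a "student" stream (e.g., self-motion input) through a local delta
# rule. After learning, the student stream alone predicts the teacher,
# analogous to accurate path integration in the dark.
rng = np.random.default_rng(1)
n_in, eta = 50, 0.1
w_teacher = rng.standard_normal(n_in)   # fixed mapping defining the teacher
w = np.zeros(n_in)                      # student weights, plastic

for _ in range(5000):
    x = rng.standard_normal(n_in)       # self-motion-like input
    teacher = w_teacher @ x             # landmark-driven dendritic signal
    prediction = w @ x                  # somatic prediction from x alone
    w += eta * (teacher - prediction) * x / n_in   # local, self-supervised

print(np.corrcoef(w, w_teacher)[0, 1])  # close to 1: prediction learned
```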
Flexible multitask computation in recurrent networks utilizes shared dynamical motifs
Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
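The standard tool behind such analyses is numerical fixed-point finding (cf. Sussillo & Barak, 2013); here is a minimal sketch on a stand-in network, not the authors' trained models:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the fixed-point analysis used to reveal dynamical motifs
# (attractors, decision boundaries) in trained RNNs: minimize the squared
# speed q(x) = 0.5 * ||F(x) - x||^2 of the discrete-time dynamics
# x_{t+1} = F(x_t) = tanh(W x_t + b), from many initial conditions.
rng = np.random.default_rng(0)
N = 32
W = rng.standard_normal((N, N)) / np.sqrt(N)   # stand-in for trained weights
b = np.zeros(N)

def q(x):
    return 0.5 * np.sum((np.tanh(W @ x + b) - x) ** 2)

fixed_points = []
for _ in range(20):
    res = minimize(q, rng.standard_normal(N), method="L-BFGS-B")
    if res.fun < 1e-8:                          # slow point ~ fixed point
        fixed_points.append(res.x)

# Eigenvalues of the Jacobian of F at each fixed point classify the
# motif: stable attractor, saddle, rotation, etc.
print(len(fixed_points))
```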
Odd dynamics of living chiral crystals
The emergent dynamics exhibited by collections of living organisms often show signatures of symmetries that are broken at the single-organism level. At the same time, organism development itself encompasses a well-coordinated sequence of symmetry breaking events that successively transform a single, nearly isotropic cell into an animal with a well-defined body axis and various anatomical asymmetries. Combining these key aspects of collective phenomena and embryonic development, we describe here the spontaneous formation of hydrodynamically stabilized active crystals made of hundreds of starfish embryos that gather during early development near fluid surfaces. We describe a minimal hydrodynamic theory that is fully parameterized by experimental measurements of microscopic interactions among embryos. Using this theory, we can quantitatively describe the stability, formation and rotation of crystals and rationalize the emergence of mechanical properties that carry signatures of an odd elastic material. Our work thereby quantitatively connects developmental symmetry breaking events on the single-embryo level with remarkable macroscopic material properties of a novel living chiral crystal system.
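For orientation, the defining property of an odd elastic material (standard definition, cf. Scheibner et al., 2020; not specific to the present measurements) is an elastic modulus tensor with a nonzero antisymmetric part, so that stress does not derive from an elastic potential and net work can be exchanged over closed strain cycles:

$$\sigma_{ij} = K_{ijkl}\,u_{kl}, \qquad K^{o}_{ijkl} \equiv \tfrac{1}{2}\big(K_{ijkl} - K_{klij}\big) \neq 0 \;\Rightarrow\; \oint \sigma_{ij}\,\mathrm{d}u_{ij} \neq 0 \ \text{for suitable strain cycles.}$$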
Flexible codes and loci of visual working memory
Neural correlates of visual working memory have been found in early visual, parietal, and prefrontal regions. These findings have spurred fruitful debate over how and where in the brain memories might be represented. Here, I will present data from multiple experiments to demonstrate how a focus on behavioral requirements can unveil a more comprehensive understanding of the visual working memory system. Specifically, items in working memory must be maintained in a highly robust manner, resilient to interference. At the same time, storage mechanisms must preserve a high degree of flexibility in case of changing behavioral goals. Several examples will be explored in which visual memory representations are shown to undergo transformations, and even shift their cortical locus alongside their coding format based on specifics of the task.
Extrinsic control and intrinsic computation in the hippocampal CA1 network
A key issue in understanding circuit operations is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Several studies have lesioned or silenced inputs to area CA1 of the hippocampus (either area CA3 or the entorhinal cortex) and examined the effect on CA1 pyramidal cells. However, the types of the reported physiological impairments vary widely, primarily because simultaneous manipulations of these redundant inputs have never been performed. In this study, I combined optogenetic silencing of the medial entorhinal cortex (mEC; unilaterally and bilaterally) and of the local CA1 region with bilateral pharmacogenetic silencing of CA3, together with high-spatial-resolution extracellular recordings along the CA1-dentate axis. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1 without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased the firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields. In contrast to these results, unilateral mEC manipulations that were ineffective in impacting place cells during awake behavior were found to alter sharp-wave ripple sequences activated during sleep. Thus, intrinsic excitatory-inhibitory circuits within CA1 can generate neuronal assemblies in the absence of external inputs, although external synaptic inputs are critical to reconfigure (remap) neuronal assemblies in a brain-state-dependent manner.
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induces neuron exchange. The exchange can be described analytically by reduced, random-walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and to keep inputs, outputs and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their entirety, as individual parts may constantly change.
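A toy version of such a reduced random-walk description (my construction, not the paper's derivation): neurons swap assembly membership pairwise, so the assembly structure is conserved while membership turns over completely:

```python
import numpy as np

# Toy reduction of assembly drift as a random walk on membership: at each
# step a random neuron from one assembly swaps with a neuron from another.
# Assembly sizes (the representational structure) are conserved while the
# identity of member neurons turns over fully.
rng = np.random.default_rng(2)
n_assemblies, size = 4, 25
membership = np.repeat(np.arange(n_assemblies), size)
initial = membership.copy()

overlap = []
for t in range(5000):
    i, j = rng.integers(len(membership), size=2)
    if membership[i] != membership[j]:
        membership[[i, j]] = membership[[j, i]]     # pairwise exchange
    if t % 100 == 0:
        overlap.append(np.mean(membership == initial))

# Overlap with the initial configuration decays toward chance (1/4), yet
# at every time step the network still holds 4 assemblies of 25 neurons.
print(overlap[0], overlap[-1])
```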
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence about the state of the environment are accumulated across time to infer that state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., lossless) integration of sensory information along purely feedforward sensory-motor pathways. Yet natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
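For concreteness, a standard normative update rule for change-point environments (after Glaze et al., 2015; my transcription, not necessarily the exact model used in this work) accumulates the log-posterior odds L with a nonlinearity set by the assumed hazard rate H of state changes:

```python
import numpy as np

# Normative accumulation with change-points (after Glaze et al., 2015).
# L is the log-posterior odds of the current state; H is the hazard rate.
def psi(L, H):
    """Prior odds carried to the next sample, discounted by hazard H."""
    return (L + np.log((1 - H) / H + np.exp(-L))
              - np.log((1 - H) / H + np.exp(L)))

def accumulate(llrs, H):
    L = 0.0
    for llr in llrs:
        L = llr + psi(L, H)   # nonlinear stability-flexibility tradeoff
    return L

rng = np.random.default_rng(0)
llrs = rng.normal(0.5, 1.0, size=200)   # stream of evidence for state +1
print(accumulate(llrs, H=0.01))  # stable: L saturates near log((1-H)/H)
print(accumulate(llrs, H=0.45))  # flexible: L tracks the recent samples
```

As H approaches 0, psi reduces to perfect integration; as H approaches 0.5, accumulation becomes memoryless, which is the stability-flexibility tradeoff described above.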
Reprogramming the nociceptive circuit topology reshapes sexual behavior in C. elegans
In sexually reproducing species, males and females respond to environmental sensory cues and transform the input into sexually dimorphic traits. Yet how sexually dimorphic behavior is encoded in the nervous system is poorly understood. We characterize sexually dimorphic nociceptive behavior in C. elegans (hermaphrodites present a lower pain threshold than males in response to aversive stimuli) and study the underlying neuronal circuits, which are composed of the same neurons wired differently in the two sexes. By imaging receptor expression, calcium responses and glutamate secretion, we show that sensory transduction is similar in the two sexes, and we therefore explore how downstream network topology shapes dimorphic behavior. We generated a computational model that replicates the observed dimorphic behavior and used this model to predict simple network rewirings that would switch the behavior between the sexes. We then showed experimentally, using genetic manipulations, artificial gap junctions, automated tracking and optogenetics, that these subtle changes to male connectivity result in hermaphrodite-like aversive behavior in vivo, while hermaphrodite behavior was more robust to perturbations. Strikingly, when presented with aversive cues, rewired males were compromised in finding mating partners, suggesting that the network topology that enables efficient avoidance of noxious cues carries a reproductive "cost". To summarize, we present a deconstruction of a sex-shared neural circuit that affects sexual behavior, and show how to reprogram it. More broadly, our results exemplify how common neuronal circuits can change their function during evolution through subtle topological rewiring, accommodating different environmental and sexual needs.
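A cartoon of the rewiring logic (entirely my construction, far simpler than the paper's model): with identical sensory drive, changing the weight of a single downstream connection shifts the stimulus level at which an avoidance command is triggered:

```python
import numpy as np

# Cartoon rate model: a sensory unit drives an avoidance command neuron
# both directly and via an interneuron. Flipping one connection's weight
# changes the stimulus level at which the command crosses its avoidance
# threshold, i.e., the behavioral "pain threshold".
def avoidance_threshold(w_inter, w_direct=1.0, theta=1.0):
    for s in np.linspace(0, 3, 3001):           # sweep stimulus strength
        command = w_direct * s + w_inter * max(s - 0.5, 0.0)
        if command >= theta:
            return s                            # first avoided stimulus
    return np.inf

print(avoidance_threshold(w_inter=+1.0))  # "hermaphrodite-like": threshold 0.75
print(avoidance_threshold(w_inter=-0.5))  # "male-like": higher threshold 1.5
```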
How communication networks promote cross-cultural similarities: The case of category formation
Individuals vary widely in how they categorize novel phenomena. This individual variation has led canonical theories in cognitive and social science to suggest that communication in large social networks leads populations to construct divergent category systems. Yet, anthropological data indicates that large, independent societies consistently arrive at similar categories across a range of topics. How is it possible for diverse populations, consisting of individuals with significant variation in how they view the world, to independently construct similar categories? Through a series of online experiments, I show how large communication networks within cultures can promote the formation of similar categories across cultures. For this investigation, I designed an online “Grouping Game” to observe how people construct categories in both small and large populations when tasked with grouping together the same novel and ambiguous images. I replicated this design for English-speaking subjects in the U.S. and Mandarin-speaking subjects in China. In both cultures, solitary individuals and small social groups produced highly divergent category systems. Yet, large social groups separately and consistently arrived at highly similar categories both within and across cultures. These findings are accurately predicted by a simple mathematical model of critical mass dynamics. Altogether, I show how large communication networks can filter lexical diversity among individuals to produce replicable society-level patterns, yielding unexpected implications for cultural evolution. In particular, I discuss how participants in both cultures readily harnessed analogies when categorizing novel stimuli, and I examine the role of communication networks in promoting cross-cultural similarities in analogy-making as the key engine of category formation.
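As a heavily simplified stand-in for such consensus dynamics (my construction, not the paper's model), a biased pairwise-adoption process shows how large groups reliably amplify a weakly favored option while small groups often lock in idiosyncratic ones:

```python
import random

# Simplified consensus model: agents choose between candidate category
# systems; option "A" has slightly broader appeal (e.g., rests on a widely
# shared analogy). Independent large populations tend to converge on "A";
# small groups often lock in alternatives by chance.
random.seed(1)

def winner(n_agents, bias=0.1, n_rounds=30000):
    opinions = [random.choice("ABC") for _ in range(n_agents)]
    for _ in range(n_rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        # hearer adopts speaker's option; the broad-appeal option spreads best
        p_adopt = 0.5 + (bias if opinions[speaker] == "A" else 0.0)
        if random.random() < p_adopt:
            opinions[hearer] = opinions[speaker]
    return max(set(opinions), key=opinions.count)

print([winner(4) for _ in range(5)])    # small groups: often mixed outcomes
print([winner(100) for _ in range(5)])  # large groups: typically all "A"
```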
The evolution of computation in the brain: Insights from studying the retina
The retina is probably the most accessible part of the vertebrate central nervous system. Its computational logic can be interrogated in a dish, from patterns of light as the natural input to spike trains on the optic nerve as the natural output. Consequently, retinal circuits include some of the best-understood computational networks in neuroscience. The retina is also ancient, and central to the emergence of neurally complex life on our planet. Alongside new locomotor strategies, the parallel evolution of image-forming vision in vertebrate and invertebrate lineages is thought to have driven speciation during the Cambrian. This early investment in sophisticated vision is evident in the fossil record and from comparing the retina’s structural make-up in extant species. Animals as diverse as eagles and lampreys share the same retinal make-up of five classes of neurons, arranged into three nuclear layers flanking two synaptic layers. Some retinal neuron types can be linked across the entire vertebrate tree of life. And yet, the functions that homologous neurons serve in different species, and the circuits that they innervate to do so, are often distinct, reflecting the vast differences in species-specific visuo-behavioural demands. In the lab, we aim to leverage the vertebrate retina as a discovery platform for understanding the evolution of computation in the nervous system. Working on zebrafish alongside birds, frogs and sharks, we ask: How do synapses, neurons and networks enable ‘function’, and how can they rearrange to meet new sensory and behavioural demands on evolutionary timescales?
Adaptive neural network classifier for decoding finger movements
While non-invasive brain-computer interfaces can accurately classify the lateralization of hand movements, distinguishing the activation of individual fingers of the same hand is limited by their local and overlapping representation in the motor cortex. In particular, the low signal-to-noise ratio limits the opportunity to identify meaningful patterns in a supervised fashion. Here we combined magnetoencephalography (MEG) recordings with advanced decoding strategies to classify finger movements at the single-trial level. We recorded eight subjects performing a serial reaction time task, in which they pressed four buttons with the left and right index and middle fingers. We evaluated the classification performance for hand and finger movements with increasingly complex approaches: supervised common spatial patterns with logistic regression (CSP + LR) and an unsupervised linear finite convolutional neural network (LF-CNN). Right vs left finger classification was above 90% accurate for all methods. Single-finger classification, however, yielded lower accuracies: 68 ± 7% for the CSP-based classifier and 71 ± 10% for the LF-CNN. The CNN approach allowed the inspection of spatial and spectral patterns, which reflected activity in the motor cortex in the theta and alpha ranges. Thus, we have shown that using CNNs to decode single MEG trials with low signal-to-noise ratio is a promising approach that, in turn, could be extended to a manifold of problems in clinical and cognitive neuroscience.
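For orientation, the supervised baseline can be sketched with standard tools (simulated data and placeholder shapes, not the study's actual pipeline); mne.decoding.CSP composes directly with scikit-learn:

```python
import numpy as np
from mne.decoding import CSP
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Sketch of a CSP + logistic regression decoder on simulated "MEG" data.
rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)   # e.g., index vs middle finger
X[y == 1, :8] *= 1.5               # inject a class-dependent variance pattern

# CSP learns spatial filters maximizing variance differences between
# classes and outputs (log-)power features for the classifier.
clf = make_pipeline(CSP(n_components=4), LogisticRegression())
print(cross_val_score(clf, X, y, cv=5).mean())
```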
Systemic regulation and measurement of mammalian aging
Brain aging leads to cognitive decline and is the main risk factor for sporadic forms of neurodegenerative diseases, including Alzheimer’s disease. While brain cell- and tissue-intrinsic factors are likely key determinants of the aging process, recent studies document a remarkable susceptibility of the brain to circulatory factors. Thus, blood-borne factors from young mice or humans are sufficient to slow aspects of brain aging and improve cognitive function in old mice and, vice versa, factors from old mice are detrimental to young mice and impair cognition. We found evidence that the cerebrovasculature is an important target of circulatory factors and that brain endothelial cells show prominent age-related transcriptional changes in response to plasma. Furthermore, plasma proteins are taken up broadly into the young brain through receptor-mediated transport, which declines with aging. At the same time, brain-derived proteins are detectable in plasma, allowing us to measure physiological changes linked to brain aging in plasma. We are exploring the relevance of these findings for neurodegeneration and potential applications towards therapies.
Unchanging and changing: hardwired taste circuits and their top-down control
The taste system detects five major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry point to decipher how taste signals are encoded to drive innate responses and modulated to facilitate adaptive responses. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and in the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled lines?). Leveraging these genetic handles, I demonstrated a one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines aptly adapted to the internal state and external environment? I studied the modulation of taste signals by conflicting taste qualities, using the concurrence of sweet and bitter, to understand how adaptive taste responses emerge from hardwired taste circuits. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation from the taste cortex and amygdala. While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback via the amygdala onto incoming sweet signals in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter-evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.
Exploring mechanisms of human brain expansion in cerebral organoids
The human brain sets us apart as a species, with its size being one of its most striking features. Brain size is largely determined during development as vast numbers of neurons and supportive glia are generated. In an effort to better understand the events that determine the human brain’s cellular makeup, and its size, we use a human model system in a dish, called cerebral organoids. These 3D tissues are derived from pluripotent stem cells through neural differentiation within a supportive 3D microenvironment, yielding organoids with the same tissue architecture as the early human fetal brain. Such organoids are allowing us to tackle questions previously impossible with more traditional approaches. Indeed, our recent findings provide insight into the regulation of brain size and neuron number across ape species, identifying key stages of early neural stem cell expansion that set up a larger starting cell number to enable the production of increased numbers of neurons. We are also investigating the role of extrinsic regulators in determining the numbers and types of neurons produced in the human cerebral cortex. Overall, our findings point to key, human-specific aspects of brain development and function that have important implications for neurological disease.
Elucidating the mechanism underlying stress- and caffeine-induced motor dysfunction using a mouse model of Episodic Ataxia Type 2
Episodic ataxia type 2 (EA2), caused by mutations in the CACNA1A gene, results in a loss of function of the P/Q-type calcium channel, which leads to baseline ataxia and attacks of dyskinesia that can last from a few hours to a few days. Attacks are brought on by consumption of caffeine or alcohol and by physical or emotional stress. Interestingly, caffeine and stress are common triggers among other episodic channelopathies, and they also cause tremor or shaking in otherwise healthy adults. The mechanism underlying stress- and caffeine-induced motor impairment remains poorly understood. Utilizing behavior and in vivo and in vitro electrophysiology in the tottering mouse, a well-characterized mouse model of EA2, and in WT mice, we first sought to elucidate the mechanism underlying stress-induced motor impairment. We found that stress induces attacks in EA2 through the activation of cerebellar alpha-1 adrenergic receptors by norepinephrine (NE), acting via casein kinase 2 (CK2)-dependent phosphorylation. This decreases SK2 channel activity, causing increased Purkinje cell (PC) firing irregularity and motor impairment. Knocking down CK2, or blocking it with CX-4945, an FDA-approved drug, prevented PC irregularity and stress-induced attacks. We next hypothesized that caffeine, which has been shown to increase NE levels, could induce attacks through the same alpha-1 adrenergic mechanism in EA2. We found that caffeine increases PC irregularity and induces attacks through the same CK2 pathway. Blocking alpha-1 adrenergic receptors, however, failed to prevent caffeine-induced attacks. Caffeine instead induces attacks through blockade of cerebellar A1 adenosine receptors. This increases the release of glutamate, which acts on mGluR1 receptors on PCs, resulting in erratic firing and motor attacks. Finally, we show a novel direct interaction between mGluR1 and CK2; inhibition of mGluR1 prior to the initiation of an attack prevents the caffeine-induced increase in phosphorylation. These data elucidate the mechanism underlying stress- and caffeine-induced motor impairment. Furthermore, given the success of CX-4945 in preventing stress- and caffeine-induced attacks, this work lays the groundwork for the development of therapeutics for the treatment of caffeine- and stress-induced attacks in EA2 patients and possibly other episodic channelopathies.
Extrinsic control and autonomous computation in the hippocampal CA1 circuit
In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs: entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to test this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on the firing of CA1 pyramidal cells. Despite intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflict is due to the different behavioral paradigms, designs and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I used optogenetic silencing of the medial entorhinal cortex (mEC; unilaterally and bilaterally) and of the local CA1 region, and performed bilateral pharmacogenetic silencing of the entire CA3 region. I combined this with high-spatial-resolution recording of local field potentials (LFP) along the CA1-dentate axis and simultaneously collected firing-pattern data from thousands of single neurons. In each experimental animal, up to two of these manipulations were performed simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1 without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased the firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining the stability of the CA1 map.
The balance of excitation and inhibition and a canonical cortical computation
Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. The balanced network model provides a self-consistent account of this observation: population rates dynamically adjust to yield a state in which all neurons are active at biological levels, with their E & I inputs tightly balanced. But global tight E/I balance predicts population responses with linear stimulus dependence and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when the connectivity conditions necessary for global balance fail, states arise in which only a localized subset of neurons is active and has balanced inputs. We show analytically that in networks of neurons with different stimulus selectivities, the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), which predicts a regime of loose, rather than tight, E/I balance. However, an understanding of the causal relationship between E/I balance and normalization in the SSN, and of the conditions under which the SSN yields significant sublinear integration, has been lacking. For weak inputs, the SSN integrates inputs supralinearly, while for very strong inputs it approaches a regime of tight balance. We show that when this latter regime is globally balanced, the SSN cannot exhibit strong normalization for any input strength; thus, in the SSN too, significant normalization requires localized balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation. Time allowing, I will also cover our work extending a normative theoretical account of normalization, which explains it as an instance of efficient coding of natural stimuli. We show that when biological noise is accounted for, this theory makes the same prediction as the SSN: a transition to supralinear integration for weak stimuli.
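For reference, the SSN rate dynamics take the standard form (cf. Ahmadian, Rubin & Miller, 2013):

$$\tau_i \frac{dr_i}{dt} = -r_i + k\Big[\sum_j W_{ij} r_j + h_i\Big]_+^{\,n}, \qquad n > 1,$$

where $[\cdot]_+$ denotes rectification. The expansive power law ($n \approx 2$) makes the effective gain grow with activity: integration is supralinear for weak inputs, while for strong inputs recurrent inhibition stabilizes the network and responses grow sublinearly, approaching tight E/I balance.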
Cognitive experience alters cortical involvement in navigation decisions
The neural correlates of decision-making have been investigated extensively, and recent work aims to identify under what conditions cortex is actually necessary for making accurate decisions. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same task, revealing past learning as a critical determinant of whether cortex is necessary for decision tasks. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation-based visual discrimination task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multi-area calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron-neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in decision tasks.
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication and has been observed at multiple levels of neuronal organization, including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
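A minimal example of resonance from negative feedback (illustrative parameters, my construction): a passive leak acts as a low-pass filter, a slow feedback variable as a high-pass filter, and their interaction produces a band-pass impedance:

```python
import numpy as np

# Subthreshold resonance from negative feedback: a passive leak (low-pass)
# plus a slow feedback variable w (high-pass via negative feedback) yield
# a band-pass impedance |Z(f)| peaking at f_res > 0. The linear system is
#   tau_v dv/dt = -v - g*w + I,   tau_w dw/dt = v - w,
# whose impedance in the frequency domain is computed below.
tau_v, tau_w, g = 0.01, 0.1, 4.0      # seconds, seconds, feedback gain

f = np.linspace(0.1, 100, 1000)       # Hz
s = 2j * np.pi * f
Z = 1.0 / (s * tau_v + 1.0 + g / (1.0 + s * tau_w))

f_res = f[np.argmax(np.abs(Z))]
print(f"resonant frequency ~ {f_res:.1f} Hz")   # nonzero peak => resonance
```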
The functional connectome across temporal scales
The view of human brain function has drastically shifted over the last decade, owing to the observation that the majority of brain activity is intrinsic rather than driven by external stimuli or cognitive demands. Specifically, all brain regions continuously communicate in spatiotemporally organized patterns that constitute the functional connectome, with consequences for cognition and behavior. In this talk, I will argue that another shift is underway, driven by new insights from synergistic interrogation of the functional connectome using different acquisition methods. The human functional connectome is typically investigated with functional magnetic resonance imaging (fMRI) that relies on the indirect hemodynamic signal, thereby emphasizing very slow connectivity across brain regions. Conversely, more recent methodological advances demonstrate that fast connectivity within the whole-brain connectome can be studied with real-time methods such as electroencephalography (EEG). Our findings show that combining fMRI with scalp or intracranial EEG in humans, especially when recorded concurrently, paints a rich picture of neural communication across the connectome. Specifically, the connectome comprises both fast, oscillation-based connectivity observable with EEG, as well as extremely slow processes best captured by fMRI. While the fast and slow processes share an important degree of spatial organization, these processes unfold in a temporally independent manner. Our observations suggest that fMRI and EEG may be envisaged as capturing distinct aspects of functional connectivity, rather than intermodal measurements of the same phenomenon. Infraslow fluctuation-based and rapid oscillation-based connectivity of various frequency bands constitute multiple dynamic trajectories through a shared state space of discrete connectome configurations. The multitude of flexible trajectories may concurrently enable functional connectivity across multiple independent sets of distributed brain regions.
Do Capuchin Monkeys, Chimpanzees and Children form Overhypotheses from Minimal Input? A Hierarchical Bayesian Modelling Approach
Abstract concepts are a powerful tool for storing information efficiently and making wide-ranging predictions in new situations based on sparse data. Whereas looking-time studies point towards an early emergence of this ability in human infancy, other paradigms, such as the relational match-to-sample task, often reveal a failure to use abstract concepts like same and different until the late preschool years. Similarly, non-human animals have difficulties solving those tasks and often succeed only after long training regimes. Given the huge influence of small task modifications, there is an ongoing debate about the conclusiveness of these findings for the development and phylogenetic distribution of abstract reasoning abilities. Here, we applied the concept of “overhypotheses”, well known in the infant and cognitive modeling literatures, to study the capabilities of 3- to 5-year-old children, chimpanzees, and capuchin monkeys in a unified and more ecologically valid task design. In a series of studies, participants themselves sampled reward items from multiple containers or witnessed the sampling process. Only when they detected the abstract pattern governing the reward distributions within and across containers could they optimally guide their behavior and maximize the reward outcome in a novel test situation. We compared each species’ performance to the predictions of a probabilistic hierarchical Bayesian model capable of forming overhypotheses at a first and second level of abstraction, adapted to each species’ reward preferences.
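A grid-based sketch of such a hierarchical model (beta-binomial, in the spirit of Kemp et al.'s "bags of marbles" analysis; the numbers and structure here are mine, not the study's model):

```python
import numpy as np
from scipy.stats import betabinom

# Toy hierarchical overhypothesis model. Each container i yields k_i
# high-value items out of n_i draws, with theta_i ~ Beta(a, a). Small a
# encodes the overhypothesis "containers are internally uniform"; we
# infer a from experienced containers, then predict a new container from
# a single draw.
alphas = np.logspace(-1, 1, 50)                 # grid over hyperparameter a
containers = [(5, 5), (0, 5), (5, 5), (0, 5)]   # (k, n): all-or-none data

log_post = np.zeros_like(alphas)                # flat prior over the grid
for j, a in enumerate(alphas):
    for k, n in containers:
        log_post[j] += betabinom.logpmf(k, n, a, a)
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Predict a second high-value draw from a new container whose first draw
# was high-value: p(next = 1 | first = 1, a) = (a + 1) / (2a + 1).
pred = np.sum(post * (alphas + 1) / (2 * alphas + 1))
print(f"p(second draw matches first) = {pred:.2f}")  # >> 0.5: overhypothesis
```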
Association between a previously remembered context and an aversive experience is accompanied by repeated activations of the same context engram neurons throughout the brain
FENS Forum 2024
Dissection of a neuronal integrator circuit through correlated light and electron microscopy in larval zebrafish. Part 1: Functional imaging and ultrastructure in the same animal
FENS Forum 2024
Inhibitory synapses on spinal motoneurons express VAMP1 and VAMP2; both are reduced by tetanus toxin, which spares the same VAMPs in adjacent excitatory synapses
FENS Forum 2024