Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
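The claim that manifold geometry governs classification capacity can be illustrated with a toy simulation (a hedged sketch, not Chung's manifold-capacity formalism; the point-cloud "manifolds", perceptron classifier, and all parameters are assumptions): shrinking the radius of random point clouds increases the fraction of random labelings a linear classifier can separate.

```python
import numpy as np

def separable_fraction(radius, n_manifolds=12, n_dim=50, pts=15,
                       n_dichotomies=10, epochs=3000, seed=0):
    """Fraction of random binary labelings of point-cloud 'manifolds'
    that a perceptron can linearly separate. All points of a manifold
    share one label, so a larger manifold radius (more nuisance
    variation) makes linear separation harder."""
    rng = np.random.default_rng(seed)
    centers = rng.normal(size=(n_manifolds, n_dim))
    X = (centers[:, None, :]
         + radius * rng.normal(size=(n_manifolds, pts, n_dim)))
    X = X.reshape(-1, n_dim)
    separable = 0
    for _ in range(n_dichotomies):
        # random dichotomy: one +/-1 label per manifold, shared by its points
        y = np.repeat(rng.choice([-1.0, 1.0], size=n_manifolds), pts)
        w, b = np.zeros(n_dim), 0.0
        for _ in range(epochs):
            bad = np.flatnonzero(y * (X @ w + b) <= 0)
            if bad.size == 0:           # every point correctly classified
                separable += 1
                break
            w += y[bad[0]] * X[bad[0]]  # perceptron update on a violator
            b += y[bad[0]]
    return separable / n_dichotomies
```

Under these toy assumptions, small-radius manifolds are almost always separable while large-radius manifolds rarely are, mirroring the role of manifold radius in capacity theory.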
Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling
Electroencephalography (EEG) has been studied for decades in psychiatry research, yet its integration into clinical practice as a diagnostic/prognostic tool remains unachieved. We hypothesize that a key reason is underlying patient heterogeneity, overlooked in psychiatric EEG research that relies on a case-control approach. We combined HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics, spectral power and functional connectivity, across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent. The spatial overlap of deviations across patients did not exceed 40% for spectral power and 24% for functional connectivity. Accounting for individual deviations significantly enhanced comparative analyses, and patient-specific deviation markers correlated with clinical assessments, a crucial step toward attaining precision psychiatry through EEG.
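The normative-modeling logic can be sketched in a few lines (a toy illustration with an assumed z-score threshold; the study's actual models are fit per frequency band and are far richer): z-score each patient against the control distribution, flag extreme deviations, and measure how little the resulting deviation maps overlap across patients.

```python
import numpy as np

def deviation_maps(controls, patients, z_thresh=2.0):
    """Simplified normative model: z-score each patient's features
    (e.g. per-channel spectral power) against the control distribution
    and flag extreme deviations. controls: (n_controls, n_features);
    patients: (n_patients, n_features)."""
    mu, sd = controls.mean(axis=0), controls.std(axis=0)
    z = (patients - mu) / sd
    return np.abs(z) > z_thresh        # boolean deviation map per patient

def spatial_overlap(dev_maps):
    """Fraction of patients deviating at each feature; its maximum
    quantifies the maximal spatial overlap across the cohort."""
    return dev_maps.mean(axis=0)
```

With heterogeneous patients (each deviating at different features), the per-feature overlap stays low even though every patient deviates somewhere, which is the case-control blind spot the abstract describes.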
Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine
Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights for developing more robust learning algorithms in artificial intelligence.
Movements and engagement during decision-making
When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially spontaneously expressed “uninstructed movements”. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely. We tested whether the magnitude or the timing of the movements was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movement (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.
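One way to make the task-aligned/TIM partition concrete (a minimal sketch assuming a linear regression of a motion-energy trace on task-event predictors; the study's actual models may differ):

```python
import numpy as np

def partition_movement(movement, task_events):
    """Split a movement trace into a task-aligned component and
    task-independent movement (TIM) via ordinary least squares.
    movement: (T,) motion-energy trace; task_events: (T, K) design
    matrix of task-event predictors (e.g. lagged stimulus/choice onsets)."""
    X = np.column_stack([np.ones(len(movement)), task_events])
    beta, *_ = np.linalg.lstsq(X, movement, rcond=None)
    task_aligned = X @ beta           # movement predicted by task events
    tim = movement - task_aligned     # residual = task-independent movement
    return task_aligned, tim
```

The per-trial magnitude of `tim` could then be correlated with performance, in the spirit of the analysis described above.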
Use of brain imaging data to improve prescriptions of psychotropic drugs - Examples of ketamine in depression and antipsychotics in schizophrenia
The use of molecular imaging, particularly PET and SPECT, has significantly transformed the treatment of schizophrenia with antipsychotic drugs since the late 1980s. It has offered insights into the links between drug target engagement, clinical effects, and side effects. A therapeutic window for receptor occupancy is established for antipsychotics, yet opinions diverge regarding the importance of blood levels, with many downplaying their significance. As a result, the role of therapeutic drug monitoring (TDM) as a personalized therapy tool is often underrated. Since molecular imaging of antipsychotics has focused almost entirely on D2-like dopamine receptors and their potential to control positive symptoms, negative symptoms and cognitive deficits have hardly been investigated, if at all. Alternative methods have therefore been introduced, e.g., investigating the correlation between receptor occupancies approximated from blood levels and cognitive measures. Within the domain of antidepressants, and specifically regarding ketamine's efficacy in depression treatment, the association between plasma concentrations and target engagement remains poorly understood. The measurement of AMPA receptors in the human brain has added a new level of comprehension regarding ketamine's antidepressant effects. To ensure precise prescription of psychotropic drugs, it is vital to have a nuanced understanding of how molecular and clinical effects interact. Clinician scientists are tasked with integrating these indispensable pharmacological insights into practice, ensuring a rational and effective approach to the treatment of mental health disorders and signaling a new era of personalized drug therapy based on mechanisms that promote neuronal plasticity, not only under pathological conditions but also in the healthy aging brain.
In vivo direct imaging of neuronal activity at high temporospatial resolution
Advanced noninvasive neuroimaging methods provide valuable information on brain function, but each has clear pros and cons in terms of temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) using the blood-oxygenation-level-dependent (BOLD) effect provides good spatial resolution on the order of millimeters, but has poor temporal resolution on the order of seconds due to slow hemodynamic responses to neuronal activation, providing only indirect information on neuronal activity. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but their spatial information is limited to centimeter scales. Therefore, there has been a longstanding demand for noninvasive brain imaging methods capable of detecting neuronal activity at both high temporal and high spatial resolution. In this talk, I will introduce a novel approach, Direct Imaging of Neuronal Activity (DIANA), that uses MRI to dynamically image neuronal spiking activity with millisecond precision, achieved by a data-acquisition scheme of rapid 2D line scanning synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4T animal scanner during electrical whisker-pad stimulation. DIANA responses at millisecond temporal resolution correlated highly with neuronal spiking activity and also captured the sequential propagation of neuronal activity along the thalamocortical pathway. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was sensitive to changes in membrane-potential-associated tissue relaxation times, such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.
Distinct contributions of different anterior frontal regions to rule-guided decision-making in primates: complementary evidence from lesions, electrophysiology, and neurostimulation
Different prefrontal areas contribute in distinctly different ways to rule-guided behaviour in the context of a Wisconsin Card Sorting Test (WCST) analog for macaques. For example, causal evidence from circumscribed lesions in NHPs reveals that dorsolateral prefrontal cortex (dlPFC) is necessary to maintain a reinforced abstract rule in working memory, orbitofrontal cortex (OFC) is needed to rapidly update representations of rule value, and the anterior cingulate cortex (ACC) plays a key role in cognitive control and in integrating information from correct and incorrect trials over recent outcomes. Moreover, recent lesion studies of frontopolar cortex (FPC) suggest it contributes to representing the relative value of unchosen alternatives, including rules. Yet we do not understand how these functional specializations relate to intrinsic neuronal activities, nor the extent to which these neuronal activities differ between prefrontal regions. After reviewing the aforementioned causal evidence, I will present our new data from studies using multi-area, multi-electrode recording techniques in NHPs to simultaneously record from four different prefrontal regions implicated in rule-guided behaviour. Multi-electrode micro-arrays (‘Utah arrays’) were chronically implanted in dlPFC, vlPFC, OFC, and FPC of two macaques, allowing us to simultaneously record single- and multi-unit activity, as well as local field potentials (LFPs), from all regions while the monkeys perform the WCST analog. Rule-related neuronal activity was widespread in all areas recorded, but it differed in degree and in timing between areas. I will also present preliminary results from decoding analyses applied to rule-related neuronal activity, both from individual clusters and from population measures. These results confirm and help quantify dynamic task-related activities that differ between prefrontal regions. We also found task-related modulation of LFPs within beta and gamma bands in FPC.
By combining these correlational recording methods with trial-specific causal interventions (electrical microstimulation of FPC), we could significantly enhance or impair the animals' performance in distinct task epochs in functionally relevant ways, further consistent with an emerging picture of regional functional specialization within a distributed framework of interacting and interconnected cortical regions.
Developmentally structured coactivity in the hippocampal trisynaptic loop
The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information is incorporated into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including during behavior, in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserved the biased correlation patterns between same-birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.
The strongly recurrent regime of cortical networks
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
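The link between the bulk eigenvalue radius and the richness of the correlation structure can be illustrated with a toy linear rate network (a sketch of the linearized regime only, not the field-theoretic derivation; the Gaussian connectivity and the `radius` parameter are assumptions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def covariance_dispersion(radius, n=200, seed=0):
    """Spread of pairwise covariances in a linear rate network
    dx/dt = (W - I) x + noise, where W is random Gaussian connectivity
    whose bulk eigenvalue radius is `radius` (< 1 for stability)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=radius / np.sqrt(n), size=(n, n))
    A = W - np.eye(n)
    # stationary covariance C solves the Lyapunov equation A C + C A^T = -D
    C = solve_continuous_lyapunov(A, -np.eye(n))
    off_diag = C[~np.eye(n, dtype=bool)]
    return off_diag.std() / C.diagonal().mean()
```

Pushing the radius toward 1, the point of linear instability, inflates the dispersion of pairwise covariances relative to the mean variance, a toy version of the rich correlation structure predicted for the strongly recurrent regime.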
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations were represented more similarly following training, while locations with intermediate similarity became increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Orientation selectivity in rodent V1: theory vs experiments
Neurons in the primary visual cortex (V1) of rodents are selective to the orientation of the stimulus, as in other mammals such as cats and monkeys. However, in contrast with those species, rodent V1 displays a very different type of spatial organization. Instead of orientation maps, neurons are organized in a “salt and pepper” pattern, where adjacent neurons have completely different preferred orientations. This structure has motivated both experimental and theoretical research aimed at determining which aspects of the connectivity patterns and intrinsic neuronal responses can explain the observed behavior. These analyses must also take into account that the thalamic neurons that send their outputs to the cortex have more complex responses in rodents than in higher mammals, displaying, for instance, a significant degree of orientation selectivity. In this talk we present work showing that a random feed-forward connectivity pattern, in which the probability of a connection between a cortical neuron and a thalamic neuron depends only on the relative distance between them, is enough to explain several aspects of the complex phenomenology found in these systems. Moreover, this approach allows us to evaluate analytically the statistical structure of the thalamic input to the cortex. We find that V1 neurons are orientation selective but that the preferred orientation depends on the spatial frequency of the stimulus. We disentangle the effect of the non-circular thalamic receptive fields, finding that they control the selectivity of the time-averaged thalamic input, but not the selectivity of the time-locked component. We also compare with experiments that use reverse correlation techniques, showing that ON and OFF components of the aggregate thalamic input are spatially segregated in the cortex.
Hippocampal network dynamics during impaired working memory in epileptic mice
Memory impairment is a common cognitive deficit in temporal lobe epilepsy (TLE). The hippocampus is severely altered in TLE, exhibiting multiple anatomical changes that lead to a hyperexcitable network capable of generating frequent epileptic discharges and seizures. In this study we investigated whether hippocampal involvement in epileptic activity drives working memory deficits, using bilateral LFP recordings from CA1 during task performance. We discovered that epileptic mice experienced focal rhythmic discharges (FRDs) while they performed the spatial working memory task. Spatial correlation analysis revealed that FRDs were often spatially stable on the maze and were most common around reward zones (25%) and delay zones (50%). Memory performance was correlated with the stability of FRDs, suggesting that spatially unstable FRDs interfere with working memory codes in real time.
Mechanisms of relational structure mapping across analogy tasks
Following the seminal structure mapping theory of Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. In recent years, however, and not without merit, semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been refocusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, and (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm, the delta band (around 2-3 Hz), suggesting its highly integrative nature. Finally, we have developed a new task, Graph Mapping, which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.
The medial prefrontal cortex replays generalized sequences
Whilst spatial navigation is a function ascribed to the hippocampus, flexibly adapting to a change in rule depends on the medial prefrontal cortex (mPFC). Single units were recorded from the hippocampus and mPFC of rats shifting between a spatially-guided and a cue-guided rule on a plus-maze. The mPFC population coded for the relative position between start and goal arm. During awake immobility periods, the mPFC replayed organized sequences of generalized positions, which positively correlated with rule-switching performance. Conversely, hippocampal replay negatively correlated with performance and occurred independently of mPFC replay. Sequential replay in the hippocampus and mPFC may thus serve different functions.
Neural networks in the replica-mean field limits
In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by the contributions of process motifs to covariance and lagged covariance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping, but with much shorter computation time than any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
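The flavor of such measures can be sketched in a few lines (a toy stand-in, not the abstract's actual motif-corrected PEMs: here reverse causation is crudely corrected by antisymmetrizing the lagged correlation matrix):

```python
import numpy as np

def lagged_correlation(X, lag=1):
    """R[i, j] = corr(x_i(t), x_j(t + lag)) from a (T, n) data matrix X."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    T = len(Z)
    return Z[:T - lag].T @ Z[lag:] / (T - lag)

def edge_scores(X, lag=1):
    """Naive pairwise edge measure: score the directed edge i -> j by
    lagged correlation, with a crude reverse-causation correction
    (subtracting the transpose)."""
    R = lagged_correlation(X, lag)
    return R - R.T
```

On simulated autoregressive data with a known sparse coupling matrix, true directed edges receive markedly higher scores than absent or reversed edges, which is the qualitative behavior the corrected PEMs formalize.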
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
A parsimonious description of global functional brain organization in three spatiotemporal patterns
Resting-state functional magnetic resonance imaging (fMRI) has yielded seemingly disparate insights into the large-scale organization of the human brain. The brain’s large-scale organization can be divided into two broad categories: zero-lag representations of functional connectivity structure and time-lag representations of traveling wave or propagation structure. In this study, we sought to unify observed phenomena across these two categories in the form of three low-frequency spatiotemporal patterns composed of a mixture of standing and traveling wave dynamics. We showed that a range of empirical phenomena, including functional connectivity gradients, the task-positive/task-negative anti-correlation pattern, the global signal, time-lag propagation patterns, the quasiperiodic pattern, and the functional connectome network structure, are manifestations of these three spatiotemporal patterns. These patterns account for much of the global spatial structure that underlies functional connectivity analyses and unify phenomena in resting-state fMRI previously thought distinct.
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments, providing another valuable tool with which to investigate the network mechanisms responsible for the biases and how behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current and previous trials, which is one of the mechanisms giving rise to the contraction bias. The population activity traced trajectories in a low-dimensional space whose relative locations depended on the prior distribution. The results can be modeled as an ideal observer that, during the delay period, sees a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
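The ideal-observer account mentioned in the abstract can be illustrated with a minimal simulation (all parameters invented for illustration): a noisy measurement of the current interval is combined with the prior mean, which pulls estimates toward the center of the stimulus distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Interval durations (ms) drawn uniformly, as in a typical timing task.
durations = rng.uniform(200.0, 800.0, 20000)
prior_mean, prior_var = 500.0, (600.0 ** 2) / 12.0   # uniform-prior stats
noise_var = 150.0 ** 2                               # measurement noise

# Noisy internal measurement of each interval.
measurement = durations + rng.normal(0.0, np.sqrt(noise_var), durations.size)

# Gaussian-approximation Bayesian combination: the posterior mean is a
# precision-weighted average of the measurement and the prior mean.
w = prior_var / (prior_var + noise_var)
estimate = w * measurement + (1.0 - w) * prior_mean

# Contraction bias: short intervals overestimated, long underestimated.
short, long_ = durations < 350.0, durations > 650.0
bias_short = float(np.mean(estimate[short] - durations[short]))
bias_long = float(np.mean(estimate[long_] - durations[long_]))
```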
Computation in the neuronal systems close to the critical point
It has long been hypothesized that natural systems might exploit the extended temporal and spatial correlations close to the critical point to improve their computational capabilities. However, recordings from nervous systems have yielded widely varying estimates of the distance to criticality. In my talk, I discuss how including additional constraints on processing time can shift the optimal operating point of recurrent networks. Moreover, data from monkey visual cortex during an attention task indicate that local activity flexibly changes its closeness to the critical point. Overall, this suggests that the optimal state depends on the task at hand, and that the brain adapts to it locally and quickly.
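A common operational proxy for distance to criticality (one of several; the speaker's analysis may differ) is the branching parameter m, with m = 1 at the critical point. A minimal sketch of estimating m from simulated population activity via the autoregression A(t+1) ≈ m·A(t) + h:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_branching(m, h, steps):
    """Driven branching process: each active unit triggers Poisson(m)
    descendants; h is the external drive per time step."""
    a = np.empty(steps, dtype=float)
    a[0] = h / max(1e-9, 1.0 - m)   # start near the stationary mean
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    return a

def estimate_m(a):
    """Least-squares slope of A(t+1) against A(t)."""
    x, y = a[:-1], a[1:]
    return float(np.cov(x, y)[0, 1] / np.var(x))

subcritical = simulate_branching(m=0.90, h=10.0, steps=50000)
near_critical = simulate_branching(m=0.99, h=1.0, steps=50000)

m_sub = estimate_m(subcritical)
m_crit = estimate_m(near_critical)
```

Note that this naive regression is only unbiased for fully observed activity; subsampled recordings require corrected estimators.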
A transcriptomic axis predicts state modulation of cortical interneurons
Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses: stimuli suppressed cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states had less axon in layer 1, narrower spikes, lower input resistance, and weaker adaptation as determined in vitro, and expressed more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.
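The core analysis logic, predicting a per-cell state-modulation index from the first transcriptomic principal component, can be sketched on simulated data (cell counts and effect sizes are invented; only the 72-gene panel size is taken from the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_genes = 300, 72   # 72 genes, as in the probe panel described

# Simulate expression dominated by one latent transcriptomic axis.
latent = rng.standard_normal(n_cells)
loadings = rng.standard_normal(n_genes)
expression = np.outer(latent, loadings) \
    + 0.5 * rng.standard_normal((n_cells, n_genes))

# State-modulation index depends on the same latent axis (plus noise).
modulation = 0.8 * latent + 0.4 * rng.standard_normal(n_cells)

# PCA via SVD of the centered matrix; PC1 score for each cell.
x = expression - expression.mean(0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
pc1 = u[:, 0] * s[0]

# The sign of a PC is arbitrary, so report the absolute correlation.
r = float(abs(np.corrcoef(pc1, modulation)[0, 1]))
```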
Cognitive experience alters cortical involvement in navigation decisions
The neural correlates of decision-making have been investigated extensively, and recent work aims to identify under what conditions cortex is actually necessary for making accurate decisions. We discovered that mice with distinct cognitive experiences, beyond sensory and motor learning, use different cortical areas and neural activity patterns to solve the same task, revealing past learning as a critical determinant of whether cortex is necessary for decision tasks. We used optogenetics and calcium imaging to study the necessity and neural activity of multiple cortical areas in mice with different training histories. Posterior parietal cortex and retrosplenial cortex were mostly dispensable for accurate performance of a simple navigation-based visual discrimination task. In contrast, these areas were essential for the same simple task when mice were previously trained on complex tasks with delay periods or association switches. Multi-area calcium imaging showed that, in mice with complex-task experience, single-neuron activity had higher selectivity and neuron-neuron correlations were weaker, leading to codes with higher task information. Therefore, past experience is a key factor in determining whether cortical areas have a causal role in decision tasks.
Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain
Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both transcriptome and 3D genome are extensively transformed in the first month of life. In neurons, 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. 
In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS), which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncovered an unknown dimension of neurodevelopment, and open up opportunities for new treatments for autism and other developmental disorders.
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity of recurrent cortical circuits to initial conditions can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit a sensitivity to initial conditions that is reflected in their dynamic entropy rate and attractor dimensionality, computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on the biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, i.e., a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. We studied this phenomenon with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
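A minimal sketch of the kind of Lyapunov analysis mentioned: estimating the largest Lyapunov exponent of a random rate network dx/dt = −x + J tanh(x) with Benettin's two-trajectory method (full-spectrum estimates instead track an orthonormal frame with repeated QR decompositions). Network size, gains, and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def largest_lyapunov(g, n=200, dt=0.05, steps=4000, transient=1000, d0=1e-7):
    """Benettin estimate of the largest Lyapunov exponent of
    dx/dt = -x + J tanh(x), with J_ij ~ N(0, g^2/n)."""
    J = rng.standard_normal((n, n)) * g / np.sqrt(n)
    f = lambda x: -x + J @ np.tanh(x)
    x = rng.standard_normal(n)
    for _ in range(transient):            # settle onto the attractor
        x = x + dt * f(x)
    y = x + d0 * rng.standard_normal(n) / np.sqrt(n)
    log_growth = 0.0
    for _ in range(steps):
        x = x + dt * f(x)
        y = y + dt * f(y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (y - x) * (d0 / d)        # renormalize the separation
    return log_growth / (steps * dt)

lam_stable = largest_lyapunov(g=0.5)   # below the g = 1 transition to chaos
lam_chaotic = largest_lyapunov(g=3.0)  # well above it
```

Below the classical g = 1 transition the exponent is negative (trajectories converge); above it the exponent is positive, the signature of chaos.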
Network mechanisms underlying representational drift in area CA1 of hippocampus
Recent chronic imaging experiments in mice have revealed that the hippocampal code exhibits non-trivial turnover dynamics over long time scales. Specifically, the subset of cells which are active on any given session in a familiar environment changes over the course of days and weeks. While some cells transition into or out of the code after a few sessions, others are stable over the entire experiment. The mechanisms underlying this turnover are unknown. Here we show that the statistics of turnover are consistent with a model in which non-spatial inputs to CA1 pyramidal cells readily undergo plasticity, while spatially tuned inputs are largely stable over time. The heterogeneity in stability across the cell assembly, as well as the decrease in correlation of the population vector of activity over time, are both quantitatively fit by a simple model with Gaussian input statistics. In fact, such input statistics emerge naturally in a network of spiking neurons operating in the fluctuation-driven regime. This correspondence allows one to map the parameters of a large-scale spiking network model of CA1 onto the simple statistical model, and thereby fit the experimental data quantitatively. Importantly, we show that the observed drift is entirely consistent with random, ongoing synaptic turnover. This synaptic turnover is, in turn, consistent with Hebbian plasticity related to continuous learning in a fast memory system.
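A toy version of the statistical model described (parameters invented): each cell receives a stable spatially tuned input plus a plastic non-spatial input that drifts as an AR(1) process across sessions, and is "active" when the summed input exceeds a threshold. This reproduces both the decay of population-vector correlation with inter-session lag and a subpopulation of stably active cells.

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_sessions = 20000, 12
rho = 0.8          # session-to-session stability of the plastic input
a, b = 1.0, 1.0    # weights of stable (spatial) vs drifting input
theta = 1.5        # activation threshold

stable = rng.standard_normal(n_cells)
drift = rng.standard_normal(n_cells)
active = np.empty((n_sessions, n_cells), dtype=bool)
for s in range(n_sessions):
    active[s] = a * stable + b * drift > theta
    # AR(1) drift of the non-spatial input between sessions
    drift = rho * drift + np.sqrt(1 - rho ** 2) * rng.standard_normal(n_cells)

def pv_corr(lag):
    """Mean correlation of the binary population vector at a given lag."""
    pairs = [np.corrcoef(active[s].astype(float),
                         active[s + lag].astype(float))[0, 1]
             for s in range(n_sessions - lag)]
    return float(np.mean(pairs))

corr_1, corr_8 = pv_corr(1), pv_corr(8)
frac_always = float(np.mean(active.all(axis=0)))  # stable across all sessions
```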
Does human perception rely on probabilistic message passing?
The idea that perception in humans relies on some form of probabilistic computation has become very popular over the past few decades. However, it has been extremely difficult to characterize the extent and the nature of the probabilistic representations and operations manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some form of probabilistic message passing (e.g., neural sampling or probabilistic population coding) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that confidence reports in a hierarchical task reveal that uncertainty is represented at both lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples, in agreement with the message passing hypothesis and against standard accounts such as accumulation of sensory evidence or predictive coding.
Inferring informational structures in neural recordings of drosophila with epsilon-machines
Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in neuroscience, in part because of the difficulty of finding appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity: using the information contained in a stream of recorded signals, they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; future behaviour depends on the patterns in past neural activity. However, these past-future relationships remain either hidden from, or unaccounted for by, current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the “epsilon-machines” framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity in the future will behave, summarised in a single number called “statistical complexity”. If a lot of past neural activity is required to predict the future behaviour, can we say that the brain was more “awake” at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain; however, this has never been measured. To test this, we used neural recordings from the brains of fruit flies and inferred the corresponding epsilon-machines.
We found that the nature of the past-future correlations of neural activity drastically changes depending on whether the fly was awake or anaesthetised. Not only are wakeful and anaesthetised fly brains distinguished by how statistically complex they are, but the amount of correlation in wakeful fly brains was also much more sensitive to whether the neural recordings were read forwards or backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were both more complex and more time-asymmetric than anaesthetised ones.
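Full epsilon-machine reconstruction is beyond a short sketch, but a related and much simpler quantity, the entropy production rate (the KL divergence between forward and time-reversed pairwise statistics), already captures the forward-versus-reverse asymmetry discussed above: it is zero for a reversible process and positive for a time-asymmetric one. An illustrative sketch, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_markov(P, steps, rng):
    """Sample a trajectory from transition matrix P (rows sum to 1)."""
    n = P.shape[0]
    x = np.empty(steps, dtype=int)
    x[0] = 0
    for t in range(1, steps):
        x[t] = rng.choice(n, p=P[x[t - 1]])
    return x

def irreversibility(x, n_states):
    """Entropy production rate: KL(P(x_t, x_{t+1}) || P(x_{t+1}, x_t))."""
    counts = np.full((n_states, n_states), 1e-9)  # pseudocounts avoid log(0)
    np.add.at(counts, (x[:-1], x[1:]), 1.0)
    p = counts / counts.sum()
    return float(np.sum(p * np.log(p / p.T)))

# A cyclic (time-asymmetric) chain: 0 -> 1 -> 2 -> 0 with high probability.
P_cycle = np.array([[0.1, 0.8, 0.1],
                    [0.1, 0.1, 0.8],
                    [0.8, 0.1, 0.1]])
# A symmetric (reversible) chain.
P_sym = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])

irr_cycle = irreversibility(simulate_markov(P_cycle, 50000, rng), 3)
irr_sym = irreversibility(simulate_markov(P_sym, 50000, rng), 3)
```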
A nonlinear shot noise model for calcium-based synaptic plasticity
Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient to support realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge at the behavioral level from an STDP rule measured in physiological conditions. 
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
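A minimal Graupner-Brunel-style sketch of a calcium-based rule of the general kind described (all amplitudes, thresholds, and time constants invented): calcium jumps with pre- and postsynaptic spikes and decays exponentially; the weight potentiates while calcium exceeds θp and depresses while it sits between θd and θp, so only near-synchronous pre/post firing drives net potentiation here.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_synapse(pre, post, dt=0.001, tau_ca=0.02,
                c_pre=0.7, c_post=0.7, theta_d=1.0, theta_p=1.3,
                gamma_p=10.0, gamma_d=1.0):
    """Integrate the calcium trace c(t) driven by pre/post spikes and
    accumulate weight changes from time spent above the thresholds."""
    decay = np.exp(-dt / tau_ca)
    c, dw = 0.0, 0.0
    for s_pre, s_post in zip(pre, post):
        c = c * decay + c_pre * s_pre + c_post * s_post
        if c > theta_p:
            dw += gamma_p * dt      # potentiation region
        elif c > theta_d:
            dw -= gamma_d * dt      # depression region
    return dw

rate, T, dt = 10.0, 200.0, 0.001
n = int(T / dt)

# Uncorrelated pre/post Poisson spiking at matched rates.
pre_u = rng.random(n) < rate * dt
post_u = rng.random(n) < rate * dt
dw_uncorr = run_synapse(pre_u, post_u)

# Correlated: half of the post spikes are synchronous with pre spikes.
pre_c = rng.random(n) < rate * dt
sync = pre_c & (rng.random(n) < 0.5)
post_c = sync | (rng.random(n) < 0.5 * rate * dt)
dw_corr = run_synapse(pre_c, post_c)
```

A single spike (0.7) stays below θd, so isolated firing leaves the weight untouched; synchronous pairs (1.4) cross θp, which is the "synchronous coactivation" nonlinearity in miniature.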
Individual differences in visual (mis)perception: a multivariate statistical approach
Common factors are omnipresent in everyday life; for example, it is widely held that there is a common factor g for intelligence. In vision, however, there seems to be a multitude of specific factors rather than a strong, unique common factor. In my thesis, I first examined the multidimensionality of the structure underlying visual illusions. To this aim, the susceptibility to various visual illusions was measured. In addition, subjects were tested with variants of the same illusion, which differed in spatial features, luminance, orientation, or contextual conditions. Only weak correlations were observed between the susceptibilities to different visual illusions. An individual showing a strong susceptibility to one visual illusion does not necessarily show a strong susceptibility to other visual illusions, suggesting that the structure underlying visual illusions is multifactorial. In contrast, there were strong correlations between the susceptibilities to variants of the same illusion. Hence, factors seem to be illusion-specific but not feature-specific. Second, I investigated whether a strong visual factor emerges in healthy elderly people and patients with schizophrenia, as might be expected from the general decline in perceptual abilities usually reported in these two populations compared to healthy young adults. Similarly, a strong visual factor may emerge in action video gamers, who often show enhanced perceptual performance compared to non-gamers. Hence, healthy elderly people, patients with schizophrenia, and action video gamers were tested with a battery of visual tasks, such as contrast detection and orientation discrimination tasks. As in the control groups, between-task correlations were generally weak, which argues against the emergence of a strong common factor for vision in these populations. 
While similar tasks are usually assumed to rely on similar neural mechanisms, the performances in different visual tasks were only weakly related to each other, i.e., performance does not generalize across visual tasks. These results highlight the relevance of an individual differences approach to unravel the multidimensionality of the visual structure.
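The correlational logic of the thesis can be sketched with simulated data: if susceptibility is governed by illusion-specific factors, correlations across variants of the same illusion should be high while correlations between different illusions remain weak. All effect sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
n_subj, n_illusions, n_variants = 500, 4, 3

# Each illusion has its own specific factor; no common factor.
specific = rng.standard_normal((n_subj, n_illusions))
noise = 0.5 * rng.standard_normal((n_subj, n_illusions, n_variants))
susceptibility = specific[:, :, None] + noise  # subj x illusion x variant

# Correlation matrix over all illusion-variant measures.
corr_all = np.corrcoef(susceptibility.reshape(n_subj, -1).T)

within, between = [], []
for i in range(n_illusions):
    for j in range(n_illusions):
        for va in range(n_variants):
            for vb in range(n_variants):
                r = corr_all[i * n_variants + va, j * n_variants + vb]
                if i == j and va < vb:
                    within.append(r)      # variants of the same illusion
                elif i < j:
                    between.append(r)     # different illusions

within_mean = float(np.mean(within))
between_mean = float(np.mean(between))
```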
Inhibitory connectivity and computations in olfaction
We use the olfactory system and forebrain of (adult) zebrafish as a model to analyze how relevant information is extracted from sensory inputs, how information is stored in memory circuits, and how sensory inputs inform behavior. A series of recent findings provides evidence that inhibition has not only homeostatic functions in neuronal circuits but makes highly specific, instructive contributions to behaviorally relevant computations in different brain regions. These observations imply that the connectivity among excitatory and inhibitory neurons exhibits essential higher-order structure that cannot be determined without dense network reconstructions. To analyze such connectivity we developed an approach referred to as “dynamical connectomics” that combines 2-photon calcium imaging of neuronal population activity with EM-based dense neuronal circuit reconstruction. In the olfactory bulb, this approach identified specific connectivity among co-tuned cohorts of excitatory and inhibitory neurons that can account for the decorrelation and normalization (“whitening”) of odor representations in this brain region. These results provide a mechanistic explanation for a fundamental neural computation that strictly requires specific network connectivity.
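What "whitening" means computationally can be sketched with plain linear algebra (the circuit implements it through specific inhibitory connectivity; here it is simply a ZCA transform): correlated odor response patterns are decorrelated and variance-equalized.

```python
import numpy as np

rng = np.random.default_rng(9)
n_odors, n_neurons = 200, 50

# Correlated "raw" odor representations: many odors share one pattern.
coeff = rng.standard_normal((n_odors, 1))
common = rng.standard_normal(n_neurons)
raw = 2.0 * coeff * common[None, :] + rng.standard_normal((n_odors, n_neurons))

def mean_abs_pattern_corr(x):
    """Mean absolute odor-odor pattern correlation (off-diagonal)."""
    c = np.corrcoef(x)
    return float(np.mean(np.abs(c[~np.eye(c.shape[0], dtype=bool)])))

# ZCA whitening of the neuron-space covariance.
xc = raw - raw.mean(axis=0)
cov = np.cov(xc.T)
evals, evecs = np.linalg.eigh(cov)
zca = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-9)) @ evecs.T
white = xc @ zca

corr_before = mean_abs_pattern_corr(raw)
corr_after = mean_abs_pattern_corr(white)
```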
NMC4 Short Talk: Two-Photon Imaging of Norepinephrine in the Prefrontal Cortex Shows that Norepinephrine Structures Cell Firing Through Local Release
Norepinephrine (NE) is a neuromodulator released from projections of the locus coeruleus via extra-synaptic vesicle exocytosis. Tonic fluctuations in NE are involved in brain states such as sleep, arousal, and attention. Previously, NE in the PFC was thought to form a homogeneous field created by bulk release, but it remains unknown whether phasic (fast, short-term) fluctuations in NE can produce a spatially heterogeneous field, which could then structure cell firing at a fine spatial scale. To understand how the spatiotemporal dynamics of NE release in the prefrontal cortex affect neuronal firing, we performed a novel in vivo two-photon imaging experiment in layer 2/3 of the prefrontal cortex using a green fluorescent NE sensor and a red fluorescent Ca2+ sensor, which allowed us to simultaneously observe fine-scale neuronal and NE dynamics in the form of spatially localized fluorescence time series. Using generalized linear modeling, we found that the local NE field differs from the global NE field in transient periods of decorrelation, which are influenced by proximal NE release events. We used optical flow and pattern analysis to show that release and reuptake events can occur at the same location but at different times, and that differential recruitment of release and reuptake sites over time is a potential mechanism for creating a heterogeneous NE field. Our generalized linear models predicting cellular dynamics show that the heterogeneous local NE field, and not the global field, drives cell firing dynamics. These results point to the importance of local, small-scale, phasic NE fluctuations for structuring cell firing. Prior research suggests that these phasic NE fluctuations in the PFC may play a role in attentional shifts, in orienting to sensory stimuli in the environment, and in the selective gain of priority representations during stress (Mather, Clewett et al., 2016; Aston-Jones and Bloom, 1981).
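A sketch of the GLM logic (simulated signals; not the authors' pipeline): spike counts are regressed on a local NE regressor and a correlated global regressor with a log-link Poisson GLM fit by iteratively reweighted least squares. When the local field truly drives firing, the fit assigns it the larger weight even though the two regressors are correlated.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 20000

# Local NE signal and a correlated "global" field.
local = rng.standard_normal(n)
global_ = 0.7 * local + 0.7 * rng.standard_normal(n)

# Spike counts driven by the *local* field only.
y = rng.poisson(np.exp(-0.5 + 1.0 * local))

X = np.column_stack([np.ones(n), local, global_])

def fit_poisson_glm(X, y, n_iter=25):
    """Iteratively reweighted least squares for a log-link Poisson GLM."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = mu                                 # Poisson IRLS weights
        z = X @ beta + (y - mu) / mu           # working response
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

beta = fit_poisson_glm(X, y)
b_local, b_global = float(beta[1]), float(beta[2])
```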
NMC4 Short Talk: Novel population of synchronously active pyramidal cells in hippocampal area CA1
Hippocampal pyramidal cells have been widely studied during locomotion, when theta oscillations are present, and during sharp-wave ripples at rest, when replay takes place. However, we find a subset of pyramidal cells that are preferentially active during rest, in the absence of theta oscillations and sharp-wave ripples. We recorded these cells using two-photon imaging in dorsal CA1 of the hippocampus of mice during a virtual reality object location recognition task. During locomotion, the cells show a similar level of activity to control cells, but their activity increases during rest, when this population shows highly synchronous, oscillatory activity at a low frequency (0.1-0.4 Hz). In addition, during both locomotion and rest these cells show place coding, suggesting they may play a role in maintaining a representation of the current location even when the animal is not moving. We performed simultaneous electrophysiological and calcium recordings, which showed a higher correlation between this low-frequency oscillation (LFO) and the activity of the hippocampal cells in the 0.1-0.4 Hz band during rest than during locomotion. However, the relationship between the LFO and calcium signals varied between electrodes, suggesting a localized effect. We used the Allen Brain Observatory Neuropixels Visual Coding dataset to explore this further. These data revealed localised low-frequency oscillations in CA1 and DG during rest. Overall, we describe a novel population of hippocampal cells and a novel oscillatory band of activity in the hippocampus during rest.
Finding needles in the neural haystack: unsupervised analyses of noisy data
In modern neuroscience, we often want to extract information from recordings of many neurons in the brain. Unfortunately, the activity of individual neurons is very noisy, making it difficult to relate to cognition and behavior. Thankfully, we can use the correlations across time and neurons to denoise the data we record. In particular, using recent advances in machine learning, we can build models which harness this structure in the data to extract more interpretable signals. In this talk, we present two such methods as well as examples of how they can help us gain further insights into the neural underpinnings of behavior.
NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex
To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD), a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance (up to a 0.86 correlation with test data and an r-squared of 0.75) in higher-level visual cortex in humans. Moreover, CLIP representations explain distinctly unique variance in these higher-level visual areas as compared to models trained with only images or text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
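The voxelwise encoding pipeline can be sketched generically (random vectors stand in for CLIP embeddings and voxel responses here): one feature vector per image, a closed-form ridge regression per voxel, and prediction correlation on held-out images.

```python
import numpy as np

rng = np.random.default_rng(11)
n_train, n_test, n_feat = 800, 200, 64

# Stand-in for image embeddings (e.g., from CLIP); rows are images.
feats = rng.standard_normal((n_train + n_test, n_feat))

# Simulated voxel response: linear readout of the features plus noise.
w_true = rng.standard_normal(n_feat) / np.sqrt(n_feat)
voxel = feats @ w_true + 0.5 * rng.standard_normal(n_train + n_test)

Xtr, Xte = feats[:n_train], feats[n_train:]
ytr, yte = voxel[:n_train], voxel[n_train:]

# Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y.
alpha = 10.0
w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n_feat), Xtr.T @ ytr)

# Held-out prediction accuracy, the usual encoding-model score.
pred = Xte @ w
r = float(np.corrcoef(pred, yte)[0, 1])
```

In practice the regularization strength is chosen per voxel by cross-validation rather than fixed as here.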
A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice
Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
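One way to see why fixed parametric variability assumptions fail, which motivates flexible spike count models like the one described: overdispersed counts (here a gamma-mixed Poisson, i.e. negative binomial) violate the Poisson mean-variance relationship, and even a two-parameter fit must be checked against the data. A simulated example:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 50000

# Doubly stochastic counts: Poisson with gamma-fluctuating rate, i.e.
# negative binomial; variance grows faster than the mean.
shape, mean_rate = 4.0, 6.0
rates = rng.gamma(shape, mean_rate / shape, n)
counts = rng.poisson(rates)

mean_c, var_c = counts.mean(), counts.var()
fano = float(var_c / mean_c)            # Fano factor; equals 1 for Poisson

# Matched-rate Poisson control.
ctrl = rng.poisson(mean_rate, n)
fano_poisson = float(ctrl.var() / ctrl.mean())

# Method-of-moments negative binomial fit: var = mean + mean^2 / r.
r_hat = float(mean_c ** 2 / (var_c - mean_c))
```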
Imaging neuronal morphology and activity pattern in developing cerebral cortex layer 4
Establishment of precise neuronal connectivity in the neocortex relies on activity-dependent circuit reorganization during postnatal development. In the mouse somatosensory cortex layer 4, barrels are arranged in one-to-one correspondence with whiskers on the face. Thalamocortical axon termini are clustered in the center of each barrel. Layer 4 spiny stellate neurons are located around the barrel edge, extend their dendrites primarily toward the barrel center, and make synapses with thalamocortical axons corresponding to a single whisker. These organized circuits are established during the first postnatal week through activity-dependent refinement processes. However, the activity patterns regulating circuit formation remain elusive. Using two-photon calcium imaging in living neonatal mice, we found that layer 4 neurons within the same barrel fire synchronously in the absence of peripheral stimulation, creating a “patchwork” pattern of spontaneous activity corresponding to the barrel map. We also found that disruption of GluN1, an obligatory subunit of the N-methyl-D-aspartate (NMDA) receptor, in a sparse population of layer 4 neurons reduced activity correlations between GluN1-knockout neuron pairs within a barrel. Our results provide evidence for the involvement of layer 4 neuron NMDA receptors in the spatial organization of spontaneous firing activity in the neonatal barrel cortex. In the talk I will introduce our strategy for analyzing the role of NMDA receptor-dependent correlated activity in layer 4 circuit formation.
Context-Dependent Relationships between Locus Coeruleus Firing Patterns and Coordinated Neural Activity in the Anterior Cingulate Cortex
Ascending neuromodulatory projections from the locus coeruleus (LC) affect cortical neural networks via the release of norepinephrine (NE). However, the exact nature of these neuromodulatory effects on neural activity patterns in vivo is not well understood. Here we show that in awake monkeys, LC activation is associated with changes in coordinated activity patterns in the anterior cingulate cortex (ACC). These relationships, which are largely independent of changes in firing rates of individual ACC neurons, depend on the type of LC activation: ACC pairwise correlations tend to be reduced when tonic (baseline) LC activity increases but are enhanced when external events drive phasic LC responses. Both relationships covary with pupil changes that reflect LC activation and arousal. These results suggest that modulations of information processing that reflect changes in coordinated activity patterns in cortical networks can result partly from ongoing, context-dependent, arousal-related changes in activation of the LC-NE system.
Adaptation-driven sensory detection and sequence memory
Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction of the spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that produce these correlations are themselves quasi-independent. This talk will first discuss recent findings on the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, matching post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property and can explain observed correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in representing intervals between sensory events. We discuss a possible link between this time-stamping mechanism and the conversion of egocentric to allocentric coordinates. Heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilar neuron function in the hippocampus.
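As a minimal illustration of the mechanism in this abstract (an editorial sketch with illustrative parameters, not the speaker's model): a leaky integrate-and-fire neuron with a spike-triggered adaptation current produces negative serial correlations between adjacent interspike intervals, because a long interval lets the adaptation variable decay further, shortening the next interval.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, T = 0.01, 12000.0          # time step and total duration (membrane tau = 1)
mu, sigma = 1.2, 0.2           # suprathreshold mean drive and noise amplitude
tau_a, da = 5.0, 0.05          # adaptation time constant and spike increment

v, a = 0.0, 0.0
spike_times = []
n_steps = int(T / dt)
noise = rng.standard_normal(n_steps) * sigma * np.sqrt(dt)
for i in range(n_steps):
    v += dt * (mu - v - a) + noise[i]   # membrane dynamics with adaptation
    a += dt * (-a / tau_a)              # adaptation decays between spikes
    if v >= 1.0:                        # threshold crossing: spike and reset
        spike_times.append(i * dt)
        v = 0.0
        a += da                         # spike-triggered adaptation step

isi = np.diff(spike_times)
# Lag-1 serial correlation between adjacent interspike intervals:
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]
print(f"{len(isi)} ISIs, lag-1 serial correlation = {rho1:.3f}")
```

With the adaptation time constant longer than the mean interval, the lag-1 correlation comes out negative, the signature of temporal patterning discussed in the talk.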
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances, sounds - such as moving cars, sirens, and people talking - are perceived in unison and recognized collectively as a single sound (e.g., city noise). In other instances, such as the cocktail party problem, multiple sounds compete for attention, so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as those of a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by the summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, and which neural codes contribute to their perception, however, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies in human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct sound statistics, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics differentially contribute to a variety of auditory tasks, including texture recognition, discrimination, and categorization.
Finally, I will show examples from our recent studies of how high-order sound statistics and the accompanying neural activity underlie difficulties in recognizing speech in background noise.
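One of the summary statistics in McDermott & Simoncelli (2011)-style texture models is the correlation between amplitude envelopes of different frequency bands. A minimal sketch of that statistic, on synthetic data (the sampling rate, modulation rate, and crude rectify-and-smooth envelope estimator are all illustrative assumptions): two independent noise carriers sharing one slow common envelope yield strongly correlated band envelopes.

```python
import numpy as np

rng = np.random.default_rng(1)

fs, dur = 8000, 4.0                       # sample rate (Hz), duration (s)
n = int(fs * dur)
t = np.arange(n) / fs

# Slow, positive common envelope (3 Hz amplitude modulation):
env = 1.0 + 0.8 * np.sin(2 * np.pi * 3.0 * t)
band1 = env * rng.standard_normal(n)      # band 1: carrier noise * envelope
band2 = env * rng.standard_normal(n)      # band 2: independent carrier

def envelope(x, win=400):
    """Crude envelope estimate: rectify, then smooth with a moving average."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

# Cross-band envelope correlation, one texture summary statistic:
rho = np.corrcoef(envelope(band1), envelope(band2))[0, 1]
print(f"cross-band envelope correlation = {rho:.2f}")
```

For two fully independent textures (no shared envelope) this statistic would sit near zero, which is what makes it useful for texture recognition and discrimination.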
Multi-scale synaptic analysis for psychiatric/emotional disorders
Dysregulation of emotional processing and of its integration with cognitive functions is a central feature of many mental/emotional disorders, associated both with externalizing problems (aggressive, antisocial behaviors) and internalizing problems (anxiety, depression). As Dr. Joseph LeDoux, an invited speaker of this program, wrote in his well-known book "Synaptic Self: How Our Brains Become Who We Are", the brain's synapses are the channels through which we think, act, imagine, feel, and remember; synapses encode the essence of personality, enabling each of us to function as a distinctive, integrated individual from moment to moment. Exploring the functioning of synapses thus leads to an understanding of the mechanisms of (patho)physiological brain function. In this context, we have investigated the pathophysiology of psychiatric disorders, with particular emphasis on synaptic function in mouse models of various psychiatric disorders such as schizophrenia, autism, depression, and PTSD. Our current interest is in how synaptic inputs are integrated to generate action potentials: the spatiotemporal organization of neuronal firing is crucial for information processing, but how the thousands of inputs onto dendritic spines drive firing remains a central question in neuroscience. We identified a distinct pattern of synaptic integration in disease-related models, in which extra-large (XL) spines generate NMDA spikes that are sufficient to drive neuronal firing. We observed, both experimentally and theoretically, that XL spines correlate negatively with working memory. Our work offers a new concept for dendritic computation and network dynamics and may prompt a substantial reconsideration of psychiatric research. The second half of my talk concerns the development of a novel synaptic tool.
No matter how beautifully we can visualize spine morphology and how accurately we can quantify synaptic integration, the links between synapse and brain function remain correlational. To probe the causal relationship between synapse and brain function, we established AS-PaRac1, a probe that can specifically label and manipulate recently potentiated dendritic spines (Hayashi-Takagi et al., 2015, Nature). Using AS-PaRac1, we developed activity-dependent simultaneous labeling of presynaptic boutons and potentiated spines to establish "functional connectomics" at synaptic resolution. Applying this imaging method to PTSD model mice, we identified a previously unknown functional neural circuit, spanning brain regions A→B→C, with a very strong signal-to-noise ratio. This novel "functional connectomics" tool and its photo-manipulation could open up new areas of emotional/psychiatric research and, by extension, shed light on the neural networks that determine who we are.
Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners
Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior. We discuss implications and possible avenues for future studies.
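The partial-correlation networks and node-strength measures in this abstract can be sketched in a few lines. This is an editorial illustration, not the study's pipeline: the study used LASSO-regularized (sparse) networks, whereas here the sample covariance is inverted directly, which is only sensible when observations far outnumber variables; the simulated "ability scores" are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate 6 "cognitive ability" scores driven by two latent factors:
n_obs, n_var = 800, 6
latent = rng.standard_normal((n_obs, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, n_var))
X = latent @ loadings + 0.5 * rng.standard_normal((n_obs, n_var))

# Partial correlations from the precision (inverse covariance) matrix:
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
pcorr = -P / np.outer(d, d)
np.fill_diagonal(pcorr, 1.0)

# Node strength: summed absolute partial correlation with all other nodes.
strength = np.abs(pcorr).sum(axis=0) - 1.0
print("node strengths:", np.round(strength, 2))
```

Each off-diagonal entry of `pcorr` is the association between two variables after conditioning on all others, which is what network models of intelligence take as edges; centrality measures such as strength are then computed on that edge matrix.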
Neuronal variability and spatiotemporal dynamics in cortical network models
Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.
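The chaotic rate dynamics mentioned in this abstract can be illustrated with the classic random rate network, dx/dt = -x + g J tanh(x) (Sompolinsky, Crisanti & Sommers 1988): below the critical coupling (g < 1) activity decays to the quiescent fixed point, while above it (g > 1) the network sustains chaotic fluctuations. This textbook sketch is an editorial illustration, not the speaker's spatially structured spiking model.

```python
import numpy as np

def simulate(g, n=200, dt=0.05, steps=1000, seed=3):
    """Euler-integrate dx/dt = -x + g * J @ tanh(x) and return final std."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) / np.sqrt(n)   # random coupling matrix
    x = rng.standard_normal(n)                     # random initial condition
    for _ in range(steps):
        x += dt * (-x + g * (J @ np.tanh(x)))
    return np.std(x)                               # activity spread at T = 50

print("g = 0.5 -> final activity std:", simulate(0.5))   # decays toward zero
print("g = 1.5 -> final activity std:", simulate(1.5))   # sustained fluctuations
```

Spatial structure in the coupling, as in the abstract, shapes which directions of this sustained variability are shared across the population, rather than whether fluctuations persist at all.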
Recurrent network dynamics lead to interference in sequential learning
Learning in real life is often sequential: a learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease performance on task A. Such interference has been observed in both experimental and machine learning studies. In the latter case, it is mediated by correlations between the weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need not only to transform the input but also to generate substantial internal dynamics. Here we illuminate the role of internal dynamics in interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven versus autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: for task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern; for task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed-point task reveals the underlying mechanism: random connectivity strongly accelerates the new-formation mode of learning while having less effect on rotation. The former thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
A metabolic function of the hippocampal sharp wave-ripple
The hippocampal formation has been implicated in both cognitive functions and the sensing and control of endocrine states. To identify a candidate activity pattern that may link these disparate functions, we simultaneously measured electrophysiological activity from the hippocampus and interstitial glucose concentrations in the body of freely behaving rats. We found that clusters of sharp wave-ripples (SPW-Rs) recorded from both dorsal and ventral hippocampus reliably predicted a decrease in peripheral glucose concentrations within ~10 minutes. This correlation was largely independent of circadian, ultradian, and meal-triggered fluctuations; it could be mimicked with optogenetically induced ripples and was attenuated by pharmacogenetically suppressing activity of the lateral septum, the major conduit between the hippocampus and subcortical structures. Our findings demonstrate a novel function of the SPW-R in modulating peripheral glucose homeostasis and offer a mechanism for the link between sleep disruption and the blood glucose dysregulation seen in type 2 diabetes and obesity.
Neural circuit parameter variability, robustness, and homeostasis
Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
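A toy illustration of how parameter correlations give structure to a solution space (an editorial sketch with an invented functional criterion, not any specific circuit model): sample two "maximal conductances" at random and call a parameter set functional when their ratio lies in a fixed band. The full parameter cloud is uncorrelated, but the functional subset is strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(4)

# Candidate parameter sets: two conductances sampled independently.
g1, g2 = rng.uniform(0.1, 10.0, size=(2, 20000))

# Crude "solution space": functional output requires a balanced ratio.
functional = (g1 / g2 > 0.8) & (g1 / g2 < 1.25)

r_all = np.corrcoef(g1, g2)[0, 1]                  # near zero by construction
r_sol = np.corrcoef(g1[functional], g2[functional])[0, 1]
print(f"correlation, all samples: {r_all:.2f}; within solution space: {r_sol:.2f}")
```

This mirrors the logic of computational parameter-space explorations: correlations measured across functional solutions reflect the shape of the solution space, not correlations in how parameters were sampled.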
Values Encoded in Orbitofrontal Cortex Are Causally Linked to Economic Choices
Classical economists proposed that economic choices rely on the computation and comparison of subjective values. This hypothesis continues to inform economic theory and experimental research, but behavioral measures alone cannot prove it. Consistent with the hypothesis, when agents make choices, neurons in the orbitofrontal cortex (OFC) encode the subjective value of offered and chosen goods. Moreover, neuronal activity in this area suggests the formation of a decision. However, it has been unclear whether these neural processes are causally related to choices; more generally, the evidence linking choices to value signals in the brain has remained correlational. In my talk, I will present recent results showing that neuronal activity in the OFC is causally linked to economic choices.
Correlations, chaos, and criticality in neural networks
The remarkable information-processing properties of biological and artificial neuronal networks alike arise from the interaction of large numbers of neurons. A central quest is thus to characterize their collective states. Moreover, the directed coupling between pairs of neurons and their continuous dissipation of energy push the dynamics of neuronal networks out of thermodynamic equilibrium. Tools from non-equilibrium statistical mechanics and field theory are thus instrumental for obtaining a quantitative understanding. Here we present progress with this recent approach [1]. On the experimental side, we show how correlations between pairs of neurons are informative about the dynamics of cortical networks: they are poised near a transition to chaos [2]. Close to this transition, we find prolonged sequential memory for past signals [3]. In the chaotic regime, networks offer representations of information whose dimensionality expands with time; we show how this mechanism aids classification performance [4]. Together, these works illustrate the fruitful interplay between theoretical physics, neuronal networks, and neural information processing.
What about antibiotics for the treatment of the dyskinesia induced by L-DOPA?
L-DOPA-induced dyskinesia (LID) is a debilitating adverse effect of treating Parkinson’s disease with this drug. New therapeutic approaches that prevent or attenuate this side effect are clearly needed. Adult male Wistar rats with 6-hydroxydopamine-induced unilateral medial forebrain bundle lesions were treated with L-DOPA (oral or subcutaneous, 20 mg kg-1) once a day for 14 days. After this period, we tested whether doxycycline (40 mg kg-1, intraperitoneal, a subantimicrobial dose) and COL-3 (50 and 100 nmol, intracerebroventricular) could reverse LID. In an additional experiment, doxycycline was also administered repeatedly with L-DOPA to verify whether it would prevent LID development. A single injection of doxycycline or COL-3 together with L-DOPA attenuated the dyskinesia, and co-treatment with doxycycline from the first day of L-DOPA suppressed its onset. The improved motor responses to L-DOPA remained intact in the presence of doxycycline or COL-3, indicating preservation of the benefits produced by L-DOPA. Doxycycline treatment was associated with decreased immunoreactivity of FosB, cyclooxygenase-2, the astroglial protein GFAP, and the microglial marker OX-42, all of which are elevated in the basal ganglia of rats exhibiting dyskinesia. Doxycycline also decreased metalloproteinase-2/-9 activity, metalloproteinase-3 expression, and reactive oxygen species production. Metalloproteinase-2/-9 activity and reactive oxygen species production in the basal ganglia of dyskinetic rats correlated significantly with the intensity of dyskinesia. The present study demonstrates the anti-dyskinetic potential of doxycycline and its analog COL-3 in hemiparkinsonian rats. Given the long-established and safe clinical use of doxycycline, these drugs might be tested to reduce or prevent L-DOPA-induced dyskinesia in Parkinson’s patients.
Is it Autism or Alexithymia? Explaining atypical socioemotional processing
Emotion processing is thought to be impaired in autism and linked to atypical visual exploration of, and arousal modulation to, others' faces and gaze, yet the evidence is equivocal. We propose that, where observed, atypical socioemotional processing is due to alexithymia, a distinct but frequently co-occurring condition that affects emotional self-awareness and interoception. In Study 1 (N = 80), we tested this hypothesis by studying the spatio-temporal dynamics and entropy of eye gaze during emotion processing tasks. Evidence from traditional and novel methods revealed that atypical eye gaze and emotion recognition are best predicted by alexithymia in both autistic and non-autistic individuals. In Study 2 (N = 70), we assessed interoceptive and autonomic signals implicated in socioemotional processing and found evidence for alexithymia-driven (not autism-driven) effects on gaze and on arousal modulation to emotions. We also conducted two large-scale studies (N = 1300) using confirmatory factor-analytic and network modelling, and found evidence that alexithymia and autism are distinct both at the latent level and in their intercorrelations. We argue that: 1) models of socioemotional processing in autism should conceptualise difficulties as intrinsic to alexithymia, and 2) assessment of alexithymia is crucial for diagnosis and personalised interventions in autism.
Cross-correlation–response relation for spike-driven neurons
Bernstein Conference 2024
Exploring behavioral correlations with neuron activity through synaptic plasticity.
Bernstein Conference 2024
Gradient and network structure of lagged correlations in band-limited cortical dynamics
Bernstein Conference 2024
Correlation-based motion detectors in olfaction enable turbulent plume navigation
COSYNE 2022
Frontal cortex neural correlations are reduced in the transformation to movement
COSYNE 2022
Input correlations impede suppression of chaos and learning in balanced rate networks
COSYNE 2022
Normative models of spatio-spectral decorrelation in natural scenes predict experimentally observed ratio of PR types
COSYNE 2022
Information correlations reduce the accuracy of pioneering normative decision makers
COSYNE 2023
Learning beyond the synapse: activity-dependent myelination, neural correlations, and information transfer
COSYNE 2023
Rapid fluctuations in multi-scale correlations of cortical networks encode spontaneous behavior
COSYNE 2023
Reduced correlations in spontaneous activity amongst CA1 engram cells
COSYNE 2023
A simple method to improve regression and correlation coefficient estimates with noisy neural data
COSYNE 2023
Tuned inhibition explains strong correlations across segregated excitatory subnetworks
COSYNE 2023
From Chaos to Coherence: Impact of High-Order Correlations on Neural Dynamics
COSYNE 2025
Glial ensheathment of inhibitory synapses drives hyperactivity and increases correlations
COSYNE 2025
Humans can use positive and negative spectrotemporal correlations to detect rising and falling pitch
COSYNE 2025
Quantification of nonsense-free correlation uncovers the interaction between top-down and bottom-up sources of behavioral correlation in mouse V1
COSYNE 2025
State-dependent mapping of correlations of subthreshold to spiking activity is expansive in L1 inhibitory circuits
COSYNE 2025
Brain-wide effects of cannabinoids, measured by functional ultrasound imaging, show strong correlation with CB1R activation and behavior in awake mice
FENS Forum 2024
Correlation between neuroprotection in a tauopathy environment following trauma and delayed astrogliosis in a cohort study
FENS Forum 2024
A correlation study between brain lesions and severity of the epileptic syndrome in the pilocarpine model of temporal lobe epilepsy
FENS Forum 2024
Correlation between motoneuronal survival and VEGF expression in brainstem motoneurons in the SOD1 ALS murine model
FENS Forum 2024
Correlations of molecularly defined cortical interneuron populations with morpho-electric properties
FENS Forum 2024
Correlations in neuromodulatory codes during different learning processes
FENS Forum 2024
Excitatory-inhibitory balance assessed by aperiodic component and its correlation with paired-pulse inhibition in the primary somatosensory cortex: An MEG study
FENS Forum 2024
Excitatory and inhibitory synapses show a tight subcellular correlation that weakens over development
FENS Forum 2024
A novel correlation-based method for quantifying membrane-associated periodic skeleton periodicity
FENS Forum 2024
Oscillatory waveform shape and temporal spike correlations differ in the bat frontal and auditory cortices
FENS Forum 2024
Study of the effect of positive and negative correlations on functional connectivity disruption in MCI
Neuromatch 5