
Correlations

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with correlations across World Wide.
55 curated items · 36 Seminars · 19 ePosters
Updated 10 months ago
Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience and Hertie Institute for Clinical Brain Research, University of Tübingen
Feb 11, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 28, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
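The link between manifold geometry and linear separability can be illustrated with a toy experiment (all numbers hypothetical, not from the talk): random point clouds standing in for object manifolds are linearly separable under random labels when they are compact, and separability collapses as their radius grows.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, M = 50, 20, 10  # ambient dimension, number of "manifolds", points per manifold

def separability(radius, epochs=500):
    """Train a perceptron on P point clouds with random +/-1 labels
    and return its final training accuracy."""
    centers = rng.standard_normal((P, N))
    labels = rng.choice([-1.0, 1.0], size=P)
    X = np.repeat(centers, M, axis=0) + radius * rng.standard_normal((P * M, N)) / np.sqrt(N)
    y = np.repeat(labels, M)
    w, b = np.zeros(N), 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified -> perceptron update
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:  # converged: a linear separator exists
            break
    return float(np.mean(np.sign(X @ w + b) == y))

acc_tight = separability(radius=0.1)   # compact manifolds: linearly separable
acc_loose = separability(radius=20.0)  # inflated manifolds: separability breaks down
```

The radius here is only one of the geometric quantities (radius, dimensionality, correlations) that the talk's theory treats systematically.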

Seminar · Neuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 5, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
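The orthogonalization result suggests a simple population-level measure: cosine similarity between population vectors for corresponding positions on the two tracks. A minimal sketch, modeling "learning" as a single Gram-Schmidt decorrelation step (a stand-in for illustration, not the study's Hebbian network):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 1000

# Population vectors for the same position on two similar tracks, early in
# learning: highly overlapping activity patterns.
track_a = rng.standard_normal(n_cells)
track_b = track_a + 0.3 * rng.standard_normal(n_cells)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cos_early = cosine(track_a, track_b)

# Model the end point of learning as orthogonalization: remove from track_b
# the component shared with track_a (one Gram-Schmidt step).
track_b_late = track_b - (track_b @ track_a) / (track_a @ track_a) * track_a
cos_late = cosine(track_a, track_b_late)
```

Tracking this similarity over sessions is one way to quantify the progressive decorrelation the abstract describes.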

Seminar · Neuroscience

In vivo direct imaging of neuronal activity at high temporospatial resolution

Jang-Yeon Park
Sungkyunkwan University, Suwon, Korea
Jun 27, 2023

Advanced noninvasive neuroimaging methods provide valuable information on brain function, but they have obvious pros and cons in terms of temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) using the blood-oxygenation-level-dependent (BOLD) effect provides good spatial resolution on the order of millimeters, but has poor temporal resolution on the order of seconds due to slow hemodynamic responses to neuronal activation, providing only indirect information on neuronal activity. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but spatial information is limited to centimeter scales. Therefore, there has been a longstanding demand for noninvasive brain imaging methods capable of detecting neuronal activity at both high temporal and spatial resolution. In this talk, I will introduce a novel approach that enables Direct Imaging of Neuronal Activity (DIANA) using MRI, which can dynamically image neuronal spiking activity with millisecond precision, achieved by a data-acquisition scheme of rapid 2D line scans synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4T animal scanner during electrical whisker-pad stimulation. DIANA with millisecond temporal resolution showed high correlations with neuronal spike activity, and could also capture the sequential propagation of neuronal activity along the thalamocortical pathway of brain networks. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was sensitive to changes in membrane-potential-associated tissue relaxation times such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.

Seminar · Neuroscience · Recording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

Seminar · Psychology

The speaker identification ability of blind and sighted listeners

Almut Braun
Bundeskriminalamt, Wiesbaden
Feb 21, 2023

Previous studies have shown that blind individuals outperform sighted controls in a variety of auditory tasks; however, only a few studies have investigated blind listeners’ speaker identification abilities. In addition, existing studies in the area show conflicting results. The presented empirical investigation with 153 blind (74 of them congenitally blind) and 153 sighted listeners is the first of its kind and scale in which long-term memory effects on blind listeners’ speaker identification abilities are examined. For the empirical investigation, all listeners were evenly assigned to one of nine subgroups (3 x 3 design) in order to investigate the influence of two parameters, each with three levels, on blind and sighted listeners’ speaker identification performance. The parameters were a) time interval, i.e. a time interval of 1, 3 or 6 weeks between the first exposure to the voice to be recognised (familiarisation) and the speaker identification task (voice lineup); and b) signal quality, i.e. voice recordings were presented in either studio quality, mobile-phone quality or as recordings of whispered speech. Half of the presented voice lineups were target-present lineups in which the previously heard target voice was included. The other half consisted of target-absent lineups which contained solely distractor voices. Blind individuals outperformed sighted listeners only under studio-quality conditions. Furthermore, for blind and sighted listeners no significant performance differences were found with regard to the three investigated time intervals of 1, 3 and 6 weeks. Blind as well as sighted listeners were significantly better at picking the target voice from target-present lineups than at indicating that the target voice was absent in target-absent lineups. Within the blind group, no significant correlations were found between identification performance and onset or duration of blindness. Implications for the field of forensic phonetics are discussed.

Seminar · Psychology

How do visual abilities relate to each other?

Simona Garobbio
EPFL
Dec 6, 2022

In vision, there is, surprisingly, very little evidence of common factors. Most studies have found only weak correlations between performance in different visual tests: a participant who performs well in one test is not more likely to perform well in another. Likewise in ageing, cross-sectional studies have repeatedly shown that older adults exhibit deteriorated performance in most visual tests compared to young adults; however, within the older population there is no evidence for a common factor underlying visual abilities. To investigate the decline of visual abilities further, we performed a longitudinal study in which a battery of nine visual tasks was administered three times: at baseline and in two re-tests after about 4 and 7 years. Most visual abilities were rather stable across the 7 years, with the exception of visual acuity. I will discuss possible causes of these paradoxical outcomes.

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
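A crude baseline for the across-subject comparison described here (not the talk's actual shape metric) is to correlate the off-diagonal entries of two subjects' trial-by-trial noise correlation matrices; shared low-dimensional noise structure then shows up as a high correlation, while unstructured noise does not:

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 30, 2000
loadings = rng.standard_normal((n_neurons, 2))  # shared low-dim noise structure

def noise_corr(shared):
    """Trial-by-trial responses to a repeated stimulus; if `shared`, the
    trial-to-trial fluctuations live on a common low-dimensional subspace."""
    latent = rng.standard_normal((n_trials, 2))
    private = rng.standard_normal((n_trials, n_neurons))
    resp = private + (latent @ loadings.T if shared else 0.0)
    return np.corrcoef(resp.T)

def offdiag_similarity(c1, c2):
    """Pearson correlation of the upper-triangular (pairwise) entries."""
    iu = np.triu_indices_from(c1, k=1)
    return float(np.corrcoef(c1[iu], c2[iu])[0, 1])

subj_a, subj_b = noise_corr(True), noise_corr(True)  # same correlation geometry
subj_c = noise_corr(False)                           # unstructured noise

r_shared = offdiag_similarity(subj_a, subj_b)
r_control = offdiag_similarity(subj_a, subj_c)
```

The framework in the talk goes well beyond this, but the sketch shows why noise-correlation geometry is a measurable feature of across-subject variation.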

Seminar · Neuroscience

An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals

Nestor Parga
Autonomous University of Madrid (Universidad Autónoma de Madrid), Spain
Jun 7, 2022

Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments. This provides us with another valuable tool with which to investigate the network mechanisms responsible for the biases and how behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current trial and previous trials, one of the mechanisms giving rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled as an ideal observer that during the delay period sees a combination of the current and the previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
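The ideal-observer account of the contraction bias has a standard closed form: with a Gaussian prior over durations and Gaussian measurement noise, the posterior mean is a precision-weighted average that shrinks estimates toward the prior mean. A sketch with hypothetical parameter values:

```python
# Gaussian prior over interval durations and Gaussian measurement noise
# (all values hypothetical, for illustration only).
mu_prior, sigma_prior = 600.0, 150.0  # prior mean and s.d. over durations (ms)
sigma_meas = 100.0                    # measurement noise s.d. (ms)

def posterior_mean(x):
    """Bayes-optimal duration estimate: precision-weighted average of the
    noisy measurement x and the prior mean."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * x + (1 - w) * mu_prior

short_est = posterior_mean(300.0)  # overestimated: pulled up toward 600 ms
long_est = posterior_mean(900.0)   # underestimated: pulled down toward 600 ms
```

Short intervals are overestimated and long ones underestimated, which is exactly the contraction pattern the trained networks reproduce.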

Seminar · Neuroscience · Recording

Computation in the neuronal systems close to the critical point

Anna Levina
Universität Tübingen
Apr 28, 2022

It has long been hypothesized that natural systems might take advantage of the extended temporal and spatial correlations close to the critical point to improve their computational capabilities. On the other hand, varying distances to criticality have been inferred from recordings of nervous systems. In my talk, I discuss how including additional constraints on processing time can shift the optimal operating point of recurrent networks. Moreover, data from the visual cortex of monkeys during an attentional task indicate that they flexibly change how close the local activity is to the critical point. Overall, this suggests that, as common sense would lead us to expect, the optimal state depends on the task at hand, and the brain adapts to it in a local and fast manner.
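The extended temporal correlations near criticality can be seen in a minimal driven branching process (a generic toy, not the networks of the talk): the lag-1 autocorrelation of activity equals the branching ratio m, so correlation times diverge as m approaches 1:

```python
import numpy as np

rng = np.random.default_rng(3)

def lag1_autocorr(m, h=1.0, T=50000):
    """Driven branching process: each active unit triggers Poisson(m) units
    in the next step, plus Poisson(h) external drive. The lag-1
    autocorrelation of activity approaches 1 as m -> 1 (the critical point)."""
    a = np.empty(T)
    a[0] = h / (1 - m)  # start near the stationary mean
    for t in range(1, T):
        a[t] = rng.poisson(m * a[t - 1]) + rng.poisson(h)
    x = a - a.mean()
    return float((x[:-1] * x[1:]).mean() / x.var())

ac_near_critical = lag1_autocorr(m=0.98)  # long correlation time
ac_subcritical = lag1_autocorr(m=0.60)    # fast-decaying correlations
```

The corresponding autocorrelation time, tau = -1/ln(m), grows without bound as m approaches 1, which is the resource the critical-computation hypothesis exploits.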

Seminar · Neuroscience · Recording

A transcriptomic axis predicts state modulation of cortical interneurons

Stephane Bugeon
Harris & Carandini's lab, UCL
Apr 26, 2022

Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously-defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses, with stimuli suppressing cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states have less axon in layer 1, narrower spikes, lower input resistance and weaker adaptation as determined in vitro, and express more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions that are reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, thus a high speed by which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. 
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
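The largest Lyapunov exponent mentioned above can be estimated numerically with the standard two-trajectory renormalization method. A sketch for a generic rate network dx/dt = -x + J tanh(x), with J_ij ~ N(0, g^2/N); parameter values are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

def largest_lyapunov(g, N=200, dt=0.1, transient=2000, steps=4000, d0=1e-8):
    """Estimate the largest Lyapunov exponent of dx/dt = -x + J tanh(x)
    by tracking the divergence of an infinitesimally perturbed copy,
    renormalizing the separation to d0 at every step."""
    J = rng.standard_normal((N, N)) * g / np.sqrt(N)
    step = lambda x: x + dt * (-x + J @ np.tanh(x))  # Euler integration
    x = rng.standard_normal(N)
    for _ in range(transient):  # relax onto the attractor
        x = step(x)
    u = rng.standard_normal(N)
    y = x + d0 * u / np.linalg.norm(u)
    log_growth = 0.0
    for _ in range(steps):
        x, y = step(x), step(y)
        d = np.linalg.norm(y - x)
        log_growth += np.log(d / d0)
        y = x + (d0 / d) * (y - x)  # renormalize the separation
    return log_growth / (steps * dt)

lam_chaotic = largest_lyapunov(g=2.0)  # g > 1: chaotic, lambda > 0
lam_stable = largest_lyapunov(g=0.5)   # g < 1: stable fixed point, lambda < 0
```

The full Lyapunov spectrum used in the talk requires evolving and re-orthonormalizing a whole basis of perturbations, but the sign of the largest exponent already diagnoses chaos.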

Seminar · Psychology

Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data

Anna Bobak & Alex Jones
University of Stirling & Swansea University
Jan 19, 2022

Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and the laboratory tests selected vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) the face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, the relationships between them, and the consistency in participants’ performance, and used data-driven approaches to determine the factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency in participant performance across tasks is low, and performance can be broadly split into two factors: telling faces together and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors, routinely assess reliability, and examine newly developed tests of face cognition in the context of convergent validity with other commonly used measures of face cognition ability.
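Routinely assessing reliability, as recommended here, is straightforward in practice. A sketch of split-half reliability with the Spearman-Brown correction, on simulated (hypothetical) test scores:

```python
import numpy as np

rng = np.random.default_rng(5)
n_subj, n_trials = 100, 40

# Hypothetical test scores: stable subject ability plus large trial-level noise.
ability = rng.standard_normal(n_subj)
scores = ability[:, None] + 2.0 * rng.standard_normal((n_subj, n_trials))

# Split-half reliability: correlate per-subject means over even vs. odd trials,
# then apply the Spearman-Brown correction for the full test length.
half_a = scores[:, 0::2].mean(axis=1)
half_b = scores[:, 1::2].mean(axis=1)
r_half = float(np.corrcoef(half_a, half_b)[0, 1])
r_full = 2 * r_half / (1 + r_half)
```

Low reliability of this kind places a hard ceiling on the correlations observable between any two tests, which is one reason weak between-test correlations must be interpreted cautiously.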

Seminar · Neuroscience · Recording

Does human perception rely on probabilistic message passing?

Alex Hyafil
CRM, Barcelona
Dec 21, 2021

The idea that perception in humans relies on some form of probabilistic computations has become very popular over the last decades. It has been extremely difficult however to characterize the extent and the nature of the probabilistic representations and operations that are manipulated by neural populations in the human cortex. Several theoretical works suggest that probabilistic representations are present from low-level sensory areas to high-level areas. According to this view, the neural dynamics implements some forms of probabilistic message passing (i.e. neural sampling, probabilistic population coding, etc.) which solves the problem of perceptual inference. Here I will present recent experimental evidence that human and non-human primate perception implements some form of message passing. I will first review findings showing probabilistic integration of sensory evidence across space and time in primate visual cortex. Second, I will show that the confidence reports in a hierarchical task reveal that uncertainty is represented both at lower and higher levels, in a way that is consistent with probabilistic message passing both from lower to higher and from higher to lower representations. Finally, I will present behavioral and neural evidence that human perception takes into account pairwise correlations in sequences of sensory samples in agreement with the message passing hypothesis, and against standard accounts such as accumulation of sensory evidence or predictive coding.

Seminar · Neuroscience · Recording

Inferring informational structures in neural recordings of drosophila with epsilon-machines

Roberto Muñoz
Monash University
Dec 9, 2021

Measuring the degree of consciousness an organism possesses has remained a longstanding challenge in Neuroscience. In part, this is due to the difficulty of finding the appropriate mathematical tools for describing such a subjective phenomenon. Current methods relate the level of consciousness to the complexity of neural activity, i.e., using the information contained in a stream of recorded signals they can tell whether the subject might be awake, asleep, or anaesthetised. Usually, the signals stemming from a complex system are correlated in time; the behaviour of the future depends on the patterns in the neural activity of the past. However, these past-future relationships remain either hidden to, or not taken into account in, the current measures of consciousness. These past-future correlations are likely to contain more information and thus can reveal a richer understanding of the behaviour of complex systems like a brain. Our work employs the "epsilon-machines" framework to account for the time correlations in neural recordings. In a nutshell, epsilon-machines reveal how much of the past neural activity is needed in order to accurately predict how the activity in the future will behave, and this is summarised in a single number called "statistical complexity". If a lot of past neural activity is required to predict the future behaviour, can we then say that the brain was more "awake" at the time of recording? Furthermore, if we read the recordings in reverse, does the difference between forward and reverse-time statistical complexity allow us to quantify the level of time asymmetry in the brain? Neuroscience predicts that there should be a degree of time asymmetry in the brain. However, this has never been measured. To test this, we used neural recordings measured from the brains of fruit flies and inferred the epsilon-machines.
We found that the nature of the past and future correlations of neural activity in the brain drastically changes depending on whether the fly was awake or anaesthetised. Not only does our study find that wakeful and anaesthetised fly brains are distinguished by how statistically complex they are, but also that the amount of correlations in wakeful fly brains was much more sensitive to whether the neural recordings were read forward vs. backwards in time, compared to anaesthetised brains. In other words, wakeful fly brains were more complex and more time-asymmetric than anaesthetised ones.
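The core idea behind statistical complexity can be sketched with a crude causal-state reconstruction (far simpler than the inference used in the study): group pasts by their empirical predictive distribution and take the entropy of the resulting states. An i.i.d. biased coin is random yet has a single causal state (zero complexity), while a two-state Markov chain needs two:

```python
import numpy as np

rng = np.random.default_rng(6)

def statistical_complexity(seq, k=1, tol=0.05):
    """Crude causal-state reconstruction for a binary sequence: group
    length-k pasts whose empirical next-symbol distributions agree within
    `tol`, then return the entropy (bits) of the state occupation."""
    counts = {}  # past -> [count of next=0, count of next=1]
    for t in range(k, len(seq)):
        past = tuple(seq[t - k:t])
        n = counts.setdefault(past, [0, 0])
        n[seq[t]] += 1
    states = []  # each state: [P(next=1), total occupancy]
    for n0, n1 in counts.values():
        p1 = n1 / (n0 + n1)
        for s in states:
            if abs(s[0] - p1) < tol:  # same predictive distribution: merge
                s[1] += n0 + n1
                break
        else:
            states.append([p1, n0 + n1])
    p = np.array([s[1] for s in states], dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

T = 20000
coin = (rng.random(T) < 0.7).astype(int)  # i.i.d. biased coin: 1 causal state
markov = np.empty(T, dtype=int)
markov[0] = 0
p_one = {0: 0.9, 1: 0.2}  # asymmetric two-state Markov chain
for t in range(1, T):
    markov[t] = int(rng.random() < p_one[markov[t - 1]])

c_coin = statistical_complexity(coin)      # ~0 bits: random but structurally simple
c_markov = statistical_complexity(markov)  # ~1 bit: two causal states
```

This separation of structure from randomness is what distinguishes statistical complexity from entropy-based complexity measures of consciousness.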

Seminar · Neuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 8, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
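The model's central ingredient, calcium transients driven by near-synchronous pre/post spikes, can be illustrated by counting same-bin coincidences in toy spike trains (rates and bin width are hypothetical): correlations between the trains boost the coincidence rate far above what the product of the firing rates predicts:

```python
import numpy as np

rng = np.random.default_rng(7)
dt, T = 0.001, 1000.0  # 1 ms bins, 1000 s of simulated activity
n_bins = int(T / dt)
rate = 5.0             # pre- and postsynaptic firing rate (Hz)

def coincidences(shared_rate):
    """Binary pre/post spike trains with the same marginal rate; `shared_rate`
    Hz of events are injected into the same bin of both trains, producing
    correlated coactivation."""
    shared = rng.random(n_bins) < shared_rate * dt
    pre = shared | (rng.random(n_bins) < (rate - shared_rate) * dt)
    post = shared | (rng.random(n_bins) < (rate - shared_rate) * dt)
    return int(np.sum(pre & post))  # same-bin coincidences ~ calcium events

n_indep = coincidences(0.0)  # chance level: ~ rate^2 * dt * T = 25 events
n_corr = coincidences(2.0)   # correlated trains: ~ 2 Hz * T >> chance
```

This is why, in the shot-noise analysis, both high firing rates (via the rate product) and temporal correlations (via the shared events) can drive substantial efficacy changes.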

Seminar · Neuroscience

Individual differences in visual (mis)perception: a multivariate statistical approach

Aline Cretenoud
Laboratory of Psychophysics, BMI, SV, EPFL
Dec 7, 2021

Common factors are omnipresent in everyday life, e.g., it is widely held that there is a common factor g for intelligence. In vision, however, there seems to be a multitude of specific factors rather than a strong and unique common factor. In my thesis, I first examined the multidimensionality of the structure underlying visual illusions. To this end, the susceptibility to various visual illusions was measured. In addition, subjects were tested with variants of the same illusion, which differed in spatial features, luminance, orientation, or contextual conditions. Only weak correlations were observed between the susceptibility to different visual illusions. An individual showing a strong susceptibility to one visual illusion does not necessarily show a strong susceptibility to other visual illusions, suggesting that the structure underlying visual illusions is multifactorial. In contrast, there were strong correlations between the susceptibility to variants of the same illusion. Hence, factors seem to be illusion-specific but not feature-specific. Second, I investigated whether a strong visual factor emerges in healthy elderly and patients with schizophrenia, which may be expected from the general decline in perceptual abilities usually reported in these two populations compared to healthy young adults. Similarly, a strong visual factor may emerge in action video gamers, who often show enhanced perceptual performance compared to non-video gamers. Hence, healthy elderly, patients with schizophrenia, and action video gamers were tested with a battery of visual tasks, such as a contrast detection and orientation discrimination task. As in control groups, between-task correlations were weak in general, which argues against the emergence of a strong common factor for vision in these populations.
While similar tasks are usually assumed to rely on similar neural mechanisms, performance in different visual tasks was only weakly correlated, i.e., performance does not generalize across visual tasks. These results highlight the relevance of an individual differences approach to unravel the multidimensionality of the visual structure.
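The weak between-illusion versus strong within-illusion pattern described above is, at bottom, a statement about a correlation matrix over subjects. A minimal sketch with simulated data (all numbers and names hypothetical, not from the thesis), assuming variants of one illusion share an illusion-specific factor while a second illusion does not:

```python
import random
import math

def pearson(x, y):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
n_subjects = 100
# Hypothetical susceptibilities: two variants of one illusion load on a
# shared illusion-specific factor; a different illusion does not.
factor = [random.gauss(0, 1) for _ in range(n_subjects)]
variant_a = [f + random.gauss(0, 0.5) for f in factor]
variant_b = [f + random.gauss(0, 0.5) for f in factor]
other_illusion = [random.gauss(0, 1) for _ in range(n_subjects)]

within = pearson(variant_a, variant_b)        # strong: shared factor
between = pearson(variant_a, other_illusion)  # weak: no shared factor
print(f"within-illusion r = {within:.2f}, between-illusion r = {between:.2f}")
```

Under this generative assumption the within-illusion correlation is high and the between-illusion correlation hovers near zero, mirroring the illusion-specific (but not feature-specific) factor structure reported in the abstract.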

SeminarNeuroscience

Finding needles in the neural haystack: unsupervised analyses of noisy data

Marine Schimel & Kris Jensen
University of Cambridge, Department of Engineering
Nov 30, 2021

In modern neuroscience, we often want to extract information from recordings of many neurons in the brain. Unfortunately, the activity of individual neurons is very noisy, making it difficult to relate to cognition and behavior. Thankfully, we can use the correlations across time and neurons to denoise the data we record. In particular, using recent advances in machine learning, we can build models which harness this structure in the data to extract more interpretable signals. In this talk, we present two such methods as well as examples of how they can help us gain further insights into the neural underpinnings of behavior.

SeminarNeuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 26, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.

SeminarNeuroscienceRecording

Adaptation-driven sensory detection and sequence memory

André Longtin
University of Ottawa
Oct 5, 2021

Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to the subsequent reduction of the spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observations of correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and in hilus neuron function in the hippocampus.
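The serial interspike-interval correlations that spike-driven adaptation imparts can be reproduced in a deliberately minimal discrete-time model (a sketch; all parameter values are hypothetical and not from the talk). A short interval leaves the adaptation variable elevated, which lengthens the next interval, producing a negative lag-1 correlation:

```python
import random

def simulate_isis(n_spikes=5000, base_rate=0.2, jump=1.0, decay=0.95, seed=1):
    # Toy spiking model with spike-driven adaptation: each spike increments
    # an adaptation variable `a`, which suppresses firing and decays
    # geometrically between spikes.
    rng = random.Random(seed)
    a = 0.0
    isis, t_last, t = [], 0, 0
    while len(isis) < n_spikes:
        t += 1
        a *= decay
        p = base_rate / (1.0 + a)  # adaptation lowers spiking probability
        if rng.random() < p:
            isis.append(t - t_last)
            t_last = t
            a += jump
    return isis

def serial_corr(isis, lag=1):
    # Serial (lag-k) Pearson correlation between successive interspike intervals.
    x, y = isis[:-lag], isis[lag:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

rho1 = serial_corr(simulate_isis())
print(f"lag-1 ISI serial correlation: {rho1:.3f}")  # negative under adaptation
```

The negative sign emerges because the adaptation timescale (here roughly 20 time steps) is comparable to the mean interval, the regime in which adaptation patterns the spike train most strongly.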

SeminarNeuroscience

Bridging brain and cognition: A multilayer network analysis of brain structural covariance and general intelligence in a developmental sample of struggling learners

Ivan Simpson-Kent
University of Cambridge, MRC CBU
Jun 1, 2021

Network analytic methods that are ubiquitous in other areas, such as systems neuroscience, have recently been used to test network theories in psychology, including intelligence research. The network or mutualism theory of intelligence proposes that the statistical associations among cognitive abilities (e.g. specific abilities such as vocabulary or memory) stem from causal relations among them throughout development. In this study, we used network models (specifically LASSO) of cognitive abilities and brain structural covariance (grey and white matter) to simultaneously model brain-behavior relationships essential for general intelligence in a large (behavioral, N=805; cortical volume, N=246; fractional anisotropy, N=165), developmental (ages 5-18) cohort of struggling learners (CALM). We found that mostly positive, small partial correlations pervade both our cognitive and neural networks. Moreover, calculating node centrality (absolute strength and bridge strength) and using two separate community detection algorithms (Walktrap and Clique Percolation), we found convergent evidence that subsets of both cognitive and neural nodes play an intermediary role between brain and behavior. We discuss implications and possible avenues for future studies.
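Psychological network models of the kind described above are typically built from regularized partial correlations; for intuition, the unregularized first-order case has a closed form. A minimal sketch (the correlation values below are hypothetical, not taken from the study):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    # First-order partial correlation of x and y controlling for z:
    # the association left between x and y after removing z's contribution.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical marginal correlations between two cognitive measures (x, y)
# and a shared neural covariate (z):
rho = partial_corr(r_xy=0.40, r_xz=0.50, r_yz=0.50)
print(f"partial correlation controlling for z: {rho:.3f}")
```

Here the marginal correlation of 0.40 shrinks to 0.20 once the shared covariate is partialled out, illustrating why network edges (partial correlations) are typically small even when marginal correlations look substantial.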

SeminarPsychology

The Jena Voice Learning and Memory Test (JVLMT)

Romi Zäske
University of Jena
May 26, 2021

The ability to recognize someone’s voice spans a broad spectrum, with phonagnosia at the low end and super recognition at the high end. Yet there is no standardized test to measure the individual ability to learn and recognize newly-learnt voices with samples of speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 20-minute test based on item response theory and applicable across different languages. The JVLMT consists of three phases in which participants are familiarized with eight speakers in two stages and then perform a three-alternative forced-choice recognition task, using pseudo sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with different levels of difficulty. Test scores are based on 22 Rasch-conform items. Items were selected and validated in online studies based on 232 and 454 participants, respectively. Mean accuracy is 0.51 (SD = 0.18). The JVLMT showed high and moderate correlations with convergent validation tests (Bangor Voice Matching Test; Glasgow Voice Memory Test) and a weak correlation with a discriminant validation test (Digit Span). Empirical (marginal) reliability is 0.66. Four participants with super recognition (at least 2 SDs above the mean) and seven participants with phonagnosia (at least 2 SDs below the mean) were identified. The JVLMT is a promising screening tool for voice recognition abilities in scientific and neuropsychological contexts.

SeminarNeuroscienceRecording

Neuronal variability and spatiotemporal dynamics in cortical network models

Chengcheng Huang
University of Pittsburgh
May 18, 2021

Neuronal variability is a reflection of recurrent circuitry and cellular physiology. The modulation of neuronal variability is a reliable signature of cognitive and processing state. A pervasive yet puzzling feature of cortical circuits is that despite their complex wiring, population-wide shared spiking variability is low dimensional with all neurons fluctuating en masse. We show that the spatiotemporal dynamics in a spatially structured network produce large population-wide shared variability. When the spatial and temporal scales of inhibitory coupling match known physiology, model spiking neurons naturally generate low dimensional shared variability that captures in vivo population recordings along the visual pathway. Further, we show that firing rate models with spatial coupling can also generate chaotic and low-dimensional rate dynamics. The chaotic parameter region expands when the network is driven by correlated noisy inputs, while being insensitive to the intensity of independent noise.

SeminarNeuroscienceRecording

Recurrent network dynamics lead to interference in sequential learning

Friedrich Schuessler
Barak lab, Technion, Haifa, Israel
Apr 28, 2021

Learning in real life is often sequential: A learner first learns task A, then task B. If the tasks are related, the learner may adapt the previously learned representation instead of generating a new one from scratch. Adaptation may ease learning task B but may also decrease the performance on task A. Such interference has been observed in experimental and machine learning studies. In the latter case, it is mediated by correlations between weight updates for the different tasks. In typical applications, like image classification with feed-forward networks, these correlated weight updates can be traced back to input correlations. For many neuroscience tasks, however, networks need to not only transform the input, but also generate substantial internal dynamics. Here we illuminate the role of internal dynamics for interference in recurrent neural networks (RNNs). We analyze RNNs trained sequentially on neuroscience tasks with gradient descent and observe forgetting even for orthogonal tasks. We find that the degree of interference changes systematically with task properties, especially with the emphasis on input-driven over autonomously generated dynamics. To better understand our numerical observations, we thoroughly analyze a simple model of working memory: For task A, a network is presented with an input pattern and trained to generate a fixed point aligned with this pattern. For task B, the network has to memorize a second, orthogonal pattern. Adapting an existing representation corresponds to the rotation of the fixed point in phase space, as opposed to the emergence of a new one. We show that the two modes of learning – rotation vs. new formation – are directly linked to recurrent vs. input-driven dynamics. We make this notion precise in a further simplified, analytically tractable model, where learning is restricted to a 2x2 matrix.
In our analysis of trained RNNs, we also make the surprising observation that, across different tasks, larger random initial connectivity reduces interference. Analyzing the fixed point task reveals the underlying mechanism: The random connectivity strongly accelerates the learning mode of new formation, and has less effect on rotation. The prior thus wins the race to zero loss, and interference is reduced. Altogether, our work offers a new perspective on sequential learning in recurrent networks, and the emphasis on internally generated dynamics allows us to take the history of individual learners into account.
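The input-correlation route to interference, which the abstract contrasts with the recurrent-dynamics route, can be isolated in a feed-forward toy model (a sketch under simplified assumptions; all numbers are hypothetical): with orthogonal task inputs a linear learner forgets nothing, while correlated inputs cause forgetting.

```python
import math

def train_sequential(x_a, x_b, lr=0.1, steps=200):
    # Scalar-output linear model y = w . x, trained by gradient descent on
    # task A (target 1.0 for input x_a), then on task B (target 1.0 for x_b).
    w = [0.0, 0.0]
    def dot(u, v): return u[0] * v[0] + u[1] * v[1]
    def gd_step(x, target):
        err = dot(w, x) - target
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
    for _ in range(steps):
        gd_step(x_a, 1.0)
    for _ in range(steps):
        gd_step(x_b, 1.0)
    # Forgetting = loss on task A after training task B.
    return (dot(w, x_a) - 1.0) ** 2

# Orthogonal task inputs: task B's weight updates never touch task A's weights.
forget_orth = train_sequential((1.0, 0.0), (0.0, 1.0))
# Correlated task inputs: task B's updates overwrite part of A's solution.
forget_corr = train_sequential((1.0, 0.0), (math.sqrt(0.5), math.sqrt(0.5)))
print(f"forgetting (orthogonal inputs): {forget_orth:.4f}")
print(f"forgetting (correlated inputs): {forget_corr:.4f}")
```

In this feed-forward setting, forgetting vanishes for orthogonal inputs, which is exactly why the abstract's observation of forgetting even for orthogonal tasks in RNNs points to internally generated dynamics as a separate source of interference.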

SeminarNeuroscience

Neural circuit parameter variability, robustness, and homeostasis

Astrid Prinz
Emory University
Mar 11, 2021

Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.

SeminarNeuroscienceRecording

Correlations, chaos, and criticality in neural networks

Moritz Helias
Juelich Research Center
Dec 15, 2020

The remarkable information-processing properties of biological and artificial neuronal networks alike arise from the interaction of large numbers of neurons. A central quest is thus to characterize their collective states. The directed coupling between pairs of neurons and their continuous dissipation of energy, moreover, drive the dynamics of neuronal networks outside thermodynamic equilibrium. Tools from non-equilibrium statistical mechanics and field theory are thus instrumental to obtain a quantitative understanding. Here we present progress with this recent approach [1]. On the experimental side, we show how correlations between pairs of neurons are informative about the dynamics of cortical networks: they are poised near a transition to chaos [2]. Close to this transition, we find prolonged sequential memory for past signals [3]. In the chaotic regime, networks offer representations of information whose dimensionality expands with time. We show how this mechanism aids classification performance [4]. Together these works illustrate the fruitful interplay between theoretical physics, neuronal networks, and neural information processing.

SeminarNeuroscienceRecording

Is it Autism or Alexithymia? explaining atypical socioemotional processing

Hélio Clemente Cuve
University of Oxford
Nov 30, 2020

Emotion processing is thought to be impaired in autism and linked to atypical visual exploration and arousal modulation to others' faces and gaze, yet evidence is equivocal. We propose that, where observed, atypical socioemotional processing is due to alexithymia, a distinct but frequently co-occurring condition which affects emotional self-awareness and interoception. In Study 1 (N = 80), we tested this hypothesis by studying the spatio-temporal dynamics and entropy of eye-gaze during emotion processing tasks. Evidence from traditional and novel methods revealed that atypical eye-gaze and emotion recognition are best predicted by alexithymia in both autistic and non-autistic individuals. In Study 2 (N = 70), we assessed interoceptive and autonomic signals implicated in socioemotional processing, and found evidence for alexithymia-driven (not autism-driven) effects on gaze and arousal modulation to emotions. We also conducted two large-scale studies (N = 1300), using confirmatory factor-analytic and network modelling, and found evidence that alexithymia and autism are distinct both at the latent level and in their intercorrelations. We argue that: 1) models of socioemotional processing in autism should conceptualise difficulties as intrinsic to alexithymia, and 2) assessment of alexithymia is crucial for diagnosis and personalised interventions in autism.

SeminarPhysics of LifeRecording

Neural network-like collective dynamics in molecules

Arvind Murugan
University of Chicago
Nov 26, 2020

Neural networks can learn and recognize subtle correlations in high dimensional inputs. However, neural networks are simply many-body systems with strong non-linearities and disordered interactions. Hence, many-body physical systems with similar interactions should be able to show neural network-like behavior. Here we show neural network-like behavior in the nucleation dynamics of promiscuously interacting molecules with multiple stable crystalline phases. Using a combination of theory and experiments, we show how the physics of the system dictates relationships between the difficulty of the pattern recognition task being solved, the time taken, and the accuracy achieved. This work shows that high dimensional pattern recognition and learning are not special to software algorithms but can be achieved by the collective dynamics of sufficiently disordered molecular systems.

SeminarNeuroscience

Rapid State Changes Account for Apparent Brain and Behavior Variability

David McCormick
University of Oregon
Sep 16, 2020

Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.

ePoster

Exploring behavioral correlations with neuron activity through synaptic plasticity

Arnaud Hubert, Charlotte Piette, Sylvie Perez, Hugues Berry, Jonathan Touboul, Laurent Venance

Bernstein Conference 2024

ePoster

Gradient and network structure of lagged correlations in band-limited cortical dynamics

Paul Hege, Markus Siegel

Bernstein Conference 2024

ePoster

Frontal cortex neural correlations are reduced in the transformation to movement

COSYNE 2022

ePoster

Input correlations impede suppression of chaos and learning in balanced rate networks

COSYNE 2022

ePoster

Information correlations reduce the accuracy of pioneering normative decision makers

Zachary Kilpatrick, Megan Stickler, Bhargav Karamched, William Ott, Krešimir Josić

COSYNE 2023

ePoster

Learning beyond the synapse: activity-dependent myelination, neural correlations, and information transfer

Jeremie Lefebvre & Afroditi Talidou

COSYNE 2023

ePoster

Rapid fluctuations in multi-scale correlations of cortical networks encode spontaneous behavior

Hadas Benisty, Daniel Barson, Andrew Moberly, Sweyta Lohani, Ronald Coifman, Michael Crair, Gal Mishne, Jessica Cardin, Michael Higley

COSYNE 2023

ePoster

Reduced correlations in spontaneous activity amongst CA1 engram cells

Amy Monasterio, Gabriel Ocker, Steve Ramirez, Benjamin Scott

COSYNE 2023

ePoster

Tuned inhibition explains strong correlations across segregated excitatory subnetworks

Matthew Getz, Gregory Handy, Alex Negrón, Brent Doiron

COSYNE 2023

ePoster

From Chaos to Coherence: Impact of High-Order Correlations on Neural Dynamics

Nimrod Sherf, Kresimir Josic, Xaq Pitkow, Kevin Bassler

COSYNE 2025

ePoster

Glial ensheathment of inhibitory synapses drives hyperactivity and increases correlations

Nellie Garcia, Gregory Handy

COSYNE 2025

ePoster

Humans can use positive and negative spectrotemporal correlations to detect rising and falling pitch

Parisa Vaziri, Damon Clark, Samuel McDougle

COSYNE 2025

ePoster

State-dependent mapping of correlations of subthreshold to spiking activity is expansive in L1 inhibitory circuits

Christoph Miehl, Yitong Qi, Adam Cohen, Brent Doiron

COSYNE 2025

ePoster

Correlations of molecularly defined cortical interneuron populations with morpho-electric properties

Yongchun Yu

FENS Forum 2024

ePoster

Correlations in neuromodulatory codes during different learning processes

Bálint Király, Annamária Benke, Vivien Pillár, Franciska Benyó, Írisz Szabó, Balázs Hangya

FENS Forum 2024

ePoster

Oscillatory waveform shape and temporal spike correlations differ in the bat frontal and auditory cortices

Francisco Garcia-Rosales, Natalie Schaworonkow, Julio C. Hechavarria

FENS Forum 2024

ePoster

Study of the effect of positive and negative correlations on functional connectivity disruption in MCI

Ignacio Taguas

Neuromatch 5