
Spiking Activity

Topic spotlight: spiking activity (World Wide)

Discover seminars, jobs, and research tagged with spiking activity across World Wide.
17 curated items: 12 seminars, 5 ePosters. Updated over 1 year ago.
Seminar · Neuroscience

In vivo direct imaging of neuronal activity at high temporospatial resolution

Jang-Yeon Park
Sungkyunkwan University, Suwon, Korea
Jun 27, 2023

Advanced noninvasive neuroimaging methods provide valuable information on brain function, but each has clear trade-offs between temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) based on the blood-oxygenation-level-dependent (BOLD) effect offers good spatial resolution on the order of millimeters, but poor temporal resolution on the order of seconds due to slow hemodynamic responses to neuronal activation, and thus provides only indirect information on neuronal activity. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but their spatial information is limited to centimeter scales. There has therefore been a longstanding demand for noninvasive brain imaging methods capable of detecting neuronal activity at both high temporal and high spatial resolution. In this talk, I will introduce Direct Imaging of Neuronal Activity (DIANA), a novel MRI-based approach that dynamically images neuronal spiking activity with millisecond precision, achieved by a data-acquisition scheme of rapid 2D line scans synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4 T animal scanner during electrical whisker-pad stimulation. DIANA responses with millisecond temporal resolution correlated highly with neuronal spiking activity and could also capture the sequential propagation of neuronal activity along the thalamocortical pathway of brain networks. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was sensitive to changes in membrane-potential-associated tissue relaxation times such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.

Seminar · Neuroscience

Precise spatio-temporal spike patterns in cortex and model

Sonia Gruen
Forschungszentrum Jülich, Germany
Apr 25, 2023

The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant millisecond-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed with the SPADE method (Stella et al., 2019, Biosystems), which detects, counts, and evaluates spike patterns for significance using surrogates (Stella et al., 2022, eNeuro). We find STPs in 19 of 20 data sets (each 15 min long) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To account for the different behavioral states during the task, we performed a quasi time-resolved analysis by dividing the data into behaviorally relevant epochs. The STPs occurring in the various epochs are specific to the behavioral context, in terms of both the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, we conclude that neurons multiplex their cell assembly membership. In a related study, we model these findings with networks of embedded synfire chains (Kleinjohann et al., 2022, bioRxiv 2022.08.02.502431).
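
As a rough illustration of the surrogate-testing logic behind methods like SPADE (not the SPADE implementation itself, which handles full multi-neuron patterns and rate modulations), the following sketch scores pairwise millisecond-scale coincidences between two spike trains against spike-dither surrogates; the function names and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def coincidence_count(t1, t2, precision=0.003):
    """Count spikes in t1 that have a partner in t2 within `precision` seconds."""
    return sum(np.any(np.abs(t2 - t) <= precision) for t in t1)

def surrogate_p_value(t1, t2, n_surr=1000, dither=0.02, precision=0.003):
    """P-value of the observed coincidence count against a null distribution
    built by uniformly dithering each spike of t1 (a simple surrogate scheme)."""
    observed = coincidence_count(t1, t2, precision)
    null = [coincidence_count(t1 + rng.uniform(-dither, dither, t1.size),
                              t2, precision)
            for _ in range(n_surr)]
    return (1 + sum(n >= observed for n in null)) / (1 + n_surr)
```

Real STP analyses must additionally handle firing-rate modulations and multiple-comparison correction, which is what the surrogate techniques evaluated in Stella et al. (2022) address.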

Seminar · Neuroscience · Recording

Neural networks in the replica-mean field limits

Thibaud Taillefumier
The University of Texas at Austin
Nov 29, 2022

In this talk, we propose to decipher the activity of neural networks via a “multiply and conquer” approach. This approach considers limit networks made of infinitely many replicas with the same basic neural structure. The key point is that these so-called replica-mean-field networks are in fact simplified, tractable versions of neural networks that retain important features of the finite network structure of interest. The finite size of neuronal populations and synaptic interactions is a core determinant of neural dynamics, being responsible for non-zero correlation in the spiking activity and for finite transition rates between metastable neural states. Theoretically, we develop our replica framework by expanding on ideas from the theory of communication networks rather than from statistical physics to establish Poissonian mean-field limits for spiking networks. Computationally, we leverage our original replica approach to characterize the stationary spiking activity of various network models via reduction to tractable functional equations. We conclude by discussing perspectives about how to use our replica framework to probe nontrivial regimes of spiking correlations and transition rates between metastable neural states.

Seminar · Neuroscience · Recording

Timescales of neural activity: their inference, control, and relevance

Anna Levina
Universität Tübingen
May 3, 2022

Timescales characterize how fast observables change in time. In neuroscience, they can be estimated from measured activity and used, for example, as a signature of the memory trace in a network. I will first discuss the inference of timescales from neuroscience data comprising short trials and introduce a new unbiased method. Then, I will apply the method to data recorded from a local population of cortical neurons in visual area V4. I will demonstrate that the ongoing spiking activity unfolds across at least two distinct timescales, fast and slow, and that the slow timescale increases when monkeys attend to the location of the receptive field. Which models can give rise to such behavior? Random balanced networks are known for their fast timescales; thus, a change in neuron or network properties is required to mimic the data. I will propose a set of models that can control effective timescales and demonstrate that only the model with strong recurrent interactions fits the neural data. Finally, I will discuss the relevance of timescales for behavior and cortical computations.
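
The standard approach to timescale estimation, whose bias on short trials motivates the unbiased method mentioned above, is to fit an exponential decay to the empirical autocorrelation of the activity. A minimal sketch, using an Ornstein-Uhlenbeck process as a stand-in for binned spiking activity (function names and parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def ou_process(tau, dt, n):
    """Ornstein-Uhlenbeck sample with intrinsic timescale `tau`."""
    x = np.zeros(n)
    alpha = np.exp(-dt / tau)
    noise = rng.normal(0, np.sqrt(1 - alpha**2), n)
    for i in range(1, n):
        x[i] = alpha * x[i - 1] + noise[i]
    return x

def fit_timescale(x, dt, max_lag=50):
    """Fit an exponential decay to the empirical autocorrelation."""
    x = x - x.mean()
    ac = np.array([np.dot(x[:-k or None], x[k:]) / np.dot(x, x)
                   for k in range(max_lag)])
    lags = np.arange(max_lag) * dt
    (tau_hat,), _ = curve_fit(lambda t, tau: np.exp(-t / tau),
                              lags, ac, p0=[lags[-1] / 5])
    return tau_hat
```

On data composed of many short trials, this direct fit systematically underestimates the true timescale, which is exactly the bias the unbiased method described in the talk is designed to remove.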

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity of recurrent cortical circuits to initial conditions can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on the biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics depends strongly on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, the recurrent coupling strength, and the network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow time-dependent stimulation of a select population of neurons.
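
The largest Lyapunov exponent, the leading entry of the full spectrum mentioned above, can be estimated by evolving a tangent vector alongside the network and repeatedly renormalizing it (Benettin's method). A minimal sketch for the classic random rate network dx/dt = -x + J tanh(x), which is chaotic for coupling gain g > 1; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def largest_lyapunov(g, N=200, dt=0.05, steps=10000):
    """Benettin-style estimate of the largest Lyapunov exponent of a
    random rate network dx/dt = -x + J @ tanh(x)."""
    J = rng.normal(0, g / np.sqrt(N), (N, N))   # random coupling, gain g
    x = rng.normal(0, 1, N)                     # network state
    v = rng.normal(0, 1, N)                     # tangent (perturbation) vector
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))          # Euler step of the flow
        phi_prime = 1.0 - np.tanh(x) ** 2           # slope of the nonlinearity
        v = v + dt * (-v + J @ (phi_prime * v))     # linearized dynamics
        norm = np.linalg.norm(v)
        log_growth += np.log(norm)
        v /= norm                                   # renormalize each step
    return log_growth / (steps * dt)
```

A positive estimate indicates chaos (exponential divergence of nearby trajectories); for g below the chaotic transition the exponent is negative.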

Seminar · Neuroscience · Recording

NMC4 Short Talk: A mechanism for inter-areal coherence through communication based on connectivity and oscillatory power

Marius Schneider
Ernst Strüngmann Institute for Neuroscience
Nov 30, 2021

Inter-areal coherence between cortical field potentials is a widespread phenomenon and depends on numerous behavioral and cognitive factors. It has been hypothesized that inter-areal coherence reflects phase synchronization between local oscillations and flexibly gates communication. We reveal an alternative mechanism, in which coherence results from, rather than causes, communication: it emerges naturally because spiking activity in a sending area causes post-synaptic inputs both in the same area and in other areas. Consequently, coherence depends in a lawful manner on the oscillatory power and phase-locking of the sending area and on inter-areal connectivity. We show that changes in oscillatory power explain prominent changes in fronto-parietal beta-coherence with movement and memory, and in LGN-V1 gamma-coherence with arousal and visual stimulation. Optogenetic silencing of a receiving area and E/I network simulations demonstrate that afferent synaptic inputs, rather than spiking entrainment, are the main determinant of inter-areal coherence. These findings suggest that the unique spectral profiles of different brain areas automatically give rise to large-scale inter-areal coherence patterns that follow anatomical connectivity and continuously reconfigure as a function of behavior and cognition.
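
The core claim, that coherence can arise from sender power plus connectivity without any receiver-side oscillator, is easy to reproduce in a toy simulation: a receiver that is merely a weak copy of the sender's output plus local noise shows high coherence at the sender's oscillation frequency whenever the sender's oscillatory power is high. A hypothetical sketch (frequency, gain, and noise levels are illustrative):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(4)

fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)    # 20 s of "recording"

def interareal_coherence(osc_amp):
    """Coherence at 30 Hz between a sender (oscillation + noise) and a
    receiver that only passively receives the sender's output."""
    sender = osc_amp * np.sin(2 * np.pi * 30 * t) + rng.normal(0, 1, t.size)
    receiver = 0.2 * sender + rng.normal(0, 1, t.size)   # connectivity + local noise
    f, coh = coherence(sender, receiver, fs=fs, nperseg=1024)
    return coh[np.argmin(np.abs(f - 30))]
```

Despite the fixed connection weight, coherence at 30 Hz tracks the sender's oscillatory amplitude, mirroring the paper's argument that power changes alone can explain coherence changes.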

Seminar · Neuroscience · Recording

NMC4 Keynote: Latent variable modeling of neural population dynamics - where do we go from here?

Chethan Pandarinath
Georgia Tech & Emory University
Nov 30, 2021

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present machine learning frameworks that enable inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales, from diverse brain areas, and without regard to behavior. I will then demonstrate extensions that allow recovery of dynamics from two-photon calcium imaging data with surprising precision. Finally, I will discuss our efforts to facilitate comparisons within our field by curating datasets and standardizing model evaluation, including a currently active modeling challenge, the 2021 Neural Latents Benchmark [neurallatents.github.io].

Seminar · Neuroscience · Recording

Cellular mechanisms behind stimulus evoked quenching of variability

Brent Doiron
University of Chicago
Jan 26, 2021

A wealth of experimental studies shows that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components: the first is irregular spike timing conditioned on the firing rate of a neuron (i.e., a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to reflect a suppression of firing-rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms; however, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability: an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties on neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing-rate components, thereby challenging a long-held view of neuronal activity.
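
The shunting mechanism can be illustrated with a passive membrane: for a fixed variance of the input current, raising the total conductance reduces the variance of the membrane potential, decoupling the trial-to-trial variability of the two quantities. A toy sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def membrane_voltage_var(g_leak, sigma_I=0.5, tau_I=0.01, C=1.0,
                         dt=1e-4, steps=200000):
    """Variance of V for a passive membrane C dV/dt = -g_leak*V + I(t),
    driven by an Ornstein-Uhlenbeck current with fixed variance: a toy
    version of conductance shunting."""
    V = 0.0
    I = 0.0
    alpha = np.exp(-dt / tau_I)
    vs = np.empty(steps)
    for i in range(steps):
        # OU current: same variance regardless of g_leak
        I = alpha * I + sigma_I * np.sqrt(1 - alpha**2) * rng.normal()
        V += dt * (-g_leak * V + I) / C
        vs[i] = V
    return vs[steps // 10:].var()   # discard the initial transient
```

Even though the input-current fluctuations are identical in the two conditions, the high-conductance ("evoked") membrane shows quenched voltage variability, the same dissociation seen in the whole-cell recordings.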

Seminar · Neuroscience

Towards multipurpose biophysics-based mathematical models of cortical circuits

Gaute Einevoll
Norwegian University of Life Sciences
Oct 13, 2020

Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al., Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al., Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials, and two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.

ePoster

Predictability in the spiking activity of mouse visual cortex decreases along the processing hierarchy

COSYNE 2022

ePoster

State-dependent mapping of correlations of subthreshold to spiking activity is expansive in L1 inhibitory circuits

Christoph Miehl, Yitong Qi, Adam Cohen, Brent Doiron

COSYNE 2025

ePoster

Functional connectivity of in-vitro neuronal spiking activity during rest and gameplay

Alon Loeffler, Forough Habibollahi, Moein Khajehnejad, Adeel Razi, Brett Kagan

FENS Forum 2024

ePoster

Investigating neural mesoscale signal complexity at different stages of consciousness: How to predict local field potential from spiking activity

Sofia Raglio, Giampiero Bardella, Camille Mazzara, Andrea Galluzzi, Maurizio Mattia, Stefano Ferraina

FENS Forum 2024