Topic: firing rate

37 Seminars · 4 ePosters

Seminar · Neuroscience

Roles of inhibition in stabilizing and shaping the response of cortical networks

Nicolas Brunel
Duke University
Apr 5, 2024

Inhibition has long been thought to stabilize the activity of cortical networks at low rates and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition-stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons in the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
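The paradoxical signature of an inhibition-stabilized network can be seen in a minimal two-population rate model. The sketch below uses illustrative weights (not values fit to the data discussed in the talk): when recurrent excitation is strong, adding drive to the inhibitory population *lowers* its steady-state rate.

```python
def simulate(I_E, I_I, steps=2000, dt=0.01):
    """Euler-integrate a threshold-linear E-I rate model to steady state."""
    W_EE, W_EI, W_IE, W_II = 2.0, 1.0, 2.0, 0.5   # strong recurrent excitation (ISN regime)
    r_E, r_I = 0.0, 0.0
    for _ in range(steps):
        r_E += dt * (-r_E + max(W_EE * r_E - W_EI * r_I + I_E, 0.0))
        r_I += dt * (-r_I + max(W_IE * r_E - W_II * r_I + I_I, 0.0))
    return r_E, r_I

rE0, rI0 = simulate(I_E=1.0, I_I=1.0)    # baseline
rE1, rI1 = simulate(I_E=1.0, I_I=1.25)   # extra drive to the inhibitory population

# Paradoxical effect: stimulating I cells lowers their steady-state rate,
# because the network settles by withdrawing recurrent excitation.
print(round(rI0, 2), round(rI1, 2))
```

Because W_EE > 1 (the E subnetwork is unstable on its own), the steady-state inhibitory response to inhibitory drive has negative slope, which is the classic experimental test for inhibition stabilization.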

Seminar · Neuroscience

Identifying mechanisms of cognitive computations from spikes

Tatiana Engel
Princeton
Nov 3, 2023

Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.

Seminar · Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 3, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
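The claim that reliable, continuously valued factors can arise from seemingly stochastic spiking can be illustrated with a toy population (all parameters here are invented for illustration, not the method of the talk): hundreds of Poisson neurons whose rates mix a single sinusoidal factor are filtered and linearly read out, recovering the factor despite the spiking noise.

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 0.001
t = np.arange(0, 2.0, dt)
factor = np.sin(2 * np.pi * t)                        # target population-level factor

# 400 Poisson neurons whose rates are positive mixtures of the factor
n = 400
gains = rng.uniform(5, 15, n)
offsets = rng.uniform(25, 35, n)
rates = offsets[:, None] + gains[:, None] * factor    # Hz, always positive
spikes = rng.random((n, len(t))) < rates * dt         # Poisson-like spiking per bin

# filter the spikes with a short exponential kernel, then fit a linear readout
kernel = np.exp(-np.arange(0, 0.1, dt) / 0.02)
filt = np.array([np.convolve(s, kernel)[:len(t)] for s in spikes.astype(float)])
w, *_ = np.linalg.lstsq(filt.T, factor, rcond=None)
readout = filt.T @ w

corr = np.corrcoef(readout, factor)[0, 1]
print(corr)   # the noisy spikes support a reliable continuous factor
```

Individually the spike trains are highly variable, but the low-dimensional factor is recovered almost perfectly from the population, which is the property the abstract describes embedding in spiking network models.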

Seminar · Neuroscience · Recording

Extrinsic control and intrinsic computation in the hippocampal CA1 network

Ipshita Zutshi
Buzsáki Lab, NYU
Jul 6, 2022

A key issue in understanding circuit operations is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Several studies have lesioned or silenced inputs to area CA1 of the hippocampus (either area CA3 or the entorhinal cortex) and examined the effect on CA1 pyramidal cells. However, the types of reported physiological impairments vary widely, primarily because simultaneous manipulations of these redundant inputs have never been performed. In this study, I combined optogenetic silencing of the medial entorhinal cortex (mEC; unilaterally and bilaterally) and of the local CA1 region with bilateral pharmacogenetic silencing of CA3, together with high-spatial-resolution extracellular recordings along the CA1-dentate axis. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1 without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map. Yet the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields. In contrast to these results, unilateral mEC manipulations that were ineffective in impacting place cells during awake behavior altered sharp-wave ripple sequences activated during sleep. Thus, intrinsic excitatory-inhibitory circuits within CA1 can generate neuronal assemblies in the absence of external inputs, although external synaptic inputs are critical to reconfigure (remap) neuronal assemblies in a brain-state-dependent manner.

Seminar · Neuroscience · Recording

Time as a continuous dimension in natural and artificial networks

Marc Howard
Boston University
May 4, 2022

Neural representations of time are central to our understanding of the world around us. I review cognitive, neurophysiological and theoretical work that converges on three simple ideas. First, the time of past events is remembered via populations of neurons with a continuum of functional time constants. Second, these time constants evenly tile the log time axis. This results in a neural Weber-Fechner scale for time which can support behavioral Weber-Fechner laws and characteristic behavioral effects in memory experiments. Third, these populations appear as dual pairs: one type of population contains cells that change firing rate monotonically over time, while a second type has circumscribed temporal receptive fields. These ideas can be used to build artificial neural networks that have novel properties. Of particular interest, a convolutional neural network built using these principles can generalize to arbitrary rescaling of its inputs. That is, after learning to perform a classification task on a time series presented at one speed, it successfully classifies stimuli presented slowed down or sped up. This result illustrates the point that this confluence of ideas originating in cognitive psychology and measured in the mammalian brain could have wide-reaching impacts on AI research.
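The second idea, time constants that evenly tile the log time axis, has a consequence that is easy to check numerically: rescaling elapsed time merely shifts the population pattern along the bank of cells. A minimal sketch (cell counts and time constants chosen arbitrarily):

```python
import numpy as np

ratio = 2 ** 0.25                        # 4 cells per octave of time constants
taus = 0.1 * ratio ** np.arange(40)      # 10 octaves, evenly tiling log time

def population_pattern(t):
    """Activity of the bank of leaky integrators t seconds after one impulse."""
    return np.exp(-t / taus)

a1 = population_pattern(1.0)
a2 = population_pattern(2.0)             # the same event, twice as long ago

# Doubling the elapsed time only shifts the pattern by 4 cells along the bank,
# i.e. a fixed distance on the log-time axis (a Weber-Fechner scale for time).
shift_error = np.max(np.abs(a2[4:] - a1[:-4]))
print(shift_error)
```

This shift-invariance under temporal rescaling is what lets networks built on such a basis generalize to sped-up or slowed-down inputs.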

Seminar · Neuroscience

Extrinsic control and autonomous computation in the hippocampal CA1 circuit

Ipshita Zutshi
NYU
Apr 27, 2022

In understanding circuit operations, a key issue is the extent to which neuronal spiking reflects local computation or responses to upstream inputs. Because pyramidal cells in CA1 do not have local recurrent projections, it is currently assumed that firing in CA1 is inherited from its inputs: entorhinal inputs provide communication with the rest of the neocortex and the outside world, whereas CA3 inputs provide internal and past memory representations. Several studies have attempted to test this hypothesis by lesioning or silencing either area CA3 or the entorhinal cortex and examining the effect on firing of CA1 pyramidal cells. Despite the intense and careful work in this research area, the magnitudes and types of the reported physiological impairments vary widely across experiments. At least part of the existing variability and conflict is due to the different behavioral paradigms, designs, and evaluation methods used by different investigators. Simultaneous manipulations in the same animal, or even separate manipulations of the different inputs to the hippocampal circuits in the same experiment, are rare. To address these issues, I combined optogenetic silencing of the medial entorhinal cortex (mEC; unilaterally and bilaterally) and of the local CA1 region with bilateral pharmacogenetic silencing of the entire CA3 region. These manipulations were combined with high-spatial-resolution recording of local field potentials (LFP) along the CA1-dentate axis while simultaneously collecting firing-pattern data from thousands of single neurons. Each experimental animal underwent up to two of these manipulations simultaneously. Silencing the mEC largely abolished extracellular theta and gamma currents in CA1, without affecting firing rates. In contrast, CA3 and local CA1 silencing strongly decreased firing of CA1 neurons without affecting theta currents. Each perturbation reconfigured the CA1 spatial map.
Yet, the ability of the CA1 circuit to support place field activity persisted, maintaining the same fraction of spatially tuned place fields, and reliable assembly expression as in the intact mouse. Thus, the CA1 network can maintain autonomous computation to support coordinated place cell assemblies without reliance on its inputs, yet these inputs can effectively reconfigure and assist in maintaining stability of the CA1 map.

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 23, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality, computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, i.e., a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
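The largest Lyapunov exponent mentioned above can be estimated with the standard Benettin two-trajectory method. Below is a minimal sketch on a classic random rate network (not the speaker's model; the gain, size, and step size are arbitrary choices), where a coupling gain g > 1 produces chaos:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 200, 2.0, 0.05
J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random coupling; g > 1 gives chaos

def step(x):
    """One Euler step of the rate dynamics dx/dt = -x + J tanh(x)."""
    return x + dt * (-x + J @ np.tanh(x))

x = rng.normal(0.0, 1.0, N)
for _ in range(1000):                          # discard the transient
    x = step(x)

d0 = 1e-8
pert = rng.normal(0.0, 1.0, N)
y = x + pert * (d0 / np.linalg.norm(pert))     # nearby second trajectory

steps, lyap = 4000, 0.0
for _ in range(steps):                         # Benettin: grow, log, renormalize
    x, y = step(x), step(y)
    d = np.linalg.norm(y - x)
    lyap += np.log(d / d0)
    y = x + (y - x) * (d0 / d)                 # rescale the separation back to d0
lyap /= steps * dt

print(lyap)    # positive largest Lyapunov exponent -> chaotic dynamics
```

The same renormalization trick, applied to a full set of orthonormal perturbations, yields the whole Lyapunov spectrum from which the entropy rate and attractor dimensionality in the abstract are computed.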

Seminar · Neuroscience

Keeping your Brain in Balance: the Ups and Downs of Homeostatic Plasticity (virtual)

Gina Turrigiano, PhD
Professor, Department of Biology, Brandeis University, USA
Feb 17, 2022

Our brains must generate and maintain stable activity patterns over decades of life, despite the dramatic changes in circuit connectivity and function induced by learning and experience-dependent plasticity. How do our brains achieve this balance between the opposing needs for plasticity and stability? Over the past two decades, we and others have uncovered a family of “homeostatic” negative feedback mechanisms that are theorized to stabilize overall brain activity while allowing specific connections to be reconfigured by experience. Here I discuss recent work in which we demonstrate that individual neocortical neurons in freely behaving animals indeed have a homeostatic activity set-point, to which they return in the face of perturbations. Intriguingly, this firing rate homeostasis is gated by sleep/wake states in a manner that depends on the direction of homeostatic regulation: upward firing-rate homeostasis occurs selectively during periods of active wake, while downward firing-rate homeostasis occurs selectively during periods of sleep, suggesting that an important function of sleep is to temporally segregate bidirectional plasticity. Finally, we show that firing rate homeostasis is compromised in an animal model of autism spectrum disorder. Together our findings suggest that loss of homeostatic plasticity in some neurological disorders may render central circuits unable to compensate for the normal perturbations induced by development and learning.
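The logic of a homeostatic set point can be sketched as a slow negative-feedback loop on a multiplicative gain. This toy model (all constants invented, not fit to the data in the talk) shows firing returning to its set point after a perturbation that cuts the input drive, loosely analogous to sensory deprivation:

```python
target = 5.0        # homeostatic firing-rate set point (Hz)
gain = 1.0          # multiplicative synaptic scaling factor
drive = 10.0        # feedforward drive (arbitrary units)
rates = []
for step in range(400):
    if step == 200:
        drive = 4.0                                  # perturbation: drive is cut
    r = gain * drive                                 # current firing rate
    gain += 0.05 * gain * (target - r) / target      # slow homeostatic feedback on gain
    rates.append(r)
print(rates[199], rates[200], rates[-1])
```

The rate first collapses when the drive drops, then synaptic scaling slowly restores it to the set point; gating this gain update on behavioral state would implement the sleep/wake segregation described above.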

Seminar · Neuroscience · Recording

NaV Long-term Inactivation Regulates Adaptation in Place Cells and Depolarization Block in Dopamine Neurons

Carmen Canavier
LSU Health Sciences Center, New Orleans
Feb 9, 2022

In behaving rodents, CA1 pyramidal neurons receive spatially tuned depolarizing synaptic input while traversing a specific location within an environment called the cell's place field. Midbrain dopamine neurons participate in reinforcement learning, and bursts of action potentials riding a depolarizing wave of synaptic input signal rewards and reward expectation. Interestingly, slice electrophysiology shows that both types of cells exhibit a pronounced reduction in firing rate (adaptation) and even cessation of firing during sustained depolarization. We included a five-state Markov model of NaV1.6 (for CA1) and NaV1.2 (for dopamine neurons), respectively, in computational models of these two types of neurons. Our simulations suggest that long-term inactivation of this channel is responsible for the adaptation in CA1 pyramidal neurons in response to triangular depolarizing current ramps. We also show that the differential contribution of slow inactivation in two subpopulations of midbrain dopamine neurons can account for their different dynamic ranges, as assessed by their responses to similar depolarizing ramps. These results suggest that long-term inactivation of the sodium channel is a general mechanism for adaptation.
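The proposed mechanism, spike-driven long-term NaV inactivation producing adaptation, can be caricatured far more simply than the five-state Markov model used in the talk. In this toy (all rate constants invented), each spike removes a small fraction of available channels, which recover much more slowly, so the firing rate sags under sustained depolarization:

```python
dt = 1.0                 # ms per step
h = 1.0                  # fraction of NaV channels available (not long-term inactivated)
rates = []
for _ in range(2000):    # 2 s of sustained depolarization
    r = 50.0 * h         # firing rate (Hz) scales with channel availability
    entry = (r / 1000.0) * 0.05 * h     # each spike inactivates a small fraction
    recovery = (1.0 - h) / 5000.0       # recovery is much slower (~5 s)
    h += dt * (recovery - entry)
    rates.append(r)
print(rates[0], rates[-1])   # firing adapts strongly during the sustained step
```

Because entry into the inactivated state is spike-driven while recovery is slow, availability (and hence rate) declines monotonically toward a much lower steady state, the hallmark of adaptation and, in the extreme, depolarization block.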

Seminar · Neuroscience · Recording

The GluN2A Subunit of the NMDA Receptor and Parvalbumin Interneurons: A Possible Role in Interneuron Development

Steve Traynelis & Chad Camp
Emory University School of Medicine
Jan 19, 2022

N-methyl-D-aspartate receptors (NMDARs) are excitatory glutamate-gated ion channels that are expressed throughout the central nervous system. NMDARs mediate calcium entry into cells, and are involved in a host of neurological functions. The GluN2A subunit, encoded by the GRIN2A gene, is expressed by both excitatory and inhibitory neurons, with well described roles in pyramidal cells. By using Grin2a knockout mice, we show that the loss of GluN2A signaling impacts parvalbumin-positive (PV) GABAergic interneuron function in hippocampus. Grin2a knockout mice have 33% more PV cells in CA1 compared to wild type but similar cholecystokinin-positive cell density. Immunohistochemistry and electrophysiological recordings show that excess PV cells do eventually incorporate into the hippocampal network and participate in phasic inhibition. Although the morphology of Grin2a knockout PV cells is unaffected, excitability and action-potential firing properties show age-dependent alterations. Preadolescent (P20-25) PV cells have an increased input resistance, longer membrane time constant, longer action-potential half-width, a lower current threshold for depolarization-induced block of action-potential firing, and a decrease in peak action-potential firing rate. Each of these measures is corrected in adulthood, reaching wild-type levels, suggesting a potential delay of electrophysiological maturation. The circuit and behavioral implications of this age-dependent PV interneuron malfunction are unknown. However, neonatal Grin2a knockout mice are more susceptible to lipopolysaccharide- and febrile-induced seizures, consistent with a critical role for early GluN2A signaling in development and maintenance of excitatory-inhibitory balance. These results could provide insights into how loss-of-function GRIN2A human variants generate epileptic phenotypes.

Seminar · Neuroscience · Recording

Response of cortical networks to optogenetic stimulation: Experiment vs. theory

Nicolas Brunel
Duke University
Jan 19, 2022

Optogenetics is a powerful tool that allows experimentalists to perturb neural circuits. What can we learn about a network from observing its response to perturbations? I will first describe the results of optogenetic activation of inhibitory neurons in mouse cortex, and show that the results are consistent with inhibition stabilization. I will then move to experiments in which excitatory neurons are activated optogenetically, with or without visual inputs, in mice and monkeys. In some conditions, these experiments show the surprising result that the distribution of firing rates is not significantly changed by stimulation, even though the firing rates of individual neurons are strongly modified. I will show in which conditions a network model of excitatory and inhibitory neurons can reproduce this feature.

Seminar · Neuroscience

The circadian clock and neural circuits maintaining body fluid homeostasis

Charles BOURQUE
Professor, Department of Neurology-Neurosurgery, McGill University
Jan 10, 2022

Neurons in the suprachiasmatic nucleus (SCN, the brain’s master circadian clock) display a 24-hour cycle in their rate of action potential discharge, whereby firing rates are high during the light phase and lower during the dark phase. Although it is generally agreed that this cycle of activity is a key mediator of the clock’s neural and humoral output, surprisingly little is known about how changes in clock electrical activity can mediate scheduled physiological changes at different times of day. Using opto- and chemogenetic approaches in mice we have shown that the onset of electrical activity in vasopressin-releasing SCN neurons near Zeitgeber time 22 (ZT22) activates glutamatergic thirst-promoting neurons in the OVLT (organum vasculosum lamina terminalis) to promote water intake prior to sleep. This effect is mediated by activity-dependent release of vasopressin from the axon terminals of SCN neurons, which acts as a neurotransmitter on OVLT neurons. More recently we found that the clock receives excitatory input from a different subset of sodium-sensing neurons in the OVLT. Activation of these neurons by a systemic salt load delivered at ZT19 stimulated the electrical activity of SCN neurons, which are normally silent at this time. Remarkably, this effect induced an acute reduction in non-shivering thermogenesis and body temperature, which is an adaptive response to the salt load. These findings provide information regarding the mechanisms by which the SCN promotes scheduled physiological rhythms and indicate that the clock’s output circuitry can also be recruited to mediate an unscheduled homeostatic response.

Seminar · Neuroscience

A nonlinear shot noise model for calcium-based synaptic plasticity

Bin Wang
Aljadeff lab, University of California San Diego, USA
Dec 9, 2021

Activity-dependent synaptic plasticity is considered to be a primary mechanism underlying learning and memory. Yet it is unclear whether plasticity rules such as STDP measured in vitro apply in vivo. Network models with STDP predict that activity patterns (e.g., place-cell spatial selectivity) should change much faster than observed experimentally. We address this gap by investigating a nonlinear calcium-based plasticity rule fit to experiments done in physiological conditions. In this model, LTP and LTD result from intracellular calcium transients arising almost exclusively from synchronous coactivation of pre- and postsynaptic neurons. We analytically approximate the full distribution of nonlinear calcium transients as a function of pre- and postsynaptic firing rates and temporal correlations. This analysis directly relates activity statistics that can be measured in vivo to the changes in synaptic efficacy they cause. Our results highlight that both high firing rates and temporal correlations can lead to significant changes in synaptic efficacy. Using a mean-field theory, we show that the nonlinear plasticity rule, without any fine-tuning, gives a stable, unimodal synaptic weight distribution characterized by many strong synapses which remain stable over long periods of time, consistent with electrophysiological and behavioral studies. Moreover, our theory explains how memories encoded by strong synapses can be preferentially stabilized by the plasticity rule. We confirmed our analytical results in a spiking recurrent network. Interestingly, although most synapses are weak and undergo rapid turnover, the fraction of strong synapses is sufficient for supporting realistic spiking dynamics and serves to maintain the network’s cluster structure. Our results provide a mechanistic understanding of how stable memories may emerge on the behavioral level from an STDP rule measured in physiological conditions.
Furthermore, the plasticity rule we investigate is mathematically equivalent to other learning rules which rely on the statistics of coincidences, so we expect that our formalism will be useful to study other learning processes beyond the calcium-based plasticity rule.
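The central role of coincidences in such a nonlinear rule can be illustrated with a toy spike-train experiment (rates, window, and correlation level all invented for illustration): at matched firing rates, correlated pre/post trains produce far more near-coincident spike pairs, and hence far more of the large calcium transients that drive plasticity, than independent trains do.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 0.001, 200.0                  # 1 ms bins, 200 s of activity
n_bins = int(T / dt)
rate = 5.0                            # Hz, same for pre and post

def train(p):
    """Bernoulli (Poisson-like) spike train with per-bin probability p."""
    return rng.random(n_bins) < p

pre = train(rate * dt)
# correlated post: inherits ~half of pre's spikes, plus independent spikes
post_corr = (pre & train(0.5)) | train(rate * dt * 0.5)
post_ind = train(rate * dt)           # rate-matched but independent control

def n_coincident(a, b, win=10):
    """Count spikes in a with a spike in b within +/- win bins (10 ms)."""
    idx_b = np.flatnonzero(b)
    return sum(np.any(np.abs(idx_b - i) <= win) for i in np.flatnonzero(a))

c_corr = n_coincident(pre, post_corr)
c_ind = n_coincident(pre, post_ind)
print(c_corr, c_ind)
```

Both post trains fire at roughly 5 Hz, yet the correlated pair generates several times more coincidences; under a supralinear calcium rule this translates directly into much larger efficacy changes, which is why correlations, and not just rates, control plasticity.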

Seminar · Neuroscience · Recording

NMC4 Short Talk: Stretching and squeezing of neuronal log firing rate distribution by psychedelic and intrinsic brain state transitions

Bradley Dearnly
University of Sheffield
Dec 1, 2021

How psychedelic drugs change the activity of cortical neuronal populations is not well understood. It is also not clear which changes are specific to the transition into the psychedelic brain state and which are shared with other brain state transitions. Here, we used Neuropixels probes to record from large populations of neurons in prefrontal cortex of mice given the psychedelic drug TCB-2. The primary effect of the drug was stretching of the distribution of log firing rates of the recorded population. This phenomenon was previously observed across transitions between sleep and wakefulness, which prompted us to examine how common it is. We found that modulation of the width of the log-rate distribution of a neuronal population occurred in multiple areas of the cortex and in the hippocampus even in awake drug-free mice, driven by intrinsic fluctuations in their arousal level. Arousal, however, did not explain the stretching of the log-rate distribution by TCB-2. In both psychedelic and intrinsically occurring brain state transitions, the stretching or squeezing of the log-rate distribution of an entire neuronal population was the result of a closer overlap between the log-rate distributions of the upregulated and downregulated subpopulations in one brain state compared to the other. Often, we also observed that the log-rate distribution of the downregulated subpopulation was stretched, whereas the log-rate distribution of the upregulated subpopulation was squeezed. In both subpopulations, the stretching and squeezing were a signature of a greater relative impact of the brain state transition on the rates of the slow-firing neurons. These findings reveal a generic pattern of reorganisation of neuronal firing rates by different kinds of brain state transitions.
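The "stretching" of a log-rate distribution can be quantified simply as a change in its width. The toy transformation below (an invented modulation, not the measured TCB-2 effect) encodes the signature described above, a larger relative impact on slow-firing neurons, and widens the log-rate distribution:

```python
import numpy as np

rng = np.random.default_rng(2)
log_rates_wake = rng.normal(0.0, 1.0, 5000)       # log firing rates in one state

# transition whose relative impact is largest on slow-firing neurons:
# the lower a cell's rate, the further down it is pushed
log_rates_drug = log_rates_wake - 0.5 * (1.0 - np.tanh(log_rates_wake))

width_wake = np.std(log_rates_wake)
width_drug = np.std(log_rates_drug)
print(width_wake, width_drug)     # the log-rate distribution is stretched
```

Because the mapping is steeper at the low-rate end, the lower tail is pulled out while fast-firing cells barely move, so the population's log-rate distribution stretches without any change needed in its upper tail.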

Seminar · Neuroscience · Recording

NMC4 Short Talk: Decoding finger movements from human posterior parietal cortex

Charles Guan
California Institute of Technology
Dec 1, 2021

Restoring hand function is a top priority for individuals with tetraplegia. This challenge motivates considerable research on brain-computer interfaces (BCIs), which bypass damaged neural pathways to control paralyzed or prosthetic limbs. Here, we demonstrate BCI control of a prosthetic hand using intracortical recordings from the posterior parietal cortex (PPC). As part of an ongoing clinical trial, two participants with cervical spinal cord injury were each implanted with a 96-channel array in the left PPC. Across four sessions each, we recorded neural activity while they attempted to press individual fingers of the contralateral (right) hand. Single neurons modulated selectively for different finger movements. Offline, we accurately classified finger movements from neural firing rates using linear discriminant analysis (LDA) with cross-validation (accuracy = 90%; chance = 17%). Finally, the participants used the neural classifier online to control all five fingers of a BCI hand. Online control accuracy (86%; chance = 17%) exceeded that of previous state-of-the-art finger BCIs. Furthermore, offline, we could classify both flexion and extension of the right fingers, as well as flexion of all ten fingers. Our results indicate that neural recordings from PPC can be used to control prosthetic fingers, which may contribute to a hand restoration strategy for people with tetraplegia.

Seminar · Neuroscience · Recording

NMC4 Short Talk: An optogenetic theory of stimulation near criticality

Brandon Benson
Stanford University
Dec 1, 2021

Recent advances in optogenetics allow for stimulation of neurons with sub-millisecond spike jitter and single-neuron selectivity. Already this precision has revealed new levels of cortical sensitivity: stimulating tens of neurons can yield changes in the mean firing rate of thousands of similarly tuned neurons. This extreme sensitivity suggests that cortical dynamics are near criticality. Criticality is often studied in neural systems as a non-equilibrium thermodynamic process in which scale-free patterns of activity, called avalanches, emerge between distinct states of spontaneous activity. While criticality is well studied, it is still unclear what these distinct states of spontaneous activity are and what responses we should expect from stimulation of this activity. By answering these questions, optogenetic stimulation will become a new avenue for approaching criticality and understanding cortical dynamics. Here, for the first time, we study the effects of optogenetic-like stimulation on a model near criticality. We study a model of inhibitory/excitatory (I/E) leaky integrate-and-fire (LIF) spiking neurons that displays a region of high sensitivity, as seen in experiments. We find that this region of sensitivity is, indeed, near criticality. We derive the dynamic mean-field theory of this model and find that the distinct states of activity are asynchrony and synchrony. We use our theory to characterize the response to various types and strengths of optogenetic stimulation. Our model and theory predict that asynchronous, near-critical dynamics can have two qualitatively different responses to stimulation: one characterized by high sensitivity, discrete event responses, and high trial-to-trial variability, and another characterized by low sensitivity, continuous responses with characteristic frequencies, and low trial-to-trial variability. While both response types may be considered near-critical in model space, networks that are closest to criticality show a hybrid of these response effects.
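Criticality and avalanches are often introduced with a branching process rather than a full LIF network; the sketch below uses that simpler stand-in (not the speaker's model) to show how avalanche sizes become heavy-tailed as the branching ratio approaches the critical value of 1:

```python
import numpy as np

rng = np.random.default_rng(4)

def avalanche_size(m, cap=10_000):
    """Total activity triggered by one spike in a branching process with ratio m."""
    active, size = 1, 1
    while active and size < cap:
        active = rng.poisson(m * active)   # each active unit excites ~m others
        size += active
    return size

sizes_crit = [avalanche_size(1.0) for _ in range(2000)]   # branching ratio at criticality
sizes_sub = [avalanche_size(0.5) for _ in range(2000)]    # subcritical control

# near criticality, rare avalanches become enormous (heavy-tailed size distribution)
print(max(sizes_crit), max(sizes_sub))
```

At m = 1 a single extra spike can trigger an avalanche spanning much of the network, which is the same extreme sensitivity to small optogenetic perturbations that motivates the near-critical picture above.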

Seminar · Neuroscience · Recording

NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Nick Sexton (he/him)
University College London
Dec 1, 2021

A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between brain region and model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuronal firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70 ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from the training data.
This suggests future possibilities for brain-machine interface with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.

Seminar · Neuroscience · Recording

StereoSpike: Depth Learning with a Spiking Neural Network

Ulysse Rancon
University of Bordeaux
Nov 2, 2021

Depth estimation is an important computer vision task, useful in particular for navigation in autonomous vehicles, or for object manipulation in robotics. Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, which we named StereoSpike. More specifically, we used the Multi Vehicle Stereo Event Camera Dataset (MVSEC). It provides a depth ground truth, which was used to train StereoSpike in a supervised manner, using surrogate gradient descent. We propose a novel readout paradigm to obtain a dense analog prediction (the depth of each pixel) from the spikes of the decoder. We demonstrate that this architecture generalizes very well, even better than its non-spiking counterparts, leading to state-of-the-art test accuracy. To the best of our knowledge, it is the first time that such a large-scale regression problem has been solved by a fully spiking network. Finally, we show that low firing rates (<10%) can be obtained via regularization, with a minimal cost in accuracy. This means that StereoSpike could be implemented efficiently on neuromorphic chips, opening the door for low-power real-time embedded systems.

SeminarNeuroscience

A universal probabilistic spike count model reveals ongoing modulation of neural variability in head direction cell activity in mice

David Liu
University of Cambridge
Oct 27, 2021

Neural responses are variable: even under identical experimental conditions, single neuron and population responses typically differ from trial to trial and across time. Recent work has demonstrated that this variability has predictable structure, can be modulated by sensory input and behaviour, and bears critical signatures of the underlying network dynamics and computations. However, current methods for characterising neural variability are primarily geared towards sensory coding in the laboratory: they require trials with repeatable experimental stimuli and behavioural covariates. In addition, they make strong assumptions about the parametric form of variability, rely on assumption-free but data-inefficient histogram-based approaches, or are altogether ill-suited for capturing variability modulation by covariates. Here we present a universal probabilistic spike count model that eliminates these shortcomings. Our method uses scalable Bayesian machine learning techniques to model arbitrary spike count distributions (SCDs) with flexible dependence on observed as well as latent covariates. Without requiring repeatable trials, it can flexibly capture covariate-dependent joint SCDs, and provide interpretable latent causes underlying the statistical dependencies between neurons. We apply the model to recordings from a canonical non-sensory neural population: head direction cells in the mouse. We find that variability in these cells defies a simple parametric relationship with mean spike count as assumed in standard models, its modulation by external covariates can be comparably strong to that of the mean firing rate, and slow low-dimensional latent factors explain away neural correlations. Our approach paves the way to understanding the mechanisms and computations underlying neural variability under naturalistic conditions, beyond the realm of sensory coding with repeatable stimuli.
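
The kind of overdispersion this model targets is easy to illustrate. Below is a minimal sketch (all parameters are assumed for illustration, not taken from the talk): spike counts generated by a Poisson process whose rate fluctuates slowly across time bins are overdispersed relative to a fixed-rate Poisson model, so methods that hard-wire a parametric mean-variance relationship can misfit such data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Doubly stochastic ("Poisson-gamma") counts: a gamma-distributed gain with
# mean 1 multiplies the underlying rate, mimicking slow latent modulation.
mean_rate = 5.0      # mean spike count per bin (illustrative)
gain_var = 0.5       # variance of the multiplicative gain fluctuations

gains = rng.gamma(1.0 / gain_var, scale=gain_var, size=100_000)
counts = rng.poisson(mean_rate * gains)

# A Poisson model predicts a Fano factor of 1; latent rate fluctuations
# inflate it to roughly 1 + mean_rate * gain_var here.
fano = counts.var() / counts.mean()
print(f"mean={counts.mean():.2f}  Fano factor={fano:.2f}")
```

Flexible spike count models of the kind described in the abstract aim to capture such covariate- and latent-state-dependent dispersion rather than fixing the mean-variance relationship a priori.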

SeminarNeuroscienceRecording

Top-down modulation of the retinal code via histaminergic neurons in the hypothalamus

Michal Rivlin
Weizmann Institute
Oct 18, 2021

The mammalian retina is considered an autonomous neuronal tissue, yet there is evidence that it receives inputs from the brain in the form of retinopetal axons. A sub-population of these axons was suggested to belong to histaminergic neurons located in the tuberomammillary nucleus (TMN) of the hypothalamus. Using viral injections to the TMN, we identified these retinopetal axons and found that although few in number, they extensively branch to cover a large portion of the retina. Using Ca2+ imaging and electrophysiology, we show that histamine application increases spontaneous firing rates and alters the light responses of a significant portion of retinal ganglion cells (RGCs). Direct activation of the histaminergic axons also induced significant changes in RGC activity. Since activity in the TMN was shown to correlate with arousal state, our data suggest the retinal code may change with the animal's behavioral state through the release of histamine from TMN histaminergic neurons.

SeminarNeuroscienceRecording

Context-Dependent Relationships between Locus Coeruleus Firing Patterns and Coordinated Neural Activity in the Anterior Cingulate Cortex

Siddhartha Joshi
Baylor College of Medicine
Oct 8, 2021

Ascending neuromodulatory projections from the locus coeruleus (LC) affect cortical neural networks via the release of norepinephrine (NE). However, the exact nature of these neuromodulatory effects on neural activity patterns in vivo is not well understood. Here we show that in awake monkeys, LC activation is associated with changes in coordinated activity patterns in the anterior cingulate cortex (ACC). These relationships, which are largely independent of changes in firing rates of individual ACC neurons, depend on the type of LC activation: ACC pairwise correlations tend to be reduced when tonic (baseline) LC activity increases but are enhanced when external events drive phasic LC responses. Both relationships covary with pupil changes that reflect LC activation and arousal. These results suggest that modulations of information processing that reflect changes in coordinated activity patterns in cortical networks can result partly from ongoing, context-dependent, arousal-related changes in activation of the LC-NE system.

SeminarNeuroscienceRecording

Adaptation-driven sensory detection and sequence memory

André Longtin
University of Ottawa
Oct 6, 2021

Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to the subsequent reduction of spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observations of correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus.
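
The link between spike-driven adaptation and interval correlations can be sketched with a toy simulation (parameters are illustrative, not from the talk): a noisy leaky integrate-and-fire neuron whose adaptation variable jumps at each spike tends to produce negative lag-1 correlations between successive interspike intervals, because a short interval leaves adaptation elevated and delays the next spike.

```python
import numpy as np

rng = np.random.default_rng(1)

# Leaky integrate-and-fire with a spike-triggered adaptation current `a`
# (all parameters illustrative). Units: ms and arbitrary voltage units.
dt, T = 0.1, 50_000.0
tau_m, tau_a = 10.0, 100.0     # membrane and adaptation time constants
mu, da, thresh = 2.0, 1.0, 10.0
noise = 0.5 * np.sqrt(dt) * rng.standard_normal(int(T / dt))

v, a, spike_times = 0.0, 0.0, []
for step, xi in enumerate(noise):
    v += dt * (-v / tau_m + mu - a) + xi
    a -= dt * a / tau_a
    if v >= thresh:
        v = 0.0
        a += da                # each spike increments adaptation
        spike_times.append(step * dt)

isis = np.diff(spike_times)
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]
print(f"{len(isis)} ISIs, lag-1 serial correlation = {rho1:.2f}")
```

Because the adaptation state carries memory across roughly one interval (tau_a is comparable to the mean ISI), successive intervals anticorrelate even though a renewal model would predict independence.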

SeminarNeuroscience

Population dynamics of the thalamic head direction system during drift and reorientation

Zaki Ajabi
McGill University
Oct 4, 2021

The head direction (HD) system is classically modeled as a ring attractor network which ensures a stable representation of the animal’s head direction. This unidimensional description popularized the view of the HD system as the brain’s internal compass. However, unlike a globally consistent magnetic compass, the orientation of the HD system is dynamic, depends on local cues and exhibits remapping across familiar environments. Such a system requires mechanisms to remember and align to familiar landmarks, which may not be well described within the classic one-dimensional framework. To search for these mechanisms, we performed large population recordings of mouse thalamic HD cells using calcium imaging, during controlled manipulations of a visual landmark in a familiar environment. First, we found that realignment of the system was associated with a continuous rotation of the HD network representation. The speed and angular distance of this rotation were predicted by a second dimension of the ring attractor, which we refer to as network gain, i.e. the instantaneous population firing rate. Moreover, the 360-degree azimuthal profile of network gain, during darkness, maintained a ‘memory trace’ of a previously displayed visual landmark. In a second experiment, brief presentations of a rotated landmark revealed an attraction of the network back to its initial orientation, suggesting a time-dependent mechanism underlying the formation of these network gain memory traces. Finally, in a third experiment, continuous rotation of a visual landmark induced a similar rotation of the HD representation which persisted following removal of the landmark, demonstrating that HD network orientation is subject to experience-dependent recalibration. Together, these results provide new mechanistic insights into how the neural compass flexibly adapts to environmental cues to maintain a reliable representation of head direction.

SeminarNeuroscienceRecording

Combining two mechanisms to produce neural firing rate homeostasis

Paul Miller
Brandeis University
Jun 11, 2021

The typical goal of homeostatic mechanisms is to ensure a system operates at or in the vicinity of a stable set point, where a particular measure is relatively constant and stable. Neural firing rate homeostasis is unusual in that a set point of fixed firing rate is at odds with a neuron's goal of conveying information or producing timed motor responses, which requires temporal variations in firing rate. Therefore, for optimal function a neuron requires a range of firing rates, which could, for example, be set by a dual system that controls both the mean and the variance of the firing rate. We explore, via both simulations and analysis, how two experimentally measured mechanisms for firing rate homeostasis can cooperate to improve information processing and avoid the pitfall of pulling in different directions when their set points do not appear to match.
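
One way to picture such a dual system is with two slow integral controllers (a hypothetical sketch with assumed parameters and set points, not the model analyzed in the talk): a threshold controller that servoes the mean rate to one set point, and a much slower gain controller that servoes the mean-square rate to another, so that together they fix a working range of firing rates rather than a single value.

```python
import numpy as np

rng = np.random.default_rng(2)

# Rectified-linear rate neuron r = gain * max(0, x - thresh) driven by a
# fluctuating input. Two integral controllers adjust threshold and gain.
target_mean, target_msq = 5.0, 35.0   # assumed set points (mean, mean square)
gain, thresh = 1.0, 0.0
eta_th, eta_g = 1e-3, 1e-6            # gain loop much slower than threshold loop
for _ in range(200_000):
    x = rng.normal(2.0, 3.0)          # fluctuating input drive
    r = gain * max(0.0, x - thresh)
    thresh += eta_th * (r - target_mean)     # rate too high -> raise threshold
    gain += eta_g * (target_msq - r * r)     # variability too low -> raise gain

# Re-estimate the controlled statistics after convergence
rates = gain * np.maximum(0.0, rng.normal(2.0, 3.0, 100_000) - thresh)
print(f"mean rate {rates.mean():.2f}, mean square {np.mean(rates**2):.1f}")
```

The time-scale separation between the two loops keeps them from pulling against each other, echoing the mismatched-set-point pitfall raised in the abstract.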

SeminarNeuroscience

Firing Rate Homeostasis in Neural Circuits: From basic principles to malfunctions

Inna Slutsky
Tel Aviv University
Jun 3, 2021

Maintaining the average activity level within a set-point range constitutes a fundamental property of central neural circuits. Accumulated evidence suggests that firing rate distributions and their means represent physiological variables that are regulated by homeostatic systems during the sleep-wake cycle in central neural circuits. While intracellular Ca2+ has long been hypothesized as a feedback control signal, the source of Ca2+ and the molecular machinery enabling network-wide homeostatic responses remain largely unknown. I will present our hypothesis and framework for identifying homeostatic regulators in neural circuits. Next, I will show our new results on the role of mitochondria in the regulation of activity set-points and feedback responses. Finally, I will provide evidence of state-dependent dysregulation of activity set-points at the presymptomatic disease stage in familial Alzheimer’s models.

SeminarNeuroscience

Co-tuned, balanced excitation and inhibition in olfactory memory networks

Claire Meissner-Bernard
Friedrich lab, Friedrich Miescher Institute, Basel, Switzerland
May 20, 2021

Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of the olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase in their activity. We investigate the benefits of co-tuning for olfactory and memory processing by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation.
In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.

SeminarNeuroscience

Human Single-Neuron recordings reveal neuronal mechanisms of Working Memory

Jan Kamiński
Nencki Institute of Experimental Biology
Mar 17, 2021

Working memory (WM) is a fundamental human cognitive capacity that allows us to maintain and manipulate information stored for a short period of time in an active form. Thanks to a unique opportunity to record the activity of neurons in humans during epilepsy monitoring, we could test neuronal mechanisms of this cognitive capacity. We showed that the firing rate of image-selective neurons in the medial temporal lobe persists through the maintenance periods of a working memory task. This activity was behaviorally relevant and formed attractors in its state space. Furthermore, we showed that the spikes of those neurons phase-lock to ongoing slow-frequency oscillations. The properties of this phase locking depend on memory content and load. During high memory loads, the phase of the oscillatory activity to which neurons phase-lock provides information about memory content that is not available in the firing rate of the neurons.

SeminarNeuroscience

Firing Homeostasis in Neural Circuits: From Basic Principles to Malfunctions

Inna Slutsky
Tel Aviv University
Feb 19, 2021

Neural circuit functions are stabilized by homeostatic mechanisms at long timescales in response to changes in experience and learning. However, we still do not know which specific physiological variables are being stabilized, nor which cellular or neural-network components comprise the homeostatic machinery. At this point, most evidence suggests that the distribution of firing rates amongst neurons in a brain circuit is the key variable that is maintained around a circuit-specific set-point value in a process called firing rate homeostasis. Here, I will discuss our recent findings that implicate mitochondria as a central player in mediating firing rate homeostasis and its impairments. While mitochondria are known to regulate neuronal variables such as synaptic vesicle release or intracellular calcium concentration, we searched for the mitochondrial signaling pathways that are essential for homeostatic regulation of firing rates. We utilize basic concepts of control theory to build a framework for classifying possible components of the homeostatic machinery in neural networks. This framework may facilitate the identification of new homeostatic pathways whose malfunctions drive instability of neural circuits in distinct brain disorders.

SeminarNeuroscienceRecording

Experience-dependent remapping of temporal encoding by striatal ensembles

Austin Bruce
University of Iowa, USA
Feb 17, 2021

Medium-spiny neurons (MSNs) in the striatum are required for interval timing, the estimation of elapsed time over several seconds reported via a motor response. We and others have shown that striatal MSNs can encode the duration of temporal intervals via time-dependent ramping activity: progressive monotonic changes in firing rate preceding behaviorally salient points in time. Here, we investigated how timing-related activity within striatal ensembles changes with experience. We leveraged a rodent-optimized interval timing task in which mice ‘switch’ response ports after an amount of time has passed without reward. We report three main results. First, we found that the proportion of MSNs exhibiting time-dependent modulations of firing rate increased after 10 days of task overtraining. Second, temporal decoding by MSN ensembles improved with experience and was largely driven by time-related ramping activity. Finally, we found that time-related ramping activity generalized across both correct and error trials. These results enhance our understanding of striatal temporal processing by demonstrating that time-dependent activity within MSN ensembles evolves with experience and is dissociable from motor- and reward-related processes.

SeminarNeuroscienceRecording

Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex

Jeehyun Kwag
Korea University
Jan 29, 2021

Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into the spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train, and such multiplexed neural code-carrying spike trains have been shown to be spatially synchronized across multiple neurons in different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions to the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to the synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons contributing preferentially to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency (rate code)-selective roles in dynamically gating the synchronization of spike times (temporal code) through preferentially recruiting feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms we uncovered here may have critical roles in regulating neural code-based somatosensory information processing in the neocortex.

SeminarNeuroscienceRecording

Cellular mechanisms behind stimulus evoked quenching of variability

Brent Doiron
University of Chicago
Jan 27, 2021

A wealth of experimental studies show that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e. a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to be a reflection of a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, provides an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
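
The shunting mechanism can be demonstrated in a few lines (illustrative parameters, not fit to the recordings): a passive membrane dV/dt = (-g*V + I(t))/C is driven by noisy current of fixed variance, and raising the total conductance g, as happens in the evoked high-conductance state, lowers the voltage variance without any change in input variability.

```python
import numpy as np

def membrane_variance(g, steps=300_000, dt=0.05, current_sd=1.0, seed=3):
    """Stationary voltage variance of a passive membrane driven by white-noise
    current. Raising g shortens the effective time constant and shunts the
    transfer of current fluctuations to the voltage."""
    rng = np.random.default_rng(seed)
    C, v = 1.0, 0.0
    noise = current_sd * rng.standard_normal(steps)
    vs = np.empty(steps)
    for i in range(steps):
        v += dt * (-g * v + noise[i] / np.sqrt(dt)) / C
        vs[i] = v
    return vs.var()

v_rest = membrane_variance(g=1.0)    # low-conductance "spontaneous" state
v_evoked = membrane_variance(g=2.0)  # high-conductance "evoked" state
print(f"Var(V) rest: {v_rest:.3f}, evoked: {v_evoked:.3f}")
```

For this linear membrane the stationary variance scales as 1/g, so doubling the conductance roughly halves the voltage variance even though the current variance is identical in both states.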

SeminarNeuroscienceRecording

The emergence of contrast invariance in cortical circuits

Tatjana Tchumatchenko
Max Planck Institute for Brain Research
Nov 13, 2020

Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations orthogonal to the preferred one. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balance limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity.
Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in the excitatory-to-excitatory and in the excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
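
The balance argument behind this result can be checked numerically (a schematic one-population reduction, not the paper's spiking simulations): if recurrent synaptic output grows as a power law u(r) = r**gamma, balance between the feedforward drive c*g(theta) and the recurrent term forces r = (c*g(theta)/w)**(1/gamma), so contrast factors out as a pure rescaling and the peak-normalized tuning curve is identical at every contrast.

```python
import numpy as np

# Illustrative parameters: gamma is the assumed power-law exponent of
# neurotransmitter release, w a recurrent coupling constant, g(theta) the
# feedforward orientation tuning of the input.
gamma, w = 2.0, 1.0
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)
g = np.exp(-theta**2 / (2 * 0.3**2))

def rate(c):
    # Balance condition: w * r**gamma = c * g(theta)
    return (c * g / w) ** (1.0 / gamma)

low, high = rate(0.1), rate(1.0)
# Peak-normalised curves coincide: the tuning shape is contrast invariant
print(np.max(np.abs(low / low.max() - high / high.max())))
```

With any non-power-law u(r) the contrast c would no longer factor out of the balance condition, which is the intuition behind the uniqueness claim in the abstract.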

SeminarNeuroscienceRecording

A robust neural integrator based on the interactions of three time scales

Bard Ermentrout
University of Pittsburgh
Nov 11, 2020

Neural integrators are circuits that are able to encode analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings from animals. I show that the mechanism for this can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and, for example, show that in a spatially distributed network it is possible to code location and amplitude simultaneously. I show that the resulting mean-field equations are equivalent to a certain discontinuous differential equation.
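
The Mathieu-equation mechanism can be probed directly with Floquet analysis (a generic sketch with illustrative parameters): integrate x'' + (delta + eps*cos t) x = 0 over one period of the slow oscillation and read off stability from the trace of the monodromy matrix, with |trace| > 2 marking the instability tongues.

```python
import numpy as np

def monodromy_trace(delta, eps, n_steps=4000):
    """Trace of the one-period monodromy matrix of the Mathieu equation
    x'' + (delta + eps*cos(t)) x = 0; |trace| > 2 means instability."""
    h = 2 * np.pi / n_steps
    def deriv(t, y):                       # y = (x, x')
        return np.array([y[1], -(delta + eps * np.cos(t)) * y[0]])
    M = np.zeros((2, 2))
    for col, y0 in enumerate([(1.0, 0.0), (0.0, 1.0)]):
        t, y = 0.0, np.array(y0)
        for _ in range(n_steps):           # classic RK4 over one forcing period
            k1 = deriv(t, y)
            k2 = deriv(t + h / 2, y + h / 2 * k1)
            k3 = deriv(t + h / 2, y + h / 2 * k2)
            k4 = deriv(t + h, y + h * k3)
            y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        M[:, col] = y
    return np.trace(M)

print(monodromy_trace(0.25, 0.2))  # inside the first instability tongue
print(monodromy_trace(0.60, 0.2))  # between tongues: bounded solutions
```

Near delta = 1/4 the forcing sits at twice the natural frequency (the strongest parametric resonance), which is why the first tongue is entered there; between tongues the solutions remain bounded.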

SeminarNeuroscience

Plasticity in hypothalamic circuits for oxytocin release

Silvana Valtcheva
NYU
Oct 21, 2020

Mammalian babies are “sensory traps” for parents. Various sensory cues from the newborn are tremendously efficient in triggering parental responses in caregivers. We recently showed that core aspects of maternal behavior such as pup retrieval in response to infant vocalizations rely on active learning of auditory cues from pups facilitated by the neurohormone oxytocin (OT). Release of OT from the hypothalamus might thus help induce recognition of different infant cues, but it is unknown what sensory stimuli can activate OT neurons. I performed unprecedented in vivo whole-cell and cell-attached recordings from optically-identified OT neurons in awake dams. I found that OT neurons, but not other hypothalamic cells, increased their firing rate after playback of pup distress vocalizations. Using anatomical tracing approaches and channelrhodopsin-assisted circuit mapping, I identified the projections and brain areas (including inferior colliculus, auditory cortex, and posterior intralaminar thalamus) relaying auditory information about social sounds to OT neurons. In hypothalamic brain slices, when optogenetically stimulating thalamic afferents to mimic the high-frequency thalamic discharge observed in vivo during pup call playback, I found that thalamic activity led to long-term depression of synaptic inhibition in OT neurons. This was mediated by postsynaptic NMDAR-induced internalization of GABAARs. Therefore, persistent activation of OT neurons following pup calls in vivo is likely mediated by disinhibition. This gain modulation of OT neurons by infant cries may be important for sustaining motivation. Using a genetically-encoded OT sensor, I demonstrated that pup calls were efficient in triggering OT release in downstream motivational areas.
When thalamus projections to hypothalamus were inhibited with chemogenetics, dams exhibited longer latencies to retrieve crying pups, suggesting that the thalamus-hypothalamus noncanonical auditory pathway may be a specific circuit for the detection of social sounds, important for disinhibiting OT neurons, gating OT release in downstream brain areas, and speeding up maternal behavior.

SeminarNeuroscience

Autism-Associated Shank3 Is Essential for Homeostatic Compensation in Rodent Visual Cortex

Gina Turrigiano
Brandeis University
Jul 21, 2020

Neocortical networks must generate and maintain stable activity patterns despite perturbations induced by learning and experience-dependent plasticity. There is abundant theoretical and experimental evidence that network stability is achieved through homeostatic plasticity mechanisms that adjust synaptic and neuronal properties to stabilize some measure of average activity, and this process has been extensively studied in primary visual cortex (V1), where chronic visual deprivation induces an initial drop in activity and ensemble average firing rates (FRs), but over time activity is restored to baseline despite continued deprivation. Here I discuss recent work from the lab in which we followed this FR homeostasis in individual V1 neurons in freely behaving animals during a prolonged visual deprivation/eye-reopening paradigm. We find that, when FRs are perturbed by manipulating sensory experience, over time they return precisely to a cell-autonomous set-point. Finally, we find that homeostatic plasticity is perturbed in a mouse model of autism spectrum disorder (ASD), and this results in a breakdown of FR homeostasis within V1. These data suggest that loss of homeostatic plasticity is one primary cause of excitation/inhibition imbalances in ASD models. Together these studies illuminate the role of stabilizing plasticity mechanisms in the ability of neocortical circuits to recover robust function following challenges to their excitability.

SeminarNeuroscience

Cholinergic regulation of learning in the olfactory system

Christiane Linster
Cornell University
Jul 9, 2020

In the olfactory system, cholinergic modulation has been associated with contrast modulation and changes in receptive fields in the olfactory bulb, as well as with the learning of odor associations in the olfactory cortex. Computational modeling and behavioral studies suggest that cholinergic modulation could improve sensory processing and learning while preventing proactive interference when task demands are high. However, how sensory inputs and/or learning regulate incoming modulation has not yet been elucidated. Here we use a computational model of the olfactory bulb, piriform cortex (PC) and horizontal limb of the diagonal band of Broca (HDB) to explore how olfactory learning could regulate cholinergic inputs to the system in a closed feedback loop. In our model, the novelty of an odor is reflected in the firing rates and sparseness of cortical neurons in response to that odor, and these firing rates can directly regulate learning in the system by modifying cholinergic inputs to the system.

ePosterNeuroscience

Predictive processing of natural images by V1 firing rates revealed by self-supervised deep neural networks

Cem Uran,Alina Peter,Andreea Lazar,William Barnes,Johanna Klon-Lipok,Katharine A Shapcott,Rasmus Roese,Pascal Fries,Wolf Singer,Martin Vinck

COSYNE 2022

ePosterNeuroscience

Driving effect of distal surround stimuli on primary visual cortex firing rates

Nisa Cuevas Vicente, Boris Sotomayor-Gómez, Athanasia Tzanou, Ana Broggini, Martin Vinck

FENS Forum 2024

ePosterNeuroscience

The repercussions of electrogenic Na+/K+-ATPase in excitable cells with high and variable firing rates

Liz Weerdmeester, Jan-Hendrik Schleimer, Susanne Schreiber

FENS Forum 2024
