Spikes

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with spikes across World Wide.
45 curated items · 33 Seminars · 11 ePosters · 1 Position
Updated 1 day ago
45 results
Seminar · Neuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab’s efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as “kitchen object” in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.

Seminar · Neuroscience

Identifying mechanisms of cognitive computations from spikes

Tatiana Engel
Princeton
Nov 2, 2023

Higher cortical areas carry a wide range of sensory, cognitive, and motor signals supporting complex goal-directed behavior. These signals mix in heterogeneous responses of single neurons, making it difficult to untangle underlying mechanisms. I will present two approaches for revealing interpretable circuit mechanisms from heterogeneous neural responses during cognitive tasks. First, I will show a flexible nonparametric framework for simultaneously inferring population dynamics on single trials and tuning functions of individual neurons to the latent population state. When applied to recordings from the premotor cortex during decision-making, our approach revealed that populations of neurons encoded the same dynamic variable predicting choices, and heterogeneous firing rates resulted from the diverse tuning of single neurons to this decision variable. The inferred dynamics indicated an attractor mechanism for decision computation. Second, I will show an approach for inferring an interpretable network model of a cognitive task—the latent circuit—from neural response data. We developed a theory to causally validate latent circuit mechanisms via patterned perturbations of activity and connectivity in the high-dimensional network. This work opens new possibilities for deriving testable mechanistic hypotheses from complex neural response data.

Seminar · Neuroscience

Why spikes?

Romain Brette
Institut de la Vision
May 30, 2023

On a fast timescale, neurons mostly interact by short, stereotypical electrical impulses or spikes. Why? A common answer is that spikes are useful for long-distance communication, to avoid alterations while traveling along axons. But as it turns out, spikes are seen in many places outside neurons: in the heart, in muscles, in plants and even in protists. From these examples, it appears that action potentials mediate some form of coordinated action, a timed event. From this perspective, spikes should not be seen simply as noisy implementations of underlying continuous signals (a sort of analog-to-digital conversion), but rather as events or actions. I will give a number of examples of functional spike-based interactions in living systems.

Seminar · Neuroscience

Precise spatio-temporal spike patterns in cortex and model

Sonja Grün
Forschungszentrum Jülich, Germany
Apr 25, 2023

The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant, millisecond-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed with the SPADE method (Stella et al., 2019, Biosystems), which detects, counts, and evaluates spike patterns for their significance through the use of surrogates (Stella et al., 2022, eNeuro). We find STPs in 19/20 data sets (each 15 min long) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To account for the different behavioral states during the task, we performed a quasi-time-resolved analysis by dividing the data into behaviorally relevant epochs. The STPs that occur in the various epochs are specific to the behavioral context, in terms of the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, we conclude that neurons multiplex their cell-assembly membership. In a related study, we model these findings with networks of embedded synfire chains (Kleinjohann et al., 2022, bioRxiv 2022.08.02.502431).

Seminar · Neuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.

Seminar · Neuroscience · Recording

Cortical seizure mechanisms: insights from calcium, glutamate and GABA imaging

Dimitri Kullmann
University College London
Jan 17, 2023

Focal neocortical epilepsy is associated with intermittent brief population discharges (interictal spikes), which resemble the sentinel spikes that often occur at the onset of seizures. Why interictal spikes self-terminate whilst seizures persist and propagate is incompletely understood, but is likely to relate to the intermittent collapse of feed-forward GABAergic inhibition. Inhibition could fail through multiple mechanisms, including (i) an attenuation or even reversal of the driving force for chloride in postsynaptic neurons because of intense activation of GABAA receptors, (ii) an elevation of potassium secondary to chloride influx, leading to depolarization of neurons, or (iii) insufficient GABA release from interneurons. I shall describe the results of experiments using fluorescence imaging of calcium, glutamate or GABA in awake rodent models of neocortical epileptiform activity. Interictal spikes were accompanied by brief glutamate transients which were maximal at the initiation site and rapidly propagated centrifugally. GABA transients lasted longer than glutamate transients and were maximal ~1.5 mm from the focus. Prior to seizure initiation GABA transients were attenuated, whilst glutamate transients increased, consistent with a progressive failure of local inhibitory restraint. As seizures increased in frequency, there was a gradual increase in the spatial extent of the glutamate transients associated with interictal spikes. Neurotransmitter imaging thus reveals a progressive collapse of an annulus of feed-forward GABA release, allowing runaway recruitment of excitatory neurons as a fundamental mechanism underlying the escape of seizures from local inhibitory restraint.

Seminar · Neuroscience · Recording

Universal function approximation in balanced spiking networks through convex-concave boundary composition

W. F. Podlaski
Champalimaud
Nov 9, 2022

The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.

Seminar · Neuroscience

Setting network states via the dynamics of action potential generation

Susanne Schreiber
Humboldt University Berlin, Germany
Oct 4, 2022

To understand neural computation and the dynamics in the brain, we usually focus on the connectivity among neurons. In contrast, the properties of single neurons are often thought to be negligible, at least as far as the activity of networks is concerned. In this talk, I will contradict this notion and demonstrate how the biophysics of action-potential generation can have a decisive impact on network behaviour. Our recent theoretical work shows that, among regularly firing neurons, the somewhat unattended homoclinic type (characterized by a spike onset via a saddle homoclinic orbit bifurcation) particularly stands out: First, spikes of this type foster specific network states - synchronization in inhibitory and splayed-out/frustrated states in excitatory networks. Second, homoclinic spikes can easily be induced by changes in a variety of physiological parameters (like temperature, extracellular potassium, or dendritic morphology). As a consequence, such parameter changes can even induce switches in network states, solely based on a modification of cellular voltage dynamics. I will provide first experimental evidence and discuss functional consequences of homoclinic spikes for the design of efficient pattern-generating motor circuits in insects as well as for mammalian pathologies like febrile seizures. Our analysis predicts an interesting role for homoclinic action potentials as an integral part of brain dynamics in both health and disease.

Seminar · Neuroscience · Recording

A transcriptomic axis predicts state modulation of cortical interneurons

Stephane Bugeon
Harris & Carandini's lab, UCL
Apr 26, 2022

Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses, with stimuli suppressing cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states have less axon in layer 1, narrower spikes, lower input resistance and weaker adaptation, as determined in vitro, and express more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.

Seminar · Neuroscience · Recording

Transcriptional adaptation couples past experience and future sensory responses

Tatsuya Tsukahara
Datta lab, Harvard Medical School
Apr 26, 2022

Animals traversing different environments encounter both stable background stimuli and novel cues, which are generally thought to be detected by primary sensory neurons and then distinguished by downstream brain circuits. Sensory adaptation is a neural mechanism that filters out background by minimizing responses to stable sensory stimuli, and it is a fundamental feature of sensory systems. Adaptation over relatively fast timescales (milliseconds to minutes) has been reported in many sensory systems. However, adaptation to persistent environmental stimuli over longer timescales (hours to days) has been largely unexplored, even though these timescales are ethologically important, since animals typically stay in one environment for hours. I will show that each of the ~1,000 olfactory sensory neuron (OSN) subtypes in the mouse harbors a distinct transcriptome whose content is precisely determined by interactions between its odorant receptor and the environment. This transcriptional variation is systematically organized to support sensory adaptation: expression levels of many genes relevant to transforming odors into spikes continuously vary across OSN subtypes, dynamically adjust to new environments over hours, and accurately predict acute OSN-specific odor responses. The sensory periphery therefore separates salient signals from predictable background via a transcriptional mechanism whose moment-to-moment state reflects the past and constrains the future; these findings suggest a general model in which structured transcriptional variation within a cell type reflects individual experience.

Seminar · Neuroscience · Recording

Population coding in the cerebellum: a machine learning perspective

Reza Shadmehr
Johns Hopkins School of Medicine
Apr 5, 2022

The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus, complex spikes not only act as a teaching signal for a P-cell; through complex-spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that grouping P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information to the output-layer neuron (DCN) that was responsible for the error, as well as to the hidden-layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.

Seminar · Neuroscience · Recording

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Robert Guetig
Charité – Universitätsmedizin Berlin & BIH
Mar 8, 2022

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on the slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes, and that this increase in capacity can be traded for greater robustness to input noise, such as spike-time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
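
As background for the tempotron framework this abstract builds on, here is a minimal sketch of the classic static-synapse tempotron decision rule (fire vs. no fire on a spatiotemporal spike pattern), written from my reading of Gütig & Sompolinsky (2006) rather than the speaker's code; the kernel parameters and weights are illustrative assumptions.

```python
import numpy as np

def psp(t, tau_m=15.0, tau_s=3.75, v0=2.12):
    """Causal double-exponential PSP kernel, normalized so its peak is ~1."""
    return np.where(t > 0, v0 * (np.exp(-t / tau_m) - np.exp(-t / tau_s)), 0.0)

def tempotron_fires(afferent_spikes, weights, thresh=1.0,
                    t_grid=np.linspace(0.0, 100.0, 1001)):
    """Binary decision: does the summed, weighted PSP trace cross threshold?"""
    v = np.zeros_like(t_grid)
    for w, times in zip(weights, afferent_spikes):
        for t_i in times:
            v += w * psp(t_grid - t_i)
    return v.max() >= thresh

# Three afferents, each with a static scalar weight; in the talk's variants
# the efficacy would instead evolve within the trial (dynamic synapses).
pattern = [[10.0, 40.0], [12.0], [50.0]]
print(tempotron_fires(pattern, weights=[0.6, 0.5, -0.3]))  # True: pattern fires
```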

Seminar · Neuroscience · Recording

Metabolic spikes: from rogue electrons to Parkinson's

Chaitanya Chintaluri
Vogels Lab, IST Austria
Feb 22, 2022

Conventionally, neurons are thought of as cellular units that process synaptic inputs into output spikes. However, it is well known that neurons can also spike spontaneously and display a rich repertoire of firing properties with no apparent functional relevance, e.g., in in vitro cortical slice preparations. In this talk, I will propose a hypothesis according to which intrinsic excitability in neurons may be a survival mechanism to minimize toxic byproducts of the cell’s energy metabolism. In neurons, this toxicity can arise when mitochondrial ATP production stalls due to limited ADP. Under these conditions, electrons deviate from the electron transport chain to produce reactive oxygen species, disrupting many cellular processes and challenging cell survival. To mitigate this, neurons may engage in ADP-producing metabolic spikes. I will explore the validity of this hypothesis using computational models that illustrate the implications of synaptic and metabolic spiking, especially in the context of substantia nigra pars compacta dopaminergic neurons and their degeneration in Parkinson's disease.

Seminar · Neuroscience · Recording

NMC4 Short Talk: A theory for the population rate of adapting neurons disambiguates mean vs. variance-driven dynamics and explains log-normal response statistics

Laureline Logiaco (she/her)
Columbia University
Dec 1, 2021

Recently, the field of computational neuroscience has seen an explosion of the use of trained recurrent network models (RNNs) to model patterns of neural activity. These RNN models are typically characterized by tuned recurrent interactions between rate 'units' whose dynamics are governed by smooth, continuous differential equations. However, the response of biological single neurons is better described by all-or-none events - spikes - that are triggered in response to the processing of their synaptic input by the complex dynamics of their membrane. One line of research has attempted to resolve this discrepancy by linking the average firing probability of a population of simplified spiking neuron models to rate dynamics similar to those used for RNN units. However, challenges remain to account for complex temporal dependencies in the biological single neuron response and for the heterogeneity of synaptic input across the population. Here, we make progress by showing how to derive dynamic rate equations for a population of spiking neurons with multi-timescale adaptation properties - as this was shown to accurately model the response of biological neurons - while they receive independent time-varying inputs, leading to plausible asynchronous activity in the network. The resulting rate equations yield an insightful segregation of the population's response into dynamics that are driven by the mean signal received by the neural population, and dynamics driven by the variance of the input across neurons, with respective timescales that are in agreement with slice experiments. Further, these equations explain how input variability can shape log-normal instantaneous rate distributions across neurons, as observed in vivo. Our results help interpret properties of the neural population response and open the way to investigating whether the more biologically plausible and dynamically complex rate model we derive could provide useful inductive biases if used in an RNN to solve specific tasks.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule

Patricia Rubisch (she/her)
University of Edinburgh
Dec 1, 2021

Spike-Timing-Dependent Plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian, and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate those distinct kernel types as well as intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics across different neuron models, which allows for the comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependency experiments, and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may affect the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it can be combined with more complex, biophysical neuron models, yet is computationally efficient enough to be used in network simulations. It therefore offers the possibility of exploring, in future work, the functional role of the different kernel types and of electrophysiological differences in heterogeneous networks.
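
For reference, the pair-based STDP kernel that models like the VDP generalize can be written in a few lines; this is the textbook antisymmetric (Hebbian) form with illustrative parameters, not the VDP rule itself.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair; dt = t_post - t_pre in ms.
    Causal pairings (pre before post, dt > 0) give LTP, acausal give LTD."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

# Sampling the kernel reproduces the classic antisymmetric STDP window;
# flipping the sign of both amplitudes would give an anti-Hebbian kernel.
print(stdp_dw([-30.0, -10.0, 10.0, 30.0]))
```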

Seminar · Neuroscience

Multi-scale synaptic analysis for psychiatric/emotional disorders

Akiko Hayashi-Takagi
RIKEN CBS
Jun 30, 2021

Dysregulation of emotional processing and its integration with cognitive functions are central features of many mental/emotional disorders associated with both externalizing problems (aggressive, antisocial behaviors) and internalizing problems (anxiety, depression). As Dr. Joseph LeDoux, our invited speaker of this program, wrote in his famous book “Synaptic Self: How Our Brains Become Who We Are”, the brain’s synapses are the channels through which we think, act, imagine, feel, and remember. Synapses encode the essence of personality, enabling each of us to function as a distinctive, integrated individual from moment to moment. Thus, exploring the functioning of synapses leads to an understanding of the (patho)physiological mechanisms of our brain. In this context, we have investigated the pathophysiology of psychiatric disorders, with particular emphasis on synaptic function in mouse models of various psychiatric disorders such as schizophrenia, autism, depression, and PTSD. Our current interest is how synaptic inputs are integrated to generate the action potential. The spatiotemporal organization of neuronal firing is crucial for information processing, but how the thousands of inputs to dendritic spines drive firing remains a central question in neuroscience. We identified a distinct pattern of synaptic integration in the disease-related models, in which extra-large (XL) spines generate NMDA spikes within these spines, which was sufficient to drive neuronal firing. We observed, experimentally and theoretically, that XL spines correlated negatively with working memory. Our work offers a new concept for dendritic computation and network dynamics, and may prompt a substantial reassessment within psychiatric research. The second half of my talk concerns the development of a novel synaptic tool. No matter how beautifully we can visualize spine morphology and how accurately we can quantify synaptic integration, the links between synapses and brain function remain correlational. To probe the causal relationship between synapses and brain function, we established AS-PaRac1, which is unique in that it can specifically label and manipulate recently potentiated dendritic spines (Hayashi-Takagi et al., 2015, Nature). Using AS-PaRac1, we developed activity-dependent simultaneous labeling of presynaptic boutons and potentiated spines to establish “functional connectomics” at synaptic resolution. Applying this new imaging method to PTSD model mice, we identified a completely new functional neural circuit (brain region A→B→C) with a very strong S/N in the PTSD model mice. This novel “functional connectomics” tool and its photo-manipulation could open up new areas of emotional/psychiatric research and, by extension, shed light on the neural networks that determine who we are.

Seminar · Neuroscience · Recording

Prof. Humphries reads from "The Spike" 📖

Mark Humphries
University of Nottingham
Jun 7, 2021

We see the last cookie in the box and think, can I take that? We reach a hand out. In the 2.1 seconds that this impulse travels through our brain, billions of neurons communicate with one another, sending blips of voltage through our sensory and motor regions. Neuroscientists call these blips “spikes.” Spikes enable us to do everything: talk, eat, run, see, plan, and decide. In The Spike, Mark Humphries takes readers on the epic journey of a spike through a single, brief reaction. In vivid language, Humphries tells the story of what happens in our brain, what we know about spikes, and what we still have left to understand about them. Drawing on decades of research in neuroscience, Humphries explores how spikes are born, how they are transmitted, and how they lead us to action. He dives into previously unanswered mysteries: Why are most neurons silent? What causes neurons to fire spikes spontaneously, without input from other neurons or the outside world? Why do most spikes fail to reach any destination? Humphries presents a new vision of the brain, one where fundamental computations are carried out by spontaneous spikes that predict what will happen in the world, helping us to perceive, decide, and react quickly enough for our survival. Traversing neuroscience’s expansive terrain, The Spike follows a single electrical response to illuminate how our extraordinary brains work.

Seminar · Neuroscience · Recording

Inhibitory neural circuit mechanisms underlying neural coding of sensory information in the neocortex

Jeehyun Kwag
Korea University
Jan 28, 2021

Neural codes, such as temporal codes (precisely timed spikes) and rate codes (instantaneous spike firing rates), are believed to be used in encoding sensory information into the spike trains of cortical neurons. Temporal and rate codes co-exist in the spike train, and such multiplexed neural-code-carrying spike trains have been shown to be spatially synchronized across multiple neurons in different cortical layers during sensory information processing. Inhibition is suggested to promote such synchronization, but it is unclear whether distinct subtypes of interneurons make different contributions to the synchronization of multiplexed neural codes. To test this, in vivo single-unit recordings from barrel cortex were combined with optogenetic manipulations to determine the contributions of parvalbumin (PV)- and somatostatin (SST)-positive interneurons to the synchronization of precisely timed spike sequences. We found that PV interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are low (<12 Hz), whereas SST interneurons preferentially promote the synchronization of spike times when instantaneous firing rates are high (>12 Hz). Furthermore, using a computational model, we demonstrate that these effects can be explained by PV and SST interneurons contributing preferentially to feedforward and feedback inhibition, respectively. Overall, these results show that PV and SST interneurons have distinct frequency (rate code)-selective roles in dynamically gating the synchronization of spike times (temporal code) through preferential recruitment of feedforward and feedback inhibitory circuit motifs. The inhibitory neural circuit mechanisms we uncovered here may have critical roles in regulating neural code-based somatosensory information processing in the neocortex.

Seminar · Neuroscience · Recording

Retrieval spikes: a dendritic mechanism for retrieval-dependent memory consolidation

Erez Geron
NYU
Dec 15, 2020

Seminar · Neuroscience

Targeting aberrant dendritic integration to treat cognitive comorbidities of epilepsy

Heinz Beck
Institute for Experimental Epileptology and Cognition
Nov 17, 2020

Memory deficits are a debilitating symptom of epilepsy, but little is known about the mechanisms underlying these cognitive deficits. Here, we describe a Na+ channel-dependent mechanism underlying altered hippocampal dendritic integration, degraded place coding, and deficits in spatial memory. Two-photon glutamate uncaging experiments revealed that the mechanisms constraining the generation of Na+ spikes in first-order dendrites of hippocampal pyramidal cells are profoundly degraded in experimental epilepsy. This phenomenon was reversed by selectively blocking Nav1.3 sodium channels. In vivo two-photon imaging revealed that hippocampal spatial representations were less precise in epileptic mice. Blocking Nav1.3 channels significantly improved the precision of spatial coding and reversed hippocampal memory deficits. Thus, a dendritic channelopathy may underlie cognitive deficits in epilepsy, and targeting it pharmacologically may constitute a new avenue to enhance cognition.

Seminar · Neuroscience · Recording

Learning Neurobiology with electric fish

Angel Caputi, MD, PhD
Profesor Titular de Investigación, Departamento de Neurociencias Integrativas y Computacionales
Nov 15, 2020

Electric Gymnotiform fish live in muddy, shallow waters near the shore, hiding in the dense filamentous roots of floating plants such as Eichornia crassipes (“camalote”). They explore their surroundings using a series of electric pulses that serve as a self-emitted carrier of electrosensory signals. This carrier propagates at the speed of light through this spongiform habitat and is barely sensed by the lateral line of predators and prey. The emitted field polarizes the surroundings according to their difference in impedance with the water, which in turn modifies the profile of transcutaneous currents, considered an electrosensory image. Using this system, pulse Gymnotiforms create an electrosensory bubble in which an object’s location, impedance, size, and other characteristics are discriminated and probably recognized. Although consciousness is still not well proven, cognitive functions such as volition, attention, and path integration have been demonstrated. Here I will summarize different aspects of the electromotor-electrosensory loop of pulse Gymnotiforms. First, I will address how objects are polarized by a stereotyped but temporospatially complex electric field, consisting of brief pulses emitted at regular intervals. This relies on complex electric organs quasi-periodically activated through an electromotor coordination system by a pacemaker in the medulla. Second, I will deal with the imaging mechanisms of pulse gymnotiform fish and the presence of two regions in the electrosensory field: a rostral region, where the field time course is coherent and the field vector direction is constant throughout the electric organ discharge, and a lateral region, where the field time course is site-specific and the field vector direction describes a stereotyped 3D trajectory. Third, I will describe the electrosensory mosaic and its characteristics. Receptors and primary afferents correspond one to one, with subtypes responding optimally to the time course of the self-generated pulse with a characteristic train of spikes. While objects polarized in the rostral region project their electric images onto the perioral region, where electrosensory receptor density, subtypes, and central projections are maximal, the images of objects at the side recruit a single type of scattered receptor. Therefore, the rostral mosaic has been likened to an electrosensory fovea and its receptive field is referred to as the foveal field; the rest of the mosaic and field are referred to as peripheral. Finally, I will describe ongoing work on early processing structures. I will try to present an integrated view, including anatomical and functional data obtained in vitro, in acute experiments, and from unitary recordings in freely moving fish. We have recently shown that these fish track allo-generated fields, and the virtual fields generated by nearby objects in the presence of self-generated fields, to explore the nearby environment. These data, together with the presence of a multimodal receptor mosaic at the cutaneous surface, particularly surrounding the mouth, and an important role of proprioception in early sensory processing, suggest the hypothesis that the active electrosensory system is part of a multimodal haptic sense.

Seminar · Neuroscience · Recording

Fast and deep neuromorphic learning with time-to-first-spike coding

Julian Goeltz
Universität Bern
Aug 31, 2020

Engineered pattern-recognition systems strive for short time-to-solution and low energy-to-solution characteristics. This represents one of the main driving forces behind the development of neuromorphic devices. For both them and their biological archetypes, this corresponds to using as few spikes as early as possible. The concept of few and early spikes is used as the founding principle in the time-to-first-spike coding scheme. Within this framework, we have developed a spike-timing-based learning algorithm, which we used to train neuronal networks on the mixed-signal neuromorphic platform BrainScaleS-2. We derive, from first principles, error-backpropagation-based learning in networks of leaky integrate-and-fire (LIF) neurons relying only on spike times, for specific configurations of neuronal and synaptic time constants. We explicitly examine applicability to neuromorphic substrates by studying the effects of reduced weight precision and range, as well as of parameter noise. We demonstrate the feasibility of our approach on continuous and discrete data spaces, both in software simulations and on BrainScaleS-2. This narrows the gap between previous models of first-spike-time learning and biological neuronal dynamics and paves the way for fast and energy-efficient neuromorphic applications.
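
To make the coding scheme concrete, here is a minimal sketch (my illustration, not the BrainScaleS-2 implementation) of time-to-first-spike encoding and readout, assuming a simple linear intensity-to-latency map.

```python
import numpy as np

def ttfs_encode(x, t_max=10.0):
    """Map normalized intensities x in [0, 1] to spike latencies:
    stronger inputs fire earlier; zero inputs never fire."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    t = t_max * (1.0 - x)
    t[x == 0.0] = np.inf  # no drive -> no spike
    return t

def ttfs_decode(out_times):
    """The earliest-spiking output unit determines the result."""
    return int(np.argmin(out_times))

print(ttfs_encode([0.0, 0.3, 1.0]))     # [inf 7. 0.]
print(ttfs_decode([4.2, np.inf, 3.1]))  # unit 2 spiked first
```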

Seminar · Neuroscience · Recording

Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits

Richard Naud
University of Ottawa
Aug 31, 2020

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well-established that it depends on pre and postsynaptic activity. However, models that rely solely on pre and postsynaptic activity for synaptic changes have, to date, not been able to account for learning complex tasks that demand hierarchical networks. Here, we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then neurons higher in the hierarchy can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites, and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses, and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
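
A toy sketch of the core idea as I read it (in the spirit of burst-dependent rules; not the authors' code, and all names and constants are illustrative): a postsynaptic burst potentiates recently active synapses while an isolated spike depresses them, so feedback that shifts the burst fraction steers plasticity at lower levels.

```python
import numpy as np

def burst_dependent_update(w, presyn_trace, post_event, eta=0.05):
    """LTP on postsynaptic bursts, LTD on single spikes, gated by a
    presynaptic eligibility trace (recent input activity per synapse)."""
    sign = 1.0 if post_event == "burst" else -1.0
    return w + eta * sign * presyn_trace

rng = np.random.default_rng(0)
w = rng.normal(0.5, 0.1, size=8)   # feedforward weights
trace = rng.random(8)              # recent presynaptic activity

w = burst_dependent_update(w, trace, "burst")   # top-down feedback: strengthen
w = burst_dependent_update(w, trace, "single")  # top-down feedback: weaken
```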

Seminar · Neuroscience · Recording

Back-propagation in spiking neural networks

Timothee Masquelier
Centre national de la recherche scientifique, CNRS | Toulouse
Aug 31, 2020

Back-propagation is a powerful supervised learning algorithm in artificial neural networks, because it solves the credit assignment problem (essentially: what should the hidden layers do?). This algorithm has led to the deep learning revolution. But unfortunately, back-propagation cannot be used directly in spiking neural networks (SNNs). Indeed, it requires differentiable activation functions, whereas spikes are all-or-none events which cause discontinuities. Here we present two strategies to overcome this problem. The first is to use a so-called 'surrogate gradient', that is, to approximate the derivative of the threshold function with the derivative of a sigmoid. We will present some applications of this method to time series processing (audio, internet traffic, EEG). The second concerns a specific class of SNNs, which process static inputs using latency coding with at most one spike per neuron. Using approximations, we derived a latency-based back-propagation rule for this sort of network, called S4NN, and applied it to image classification.
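
A minimal sketch of the surrogate-gradient trick (my illustration in PyTorch, not the speaker's code): the forward pass keeps the all-or-none threshold, while the backward pass substitutes the derivative of a sigmoid.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()              # all-or-none spike: Heaviside(v)

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(v)              # surrogate: d/dv sigmoid(v)
        return grad_output * sig * (1 - sig)

spike = SurrogateSpike.apply

# Gradients now flow through the (non-differentiable) spike nonlinearity.
v = torch.randn(5, requires_grad=True)     # membrane potential minus threshold
spike(v).sum().backward()
print(v.grad)
```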

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
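
To illustrate the two ingredients named here, a minimal NumPy sketch assuming the standard alpha kernel ε(t) = (t/τ)·e^(1−t/τ) and a first-spike-wins readout; the weights, threshold, and time grid are illustrative, not from the paper.

```python
import numpy as np

def alpha_psp(t, tau=1.0):
    """Alpha synaptic kernel, peaking at t = tau with amplitude 1."""
    return np.where(t > 0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def first_spike_time(in_times, weights, thresh=1.0, tau=1.0,
                     t_grid=np.linspace(0.0, 10.0, 2001)):
    """Earliest threshold crossing of the summed PSPs; inf if none."""
    v = sum(w * alpha_psp(t_grid - t_i, tau)
            for w, t_i in zip(weights, in_times))
    above = np.nonzero(v >= thresh)[0]
    return t_grid[above[0]] if above.size else np.inf

# Two output units; the class label is whichever unit spikes first.
in_times = [0.2, 0.5, 1.0]
t_a = first_spike_time(in_times, weights=[0.6, 0.5, 0.4])
t_b = first_spike_time(in_times, weights=[0.2, 0.2, 0.2])
print("predicted class:", "A" if t_a < t_b else "B")
```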

Seminar · Neuroscience

Synaptic, cellular, and circuit mechanisms for learning: insights from electric fish

Nate Sawtell
Columbia University
Jul 5, 2020

Understanding learning in neural circuits requires answering a number of difficult questions: (1) What is the computation being performed and what is its behavioral significance? (2) What are the inputs required for the computation and how are they represented at the level of spikes? (3) What are the sites and rules governing plasticity, i.e. how do pre and post-synaptic activity patterns produce persistent changes in synaptic strength? (4) How does network connectivity and dynamics shape the computation being performed? I will discuss joint experimental and theoretical work addressing these questions in the context of the electrosensory lobe (ELL) of weakly electric mormyrid fish.

Seminar · Neuroscience · Recording

Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits

Blake Richards
McGill University
Apr 2, 2020

Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well-established that it depends on pre and postsynaptic activity. However, models that rely solely on pre and postsynaptic activity for synaptic changes have, to date, not been able to account for learning complex tasks that demand hierarchical networks. Here, we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then neurons higher in the hierarchy can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites, and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses, and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.

ePoster

Migraine mutation of a Na+ channel induces a switch in excitability type and energetically expensive spikes in an experimentally-constrained model of fast-spiking neurons

Leonardo Preuss, Jan-Hendrik Schleimer, Louisiane Lemaire, Susanne Schreiber

Bernstein Conference 2024

ePoster

Why spikes? - Analyzing event-based and analog controllers

Luke Eilers, Jean-Pascal Pfister

Bernstein Conference 2024

ePoster

Why spikes? A synaptic transmission perspective

Jonas Stapmanns, Jean-Pascal Pfister

Bernstein Conference 2024

ePoster

Turning spikes to space through plastic synaptic dynamics

COSYNE 2022

ePoster

On the benefits of analog spikes: an information efficiency perspective

Jonas Stapmanns, Jean-Pascal Pfister

COSYNE 2025

ePoster

Contribution of dendritic Ca- and Na-spikes to burst firing in hippocampal place cells

Bence Fogel, Balazs B Ujfalussy

COSYNE 2025

ePoster

Cholinergic regulation of dendritic Ca2+ spikes controls firing mode of hippocampal CA3 pyramidal neurons

Noémi Kis, Balázs Lükő, Judit Herédi, Ádám Magó, Bela Erlinghagen, Mahboubeh Ahmadi, Snezana Raus Balind, Mátyás Irás, Balázs B. Ujfalussy, Judit K. Makara

FENS Forum 2024

ePoster

Emergence of NMDA-spikes: Unraveling network dynamics in pyramidal neurons

Michael Dick, Joshua Böttcher, David Dahmen, Willem Wybo, Abigail Morrison

FENS Forum 2024

ePoster

Encoding of locomotor signals by Purkinje cell complex spikes

Ana Goncalves, Francesco Costantino Costabile, Hugo Gravato Marques, Alice Geminiani, Jorge Ramirez-Buritica, Megan Rose Carey

FENS Forum 2024

ePoster

Identifying overlapping spikes in neural activity with unsupervised-subspace domain adaptation

Min-Ki Kim, Jeong-Woo Sohn

FENS Forum 2024