Membrane Potential
In vivo direct imaging of neuronal activity at high temporospatial resolution
Advanced noninvasive neuroimaging methods provide valuable information on brain function, but each involves a clear trade-off between temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) based on the blood-oxygenation-level-dependent (BOLD) effect offers good spatial resolution on the order of millimeters, but its temporal resolution is limited to the order of seconds by slow hemodynamic responses to neuronal activation, so it reports neuronal activity only indirectly. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but their spatial information is limited to centimeter scales. There has therefore been a longstanding demand for noninvasive brain imaging methods that detect neuronal activity at both high temporal and high spatial resolution. In this talk, I will introduce Direct Imaging of Neuronal Activity (DIANA), a novel MRI approach that dynamically images neuronal spiking activity with millisecond precision, achieved by a rapid 2D line-scan data acquisition scheme synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4T animal scanner during electrical whisker-pad stimulation. DIANA responses at millisecond temporal resolution correlated highly with neuronal spiking activity and also captured the sequential propagation of neuronal activity along the thalamocortical pathway of brain networks. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was sensitive to membrane potential-associated changes in tissue relaxation times, such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.
Dynamics of cortical circuits: underlying mechanisms and computational implications
A signature feature of cortical circuits is the irregularity of neuronal firing, which manifests itself in the high temporal variability of spiking and the broad distribution of firing rates. Theoretical work has shown that this feature emerges dynamically in network models if the coupling between cells is strong, i.e. if the mean number of synapses per neuron K is large and synaptic efficacy is of order 1/\sqrt{K}. However, the degree to which these models capture the mechanisms underlying neuronal firing in cortical circuits is not fully understood. Results have been derived using neuron models with current-based synapses, i.e. neglecting the dependence of synaptic current on the membrane potential, and an understanding of how irregular firing emerges in models with conductance-based synapses is still lacking. Moreover, at odds with the nonlinear responses to multiple stimuli observed in cortex, network models with strongly coupled cells respond linearly to inputs. In this talk, I will discuss the emergence of irregular firing and nonlinear response in networks of leaky integrate-and-fire neurons. First, I will show that, when synapses are conductance-based, irregular firing emerges if synaptic efficacy is of order 1/\log(K) and, unlike in current-based models, persists even under the large heterogeneity of connections that has been reported experimentally. I will then describe an analysis of neural responses as a function of coupling strength and show that, while a linear input-output relation is ubiquitous at strong coupling, nonlinear responses are prominent at moderate coupling. I will conclude by discussing experimental evidence for moderate coupling and loose balance in the mouse cortex.
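The 1/\sqrt{K} scaling argument can be illustrated with a toy calculation (not from the talk; all parameters are invented): with K excitatory and K inhibitory Poisson inputs at matched rates and synaptic efficacies of order 1/\sqrt{K}, the mean input cancels while its fluctuations remain of order one as K grows, which is what sustains irregular firing at strong coupling.

```python
import numpy as np

rng = np.random.default_rng(0)

def input_stats(K, j=1.0, g=1.0, rate=0.01, T=10_000):
    """Summed synaptic input per time bin from K excitatory and K
    inhibitory Poisson sources, with efficacies scaled as j/sqrt(K).
    Toy illustration; parameter values are arbitrary."""
    w = j / np.sqrt(K)
    exc = rng.poisson(rate, size=(T, K)).sum(axis=1) * w
    inh = rng.poisson(rate, size=(T, K)).sum(axis=1) * (g * w)
    total = exc - inh
    return total.mean(), total.std()

# Mean input stays near zero, and the fluctuation amplitude is
# K-independent, despite K growing by two orders of magnitude.
for K in (100, 1_000, 10_000):
    mu, sigma = input_stats(K)
    print(f"K={K:6d}  mean={mu:+.4f}  std={sigma:.4f}")
```

With current-based synapses the fluctuation amplitude here is j*sqrt(2*rate), independent of K; the talk's point is that conductance-based synapses change this scaling to 1/\log(K).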
Optimization at the Single Neuron Level: Prediction of Spike Sequences and Emergence of Synaptic Plasticity Mechanisms
Intelligent behavior depends on the brain’s ability to anticipate future events. However, the learning rules that enable neurons to predict and fire ahead of sensory inputs remain largely unknown. We propose a plasticity rule based on predictive processing, where the neuron learns a low-rank model of the synaptic input dynamics in its membrane potential. Neurons thereby amplify those synapses that maximally predict other synaptic inputs based on their temporal relations, providing a solution to an optimization problem that can be implemented at the single-neuron level using only local information. Consequently, neurons learn sequences over long timescales and shift their spikes towards the first inputs in a sequence. We show that this mechanism can explain the development of anticipatory motion signaling and recall in the visual system. Furthermore, we demonstrate that the learning rule gives rise to several experimentally observed STDP (spike-timing-dependent plasticity) mechanisms. These findings suggest prediction as a guiding principle for orchestrating learning and synaptic plasticity in single neurons.
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization, including membrane potential fluctuations, single-neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., the network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
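The simplest version of a band-pass filter arising from interacting low- and high-pass components is the subthreshold impedance of a linearized membrane with one slow negative-feedback variable. The sketch below is illustrative only (it is not a model from the talk, and all parameter values are invented): the fast membrane acts as a low-pass filter, the slow feedback as a high-pass filter, and their product peaks at an intermediate frequency.

```python
import numpy as np

# Linearized membrane C dV/dt = -g_L*V - g_w*w + I(t),
# with slow feedback tau_w dw/dt = V - w.
# Illustrative parameters (ms-based units), not fitted to any data.
C, g_L, g_w, tau_w = 1.0, 0.1, 0.5, 100.0

freqs = np.linspace(0.1, 100.0, 2000)      # input frequency in Hz
omega = 2 * np.pi * freqs / 1000.0         # rad/ms
# Impedance: the feedback term g_w / (1 + i*omega*tau_w) attenuates
# low frequencies (high-pass), while i*omega*C attenuates high
# frequencies (low-pass); their interaction yields a band-pass peak.
Z = 1.0 / (1j * omega * C + g_L + g_w / (1.0 + 1j * omega * tau_w))
f_res = freqs[np.argmax(np.abs(Z))]
print(f"resonance frequency ~ {f_res:.1f} Hz")
```

Setting g_w = 0 removes the feedback and the peak of |Z| moves to zero frequency, i.e. the resonance disappears, which is the sense in which the band-pass filter is built from the two simpler filters.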
MBI Webinar on preclinical research into brain tumours and neurodegenerative disorders
WEBINAR 1: Breaking the barrier: Using focused ultrasound for the development of targeted therapies for brain tumours, presented by Dr Ekaterina (Caty) Salimova, Monash Biomedical Imaging. Glioblastoma multiforme (GBM), a form of brain cancer, is aggressive and difficult to treat because systemic therapies are hindered by the blood-brain barrier (BBB). Focused ultrasound (FUS), a non-invasive technique that can induce targeted, temporary disruption of the BBB, is a promising tool for improving GBM treatments. In this webinar, Dr Ekaterina Salimova will discuss the MRI-guided FUS modality at MBI and her research to develop novel targeted therapies for brain tumours. Dr Ekaterina (Caty) Salimova is a Research Fellow in the Preclinical Team at Monash Biomedical Imaging. Her research interests include imaging cardiovascular disease and MRI-guided focused ultrasound for investigating new therapeutic targets in neuro-oncology.
WEBINAR 2: Disposition of the Kv1.3 inhibitory peptide HsTX1[R14A], a novel attenuator of neuroinflammation, presented by Sanjeevini Babu Reddiar, Monash Institute of Pharmaceutical Sciences. The voltage-gated potassium channel Kv1.3 in microglia regulates membrane potential and pro-inflammatory functions, and non-selective blockade of Kv1.3 has produced anti-inflammatory effects and disease improvement in animal models of Alzheimer’s and Parkinson’s diseases. Because disease-modifying treatments for neurodegenerative disorders are lacking, CNS-bioavailable inhibitors specific to pro-inflammatory microglial processes are urgently needed. In this webinar, PhD candidate Ms Sanju Reddiar will discuss the synthesis and biodistribution of a Kv1.3-inhibitory peptide using a [64Cu]Cu-DOTA-labelled conjugate. Sanjeevini Babu Reddiar is a PhD student at the Monash Institute of Pharmaceutical Sciences. She is working on a project identifying the factors governing the brain disposition and blood-brain barrier permeability of a Kv1.3-blocking peptide.
How does the metabolically-expensive mammalian brain adapt to food scarcity?
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. I addressed this in the visual cortex of awake mice using whole-cell recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. I found that food restriction reduced synaptic ATP usage by 29% through a decrease in AMPA receptor conductance. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at rates similar to controls, but spent less ATP on the underlying excitatory currents. This energy-saving strategy had a cost: it amplified the variability of visually evoked subthreshold responses, leading to a 32% broadening of orientation tuning and impaired fine visual discrimination. The loss of coding precision was associated with reduced levels of the fat mass-regulated hormone leptin, and precision was restored by exogenous leptin supplementation. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli
Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons, the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for studying the assumption of linear spatial integration, yet because they are buried in the middle of the retina, it is challenging to measure their neural activity. Here, I present bipolar cell data in which I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells.
Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.
Event-based Backpropagation for Exact Gradients in Spiking Neural Networks
Gradient-based optimization powered by the backpropagation algorithm has proved to be the pivotal method for training non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise of efficient processing of real-world sensory data by communicating through discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp on the Yin-Yang and MNIST datasets, with either a spike time-based or voltage-based loss function, and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.
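The role of the implicit function theorem can be seen in the simplest possible setting. The sketch below is illustrative only (a single synapse with an invented closed-form voltage, not the EventProp algorithm): the spike time t* is defined implicitly by V(t*; w) = theta, so differentiating that identity gives dt*/dw = -(dV/dw)/(dV/dt) at t*, which agrees with a finite-difference estimate without any smoothing of the hard threshold.

```python
import numpy as np

# Hypothetical double-exponential PSP with closed form; all parameter
# values are invented for illustration.
tau_m, tau_s, theta, w = 10.0, 5.0, 0.5, 4.0

def V(t, w):
    return w * (np.exp(-t / tau_m) - np.exp(-t / tau_s))

def spike_time(w, lo=0.0, hi=6.0, iters=60):
    """First threshold crossing of V (monotone on [lo, hi]), by bisection."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if V(mid, w) < theta else (lo, mid)
    return 0.5 * (lo + hi)

t_star = spike_time(w)
# Implicit function theorem at the crossing V(t*; w) = theta:
dV_dw = np.exp(-t_star / tau_m) - np.exp(-t_star / tau_s)
dV_dt = w * (-np.exp(-t_star / tau_m) / tau_m
             + np.exp(-t_star / tau_s) / tau_s)
grad = -dV_dw / dV_dt

# Cross-check against central finite differences on the spike time.
eps = 1e-5
fd = (spike_time(w + eps) - spike_time(w - eps)) / (2 * eps)
print(f"t* = {t_star:.4f}, dt*/dw = {grad:.4f}, finite diff = {fd:.4f}")
```

The gradient is negative (a stronger synapse makes the neuron fire earlier) and degenerates exactly when dV/dt = 0 at threshold, i.e. when the voltage is tangent to the threshold, which is the null set mentioned in the abstract.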
Neocortex saves energy by reducing coding precision during food scarcity
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. We addressed this in the visual cortex of awake mice using whole-cell patch clamp recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. We found that food restriction resulted in energy savings through a decrease in AMPA receptor conductance, reducing synaptic ATP usage by 29%. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at rates similar to controls, but spent less ATP on the underlying excitatory currents. This energy-saving strategy had a cost: it amplified the variability of visually evoked subthreshold responses, leading to a 32% broadening of orientation tuning and impaired fine visual discrimination. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
New tools for monitoring & manipulating cellular function
Dr. Looger will discuss reagents for tracking Ca2+, membrane potential ("voltage"), glutamate, GABA, acetylcholine, serotonin, dopamine, and other signals. He will also cover optogenetic tools and methods for correlative light/electron microscopy. His lab makes all tools freely available to everyone and works to get them into the hands of researchers with limited resources.
Neural control of motor actions: from whole-brain landscape to millisecond dynamics
Animals control motor actions at multiple timescales. We use larval zebrafish and advanced optical microscopy to understand the underlying neural mechanisms. First, we examined the mechanisms of short-term motor learning using whole-brain neural activity imaging. We found that the 5-HT system integrates the sensory outcomes of actions and determines future motor patterns. Second, we established a method for recording spiking activity and membrane potential from a population of neurons during behavior. We identified putative motor command signals and internal copy signals that encode millisecond-scale details of the swimming dynamics. These results demonstrate that larval zebrafish enable a holistic and mechanistic understanding of the neural basis of motor control in vertebrate brains.
Cellular mechanisms behind stimulus evoked quenching of variability
A wealth of experimental studies shows that the trial-to-trial variability of neuronal activity is quenched during stimulus-evoked responses. This fact has helped ground a popular view that the variability of spiking activity can be decomposed into two components. The first is due to irregular spike timing conditioned on the firing rate of a neuron (i.e., a Poisson process), and the second is the trial-to-trial variability of the firing rate itself. Quenching of the variability of the overall response is assumed to reflect a suppression of firing rate variability. Network models have explained this phenomenon through a variety of circuit mechanisms. However, in all cases, from the vantage of a neuron embedded within the network, quenching of its response variability is inherited from its synaptic input. We analyze in vivo whole-cell recordings from principal cells in layer (L) 2/3 of mouse visual cortex. While the variability of the membrane potential is quenched upon stimulation, the variability of the excitatory and inhibitory currents afferent to the neuron is amplified. This discord complicates the simple inheritance assumption that underpins network models of neuronal variability. We propose and validate an alternative (yet not mutually exclusive) mechanism for the quenching of neuronal variability. We show how an increase in synaptic conductance in the evoked state shunts the transfer of current to the membrane potential, formally decoupling changes in their trial-to-trial variability. The ubiquity of conductance-based neuronal transfer, combined with the simplicity of our model, makes this an appealing framework. In particular, it shows how the dependence of cellular properties upon neuronal state is a critical, yet often ignored, factor. Further, our mechanism does not require a decomposition of variability into spiking and firing rate components, thereby challenging a long-held view of neuronal activity.
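The shunting argument can be reproduced with a minimal passive-membrane simulation (an illustrative sketch, not the authors' model; all parameters are invented): for C dV/dt = -gV + noise, the stationary voltage variance is roughly sigma^2/(2gC), so raising the total conductance g in the "evoked" state can quench voltage variability even while the current fluctuations themselves grow.

```python
import numpy as np

rng = np.random.default_rng(1)

def vm_variance(g, sigma, C=1.0, dt=0.1, n_steps=200_000, burn=10_000):
    """Stationary membrane-potential variance of a passive membrane
    C dV/dt = -g*V + white-noise current, via Euler-Maruyama.
    Toy parameters chosen for illustration only."""
    V = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps + burn):
        V += dt * (-g * V) / C + (sigma / C) * np.sqrt(dt) * rng.standard_normal()
        if i >= burn:
            samples[i - burn] = V
    return samples.var()

# "Spontaneous" state: low total conductance, moderate current noise.
var_spont = vm_variance(g=0.05, sigma=1.0)
# "Evoked" state: conductance quadrupled and current noise *larger*,
# yet the shunt still reduces the voltage variance.
var_evoked = vm_variance(g=0.20, sigma=1.3)
print(f"Var(V) spontaneous ~ {var_spont:.2f}, evoked ~ {var_evoked:.2f}")
```

This is the decoupling in the abstract in its simplest form: the input-current variance increases from the first condition to the second, while the membrane-potential variance decreases, because the conductance sets the gain of the current-to-voltage transfer.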
Rapid State Changes Account for Apparent Brain and Behavior Variability
Neural and behavioral responses to sensory stimuli are notoriously variable from trial to trial. Does this mean the brain is inherently noisy or that we don’t completely understand the nature of the brain and behavior? Here we monitor the state of activity of the animal through videography of the face, including pupil and whisker movements, as well as walking, while also monitoring the ability of the animal to perform a difficult auditory or visual task. We find that the state of the animal is continuously changing and is never stable. The animal is constantly becoming more or less activated (aroused) on a second and subsecond scale. These changes in state are reflected in all of the neural systems we have measured, including cortical, thalamic, and neuromodulatory activity. Rapid changes in cortical activity are highly correlated with changes in neural responses to sensory stimuli and the ability of the animal to perform auditory or visual detection tasks. On the intracellular level, these changes in forebrain activity are associated with large changes in neuronal membrane potential and the nature of network activity (e.g. from slow rhythm generation to sustained activation and depolarization). Monitoring cholinergic and noradrenergic axonal activity reveals widespread correlations across the cortex. However, we suggest that a significant component of these rapid state changes arise from glutamatergic pathways (e.g. corticocortical or thalamocortical), owing to their rapidity. Understanding the neural mechanisms of state-dependent variations in brain and behavior promises to significantly “denoise” our understanding of the brain.
The influence of the membrane potential on inhibitory regulation of plasticity predictions and learned representations
Bernstein Conference 2024
Membrane potential up/down-states enhance synaptic transmission in the human neocortex – A framework for memory consolidation during slow wave sleep
FENS Forum 2024
Spontaneous and tonic action potential firing is sustained by membrane potential instabilities in peripheral sensory neurons
FENS Forum 2024