Threshold
Blood-brain barrier dysfunction in epilepsy: Time for translation
The neurovascular unit (NVU) consists of cerebral blood vessels, neurons, astrocytes, microglia, and pericytes. It plays a vital role in regulating blood flow and ensuring the proper functioning of neural circuits. Among other mechanisms, this is made possible by the blood-brain barrier (BBB), which acts as both a physical and functional barrier. Previous studies have shown that dysfunction of the BBB is common in most neurological disorders and is associated with neural dysfunction. Our studies have demonstrated that BBB dysfunction results in the transformation of astrocytes through transforming growth factor beta (TGFβ) signaling. This leads to activation of the innate neuroinflammatory system, changes in the extracellular matrix, and pathological plasticity. These changes ultimately result in dysfunction of the cortical circuit, a lowered seizure threshold, and spontaneous seizures. Blocking TGFβ signaling and its associated pro-inflammatory pathway prevents this cascade of events, reduces neuroinflammation, repairs BBB dysfunction, and prevents post-injury epilepsy, as shown in experimental rodents. To further understand and assess BBB integrity in human epilepsy, we developed a novel imaging technique that quantitatively measures BBB permeability. Our findings have confirmed that BBB dysfunction is common in patients with drug-resistant epilepsy and can assist in identifying the ictal-onset zone prior to surgery. Clinical studies are ongoing to explore the potential of targeting BBB dysfunction as a novel treatment approach and to investigate its role in drug resistance, the spread of seizures, and comorbidities associated with epilepsy.
Current and future trends in neuroimaging
With the advent of several different fMRI analysis tools and packages outside of the established ones (i.e., SPM, AFNI, and FSL), today's researcher may wonder what the best practices are for fMRI analysis. This talk will discuss some of the recent trends in neuroimaging, including design optimization and power analysis, standardized analysis pipelines such as fMRIPrep, and an overview of current recommendations for how to present neuroimaging results. Along the way we will discuss the balance between Type I and Type II errors with different correction mechanisms (e.g., Threshold-Free Cluster Enhancement and Equitable Thresholding and Clustering), as well as considerations for working with large open-access databases.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. They discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold, while still highlighting sub-threshold results, with opacity proportional to the strength of the effect. They use this to illustrate how there may in fact be more agreement between researchers than previously thought, using the NARPS dataset as an example. By adopting a continuous, "highlighted" approach, it becomes clear that the majority of effects are in the same location and in the same direction, compared to an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
Prox2+ and Runx3+ vagal sensory neurons regulate esophageal motility
Sensory neurons of the vagus nerve monitor distention and stretch in the gastrointestinal tract. We used genetically guided anatomical tracing, optogenetics, and electrophysiology to identify and characterize two vagal sensory neuronal subtypes expressing Prox2 and Runx3. We show that these neuronal subtypes innervate the esophagus, where they display regionalized innervation patterns. Electrophysiological analyses showed that both are low-threshold mechanoreceptors but possess different adaptation properties. Lastly, genetic ablation of Prox2 and Runx3 neurons demonstrated their essential roles in esophageal peristalsis and swallowing in freely behaving animals. Our work reveals the identity and function of the vagal neurons that provide mechanosensory feedback from the esophagus to the brain and could lead to better understanding and treatment of esophageal motility disorders.
A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures
The ‘quantification’ of perception is arguably both one of the most important and one of the most difficult aspects of the study of perception. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g., Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e., it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's parameters are self-tuned and thus do not need to be chosen manually using heuristics (e.g., the step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when the parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
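For contrast with ZOOM, the heuristic staircase baseline mentioned above can be sketched in a few lines. The simulated observer, step size, and trial count below are illustrative assumptions for the sketch, not part of ZOOM (whose implementation the authors provide separately):

```python
import math
import random

def observer(intensity, true_threshold=0.5, slope=10.0):
    """Simulated observer: probability of a correct response follows a
    logistic psychometric function (an illustrative assumption)."""
    p = 1.0 / (1.0 + math.exp(-slope * (intensity - true_threshold)))
    return random.random() < p

def staircase(n_trials=400, start=1.0, step=0.02):
    """Classic 1-up / 2-down staircase: intensity decreases after two
    consecutive correct responses and increases after each error,
    converging near the 70.7%-correct point of the psychometric curve."""
    intensity, correct_streak, direction, reversals = start, 0, -1, []
    for _ in range(n_trials):
        if observer(intensity):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> make it harder
                correct_streak = 0
                if direction == +1:            # direction flipped: a reversal
                    reversals.append(intensity)
                direction = -1
                intensity = max(0.0, intensity - step)
        else:                                   # one error -> make it easier
            correct_streak = 0
            if direction == -1:
                reversals.append(intensity)
            direction = +1
            intensity = min(1.0, intensity + step)
    # average the late reversal points as the threshold estimate
    return sum(reversals[-10:]) / len(reversals[-10:])

random.seed(0)
print(round(staircase(), 2))  # an estimate near the 70.7%-correct point
```

The step size here is exactly the kind of hand-chosen heuristic the abstract argues against: too large and the estimate oscillates, too small and convergence is slow.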
Private oxytocin supply and its receptors in the hypothalamus for social avoidance learning
Many animals live in complex social groups. To survive, it is essential to know whom to avoid and with whom to interact. Although naïve mice are naturally attracted to any adult conspecific, a single defeat experience can elicit social avoidance of the aggressor for days. The neural mechanisms underlying the switch from social approach to social avoidance remain incompletely understood. Here, we identify oxytocin neurons in the retrochiasmatic supraoptic nucleus (SOROXT) and oxytocin receptor (OXTR)-expressing cells in the anterior subdivision of the ventromedial hypothalamus, ventrolateral part (aVMHvlOXTR), as a key circuit motif for defeat-induced social avoidance learning. After defeat, aVMHvlOXTR cells drastically increase their responses to aggressor cues. This response change is functionally important, as optogenetic activation of aVMHvlOXTR cells elicits time-locked social avoidance of a benign social target, whereas inactivating the cells suppresses defeat-induced social avoidance. Furthermore, OXTR in the aVMHvl is itself essential for the behavioral change. Knocking out OXTR in the aVMHvl, or antagonizing the receptor during defeat but not during post-defeat social interaction, impairs defeat-induced social avoidance. aVMHvlOXTR receives its private supply of oxytocin from SOROXT cells. SOROXT is highly activated by the noxious somatosensory inputs associated with defeat. Oxytocin released from SOROXT depolarizes aVMHvlOXTR cells and facilitates their synaptic potentiation, and hence increases aVMHvlOXTR cell responses to aggressor cues. Ablating SOROXT cells impairs defeat-induced social avoidance learning, whereas activating the cells promotes social avoidance after a subthreshold defeat experience. Altogether, our study reveals an essential role of the SOROXT-aVMHvlOXTR circuit in defeat-induced social learning and highlights the importance of the hypothalamic oxytocin system in social ranking and its plasticity.
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has attempted to limit the focus on spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellant boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.
A neural mechanism for terminating decisions
The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are well established in association cortex, but the site and mechanism of termination are unknown. Here, we elucidate a mechanism for termination by neurons in the primate superior colliculus. We recorded simultaneously from neurons in lateral intraparietal cortex (LIP) and the superior colliculus (SC) while monkeys made perceptual decisions, reported by eye movements. Single-trial analyses revealed distinct dynamics: LIP tracked the accumulation of evidence on each decision, and SC generated one burst at the end of the decision, occasionally preceded by smaller bursts. We hypothesized that the bursts manifest a threshold mechanism applied to LIP activity to terminate the decision. Focal inactivation of SC produced behavioral effects diagnostic of an impaired threshold sensor, requiring a stronger LIP signal to terminate a decision. The results reveal the transformation from deliberation to commitment.
Spontaneous Emergence of Computation in Network Cascades
Computation in neuronal networks, and by avalanche-supporting networks more generally, is of interest to physics, computer science (computation theory as well as statistical and machine learning), and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here supports results in computational neuroscience relating to optimal information processing.
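The elementary unit of such a threshold network can be illustrated with a toy cascade. The weights, thresholds, and two-layer wiring below are illustrative choices, not the specific motifs analyzed in the work; they show why inhibition (negative weights) is needed to reach functions beyond a single unit's capacity:

```python
def threshold_unit(inputs, weights, theta):
    """Fires (returns 1) iff the weighted input sum reaches threshold theta.
    Negative weights play the role of inhibition (antagonism)."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

# Elementary Boolean functions from single units.
AND = lambda a, b: threshold_unit([a, b], [1, 1], 2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], 1)

# XOR is not linearly separable, so no single threshold unit computes it,
# but a two-stage cascade with one inhibitory connection does.
def XOR(a, b):
    both_on = AND(a, b)                              # hidden motif: detects "both on"
    return threshold_unit([a, b, both_on], [1, 1, -2], 1)

# Full truth table of the cascade.
print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```

The cascade structure mirrors the abstract's point: more complex Boolean functions require composing motifs, and inhibition enlarges the reachable function space.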
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when measuring an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetric or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms that evolved for efficient coding of natural images, and the model provides a basis for modelling the vision of humans and other animals.
Reprogramming the nociceptive circuit topology reshapes sexual behavior in C. elegans
In sexually reproducing species, males and females respond to environmental sensory cues and transform the input into sexually dimorphic traits. Yet how sexually dimorphic behavior is encoded in the nervous system is poorly understood. We characterize a sexually dimorphic nociceptive behavior in C. elegans – hermaphrodites present a lower pain threshold than males in response to aversive stimuli – and study the underlying neuronal circuits, which are composed of the same neurons wired differently in the two sexes. By imaging receptor expression, calcium responses and glutamate secretion, we show that sensory transduction is similar in the two sexes, and we therefore explore how downstream network topology shapes dimorphic behavior. We generated a computational model that replicates the observed dimorphic behavior, and used this model to predict simple network rewirings that would switch the behavior between the sexes. We then showed experimentally, using genetic manipulations, artificial gap junctions, automated tracking and optogenetics, that these subtle changes to male connectivity result in hermaphrodite-like aversive behavior in vivo, while hermaphrodite behavior was more robust to perturbations. Strikingly, when presented with aversive cues, rewired males were compromised in finding mating partners, suggesting that the network topology that enables efficient avoidance of noxious cues carries a reproductive "cost". To summarize, we present a deconstruction of a sex-shared neural circuit that affects sexual behavior, and show how to reprogram it. More broadly, our results are an example of how common neuronal circuits changed their function during evolution through subtle topological rewirings to meet different environmental and sexual needs.
Trading Off Performance and Energy in Spiking Networks
Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this trade-off in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how spiking neural networks (SNNs) follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different trade-off points. We next formalize this trade-off with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks that coordinates its spiking to minimize both the activity and coding losses, and as a result can dynamically adjust its coding precision and energy use. The network utilizes several known activity-control mechanisms for this (threshold adaptation and feedback inhibition) and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off which is often overlooked in network studies, expands our understanding of homeostasis in biological SNNs, and provides a clear framework for considering performance and energy use in artificial SNNs.
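The structure of the joint loss described above can be sketched abstractly. The linear decoder, rate code, and trade-off weight `lam` below are illustrative assumptions for the sketch, not the derived spiking network from the talk:

```python
import numpy as np

def joint_loss(x, rates, decoder, lam=0.1):
    """Joint objective trading off coding loss (reconstruction error of the
    input from the population activity) against activity loss (total firing,
    an energy proxy). lam sets the energy-accuracy trade-off point."""
    x_hat = decoder @ rates                    # linear readout of the code
    coding_loss = np.sum((x - x_hat) ** 2)     # performance term
    activity_loss = np.sum(rates)              # energy term
    return coding_loss + lam * activity_loss

rng = np.random.default_rng(0)
x = rng.normal(size=3)                         # signal to auto-encode
decoder = rng.normal(size=(3, 10))             # readout weights (10 neurons)
rates = np.abs(rng.normal(size=10))            # some nonnegative activity

# Increasing lam penalizes the same activity more, shifting the optimum
# toward sparser, cheaper codes at the price of coding precision.
print(joint_loss(x, rates, decoder, lam=0.0) <= joint_loss(x, rates, decoder, lam=0.5))
```

In the talk's framework, mechanisms like threshold adaptation effectively move the network along this trade-off curve at run time, without retraining.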
A draft connectome for ganglion cell types of the mouse retina
The visual system of the brain is highly parallel in its architecture. This is clearly evident in the outputs of the retina, which arise from neurons called ganglion cells. Work in our lab has shown that mammalian retinas contain more than a dozen distinct types of ganglion cells. Each type appears to filter the retinal image in a unique way and to relay this processed signal to a specific set of targets in the brain. My students and I are working to understand the meaning of this parallel organization through electrophysiological and anatomical studies. We record from light-responsive ganglion cells in vitro using the whole-cell patch method. This allows us to correlate directly the visual response properties, intrinsic electrical behavior, synaptic pharmacology, dendritic morphology and axonal projections of single neurons. Other methods used in the lab include neuroanatomical tracing techniques, single-unit recording and immunohistochemistry. We seek to specify the total number of ganglion cell types, the distinguishing characteristics of each type, and the intraretinal mechanisms (structural, electrical, and synaptic) that shape their stimulus selectivities. Recent work in the lab has identified a bizarre new ganglion cell type that is also a photoreceptor, capable of responding to light even when it is synaptically uncoupled from conventional (rod and cone) photoreceptors. These ganglion cells appear to play a key role in resetting the biological clock. It is just this sort of link, between a specific cell type and a well-defined behavioral or perceptual function, that we seek to establish for the full range of ganglion cell types.

My research concerns the structural and functional organization of retinal ganglion cells, the output cells of the retina whose axons make up the optic nerve. Ganglion cells exhibit great diversity both in their morphology and in their responses to light stimuli. On this basis, they are divisible into a large number of types (>15).
Each ganglion-cell type appears to send its outputs to a specific set of central visual nuclei. This suggests that ganglion cell heterogeneity has evolved to provide each visual center in the brain with pre-processed representations of the visual scene tailored to its specific functional requirements. Though the outline of this story has been appreciated for some time, it has received little systematic exploration. My laboratory is addressing in parallel three sets of related questions: 1) How many types of ganglion cells are there in a typical mammalian retina, and what are their structural and functional characteristics? 2) What combination of synaptic networks and intrinsic membrane properties is responsible for the characteristic light responses of individual types? 3) What do the functional specializations of individual classes contribute to perceptual function or to visually mediated behavior? To pursue these questions, we label retinal ganglion cells by retrograde transport from the brain; analyze in vitro their light responses, intrinsic membrane properties and synaptic pharmacology using the whole-cell patch clamp method; and reveal their morphology with intracellular dyes. Recently, we have discovered a novel ganglion cell in rat retina that is intrinsically photosensitive. These ganglion cells exhibit robust light responses even when all influences from classical photoreceptors (rods and cones) are blocked, either by applying pharmacological agents or by dissociating the ganglion cell from the retina. These photosensitive ganglion cells seem likely to serve as photoreceptors for the photic synchronization of circadian rhythms, the mechanism that allows us to overcome jet lag. They project to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. Their temporal kinetics, threshold, dynamic range, and spectral tuning all match known properties of the synchronization or "entrainment" mechanism.
These photosensitive ganglion cells innervate various other brain targets, such as the midbrain pupillary control center, and apparently contribute to a host of behavioral responses to ambient lighting conditions. These findings help to explain why circadian and pupillary light responses persist in mammals, including humans, with profound disruption of rod and cone function. Ongoing experiments are designed to elucidate the phototransduction mechanism, including the identity of the photopigment and the nature of downstream signaling pathways. In other studies, we seek to provide a more detailed characterization of the photic responsiveness and both morphological and functional evidence concerning possible interactions with conventional rod- and cone-driven retinal circuits. These studies are of potential value in understanding and designing appropriate therapies for jet lag, the negative consequences of shift work, and seasonal affective disorder.
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization, including membrane potential fluctuations, single-neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., the network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
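Subthreshold membrane resonance, the lowest level of organization mentioned above, is classically linearized as a parallel RLC-style circuit in which an inductive branch stands in for a slow negative-feedback current. The component values below are illustrative, chosen only to place the resonance at an intermediate frequency:

```python
import math

def impedance(freq_hz, R=1.0, L=0.016, C=0.1):
    """Impedance magnitude of a parallel RLC circuit, a standard
    linearization of a resonant membrane. R: leak, C: capacitance,
    L: inductive branch modeling a slow negative-feedback current."""
    w = 2 * math.pi * freq_hz
    # total admittance of the three parallel branches
    y = 1 / R + 1j * w * C + 1 / (1j * w * L)
    return abs(1 / y)

freqs = [0.5, 1, 2, 4, 8, 16]                      # input frequencies (Hz)
zs = [impedance(f) for f in freqs]
peak = freqs[zs.index(max(zs))]
print(peak)  # the impedance peaks at an intermediate frequency: a band-pass profile
```

The band-pass profile arises exactly as the abstract describes: the capacitance low-passes the input, the slow feedback branch high-passes it, and their interaction produces amplification only in an intermediate band.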
Making a Mesh of Things: Using Network Models to Understand the Mechanics of Heterogeneous Tissues
Networks of stiff biopolymers are an omnipresent structural motif in cells and tissues. A prominent modeling framework for describing biopolymer network mechanics is rigidity percolation theory, which describes model networks as nodes joined by randomly placed, spring-like bonds. Increasing the number of bonds in a network results in an abrupt, dramatic increase in elastic moduli above a certain threshold – an example of a mechanical phase transition. While homogeneous networks are well studied, many tissues are made of disparate components and exhibit spatial fluctuations in the concentrations of their constituents. In this talk, I will first discuss recent work in which we explained the structural basis of the shear mechanics of healthy and chemically degraded cartilage by coupling a rigidity percolation framework with a background gel. Our model takes into account collagen concentration, as well as the concentration of proteoglycans in the surrounding polyelectrolyte gel, to produce a structure-property relationship that describes the shear mechanics of both sound and diseased cartilage. I will next discuss the introduction of structural correlation in constructing networks, such that sparse and dense patches emerge. I find that moderate correlation allows a network to become rigid with fewer bonds, while this benefit is partly erased by excessive correlation. We explain this phenomenon through analysis of the spatial fluctuations in strained networks’ displacement fields. Finally, I will address our work’s implications for non-invasive diagnosis of pathology, as well as the rational design of prostheses and novel soft materials.
How does the metabolically expensive mammalian brain adapt to food scarcity?
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. I addressed this in the visual cortex of awake mice using whole-cell recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. I found that food restriction reduced synaptic ATP usage by 29% through a decrease in AMPA receptor conductance. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at similar rates as controls, but spent less ATP on underlying excitatory currents. This energy-saving strategy had a cost since it amplified the variability of visually evoked subthreshold responses, leading to a 32% broadening in orientation tuning and impaired fine visual discrimination. This reduction in coding precision was associated with reduced levels of the fat mass-regulated hormone leptin and was restored by exogenous leptin supplementation. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
Robustness in spiking networks: a geometric perspective
Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
The GluN2A Subunit of the NMDA Receptor and Parvalbumin Interneurons: A Possible Role in Interneuron Development
N-methyl-D-aspartate receptors (NMDARs) are excitatory glutamate-gated ion channels that are expressed throughout the central nervous system. NMDARs mediate calcium entry into cells and are involved in a host of neurological functions. The GluN2A subunit, encoded by the GRIN2A gene, is expressed by both excitatory and inhibitory neurons, with well-described roles in pyramidal cells. Using Grin2a knockout mice, we show that the loss of GluN2A signaling impacts parvalbumin-positive (PV) GABAergic interneuron function in the hippocampus. Grin2a knockout mice have 33% more PV cells in CA1 compared to wild type, but a similar density of cholecystokinin-positive cells. Immunohistochemistry and electrophysiological recordings show that the excess PV cells do eventually incorporate into the hippocampal network and participate in phasic inhibition. Although the morphology of Grin2a knockout PV cells is unaffected, excitability and action-potential firing properties show age-dependent alterations. Preadolescent (P20-25) PV cells have an increased input resistance, a longer membrane time constant, a longer action-potential half-width, a lower current threshold for depolarization-induced block of action-potential firing, and a decrease in peak action-potential firing rate. Each of these measures is corrected in adulthood, reaching wild-type levels, suggesting a potential delay of electrophysiological maturation. The circuit and behavioral implications of this age-dependent PV interneuron malfunction are unknown. However, neonatal Grin2a knockout mice are more susceptible to lipopolysaccharide- and febrile-induced seizures, consistent with a critical role for early GluN2A signaling in the development and maintenance of excitatory-inhibitory balance. These results could provide insights into how loss-of-function GRIN2A human variants generate epileptic phenotypes.
Neural signature for accumulated evidence underlying temporal decisions
Cognitive models of timing often include a pacemaker analogue whose ticks are accumulated to form an internal representation of time, and a threshold that determines when a target duration has elapsed. However, clear EEG manifestations of these abstract components have not yet been identified. We measured the EEG of subjects while they performed a temporal bisection task in which they were asked to categorize visual stimuli as short or long in duration. We report an ERP component whose amplitude depends monotonically on the stimulus duration. The relation between ERP amplitude and stimulus duration can be captured by a simple model, adapted from a known drift-diffusion model of time perception. It includes a threshold and a noisy accumulator that starts at stimulus onset. If the threshold is reached during stimulus presentation, the stimulus is categorized as "long"; otherwise it is categorized as "short". At stimulus offset, a response proportional to the distance to the threshold is emitted. This simple model has two parameters that fit both the behavior and the ERP amplitudes recorded in the task. Two subsequent experiments replicate and extend this finding to another modality (touch) as well as to different time ranges (subsecond and suprasecond), establishing the described ERP component as a useful handle on the cognitive processes involved in temporal decisions.
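The accumulator model described above can be simulated in a few lines. The drift, noise level, threshold, and stimulus durations below are illustrative choices for the sketch, not the two fitted parameters from the study:

```python
import random

def bisection_trial(duration, threshold=1.0, drift=1.0, noise=0.3, dt=0.01):
    """Noisy accumulator starting at stimulus onset. If it reaches the
    threshold during the stimulus, the trial is categorized 'long';
    otherwise 'short', and an offset response proportional to the
    remaining distance to threshold is emitted (the ERP-like readout)."""
    x = 0.0
    for _ in range(int(duration / dt)):
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        if x >= threshold:
            return "long", 0.0
    return "short", threshold - x   # offset response scales with distance

random.seed(0)
# Longer stimuli reach threshold more often and are categorized 'long',
# reproducing the monotonic duration dependence described in the abstract.
short_votes = sum(bisection_trial(0.6)[0] == "long" for _ in range(500))
long_votes = sum(bisection_trial(1.4)[0] == "long" for _ in range(500))
print(short_votes, long_votes)
```

The same run yields both a choice and a graded offset response, which is how a single two-parameter model can be fit jointly to behavior and ERP amplitude.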
Roles of attention and consciousness in perceptual learning
Visual perceptual learning (VPL) is defined as improved performance on a visual task due to visual experience. It was once argued that attention to a visual feature is necessary for VPL of the feature to occur. Contrary to this view, a phenomenon called task-irrelevant VPL demonstrated that VPL can occur through exposure to a feature that is sub-threshold and task-irrelevant, and therefore unattended. A series of findings based on task-irrelevant VPL has indicated the following two mechanisms. First, attention to a feature facilitates VPL of the feature while inhibiting VPL of unattended and supra-threshold features. Second, reward paired with a feature enables VPL of the feature irrespective of whether the feature is attended or not. However, we recently found an additional twist: VPL of a task-irrelevant and supra-threshold feature embedded in a natural scene is not subject to the inhibition of attention. This new finding suggests a need to revise the current view of how VPL occurs, or to add a new mechanism.
NMC4 Short Talk: Sensory intermixing of mental imagery and perception
Several lines of research have demonstrated that internally generated sensory experience - such as during memory, dreaming and mental imagery - activates similar neural representations as externally triggered perception. This overlap raises a fundamental challenge: how is the brain able to keep apart signals reflecting imagination and reality? In a series of online psychophysics experiments combined with computational modelling, we investigated to what extent imagination and perception are confused when the same content is simultaneously imagined and perceived. We found that simultaneous congruent mental imagery consistently led to an increase in perceptual presence responses, and that congruent perceptual presence responses were in turn associated with a more vivid imagery experience. Our findings can be best explained by a simple signal detection model in which imagined and perceived signals are added together. Perceptual reality monitoring can then easily be implemented by evaluating whether this intermixed signal is strong or vivid enough to pass a ‘reality threshold’. Our model suggests that, in contrast to self-generated sensory changes during movement, our brain does not discount self-generated sensory signals during mental imagery. This has profound implications for our understanding of reality monitoring and perception in general.
Event-based Backpropagation for Exact Gradients in Spiking Neural Networks
Gradient-based optimization powered by the backpropagation algorithm proved to be the pivotal method in the training of non-spiking artificial neural networks. At the same time, spiking neural networks hold the promise for efficient processing of real-world sensory data by communicating using discrete events in continuous time. We derive the backpropagation algorithm for a recurrent network of spiking (leaky integrate-and-fire) neurons with hard thresholds and show that the backward dynamics amount to an event-based backpropagation of errors through time. Our derivation uses the jump conditions for partial derivatives at state discontinuities found by applying the implicit function theorem, allowing us to avoid approximations or substitutions. We find that the gradient exists and is finite almost everywhere in weight space, up to the null set where a membrane potential is precisely tangent to the threshold. Our presented algorithm, EventProp, computes the exact gradient with respect to a general loss function based on spike times and membrane potentials. Crucially, the algorithm allows for an event-based communication scheme in the backward phase, retaining the potential advantages of temporal sparsity afforded by spiking neural networks. We demonstrate the optimization of spiking networks using gradients computed via EventProp on the Yin-Yang and MNIST datasets with either a spike time-based or voltage-based loss function and report competitive performance. Our work supports the rigorous study of gradient-based optimization in spiking neural networks as well as the development of event-based neuromorphic architectures for the efficient training of spiking neural networks. While we consider the leaky integrate-and-fire model in this work, our methodology generalises to any neuron model defined as a hybrid dynamical system.
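EventProp itself runs an adjoint computation backward between spike events; as a forward-only sketch of the hybrid dynamics it differentiates — smooth membrane flow punctuated by hard-threshold state discontinuities — here is a minimal leaky integrate-and-fire neuron. All parameter values are illustrative, and the spike times are only grid-accurate (EventProp's derivation treats them as exact event times).

```python
import numpy as np

def lif_forward(input_current, tau=20.0, v_th=1.0, v_reset=0.0, dt=0.1):
    """Leaky integrate-and-fire neuron with a hard threshold.
    The membrane evolves smoothly between spikes; each threshold crossing
    is a state discontinuity (the events EventProp differentiates through)."""
    v, spikes = 0.0, []
    for i, I in enumerate(input_current):
        v += dt * (-v / tau + I)
        if v >= v_th:              # event: the jump condition applies here
            spikes.append(i * dt)  # spike time (grid resolution)
            v = v_reset
    return spikes

I = np.full(1000, 0.08)            # constant suprathreshold drive (100 ms)
print(lif_forward(I))              # regular spike train
```

The key point the abstract makes is that, because the loss depends on these event times and the membrane trajectory, the gradient can be propagated backward purely at the events, keeping the backward phase as temporally sparse as the forward one.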
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
Adaptation-driven sensory detection and sequence memory
Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction of the spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property, and can explain observations of correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus.
Neocortex saves energy by reducing coding precision during food scarcity
Information processing is energetically expensive. In the mammalian brain, it is unclear how information coding and energy usage are regulated during food scarcity. We addressed this in the visual cortex of awake mice using whole-cell patch clamp recordings and two-photon imaging to monitor layer 2/3 neuronal activity and ATP usage. We found that food restriction resulted in energy savings through a decrease in AMPA receptor conductance, reducing synaptic ATP usage by 29%. Neuronal excitability was nonetheless preserved by a compensatory increase in input resistance and a depolarized resting membrane potential. Consequently, neurons spiked at similar rates as controls, but spent less ATP on underlying excitatory currents. This energy-saving strategy had a cost since it amplified the variability of visually-evoked subthreshold responses, leading to a 32% broadening in orientation tuning and impaired fine visual discrimination. These findings reveal novel mechanisms that dynamically regulate energy usage and coding precision in neocortex.
Deciding to stop deciding: A cortical-subcortical circuit for forming and terminating a decision
The neurobiology of decision-making is informed by neurons capable of representing information over time scales of seconds. Such neurons were initially characterized in studies of spatial working memory, motor planning (e.g., Richard Andersen's lab), and spatial attention. For decision-making, such neurons emit graded spike rates that represent the accumulated evidence for or against a choice. They establish the conduit between the formation of the decision and its completion, usually in the form of a commitment to an action, even if provisional. Indeed, many decisions appear to arise through an accumulation of noisy samples of evidence to a terminating threshold, or bound. Previous studies show that single neurons in the lateral intraparietal area (LIP) represent the accumulation of evidence when monkeys make decisions about the direction of random dot motion (RDM) and express their decision with a saccade to the neuron's preferred target. The mechanism of termination (the bound) remains elusive. LIP is interconnected with other brain regions that also display decision-related activity. Whether these areas play roles in the decision process that are similar to or fundamentally different from that of LIP is unclear. I will present new unpublished experiments that begin to resolve these issues by recording from populations of neurons simultaneously in LIP and one of its primary targets, the superior colliculus (SC), while monkeys make difficult perceptual decisions.
Direction selectivity in hearing: monaural phase sensitivity in octopus neurons
The processing of temporal sound features is fundamental to hearing, and the auditory system displays a plethora of specializations, at many levels, to enable such processing. Octopus neurons are the most extreme temporally-specialized cells in the auditory (and perhaps entire) brain, which make them intriguing but also difficult to study. Notwithstanding the scant physiological data, these neurons have been a favorite cell type of modeling studies which have proposed that octopus cells have critical roles in pitch and speech perception. We used a range of in vivo recording and labeling methods to examine the hypothesis that tonotopic ordering of cochlear afferents combines with dendritic delays to compensate for cochlear delay - which would explain the highly entrained responses of octopus cells to sound transients. Unexpectedly, the experiments revealed that these neurons have marked selectivity to the direction of fast frequency glides, which is tied in a surprising way to intrinsic membrane properties and subthreshold events. The data suggest that octopus cells have a role in temporal comparisons across frequency and may play a role in auditory scene analysis.
The collective behavior of the clonal raider ant: computations, patterns, and naturalistic behavior
Colonies of ants and other eusocial insects are superorganisms, which perform sophisticated cognitive-like functions at the level of the group. In my talk I will review our efforts to establish the clonal raider ant Ooceraea biroi as a lab model system for the systematic study of the principles underlying collective information processing in ant colonies. I will use results from two separate projects to demonstrate the potential of this model system. In the first, we analyze the foraging behavior of the species, known as group raiding: a swift offensive response of a colony to the detection of potential prey by a scout. Using automated behavioral tracking and detailed analysis, we show that this behavior is closely related to the army ant mass raid, an iconic collective behavior in which hundreds of thousands of ants spontaneously leave the nest to go hunting, and that the evolutionary transition between the two can be explained by a change in colony size alone. In the second project, we study the emergence of a collective sensory response threshold in a colony. The sensory threshold is a fundamental computational primitive, observed across many biological systems. By carefully controlling the sensory environment and the social structure of the colonies, we were able to show that it also appears in a collective context, and that it emerges out of a balance between excitatory and inhibitory interactions between ants. Furthermore, using a mathematical model, we predict that these two interactions can be mapped onto known mechanisms of communication in ants. Finally, I will discuss the opportunities for understanding collective behavior opened up by the development of methods for neuroimaging and neurocontrol of our ants.
Modularity of attractors in inhibition-dominated TLNs
Threshold-linear networks (TLNs) display a wide variety of nonlinear dynamics including multistability, limit cycles, quasiperiodic attractors, and chaos. Over the past few years, we have developed a detailed mathematical theory relating stable and unstable fixed points of TLNs to graph-theoretic properties of the underlying network. In particular, we have discovered that a special type of unstable fixed points, corresponding to "core motifs," are predictive of dynamic attractors. Recently, we have used these ideas to classify dynamic attractors in a two-parameter family of inhibition-dominated TLNs spanning all 9608 directed graphs of size n=5. Remarkably, we find a striking modularity in the dynamic attractors, with identical or near-identical attractors arising in networks that are otherwise dynamically inequivalent. This suggests that, just as one can store multiple static patterns as stable fixed points in a Hopfield model, a variety of dynamic attractors can also be embedded in a TLN in a modular fashion.
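A minimal simulation shows the graph-to-dynamics correspondence the abstract describes. This sketch uses the standard combinatorial TLN (CTLN) construction with the usual illustrative parameters (ε = 0.25, δ = 0.5, θ = 1); the helper names are invented for the example.

```python
import numpy as np

def ctln_weights(adj, eps=0.25, delta=0.5):
    """Combinatorial threshold-linear network weights from a directed graph:
    W[i, j] = -1 + eps if there is an edge j -> i, else -1 - delta (i != j)."""
    W = np.where(adj, -1.0 + eps, -1.0 - delta)
    np.fill_diagonal(W, 0.0)
    return W

def simulate(W, theta=1.0, x0=None, T=100.0, dt=0.01):
    """Euler integration of the TLN dynamics dx/dt = -x + [Wx + theta]_+."""
    x = np.full(W.shape[0], 0.1) if x0 is None else np.asarray(x0, float).copy()
    traj = np.empty((int(T / dt), len(x)))
    for t in range(traj.shape[0]):
        x = x + dt * (-x + np.maximum(0.0, W @ x + theta))
        traj[t] = x
    return traj

# Directed 3-cycle 0 -> 1 -> 2 -> 0 (adj[i, j] marks an edge j -> i):
# the canonical core motif whose unstable fixed point predicts a limit cycle.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [0, 1, 0]], dtype=bool)
traj = simulate(ctln_weights(adj), x0=[0.2, 0.1, 0.0])
print(traj[-1])  # activity keeps cycling through the three neurons
```

Swapping in any of the 9608 directed graphs on n = 5 nodes is just a change of `adj`, which is what makes the exhaustive attractor classification mentioned above feasible.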
The retrotrapezoid nucleus: an integrative and interoceptive hub in neural control of breathing
In this presentation, we will discuss the cellular and molecular properties of the retrotrapezoid nucleus (RTN), an integrative and interoceptive control node for the respiratory motor system. We will present the molecular profiling that has allowed definitive identification of a cluster of tonically active neurons that provide a requisite drive to the respiratory central pattern generator (CPG) and other pre-motor neurons. We will discuss the ionic basis for steady pacemaker-like firing, including a large subthreshold oscillation, and for neuromodulatory influences on RTN activity, including arousal state-dependent neurotransmitters and CO2/H+. The CO2/H+-dependent modulation of RTN excitability represents the sensory component of a homeostatic system by which the brain regulates breathing to maintain blood gases and tissue pH; it relies on two intrinsic molecular proton detectors: a proton-activated G protein-coupled receptor (GPR4) and a proton-inhibited background K+ channel (TASK-2). We will also discuss downstream neurotransmitter signaling to the respiratory CPG, focusing especially on a newly identified peptidergic modulation of the preBötzinger complex that becomes activated following birth and the initiation of air breathing. Finally, we will suggest how the cellular and molecular properties of RTN neurons identified in rodent models may contribute to understanding human respiratory disorders, such as congenital central hypoventilation syndrome (CCHS) and sudden infant death syndrome (SIDS).
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.
High precision coding in visual cortex
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known if the single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas, and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
Dynamically relevant motifs in inhibition-dominated networks
Many networks in the nervous system possess an abundance of inhibition, which serves to shape and stabilize neural dynamics. The neurons in such networks exhibit intricate patterns of connectivity whose structure controls the allowed patterns of neural activity. In this work, we examine inhibitory threshold-linear networks whose dynamics are constrained by an underlying directed graph. We develop a set of parameter-independent graph rules that enable us to predict features of the dynamics, such as emergent sequences and dynamic attractors, from properties of the graph. These rules provide a direct link between the structure and function of these networks, and may provide new insights into how connectivity shapes dynamics in real neural circuits.
A robust neural integrator based on the interactions of three time scales
Neural integrators are circuits that are able to code analog information such as spatial location or amplitude. Storing amplitude requires the network to have a large number of attractors. In classic models with recurrent excitation, such networks require very careful tuning to behave as integrators and are not robust to small mistuning of the recurrent weights. In this talk, I introduce a circuit with recurrent connectivity that is subjected to a slow subthreshold oscillation (such as the theta rhythm in the hippocampus). I show that such a network can robustly maintain many discrete attracting states. Furthermore, the firing rates of the neurons in these attracting states are much closer to those seen in recordings of animals. I show the mechanism for this can be explained by the instability regions of the Mathieu equation. I then extend the model in various ways and, for example, show that in a spatially distributed network, it is possible to code location and amplitude simultaneously. I show that the resulting mean field equations are equivalent to a certain discontinuous differential equation.
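For reference (textbook form, not taken from the talk itself), the Mathieu equation is the canonical parametrically forced oscillator

$$\ddot{y} + \left(\delta + \varepsilon \cos t\right) y = 0,$$

whose $(\delta, \varepsilon)$ parameter plane is divided into tongues of instability, where solutions grow exponentially, and regions of bounded solutions. In the proposed circuit, the slow subthreshold oscillation plays the role of the periodic forcing $\varepsilon \cos t$, and the talk links the network's discrete attracting states to the structure of these instability regions.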
Sparks, flames, and inferno: epileptogenesis in the glioblastoma microenvironment
Glioblastoma cells trigger pharmacoresistant seizures that may promote tumor growth and diminish the quality of remaining life. To define the relationship between the growth of glial tumors and their neuronal microenvironment, and to identify genomic biomarkers and mechanisms that may point to better prognosis and treatment of drug-resistant epilepsy in brain cancer, we are analyzing a new generation of genetically defined CRISPR/in utero electroporation (IUE) inborn glioblastoma (GBM) tumor models engineered in mice. The molecular pathophysiology of glioblastoma cells, surrounding neurons, and untransformed astrocytes is compared at serial stages of tumor development. Initial studies reveal that epileptiform EEG spiking is a very early and reliable preclinical signature of GBM expansion in these mice, followed by rapidly progressive seizures and death within weeks. FACS-sorted transcriptomic analysis of cortical astrocytes reveals the expansion of a subgroup enriched in pro-synaptogenic genes that may drive hyperexcitability, a novel mechanism of epileptogenesis. Using a prototypical GBM IUE model, we systematically define and correlate the earliest appearance of cortical hyperexcitability with progressive cortical tumor cell invasion, including spontaneous episodes of spreading cortical depolarization, innate inflammation, and xCT upregulation in the peritumoral microenvironment. Blocking this glutamate exporter reduces seizure load. We show that the host genome contributes to seizure risk by generating tumors in a monogenic deletion strain (MapT/tau -/-) that raises cortical seizure threshold. We also show that the tumor variant profile determines epilepsy risk. Our genetic dissection approach sets the stage to broadly explore the developmental biology of personalized tumor/host interactions in mice engineered with novel human tumor mutations in specified glial cell lineages.
Mechanisms of Perceptual Learning
Perceptual learning (PL) is defined as long-term performance improvement on a perceptual task as a result of perceptual experience (Sasaki, Nanez & Watanabe, 2011, Nat Rev Neurosci). We first found that PL occurs for task-irrelevant and subthreshold features and that pairing task-irrelevant features with rewards is the key to forming task-irrelevant PL (TIPL) (Watanabe, Nanez & Sasaki, 2001, Nature; Watanabe et al, 2002, Nature Neuroscience; Seitz & Watanabe, 2003, Nature; Seitz, Kim & Watanabe, 2009, Neuron; Shibata et al, 2011, Science). These results suggest that PL occurs as a result of interactions between reinforcement and bottom-up stimulus signals (Seitz & Watanabe, 2005, TICS). On the other hand, fMRI results indicate that the lateral prefrontal cortex fails to detect, and thus to suppress, subthreshold task-irrelevant signals. This leads to the paradoxical effect that a signal that is below, but close to, one's discrimination threshold ends up being stronger than suprathreshold signals (Tsushima, Sasaki & Watanabe, 2006, Science). We confirmed this mechanism with the following results: in younger individuals, task-irrelevant learning occurs only when a presented feature is below and close to the threshold (Tsushima et al, 2009, Current Biol), whereas in older individuals, who tend to have less inhibitory control, task-irrelevant learning occurs with a feature whose signal is much greater than the threshold (Chang et al, 2014, Current Biol). From all of these results, we conclude that attention and reward play important but different roles in PL. I will further discuss different stages and phases in the mechanisms of PL (Seitz et al, 2005, PNAS; Yotsumoto, Watanabe & Sasaki, 2008, Neuron; Yotsumoto et al, 2009, Curr Biol; Watanabe & Sasaki, 2015, Ann Rev Psychol; Shibata et al, 2017, Nat Neurosci; Tamaki et al, 2020, Nat Neurosci).
Back-propagation in spiking neural networks
Back-propagation is a powerful supervised learning algorithm in artificial neural networks because it solves the credit assignment problem (essentially: what should the hidden layers do?). This algorithm has led to the deep learning revolution. Unfortunately, back-propagation cannot be used directly in spiking neural networks (SNNs): it requires differentiable activation functions, whereas spikes are all-or-none events that cause discontinuities. Here we present two strategies to overcome this problem. The first is to use a so-called 'surrogate gradient', that is, to approximate the derivative of the threshold function with the derivative of a sigmoid. We will present some applications of this method to time series processing (audio, internet traffic, EEG). The second concerns a specific class of SNNs, which process static inputs using latency coding with at most one spike per neuron. Using approximations, we derived a latency-based back-propagation rule for this type of network, called S4NN, and applied it to image classification.
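The surrogate-gradient idea can be shown in a framework-free sketch: the forward pass keeps the all-or-none threshold, while the backward pass substitutes a smooth sigmoid derivative. The parameter `beta` (sharpness of the surrogate) and the function names are illustrative choices, not part of a specific library.

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: all-or-none Heaviside spike (non-differentiable at v_th)."""
    return (v >= v_th).astype(float)

def surrogate_grad(v, v_th=1.0, beta=5.0):
    """Backward pass: replace the Heaviside derivative with a sigmoid
    derivative, beta * s * (1 - s) where s = sigmoid(beta * (v - v_th))."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_th)))
    return beta * s * (1.0 - s)

v = np.linspace(0.0, 2.0, 5)   # membrane potentials 0, 0.5, 1, 1.5, 2
print(spike(v))                # [0. 0. 1. 1. 1.]
print(surrogate_grad(v))       # peaked at the threshold v = 1
```

In autodiff frameworks this is typically implemented as a custom operator whose forward is `spike` and whose backward multiplies incoming errors by `surrogate_grad`, so that credit flows through neurons whose potential is near threshold.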
High precision coding in visual cortex
Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
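The pooling logic behind such small neural thresholds can be illustrated with a back-of-the-envelope simulation. This is not the paper's analysis: the tuning-curve shape, gains, and especially the independent-Poisson noise (the paper's point is precisely about correlated variability) are assumptions, and both function names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def population_response(theta, n_neurons, gain=2.0, kappa=2.0):
    """Poisson spike counts from von Mises orientation tuning curves."""
    prefs = np.linspace(0, np.pi, n_neurons, endpoint=False)
    rates = gain * np.exp(kappa * (np.cos(2 * (theta - prefs)) - 1))
    return rng.poisson(rates)

def decode_accuracy(dtheta, n_neurons, n_trials=2000):
    """Discriminate theta = 0 vs dtheta with the Poisson log-likelihood-ratio
    decoder (gain/kappa hardcoded to match population_response defaults)."""
    prefs = np.linspace(0, np.pi, n_neurons, endpoint=False)
    f0 = 2.0 * np.exp(2.0 * (np.cos(2 * (0.0 - prefs)) - 1))
    f1 = 2.0 * np.exp(2.0 * (np.cos(2 * (dtheta - prefs)) - 1))
    w = np.log(f1 / f0)          # LLR weights
    c = np.sum(f1 - f0)          # LLR bias term
    correct = 0
    for label, th in ((0, 0.0), (1, dtheta)):
        r = np.array([population_response(th, n_neurons) for _ in range(n_trials)])
        llr = r @ w - c
        correct += np.sum((llr > 0) == bool(label))
    return correct / (2 * n_trials)

# Pooling more independent neurons improves discrimination of a 1-degree offset
for n in (10, 100, 1000):
    print(n, decode_accuracy(np.deg2rad(1.0), n))
```

Under independence, accuracy keeps improving with population size; the interesting empirical question raised by the abstract is why behavior fails to exploit this available precision.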
Synfire chains in random weight threshold unit network
Bernstein Conference 2024
Threshold-Linear Networks as a Ground-Zero Theory for Spiking Models
Bernstein Conference 2024
Statistics of sub-threshold voltage dynamics in cortical networks
COSYNE 2022
State-dependent mapping of correlations of subthreshold to spiking activity is expansive in L1 inhibitory circuits
COSYNE 2025
Low action potential firing threshold facilitates "in-out" function of fast-spiking interneurons in the human neocortex
FENS Forum 2024
Determination of the threshold plasma Aβ42/40 ratio for Alzheimer's disease diagnosis and identification of confounding factors: The role of CNS-derived EVs
FENS Forum 2024
Effect of subthreshold haptic stimulation on learning
FENS Forum 2024
Glasses to hear differently? The aftereffects of prism adaptation on auditory threshold in young and older healthy adults
FENS Forum 2024
Neurophysiological effective network connectivities determine a threshold-dependent management of working memory gating
FENS Forum 2024
Reduced pain thresholds in two Parkinson's disease mice models
FENS Forum 2024
Tactile versus electrical stimulation in a conscious somatosensory threshold detection task
FENS Forum 2024
Temporal development of spontaneous seizures and seizure threshold after neonatal asphyxia and the effect of prophylactic treatment with midazolam in rats
FENS Forum 2024
Unconventional intracellular signaling pathway underlying cholinergic muscarinic receptor-induced axonal action potential threshold plasticity in hippocampal neurons
FENS Forum 2024