How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: rather, it must be computed by subsequent neural circuits, by comparing the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time
The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), that is, signal the difference between expected and actual future rewards. The prominence of TD theory arises from the observation that the firing properties of dopaminergic neurons in the ventral tegmental area resemble those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, termed FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal that helps build temporal models of the environment. FLEX is a general theoretical framework with many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model which implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
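To make the fixed-temporal-basis assumption concrete, here is a minimal sketch (our illustration, not the FLEX model) of tabular TD(0) with a "complete serial compound" basis, in which each post-cue time step has its own fixed feature:

```python
import numpy as np

# Sketch: tabular TD(0) with a fixed temporal basis ("complete serial
# compound"), the assumption the abstract argues against. One trial type:
# cue at t=0, reward delivered 10 steps later.
T, alpha, gamma = 20, 0.1, 0.98
w = np.zeros(T)                      # one value weight per time step after cue
r = np.zeros(T); r[10] = 1.0         # reward 10 steps after the cue

for trial in range(200):
    for t in range(T - 1):
        rpe = r[t] + gamma * w[t + 1] - w[t]   # reward prediction error delta_t
        w[t] += alpha * rpe

# Over trials, the RPE propagates backwards from reward time to cue onset,
# the classic dopamine-like signature; with a *fixed* basis, however, the
# stimulus representation itself cannot adjust its timing online.
```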
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
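As a rough illustration of the dendrite-as-network idea (a toy sketch under our own assumptions, not the authors' simulations), the constraint of a branched morphology with repeated presynaptic inputs can be expressed as a fixed sparsity mask on a two-layer network, trained here on a synthetic task in place of MNIST:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "dendritic tree" as a sparse two-layer network: each of 16 branches sees
# a random, repeated subset of inputs; branch outputs pass through a local
# nonlinearity and sum at the soma.
n_in, n_branch = 64, 16
mask = rng.random((n_branch, n_in)) < 0.25       # fixed branched wiring
W = rng.normal(0, 0.1, (n_branch, n_in)) * mask  # synaptic weights, masked
v = rng.normal(0, 0.1, n_branch)                 # branch-to-soma weights

def forward(x):
    branch = np.tanh(W @ x)                      # branch-local nonlinearity
    return v @ branch, branch

# Toy binary task; gradient updates respect the fixed morphology (mask).
X = rng.normal(size=(500, n_in))
y = (X[:, 0] * X[:, 1] > 0).astype(float)
for epoch in range(50):
    for x, target in zip(X, y):
        out, branch = forward(x)
        err = out - target
        v -= 0.01 * err * branch
        W -= 0.01 * err * np.outer(v * (1 - branch**2), x) * mask
```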
A biologically plausible inhibitory plasticity rule for world-model learning in SNNs
Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay of sequences observed during online exploration in the hippocampus. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world, rather than verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike time-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously been studied mainly at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potential, conceptually distinct role for inhibition in computing with SNNs.
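A minimal sketch of what a Hebbian rule at an inhibitory synapse could look like (our reading of the abstract; the symmetric kernel shape and constants are assumptions, not the authors' exact rule):

```python
import numpy as np

# Symmetric Hebbian STDP at an inhibitory synapse: near-coincident pre/post
# spikes potentiate inhibition regardless of their order.
tau, eta = 20.0, 0.01          # kernel width (ms) and learning rate (assumed)

def istdp_update(pre_spikes, post_spikes, w):
    """Potentiate w for every pre/post spike pair by a symmetric kernel."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += eta * np.exp(-abs(t_post - t_pre) / tau)
    return w

w = 0.5
w = istdp_update(pre_spikes=[10.0, 50.0], post_spikes=[12.0, 80.0], w=w)
```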
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
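For intuition, a combinatorial codeword can be thought of simply as the binary spiking/non-spiking pattern across the population in a time bin, with the code's intrinsic geometry given by pairwise overlaps between codewords; a toy sketch (our illustration, not the authors' analysis pipeline):

```python
import numpy as np

# Each row is a population codeword: which neurons spike in one time bin.
spikes = np.array([[1, 0, 1, 1, 0],    # bin 1
                   [1, 1, 0, 1, 0],    # bin 2
                   [0, 1, 0, 1, 1]])   # bin 3

# Pairwise Hamming distances between codewords define an intrinsic geometry
# that can then be compared to distances among stimuli in stimulus space.
dist = (spikes[:, None, :] != spikes[None, :, :]).sum(-1)
```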
Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
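A minimal two-population attractor sketch (an assumed textbook form, not the authors' fitted model) illustrates how both a confidence proxy and sequential effects can fall out of the dynamics, since activity does not fully reset between trials:

```python
import numpy as np

# Two-population decision attractor: self-excitation plus mutual inhibition.
rng = np.random.default_rng(0)
dt, tau = 1.0, 20.0
w_self, w_inh = 2.0, -1.5
r = np.array([0.1, 0.1])
I = np.array([0.31, 0.29])           # slightly stronger evidence for option 1

def f(x):                            # firing-rate nonlinearity
    return np.tanh(np.maximum(x, 0.0))

for step in range(2000):
    drive = w_self * r + w_inh * r[::-1] + I + 0.02 * rng.standard_normal(2)
    r += dt / tau * (-r + f(drive))

# The winning population is the choice; the margin between the two rates can
# serve as a confidence proxy, and residual activity carried into the next
# trial naturally produces sequential effects without any explicit feedback.
```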
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
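For flavor, a generic two-compartment neuron of the kind such reduced models describe can be written directly in Brian2; note this is a hand-rolled sketch, not Dendrify's actual API:

```python
from brian2 import *

# Two compartments (soma v_s, dendrite v_d) coupled by a conductance g_c.
EL, tau, g_c = -70*mV, 20*ms, 0.3
eqs = '''
dv_s/dt = (-(v_s - EL) + g_c*(v_d - v_s) + I_s) / tau : volt
dv_d/dt = (-(v_d - EL) + g_c*(v_s - v_d) + I_d) / tau : volt
I_s : volt
I_d : volt
'''
neuron = NeuronGroup(1, eqs, threshold='v_s > -50*mV', reset='v_s = EL',
                     method='euler')
neuron.v_s = EL
neuron.v_d = EL
neuron.I_d = 25*mV    # dendritic drive reaches the soma only through g_c
run(100*ms)
```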
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a balance between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
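A standard way to formalize such nonlinear accumulation under change-points (a sketch in the spirit of normative hazard-rate models; parameter names are ours) is to discount the current belief before adding each new evidence sample:

```python
import numpy as np

# L is the log-posterior odds of the current environmental state.
def nonlinear_update(L, llr, hazard=0.1):
    """One step: discount the prior belief by the hazard rate, add evidence."""
    k = (1 - hazard) / hazard
    prior = L + np.log(k + np.exp(-L)) - np.log(k + np.exp(L))
    return prior + llr

L = 0.0
for llr in np.random.default_rng(0).normal(0.5, 1.0, size=100):
    L = nonlinear_update(L, llr)

# hazard -> 0 recovers perfect linear integration; hazard > 0 saturates the
# belief, trading stability for flexibility after environmental changes.
```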
Retinal responses to natural inputs
The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity; single photon signals in photoreceptors, which are only 0.03–0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately, there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again, one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
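The physical limit here is simple to state: photon arrivals are Poisson, so the achievable signal-to-noise ratio grows only as the square root of the mean count. A two-line illustration (our numbers):

```python
import numpy as np

# For Poisson photon counts, SNR = mean / std = sqrt(mean).
mean_photons = np.array([1, 10, 100, 10000])
snr = mean_photons / np.sqrt(mean_photons)   # = sqrt(mean): 1, ~3.2, 10, 100
# Near absolute threshold (a few photons per integration window) SNR ~ 1-3:
# performance is limited by photon statistics themselves.
```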
Multiscale modeling of brain states, from spiking networks to the whole brain
Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits up to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can later be integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).
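Schematically, the resulting large-scale model couples one mean-field unit per region through a structural connectome; a minimal sketch (assumed form and parameters, not the actual EBRAINS code):

```python
import numpy as np

# Each region is a mean-field rate unit; regions are coupled through a
# structural connectome matrix C with a global coupling gain G.
rng = np.random.default_rng(1)
n = 68                                    # number of regions (a parcellation)
C = rng.random((n, n))
np.fill_diagonal(C, 0)
G, tau, dt = 0.5, 10e-3, 1e-3

r = np.zeros(n)
for step in range(5000):
    inp = G * C @ r + 0.3 + 0.01 * rng.standard_normal(n)
    r += dt / tau * (-r + np.tanh(inp))   # regional mean-field dynamics
```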
Taming chaos in neural circuits
Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from the full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, i.e., a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on the frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks. The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.
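The dynamic entropy rate and attractor dimensionality mentioned above derive from the full Lyapunov spectrum, which for a rate network can be estimated with the standard Benettin QR method; a self-contained sketch (our toy network, not the models in the talk):

```python
import numpy as np

# Full Lyapunov spectrum of dx/dt = -x + J*tanh(x) via tangent-space QR.
rng = np.random.default_rng(0)
N, g, dt, steps = 100, 2.0, 0.05, 4000
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

x = rng.standard_normal(N)
Q = np.linalg.qr(rng.standard_normal((N, N)))[0]
lyap_sum = np.zeros(N)

for step in range(steps):
    x += dt * (-x + J @ np.tanh(x))
    jac = -np.eye(N) + J * (1 - np.tanh(x) ** 2)   # Jacobian at current state
    Q, R = np.linalg.qr(Q + dt * jac @ Q)          # evolve, re-orthonormalize
    lyap_sum += np.log(np.abs(np.diag(R)))

spectrum = np.sort(lyap_sum / (steps * dt))[::-1]
# Entropy rate ~ sum of positive exponents; attractor dimensionality from the
# Kaplan-Yorke construction over the sorted spectrum.
```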
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule
Spike Timing Dependent Plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate those distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics for different neuron models, which allows for comparison of plasticity across neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron type parametrizations in the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency dependency experiments and low frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it can be combined with more complex, biophysical neuron models, yet is computationally efficient enough for network simulations. It therefore offers the possibility to explore the functional role of the different kernel types and electrophysiological differences in heterogeneous networks in future work.
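A caricature of the two-pathway idea (our simplification; the VDP's actual equations are not reproduced here) in which LTP and LTD pathways are driven jointly by glutamate release and postsynaptic voltage:

```python
# Two separately modeled pathways: LTP needs strong depolarization, LTD is
# engaged already at lower voltages; constants below are illustrative only.
def vdp_step(w, g, u, dt, theta_p=-20.0, theta_d=-55.0, a_p=5e-3, a_d=5e-4):
    ltp = a_p * g * max(u - theta_p, 0.0)   # glutamate x strong depolarization
    ltd = a_d * g * max(u - theta_d, 0.0)   # glutamate x modest depolarization
    return w + dt * (ltp - ltd)

w = 1.0
w = vdp_step(w, g=1.0, u=-10.0, dt=1.0)   # depolarized pairing -> net LTP
w = vdp_step(w, g=1.0, u=-40.0, dt=1.0)   # modest voltage -> net LTD
```

Because the postsynaptic voltage trace differs across neuron models, the same rule yields different effective kernels for different neuron types, which is the comparison the abstract describes.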
NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony
A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure, and offer a new lens through which to view epilepsy, the most common serious neurological disorder in the world, potentially revealing new targets for clinical treatment.
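The linearizing effect of heterogeneity on input-output functions has a simple population-level intuition: averaging step-like single-cell responses over a spread of thresholds smooths the population response. A toy sketch (our illustration, not the study's mean-field analysis):

```python
import numpy as np

# Population f-I curve = average of single-cell step nonlinearities whose
# thresholds are spread by a heterogeneity level sigma.
def population_fI(I, sigma, n=1000, seed=0):
    thresholds = np.random.default_rng(seed).normal(0.0, sigma, n)
    return np.heaviside(I[:, None] - thresholds[None, :], 0.5).mean(axis=1)

I = np.linspace(-3, 3, 101)
steep = population_fI(I, sigma=0.1)   # homogeneous: step-like, bistability-prone
smooth = population_fI(I, sigma=1.0)  # heterogeneous: graded, linearized
```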
NMC4 Keynote: A network perspective on cognitive effort
Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
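In linear network control theory, the quantity in question has a standard closed form: for dynamics x' = Ax + Bu on a connectome A, the minimum input energy to move between states is quadratic in the inverse controllability Gramian. A small sketch (our implementation of the textbook formula, with a random stand-in connectome):

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, B, x0, xT, T=1.0, n_steps=200):
    ts = np.linspace(0, T, n_steps)
    # Finite-horizon controllability Gramian W = int exp(At) B B' exp(A't) dt
    W = sum(expm(A * t) @ B @ B.T @ expm(A.T * t) for t in ts) * (T / n_steps)
    xT_free = expm(A * T) @ x0            # where the system drifts without input
    d = xT - xT_free
    return d @ np.linalg.solve(W, d)      # minimum energy to reach xT

n = 10
A = np.random.default_rng(0).random((n, n)) * 0.1 - np.eye(n)  # toy connectome
B = np.eye(n)                              # control input at every node
E = min_control_energy(A, B, x0=np.zeros(n), xT=np.ones(n))
```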
Generative models of brain function: Inference, networks, and mechanisms
This talk will focus on the generative modelling of resting state time series or endogenous neuronal activity. I will survey developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity has recently attracted a lot of attention among brain mappers. I will also show a novel method to identify dynamic effective connectivity using spectral DCM. Further, I will summarise the development of the next generation of DCMs towards large-scale, whole-brain schemes which are computationally inexpensive, to the other extreme of the development using more sophisticated and biophysically detailed generative models based on the canonical microcircuits.
Self-organized formation of discrete grid cell modules from smooth gradients
Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.
Optimising spiking interneuron circuits for compartment-specific feedback
Cortical circuits process information by rich recurrent interactions between excitatory neurons and inhibitory interneurons. One of the prime functions of interneurons is to stabilize the circuit by feedback inhibition, but the level of specificity on which inhibitory feedback operates is not fully resolved. We hypothesized that inhibitory circuits could enable separate feedback control loops for different synaptic input streams, by means of specific feedback inhibition to different neuronal compartments. To investigate this hypothesis, we adopted an optimization approach. Leveraging recent advances in training spiking network models, we optimized the connectivity and short-term plasticity of interneuron circuits for compartment-specific feedback inhibition onto pyramidal neurons. Over the course of the optimization, the interneurons diversified into two classes that resembled parvalbumin (PV) and somatostatin (SST) expressing interneurons. The resulting circuit can be understood as a neural decoder that inverts the nonlinear biophysical computations performed within the pyramidal cells. Our model provides a proof of concept for studying structure-function relations in cortical circuits through a combination of gradient-based optimization and biologically plausible phenomenological models.
Neural dynamics of probabilistic information processing in humans and recurrent neural networks
In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Using a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. In particular, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
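The expectation effect described here is what a Bayesian combination of prior and likelihood predicts: the prior dominates when sensory evidence is weak. A toy sketch (our illustration, not the trained RNN):

```python
import numpy as np

# Combine a prior over four stimuli with Gaussian sensory likelihoods.
def posterior(prior, signal_strength, true_stim, noise=1.0, seed=0):
    n = len(prior)
    templates = signal_strength * np.eye(n)
    obs = templates[true_stim] + np.random.default_rng(seed).normal(0, noise, n)
    loglik = -((obs[None, :] - templates) ** 2).sum(1) / (2 * noise ** 2)
    p = prior * np.exp(loglik - loglik.max())
    return p / p.sum()

prior = np.array([0.7, 0.1, 0.1, 0.1])     # one stimulus more probable
weak = posterior(prior, signal_strength=0.5, true_stim=2)
strong = posterior(prior, signal_strength=5.0, true_stim=2)
# With weak evidence the decision follows the prior; with strong evidence it
# follows the stimulus, mirroring the behavior of both humans and the RNN.
```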
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular and circuit level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source localized data. HNN is built around workflows for studying some of the most commonly measured E/MEG signals, including event related potentials and low frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15–29 Hz beta frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented biophysically principled tool to link mechanism to meaning in human E/MEG signals.
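A simplified version of the beta-event analysis described above (our sketch, not the published pipeline): band-limit time-frequency power to 15–29 Hz and threshold it to count transient high-power events:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 600.0
t = np.arange(0, 2.0, 1 / fs)
x = np.random.default_rng(0).standard_normal(t.size)      # stand-in for MEG
x[300:400] += 2 * np.sin(2 * np.pi * 21 * t[300:400])     # injected beta burst

f, tt, S = spectrogram(x, fs=fs, nperseg=128, noverlap=120)
beta = S[(f >= 15) & (f <= 29)].mean(axis=0)              # beta-band power
events = beta > 6 * np.median(beta)                       # threshold = 6x median
# Event rate per trial (count of supra-threshold epochs), rather than event
# amplitude or duration, is the quantity the talk reports as tracking
# functionally relevant differences in averaged beta power.
```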
Mechanistic insights from a mouse model of HCN1 developmental epileptic encephalopathy
Pathogenic variants in HCN1 are associated with severe developmental and epileptic encephalopathies (DEE). We have engineered the Hcn1 M294L heterozygous knock-in (Hcn1M294L) mouse, a homolog of the de novo HCN1 M305L recurrent pathogenic variant. The mouse recapitulates the phenotypic features of patients, including spontaneous seizures and a learning deficit. In this talk I will present experimental work that probes the molecular and cellular mechanisms underlying hyper-excitability in the mouse model. This will include testing the efficacy of currently available antiepileptic drugs and a novel precision medicine approach. I will also briefly touch on how disease biology can give insights into the biophysical properties of HCN channels.
Disinhibitory and neuromodulatory regulation of hippocampal synaptic plasticity
The CA1 pyramidal neurons are embedded in an intricate local circuitry that contains a variety of interneurons. The roles these interneurons play in the regulation of excitatory synaptic plasticity remain largely understudied. Recent experiments showed that repeated cholinergic activation of α7 nACh receptors expressed in oriens-lacunosum-moleculare (OLMα2) interneurons could induce LTP in SC-CA1 synapses. We used a biophysically realistic computational model to examine mechanistically how cholinergic activation of OLMα2 interneurons increases SC to CA1 transmission. Our results suggest that, when properly timed, activation of OLMα2 interneurons cancels the feedforward inhibition onto CA1 pyramidal cells by inhibiting fast-spiking interneurons that synapse on the same dendritic compartment as the SC, i.e., by disinhibiting the pyramidal cell dendritic compartment. Our work further describes the pairing of disinhibition with SC stimulation as a general mechanism for the induction of synaptic plasticity. We found that locally-reduced GABA release (disinhibition) paired with SC stimulation could lead to increased NMDAR activation and an intracellular calcium concentration sufficient to upregulate AMPAR permeability and potentiate the excitatory synapse. Our work suggests that inhibitory synapses critically modulate excitatory neurotransmission and the induction of plasticity at excitatory synapses. Our work also shows how cholinergic action on OLM interneurons, a mechanism whose disruption is associated with memory impairment, can down-regulate GABAergic signaling onto CA1 pyramidal cells and facilitate potentiation of the SC-CA1 synapse.
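The proposed mechanism maps naturally onto calcium-threshold plasticity rules: disinhibition lets the NMDAR-mediated calcium transient clear the potentiation threshold. A minimal sketch (a standard abstraction with assumed constants, not the authors' full biophysical model):

```python
# Weight change set by whether the calcium transient clears the LTP threshold.
def weight_update(w, ca, theta_d=0.35, theta_p=0.55, eta=0.05):
    if ca > theta_p:          # high calcium (SC input + disinhibition) -> LTP
        return w + eta * (1.0 - w)
    if ca > theta_d:          # intermediate calcium -> LTD
        return w - eta * w
    return w                  # sub-threshold calcium: no change

w = 0.5
w_inhibited    = weight_update(w, ca=0.30)  # feedforward inhibition caps calcium
w_disinhibited = weight_update(w, ca=0.70)  # disinhibition lets calcium rise
```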
3D Printing Cellular Communities: Mammalian Cells, Bacteria, And Beyond
While the motion and collective behavior of cells are well-studied on flat surfaces or in unconfined liquid media, in most natural settings, cells thrive in complex 3D environments. Bioprinting processes are capable of structuring cells in 3D and conventional bioprinting approaches address this challenge by embedding cells in bio-degradable polymer networks. However, heterogeneity in network structure and biodegradation often preclude quantitative studies of cell behavior in specified 3D architectures. Here, I will present a new approach to 3D bioprinting of cellular communities that utilizes jammed, granular polyelectrolyte microgels as a support medium. The self-healing nature of this medium allows the creation of highly precise cellular communities and tissue-like structures by direct injection of cells inside the 3D medium. Further, the transparent nature of this medium enables precise characterization of cellular behavior. I will describe two examples of my work using this platform to study the behavior of two different classes of cells in 3D. First, I will describe how we interrogate the growth, viability, and migration of mammalian cells—ranging from epithelial cells, cancer cells, and T cells—in the 3D pore space. Second, I will describe how we interrogate the migration of E. coli bacteria through the 3D pore space. Direct visualization enables us to reveal a new mode of motility exhibited by individual cells, in stark contrast to the paradigm of run-and-tumble motility, in which cells are intermittently and transiently trapped as they navigate the pore space; further, analysis of these dynamics enables prediction of single-cell transport over large length and time scales. Moreover, we show that concentrated populations of E. coli can collectively migrate through a porous medium—despite being strongly confined—by chemotactically “surfing” a self-generated nutrient gradient. Together, these studies highlight how the jammed microgel medium provides a powerful platform to design and interrogate complex cellular communities in 3D—with implications for tissue engineering, microtissue mechanics, studies of cellular interactions, and biophysical studies of active matter.
Fragility of the human connectome across the lifespan
The human brain network architecture can reveal crucial aspects of brain function and dysfunction. The topology of this network (known as the connectome) is shaped by a trade-off between wiring cost and network efficiency, and it has highly connected hub regions playing a prominent role in many brain disorders. By studying a landscape of plausible brain networks that preserve the wiring cost, fragile and resilient hubs can be identified. In this webinar, Dr Leonardo Gollo and Dr James Pang from Monash University will discuss this approach across the lifespan and some of its implications for neurodevelopmental and neurodegenerative diseases. Dr Leonardo Gollo is a Senior Research Fellow at the Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University. He holds an ARC Future Fellowship and his research interests include brain modelling, systems neuroscience, and connectomics. Dr James Pang is a Research Fellow at the Turner Institute for Brain and Mental Health, School of Psychological Sciences, Monash University. His research interests are on combining neuroimaging and biophysical modelling to better understand the mechanisms of brain function in health and disease.
A macaque connectome for simulating large-scale network dynamics in The VirtualBrain
TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
Synchrony and Synaptic Signaling in Cerebellar Circuits
The cerebellum permits a wide range of behaviors that involve sensorimotor integration. We have been investigating how specific cellular and synaptic specializations of cerebellar neurons, measured in vitro, give rise to circuit activity in vivo. We have investigated these issues by studying Purkinje neurons as well as the large neurons of the mouse cerebellar nuclei, which form the major excitatory premotor projection from the cerebellum. Large CbN cells have ion channels that favor spontaneous action potential firing and GABAA receptors that generate ultra-fast inhibitory synaptic currents, raising the possibility that these biophysical attributes may permit CbN cells to respond differently to different degrees of temporal coherence of their Purkinje cell inputs. In vivo, self-initiated motor programs associated with whisking correlate with changes in Purkinje cell simple spiking that are asynchronous across the population. The resulting inhibition converges with mossy fiber excitation to yield little change in CbN cell firing, such that cerebellar output is low or cancelled. In contrast, externally applied sensory stimuli elicit a transient, synchronous inhibition of Purkinje cell simple spiking. During the resulting strong disinhibition of CbN cells, sensory-induced excitation from mossy fibers effectively drives cerebellar outputs that increase the magnitude of reflexive whisking. Purkinje cell synchrony, therefore, may be a key variable contributing to the “positive effort” hypothesized by David Marr in 1969 to be necessary for cerebellar control of movement.
Biophysics of Structural Plasticity in Postsynaptic Spines
The ability of the brain to encode and store information depends on the plastic nature of the individual synapses. The increase and decrease in synaptic strength, mediated through the structural plasticity of the spine, are important for learning, memory, and cognitive function. Dendritic spines are small structures that contain the synapse. They come in a variety of shapes (stubby, thin, or mushroom-shaped) and a wide range of sizes, and protrude from the dendrite. These spines are the regions where the postsynaptic biochemical machinery responds to the neurotransmitters. Spines are dynamic structures, changing in size, shape, and number during development and aging. While spines and synapses have inspired neuromorphic engineering, the biophysical events underlying synaptic and structural plasticity of single spines remain poorly understood. Our current focus is on understanding the biophysical events underlying structural plasticity. I will discuss recent efforts from my group: first, a systems biology approach to construct a mathematical model of biochemical signaling and actin-mediated transient spine expansion in response to calcium influx caused by NMDA receptor activation, and a series of spatial models to study the role of spine geometry and organelle location within the spine for calcium and cyclic AMP signaling. Second, I will discuss how the mechanics of membrane-cytoskeleton interactions can give insight into the regulation of spine shape. Third, I will describe new efforts in using reconstructions from electron microscopy to inform computational domains. I will conclude with how geometry and mechanics play an important role in our understanding of fundamental biological phenomena, and with some general ideas on bio-inspired engineering.
K+ Channel Gain of Function in Epilepsy, from Currents to Networks
Recent human gene discovery efforts show that gain-of-function (GOF) variants in the KCNT1 gene, which encodes a Na+-activated K+ channel subunit, cause severe epilepsies and other neurodevelopmental disorders. Although the impact of these variants on the biophysical properties of the channels is well characterized, the mechanisms that link channel dysfunction to cellular and network hyperexcitability and human disease are unknown. Furthermore, precision therapies that correct channel biophysics in non-neuronal cells have had limited success in treating human disease, highlighting the need for a deeper understanding of how these variants affect neurons and networks. To address this gap, we developed a new mouse model with a pathogenic human variant knocked into the mouse Kcnt1 gene. I will discuss our findings on the in vivo phenotypes of this mouse, focusing on our characterization of epileptiform neural activity using electrophysiology and widefield Ca++ imaging. I will also talk about our investigations at the synaptic, cellular, and circuit levels, including the main finding that cortical inhibitory neurons in this model show a reduction in intrinsic excitability and action potential generation. Finally, I will discuss future directions to better understand the mechanisms underlying the cell-type specific effects, as well as the link between the cellular and network level effects of KCNT1 GOF.
Towards multipurpose biophysics-based mathematical models of cortical circuits
Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al, Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al, Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.
Motility-dependent pathogenicity of a spirochetal bacterium
Motility is a crucial virulence factor for many species of bacteria, but it is not fully understood how bacterial motility is practically involved in pathogenicity. In this talk I will discuss the association of motility with pathogenicity in the zoonotic spirochete bacterium Leptospira. Recently, we measured the swimming force of individual leptospires using optical tweezers and found that they can generate ~30 times the swimming force of E. coli. We also observed that leptospires increase the reversal frequency of swimming at the gel-liquid interface, resembling host dermis exposed to contaminated water (Abe et al., 2020, Sci Rep). These behaviors could be involved in percutaneous infection by the spirochete. We have shown that Leptospira not only swims in liquid but also moves over solid surfaces (Tahara et al., 2018, Sci Adv). We quantified the surface motility, called “crawling”, on cultured kidney tissues from various mammals, showing that pathogenic leptospires crawl over the tissue surfaces more persistently than non-pathogenic ones (Xu et al., 2020, Front Microbiol). I will discuss the spirochete motility related to pathogenicity from a biophysical viewpoint.
Sensing Light for Sight and Physiological Control
Organisms sense light for purposes that range from recognizing objects to synchronizing activity with environmental cycles. What mechanisms serve these diverse tasks? This seminar will examine the specializations of two cell types. First are the foveal cone photoreceptors. These neurons are used by primates to see far greater detail than other mammals, which lack them. How do the biophysical properties of foveal cones support high-acuity vision? Second are the melanopsin retinal ganglion cells, which are conserved among mammals and essential for processes that include regulation of the circadian clock, sleep, and hormone levels. How do these neurons encode light, and is encoding customized for animals of different niches? In pursuing these questions, a broad goal is to learn how various levels of biological organization are shaped to behavioural needs.
Untitled Seminar
The symposium provides an opportunity for ECRs working in biophysical research to get together and share their research. Although the symposium is primarily aimed at ECRs, we welcome everyone with an interest in the biophysical sciences to join in the lively discussions and questions. This half-day symposium will feature short talks and flash-talks from a range of ECRs around the biophysics theme. Afterwards there will be a virtual poster session with open discussions. We warmly invite both domestic and international ECRs to present at and attend this event.
Dynamics of microbiota communities during physical perturbation
The consortium of microbes living in and on our bodies is intimately connected with human biology and deeply influenced by physical forces. Despite incredible gains in describing this community, and emerging knowledge of the mechanisms linking it to human health, understanding the basic physical properties and responses of this ecosystem has been comparatively neglected. Most diseases have significant physical effects on the gut; diarrhea alters osmolality, fever and cancer increase temperature, and bowel diseases affect pH. Furthermore, the gut itself is comprised of localized niches that differ significantly in their physical environment, and are inhabited by different commensal microbes. Understanding the impact of common physical factors is necessary for engineering robust microbiota members and communities; however, our knowledge of how they affect the gut ecosystem is poor. We are investigating how changes in osmolality affect the host and the microbial community and lead to mechanical shifts in the cellular environment. Osmotic perturbation is extremely prevalent in humans, caused by the use of laxatives, lactose intolerance, or celiac disease. In our studies we monitored osmotic shock to the microbiota using a comprehensive and novel approach, which combined in vivo experiments with imaging, physical measurements, computational analysis and highly controlled microfluidic experiments. By bridging several disciplines, we developed a mechanistic understanding of the processes involved in osmotic diarrhea, linking single-cell biophysical changes to large-scale community dynamics. Our results indicate that physical perturbations can profoundly and permanently change the competitive and ecological landscape of the gut, and affect the cell wall of bacteria differentially, depending on their mechanical characteristics.
Chromatin transcription: cryo-EM structures of Pol II-nucleosome and nucleosome-CHD complexes
Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus
The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies from RNAseq to electrophysiology are disclosing a so far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
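The genetic-algorithm loop itself is conceptually simple; a toy sketch (our drastic simplification, with a stand-in fitness function in place of NEURON simulations and electrophysiological feature extraction):

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Stand-in for: run the neuron model, extract features, score the match."""
    target = np.array([120.0, 36.0, 0.3])        # illustrative conductance targets
    return -np.sum((params - target) ** 2)

pop = rng.uniform(0, 150, size=(64, 3))          # population of conductance sets
for generation in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-16:]]      # selection: keep the best
    children = parents[rng.integers(0, 16, 48)] + rng.normal(0, 2.0, (48, 3))
    pop = np.vstack([parents, children])         # elitism + mutation
best = pop[np.argmax([fitness(p) for p in pop])]
```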
Towards hybrid models of retinal circuits - integrating biophysical realism, anatomical constraints and predictive performance
Visual processing in the retina has been studied in great detail at all levels, such that a comprehensive picture of the retina's cell types and the many neural circuits they form is emerging. However, the currently best performing models of retinal function are black-box CNN models, which are agnostic to such biological knowledge. Here, I present two of our recent attempts to develop computational models of processing in the inner retina, which both respect biophysical and anatomical constraints and yet provide accurate predictions of retinal activity.
How the brain comes to balance: Development of postural stability and its neural architecture in larval zebrafish
Maintaining posture is a vital challenge for all freely-moving organisms. As animals grow, their relationship to destabilizing physical forces changes. How does the nervous system deal with this ongoing challenge? Vertebrates use highly conserved vestibular reflexes to stabilize the body. We established the larval zebrafish as a new model system to understand the development of the vestibular reflexes responsible for balance. In this talk, I will begin with the biophysical challenges facing baby fish as they learn to swim. I’ll briefly review published work by David Ehrlich, Ph.D., establishing a fundamental relationship between postural stability and locomotion. The bulk of the talk will highlight unpublished work by Kyla Hamling. She discovered that a small population (~50) of molecularly-defined brainstem neurons called vestibulo-spinal cells acts as a nexus for postural development. Her loss-of-function experiments show that these neurons contribute more to postural stability as animals grow older. I’ll end with brief highlights from her ongoing work examining tilt-evoked responses of these neurons using 2-photon imaging and the consequences of downstream activity in the spinal cord using single-objective light-sheet (SCAPE) microscopy.
Mean-field models for finite-size populations of spiking neurons
Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike timing effects on the microscopic level, such as spike synchronization, and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking neuron dynamics on the microscopic level and coarse-grained FR models on the population level can be bridged by mean-field theory, formally valid for infinitely many neurons. It remains challenging, however, to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (the mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated in the mesoscopic mean-field framework. On the other hand, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.
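The core finite-size effect can be illustrated in a few lines (our toy illustration, not the refractory-density framework itself): the empirical population rate of N neurons fluctuates with variance that shrinks as N grows:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 1e-3, 1.0
steps = int(T / dt)

def population_activity(N, rate_fn):
    """Population rate with finite-size sampling noise of order 1/sqrt(N)."""
    A, r = np.zeros(steps), 5.0
    for k in range(steps):
        n_spikes = rng.binomial(N, min(rate_fn(r) * dt, 1.0))  # finite-N draw
        A[k] = n_spikes / (N * dt)                             # empirical rate
        r += dt / 0.02 * (-r + A[k])                           # rate feedback
    return A

small = population_activity(50, lambda r: 5 + 0.5 * r)
large = population_activity(5000, lambda r: 5 + 0.5 * r)
# var(small) >> var(large): the finite-size correction that vanishes in the
# infinite-N mean-field limit and that the mesoscopic framework retains.
```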
Spanning the arc between optimality theories and data
Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it becomes progressively harder to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework that smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.
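One way to write down such an interpolation (a minimal sketch in notation of my choosing; the talk's exact formulation may differ) is a one-parameter family of posteriors that weights the data likelihood against an optimality prior:

```latex
% Sketch: a family of posteriors interpolating between Bayesian inference
% and an ab initio optimality prediction. Notation is illustrative.
\[
  p_\beta(\theta \mid x) \;\propto\; p(x \mid \theta)\,
  \exp\!\big(\beta\, U(\theta)\big)
\]
% \theta : system parameters (e.g., of a neuronal model)
% x      : experimental data
% U      : the utility the system is hypothesized to optimize
% \beta = 0        recovers ordinary Bayesian parameter inference;
% \beta \to \infty concentrates the posterior on the optima of U;
% intermediate \beta quantifies how "close to optimal" the data are.
```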
Algorithms and circuits for olfactory navigation in walking Drosophila
Olfactory navigation provides a tractable model for studying the circuit basis of sensorimotor transformations and goal-directed behaviour. Macroscopic organisms typically navigate in odor plumes that provide a noisy and uncertain signal about the location of an odor source. Work in many species has suggested that animals accomplish this task by combining temporal processing of dynamic odor information with an estimate of wind direction. Our lab has been using adult walking Drosophila to understand both the computational algorithms and the neural circuits that support navigation in a plume of attractive food odor. We developed a high-throughput paradigm to study behavioural responses to temporally-controlled odor and wind stimuli. Using this paradigm, we found that flies respond to a food odor (apple cider vinegar) with two behaviours: during the odor they run upwind, while after odor loss they perform a local search. A simple computational model based on these two responses is sufficient to replicate many aspects of fly behaviour in a natural turbulent plume (see the sketch below). In ongoing work, we are seeking to identify the neural circuits and biophysical mechanisms that perform the computations delineated by our model. Using electrophysiology, we have identified mechanosensory neurons that compute wind direction from movements of the two antennae, as well as central mechanosensory neurons that encode wind direction and are involved in generating a stable downwind orientation. Using optogenetic activation, we have traced olfactory circuits capable of evoking upwind orientation and offset search from the periphery, through the mushroom body and lateral horn, to the central complex. Finally, we have used optogenetic activation, in combination with molecular manipulation of specific synapses, to localize temporal computations performed on the odor signal to olfactory transduction and transmission at specific synapses. Our work illustrates how the tools available in the fruit fly can be applied to dissect the mechanisms underlying a complex goal-directed behaviour.
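To make the two-response model concrete, here is a minimal agent-based caricature. Every detail (parameter values, stimulus timing, noise model) is an illustrative assumption, not the lab's published model: the simulated fly steers upwind while odor is on, and switches to a slow, high-turning local search after odor loss.

```python
import numpy as np

# Caricature of the two-response navigation model described above.
# Response 1: run upwind while odor is present.
# Response 2: local search (slow, high-turning walk) after odor loss.

rng = np.random.default_rng(1)

dt = 0.1               # time step [s]
upwind = np.pi / 2     # upwind heading [rad]
x, y, heading = 0.0, 0.0, 0.0

def odor_on(t):
    """Illustrative stimulus: a single 10 s odor pulse starting at t = 5 s."""
    return 5.0 <= t < 15.0

for step in range(int(30.0 / dt)):
    t = step * dt
    if odor_on(t):
        # Upwind run: steer toward the upwind direction, move fast, turn little.
        heading += 0.5 * np.sin(upwind - heading) * dt + 0.05 * rng.normal()
        speed = 10.0   # mm/s
    else:
        # Local search after odor loss (also baseline before onset): slow
        # speed with large stochastic turns keeps the fly near the point
        # where the odor was lost.
        heading += 2.0 * rng.normal() * np.sqrt(dt)
        speed = 3.0
    x += speed * np.cos(heading) * dt
    y += speed * np.sin(heading) * dt

print(f"final position: ({x:.1f}, {y:.1f}) mm")
```

Even this two-rule agent, when driven by intermittent odor encounters such as those in a turbulent plume, produces the characteristic alternation of upwind surges and casting-like search seen in walking flies.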
Diverse synaptic mechanisms underlie visual signaling in the retina
Our laboratory seeks to understand how neural circuits receive, compute, encode and transmit information. More specifically, we’d like to learn what biophysical and morphological features equip synapses, neurons and networks to perform these tasks. The retina is a model system for the study of neuronal information processing: We can deliver precisely defined physiological stimuli and record responses from many different cell types at various points within the network; in addition, retinal circuitry is particularly well understood, enabling us to interpret more directly the impact of synaptic and cellular mechanisms on circuit function. I will present recent experiments in the lab that exploit these advantages to examine how synapses and neurons within retinal amacrine cell circuits perform specific visual computations.
Adolescent maturation of cortical excitation-inhibition balance based on individualized biophysical network modeling
Bernstein Conference 2024
cuBNM: GPU-Accelerated Biophysical Network Modeling
Bernstein Conference 2024
Sequence learning under biophysical constraints: a re-evaluation of prominent models
Bernstein Conference 2024
Toward a biophysically-detailed, fully-differentiable model of the mouse retina
Bernstein Conference 2024
A biophysically detailed model of retinal degeneration
COSYNE 2022
A biophysical account of multiplication by a single neuron
COSYNE 2022
A biophysical counting mechanism for keeping time
COSYNE 2022
A Biophysical Mechanism for Changing the Threat Sensitivity of Escape Behavior
COSYNE 2023
A biophysically detailed model of retinal degeneration
COSYNE 2023
The thermal adjustment used in neuronal biophysical models is wrong: Here is how to fix it
COSYNE 2023
A mechanism for selective attention in biophysically realistic Daleian spiking neural networks
COSYNE 2025
Predictive coding in a biophysically detailed continuous attractor model of grid cells
COSYNE 2025
The biophysical mechanism underlying epigenetically inherited stress response/unpredictability learning
FENS Forum 2024
Biophysical basis of ultrafast population encoding
FENS Forum 2024
A biophysical mechanism for changing the threat sensitivity of escape behaviour
FENS Forum 2024
Biophysically detailed cortical neuron models with genetically-defined ion channels
FENS Forum 2024
Comparative analysis of biophysical properties of ON-alpha sustained RGCs in wild-type and rd10 retina
FENS Forum 2024
Controlling morpho-electrophysiological variability of neurons with detailed biophysical models
FENS Forum 2024
Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods
FENS Forum 2024
Exploring biophysical and biochemical mechanisms of neuron-astrocyte network models
FENS Forum 2024
Impact of inter-areal connectivity on sensory processing in a biophysically-detailed model of two interacting cortical areas
FENS Forum 2024
Intrinsic biophysical properties and extrinsic spatial experience collaboratively prime CA1 pyramidal cells to replay during sharp-wave ripples
FENS Forum 2024
Signal integration and competition in a biophysical model of the substantia nigra pars reticulata
FENS Forum 2024