Simulations
Prof. Jakob Macke
The Mackelab (Prof. Jakob Macke, University of Tübingen) is looking for PhD, Postdoc and Scientific Programmer applicants interested in working with us on using deep learning to build, optimize and study mechanistic models of neural computations! In a first project, funded by the ERC Grant DeepCoMechTome, we want to use connectomic reconstructions of the fruit fly to build large-scale simulations of the fly brain that can explain visually driven behavior (see, e.g., our prior work with Srinivas Turaga’s group, described in Lappalainen et al., Nature, 2024). In a second project, funded by the DFG through the CRC Robust Vision, we want to use differentiable simulators of biophysical models (Deistler et al., 2024) to build data-driven models of visual processing in the retina. We are open to candidates who are more interested in the neurobiological questions, as well as to ones more interested in the machine learning aspects of these projects (e.g., training large-scale mechanistic neural networks, learning efficient emulators, coding frameworks for collaborative modelling, automated model discovery for mechanistic models, …).
AutoMIND: Deep inverse models for revealing neural circuit invariances
Unmotivated bias
In this talk, I will explore how social affective biases arise, even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available for learning about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When this ability is compromised, e.g. in stroke, neurodegenerative and developmental disorders, individuals’ study, work and day-to-day living are impacted, at high societal cost. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of the serial elements of the upcoming sequence, rather than preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower- and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.
Euclidean coordinates are the wrong prior for primate vision
The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost; in our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the drop-off of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of its rotation invariance, this model can explain the inverted face effect. We suggest that the standard use of image coordinates is the wrong prior for models of primate vision.
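A minimal sketch of the idea (the function and parameters below are my own illustration, not the authors' model): resampling an image into log-polar coordinates turns scaling and rotation about the fixation point into shifts, which a shift-invariant convolutional network then absorbs.

```python
import numpy as np

def log_polar_sample(image, center, n_rho=64, n_theta=64, rho_max=None):
    """Resample an image onto a log-polar grid centred on a fixation point.

    In this domain, scaling the image about the centre shifts the output
    along the rho (log-radius) axis, and rotating it shifts the output
    along the theta axis, so a shift-invariant convnet applied to the
    output becomes scale- and rotation-invariant. Translation invariance
    is lost, which multiple fixations must compensate for.
    """
    h, w = image.shape
    cy, cx = center
    if rho_max is None:
        rho_max = min(h, w) / 2
    # Log-spaced radii mimic foveal magnification: each octave of
    # eccentricity gets an equal share of the output map.
    rhos = np.exp(np.linspace(0.0, np.log(rho_max), n_rho))
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr = np.clip(np.round(cy + rhos[:, None] * np.sin(thetas)), 0, h - 1)
    cc = np.clip(np.round(cx + rhos[:, None] * np.cos(thetas)), 0, w - 1)
    return image[rr.astype(int), cc.astype(int)]

# A 2x zoom about the centre shifts the map by n_rho*log(2)/log(rho_max)
# rows; a rotation by phi shifts it by phi/(2*pi)*n_theta columns.
```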
Quasicriticality and the quest for a framework of neuronal dynamics
Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and we will describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
A Better Method to Quantify Perceptual Thresholds: Parameter-Free, Model-Free, Adaptive Procedures
The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception study. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's internal parameters are self-tuned and thus do not need to be chosen manually using heuristics (e.g. the step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
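ZOOM's algorithm is not spelled out in this abstract, so as a point of reference, here is a minimal sketch of the classical transformed staircase it is contrasted with, including the hand-tuned step size that ZOOM is designed to dispense with (the observer model and all parameters are invented for the demo).

```python
import numpy as np

def staircase(respond, start=1.0, step=0.1, n_trials=60, down=3):
    """Classical 3-down/1-up transformed staircase.

    `respond(intensity)` returns True for a correct response. The step
    size must be hand-tuned -- exactly the heuristic choice that a
    parameter-free procedure removes. Converges near the ~79%-correct
    point of the psychometric function; the threshold is estimated as
    the mean of the last few reversal intensities.
    """
    x, correct_run, reversals, last_dir = start, 0, [], 0
    for _ in range(n_trials):
        if respond(x):
            correct_run += 1
            if correct_run >= down:          # 3 correct in a row: go down
                correct_run = 0
                if last_dir == +1:
                    reversals.append(x)
                x, last_dir = max(x - step, 0.0), -1
        else:                                # any error: go up
            correct_run = 0
            if last_dir == -1:
                reversals.append(x)
            x, last_dir = x + step, +1
    return np.mean(reversals[-6:])

# Simulated observer with a cumulative-sigmoid psychometric function.
rng = np.random.default_rng(0)
def observer(x, thresh=0.5, slope=0.15):
    p = 0.5 + 0.5 / (1 + np.exp(-(x - thresh) / slope))   # 2AFC-like
    return rng.random() < p

print("estimated threshold:", round(staircase(observer), 3))
```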
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from a few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
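A minimal sketch of the prototype decision rule (the array shapes and Gaussian toy data are my own; the paper uses IT recordings and DNN features):

```python
import numpy as np

def prototype_few_shot(train_a, train_b, test):
    """m-shot, 2-way prototype classification.

    train_a, train_b: (m, N) arrays of firing-rate vectors (m examples
    of each novel concept in an N-dimensional representation).
    test: (T, N) held-out examples of concept 'a'.
    Returns the fraction classified correctly.
    """
    # Each prototype is simply the mean of the few training examples;
    # the resulting nearest-prototype rule is a linear readout that a
    # single downstream neuron could implement.
    proto_a = train_a.mean(axis=0)
    proto_b = train_b.mean(axis=0)
    d_a = np.linalg.norm(test - proto_a, axis=1)
    d_b = np.linalg.norm(test - proto_b, axis=1)
    return np.mean(d_a < d_b)

# Toy demo with Gaussian 'concept manifolds': accuracy depends on the
# separation of manifold centres relative to their radii.
rng = np.random.default_rng(0)
N, m = 512, 5
mu_a, mu_b = rng.normal(size=N), rng.normal(size=N)
acc = prototype_few_shot(
    mu_a + 0.5 * rng.normal(size=(m, N)),
    mu_b + 0.5 * rng.normal(size=(m, N)),
    mu_a + 0.5 * rng.normal(size=(100, N)),
)
print(f"5-shot accuracy: {acc:.2f}")
```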
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by the contributions of process motifs to covariance and lagged covariance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping, but with much shorter computation time than any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
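To make the idea concrete, here is an illustrative sketch (not the paper's exact motif-derived formulas): edge scores built from zero-lag and lagged correlation matrices, with crude corrections for confounding and reverse causation, tested on a simulated linear (VAR) chain.

```python
import numpy as np

def lagged_correlation(X, lag=1):
    """Lagged correlation matrix C[i, j] = corr(x_i(t), x_j(t + lag)).

    X: (T, n) array of time series (rows are time points).
    """
    Z = (X - X.mean(0)) / X.std(0)
    T = Z.shape[0]
    return Z[:T - lag].T @ Z[lag:] / (T - lag)

def pem_scores(X, lag=1):
    """Illustrative pairwise edge measures from correlation matrices.

    The antisymmetric part of C1 penalizes reverse causation, and
    subtracting the zero-lag correlation C0 crudely discounts
    confounding; these are stand-ins for the paper's PEMs.
    """
    C0 = lagged_correlation(X, lag=0)
    C1 = lagged_correlation(X, lag=lag)
    return (C1 - C1.T) + (C1 - C0)

# Demo: a 3-node chain x0 -> x1 -> x2 in a linear stochastic (VAR) process.
rng = np.random.default_rng(1)
A = np.array([[0.6, 0.0, 0.0],
              [0.4, 0.6, 0.0],
              [0.0, 0.4, 0.6]])     # A[i, j]: weight of edge j -> i
T, x = 20000, np.zeros(3)
X = np.empty((T, 3))
for t in range(T):
    x = A @ x + rng.normal(size=3)
    X[t] = x
# Large positive off-diagonal entries [i, j] flag a causal edge i -> j.
print(np.round(pem_scores(X), 2))
```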
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing over the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks who lack the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software package that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while typically being slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort, thereby making the advances of GPU computing available to a larger audience of neuroscientists.
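Following Brian2CUDA's documented pattern of swapping Brian's code-generation device, a minimal usage sketch might look like this (the network itself is a generic toy model, not from the talk):

```python
from brian2 import *
import brian2cuda                    # registers the "cuda_standalone" device
set_device("cuda_standalone")        # generated code now targets the GPU

# Generic toy network; the device switch above is the only CUDA-specific step.
N = 10000
G = NeuronGroup(N, "dv/dt = (40*mV - v) / (10*ms) : volt",
                threshold="v > 20*mV", reset="v = 0*mV", method="exact")
S = Synapses(G, G, on_pre="v_post += 0.1*mV")
S.connect(p=0.02)                    # sparse random connectivity
mon = SpikeMonitor(G)
run(1 * second)                      # builds, compiles and runs the CUDA code
print(mon.num_spikes)
```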
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
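A minimal sketch of the ingredients, under simplifying assumptions (dense bipolar codes, multiplicative binding, a single stored structure, Hebbian storage); the paper's actual VSA scheme and plasticity rules may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                              # neurons

def rand_pattern():                   # dense bipolar codes, as in many VSAs
    return rng.choice([-1, 1], size=N)

# Roles (e.g. temporal order, location) and fillers (events, attributes).
roles = [rand_pattern() for _ in range(3)]
fillers = [rand_pattern() for _ in range(3)]

# One knowledge structure: bind each role to its filler by elementwise
# multiplication and bundle (sum + sign) into a single binary pattern.
structure = np.sign(sum(r * f for r, f in zip(roles, fillers)))

# Store the binarized structure as a fixed point with a Hebbian rule.
W = np.outer(structure, structure) / N
np.fill_diagonal(W, 0)

# Retrieve from a partial cue: keep 30% of the pattern, zero the rest.
cue = structure * (rng.random(N) < 0.3)
x = cue.astype(float)
for _ in range(10):                   # recurrent retrieval dynamics
    x = np.sign(W @ x + 1e-9)
print("overlap with stored structure:", (x @ structure) / N)

# Unbinding recovers a building block: structure * role_k is a noisy
# version of filler_k, cleaned up by nearest-neighbour comparison.
decoded = structure * roles[0]
sims = [decoded @ f / N for f in fillers]
print("best-matching filler:", int(np.argmax(sims)))   # expect 0
```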
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit function. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Through simple commands, Dendrify can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
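Dendrify's own API is not reproduced here; instead, a plain Brian2 sketch of the kind of reduced two-compartment (soma + dendrite) model such tools generate, with all parameters invented:

```python
from brian2 import *

# Passive dendrite receives synaptic input and couples to a leaky
# integrate-and-fire soma through a dimensionless axial coupling g_axial.
eqs = """
dv_s/dt = (-(v_s - EL) + g_axial*(v_d - v_s)) / tau_s : volt
dv_d/dt = (-(v_d - EL) + g_axial*(v_s - v_d) + I_syn/gL_d) / tau_d : volt
I_syn : amp
"""
EL, tau_s, tau_d = -70*mV, 10*ms, 15*ms
g_axial, gL_d = 0.5, 10*nS            # coupling strength, dendritic leak

neuron = NeuronGroup(1, eqs, threshold="v_s > -50*mV",
                     reset="v_s = EL", method="euler")
neuron.v_s = neuron.v_d = EL
neuron.I_syn = 0.25*nA                # steady dendritic drive
mon = StateMonitor(neuron, ["v_s", "v_d"], record=0)
run(200*ms)

# The dendrite depolarizes more than the soma, a simple signature of
# compartmentalized synaptic integration that point neurons lack.
print("somatic / dendritic depolarization (mV):",
      (mon.v_s[0][-1] - EL) / mV, (mon.v_d[0][-1] - EL) / mV)
```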
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., lossless) integration of sensory information along purely feedforward sensory-motor pathways. Yet natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation. Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
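One standard normative model for accumulation under hidden change-points is the hazard-rate-discounted accumulator of Glaze et al. (2015); a minimal sketch is below (the talk's biophysical circuit model is, of course, much richer):

```python
import numpy as np

def discount(L, h):
    """Nonlinear carry-over of the prior belief under hazard rate h
    (Glaze et al., 2015): near-perfect integration for small |L|,
    saturation for large |L| -- the stability/flexibility tradeoff."""
    return L + np.log((1 - h) / h + np.exp(-L)) \
             - np.log((1 - h) / h + np.exp(L))

def accumulate(llr, h):
    """Run the accumulator over sample log-likelihood ratios llr;
    returns the belief (posterior log-odds) trajectory."""
    L, out = 0.0, []
    for x in llr:
        L = x + discount(L, h)
        out.append(L)
    return np.array(out)

# Demo: the hidden state flips at t=100; samples have mean +/- 0.2.
rng = np.random.default_rng(0)
state = np.r_[np.ones(100), -np.ones(100)]
llr = 0.2 * state + rng.normal(scale=1.0, size=200)
L = accumulate(llr, h=0.05)
print("belief before / after the change:", L[99].round(2), L[-1].round(2))
```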
Optimal information loading into working memory in prefrontal cortex
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading, and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.
The Problem of Testimony
The talk will detail work drawing on behavioural results, formal analysis, and computational modelling with agent-based simulations to unpack the scale of the challenge humans face when trying to work out and factor in the reliability of their sources. In particular, it is shown how and why this task admits of no easy solution in the context of wider communication networks, and how this will affect the accuracy of our beliefs. The implications of this for the shift in the size and topology of our communication networks through the uncontrolled rise of social media are discussed.
Non-regular behavior during the coalescence of liquid-like cellular aggregates
The fusion of cell aggregates is widespread in biological processes such as development, tissue regeneration, and tumor invasion. Cellular spheroids (spherical cell aggregates) are commonly used to study this phenomenon. In previous studies, with approximate assumptions and measurements, researchers found that the fusion of two spheroids of certain cell types is similar to the coalescence of two liquid droplets. However, with more accurate measurements focusing on the overall shape evolution in this process, we find that even in the previously regarded liquid-like regime, the fusion process of spheroids can be very different from regular liquid coalescence. We conduct numerical simulations using both standard particulate models and vertex models, with both Molecular Dynamics and Brownian Dynamics. The simulation results show that the difference between spheroids and regular liquid droplets is caused by the microscopic overdamped dynamics of each cell rather than the topological cell-cell interactions in the vertex model. Our research reveals the necessity of a new continuum theory for “liquid” with microscopically overdamped components, such as cellular and colloidal systems. Detailed analysis of our simulation results across different system sizes provides the basis for developing the new theory.
Multiscale modeling of brain states, from spiking networks to the whole brain
Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).
Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations
To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
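A one-dimensional caricature of the ideal observer, assuming Gaussian noise so that Bayesian filtering reduces to a Kalman filter (the talk's model is richer, e.g. allocentric 2D space and geometry-dependent visual likelihoods):

```python
import numpy as np

def kalman_localize(vel, z, sig_motion, sig_vision, mu0=0.0, var0=1.0):
    """1D Bayesian filtering of allocentric position.

    vel: noisy self-motion estimates per step; z: noisy visual position
    observations (np.nan when no view is available). Returns posterior
    means and variances -- the location uncertainty that a population
    code could represent.
    """
    mu, var = mu0, var0
    mus, vars_ = [], []
    for v, obs in zip(vel, z):
        # Predict: integrate self-motion; uncertainty always grows.
        mu, var = mu + v, var + sig_motion**2
        if not np.isnan(obs):
            # Correct: fuse the visual observation, weighted by reliability.
            k = var / (var + sig_vision**2)
            mu, var = mu + k * (obs - mu), (1 - k) * var
        mus.append(mu); vars_.append(var)
    return np.array(mus), np.array(vars_)

# Demo: vision only every 10th step, e.g. when a boundary is in view.
rng = np.random.default_rng(0)
true_x = np.cumsum(np.full(100, 0.1))
vel = 0.1 + rng.normal(scale=0.05, size=100)
z = np.where(np.arange(100) % 10 == 0,
             true_x + rng.normal(scale=0.2, size=100), np.nan)
mu, var = kalman_localize(vel, z, sig_motion=0.05, sig_vision=0.2)
print("uncertainty just before vs. just after a visual fix:",
      var[9].round(3), var[10].round(3))
```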
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
Cognitive Maps
Ample evidence suggests that the brain generates internal simulations of the outside world to guide our thoughts and actions. These mental representations, or cognitive maps, are thought to be essential for our very comprehension of reality. I will discuss what is known about the informational structure of cognitive maps, their neural underpinnings, and how they relate to behavior, evolution, disease, and the current revolution in artificial intelligence.
NaV Long-term Inactivation Regulates Adaptation in Place Cells and Depolarization Block in Dopamine Neurons
In behaving rodents, CA1 pyramidal neurons receive spatially tuned depolarizing synaptic input while traversing a specific location within an environment, called the cell's place field. Midbrain dopamine neurons participate in reinforcement learning, and bursts of action potentials riding a depolarizing wave of synaptic input signal rewards and reward expectation. Interestingly, slice electrophysiology in vitro shows that both types of cells exhibit a pronounced reduction in firing rate (adaptation) and even cessation of firing during sustained depolarization. We included a five-state Markov model of NaV1.6 (for CA1) and NaV1.2 (for dopamine neurons), respectively, in computational models of these two types of neurons. Our simulations suggest that long-term inactivation of this channel is responsible for the adaptation of CA1 pyramidal neurons in response to triangular depolarizing current ramps. We also show that the differential contribution of slow inactivation in two subpopulations of midbrain dopamine neurons can account for their different dynamic ranges, as assessed by their responses to similar depolarizing ramps. These results suggest that long-term inactivation of the sodium channel is a general mechanism for adaptation.
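A sketch of the modeling ingredient, with an invented five-state scheme and placeholder rates rather than the study's fitted NaV1.6/NaV1.2 parameters:

```python
import numpy as np

# Hypothetical scheme: C <-> O <-> I_fast -> I_long1 <-> I_long2, with
# slow recovery from the long-term inactivated states. All rates below
# are illustrative placeholders, not the study's fitted values.
STATES = ["C", "O", "If", "Il1", "Il2"]

def rates(v):
    """Transition-rate matrix Q (per ms) at membrane potential v (mV)."""
    a = 1.0 / (1 + np.exp(-(v + 30) / 8))      # activation C -> O
    b = 1.0 / (1 + np.exp((v + 50) / 8))       # deactivation O -> C
    Q = np.zeros((5, 5))
    Q[0, 1], Q[1, 0] = a, b                    # C <-> O
    Q[1, 2] = 0.8                              # fast inactivation
    Q[2, 1] = 0.05                             # recovery from I_fast
    Q[2, 3] = 0.01                             # entry into long-term inact.
    Q[3, 4], Q[4, 3] = 0.005, 0.001            # slow interconversion
    Q[3, 1] = 0.002                            # very slow recovery
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Occupancy dynamics dp/dt = p Q during a triangular depolarizing ramp:
# open probability peaks early, then long-term inactivated states
# accumulate, mirroring spike-rate adaptation and cessation of firing.
dt, p = 0.05, np.array([1.0, 0, 0, 0, 0])
for t in np.arange(0, 2000, dt):               # 2 s up-then-down ramp
    v = -70 + 50 * min(t, 2000 - t) / 1000
    p = p + dt * (p @ rates(v))
print({s: round(x, 3) for s, x in zip(STATES, p)})
```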
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule
Spike-timing-dependent plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate these distinct kernel types as well as intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics across different neuron models, which allows for comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependence experiments and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it allows for combination with more complex, biophysical neuron models, but is also computationally efficient and so can be used in network simulations. It therefore offers the possibility to explore the functional role of the different kernel types and of electrophysiological differences in heterogeneous networks in future work.
NMC4 Short Talk: A mechanism for inter-areal coherence through communication based on connectivity and oscillatory power
Inter-areal coherence between cortical field potentials is a widespread phenomenon and depends on numerous behavioral and cognitive factors. It has been hypothesized that inter-areal coherence reflects phase-synchronization between local oscillations and flexibly gates communication. We reveal an alternative mechanism, in which coherence results from, rather than causes, communication, and emerges naturally from the fact that spiking activity in a sending area causes post-synaptic inputs both in the same area and in other areas. Consequently, coherence depends in a lawful manner on oscillatory power and phase-locking in the sending area and on inter-areal connectivity. We show that changes in oscillatory power explain prominent changes in fronto-parietal beta-coherence with movement and memory, and in LGN-V1 gamma-coherence with arousal and visual stimulation. Optogenetic silencing of a receiving area and E/I network simulations demonstrate that afferent synaptic inputs, rather than spiking entrainment, are the main determinant of inter-areal coherence. These findings suggest that the unique spectral profiles of different brain areas automatically give rise to large-scale inter-areal coherence patterns that follow anatomical connectivity and continuously reconfigure as a function of behavior and cognition.
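The core claim can be demonstrated in a few lines: give the sending area an oscillation, let the receiver inherit only a connectivity-scaled copy of it plus local noise, and coherence at the sender's frequency appears and scales with sender power (all signals and numbers here are synthetic):

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs, T = 1000, 60                       # Hz, seconds
t = np.arange(0, T, 1 / fs)

def sender_lfp(osc_amp):
    """Sending-area signal: broadband noise plus a beta (20 Hz) rhythm."""
    return rng.normal(size=t.size) + osc_amp * np.sin(2 * np.pi * 20 * t)

for amp in (0.5, 2.0):
    s = sender_lfp(amp)
    # Receiving-area signal: synaptic input inherited from the sender
    # (scaled by anatomical connectivity) plus independent local activity.
    # No local oscillator or phase-locked receiver rhythm is assumed.
    r = 0.2 * s + rng.normal(size=t.size)
    f, C = coherence(s, r, fs=fs, nperseg=1024)
    beta = np.argmin(np.abs(f - 20))
    print(f"sender beta amplitude {amp}: coherence at 20 Hz = {C[beta]:.2f}")
# Coherence at beta rises with the sender's oscillatory power alone,
# illustrating coherence as a consequence, not a cause, of communication.
```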
The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models
Brain microstructure plays a key role in driving the transport of drug molecules directly administered to the brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long association and projection fibers, namely the corpus callosum, the fornix and the corona radiata. Ovine samples from three different subjects have been imaged using a scanning electron microscope combined with focused ion beam milling, with particular focus on the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) has been performed, and the outer axonal ellipticity, the outer axonal cross-sectional area and its relative perimeter have been measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material presenting elliptical tortuous tubular fibers, leading to a workflow that enables accurate simulations of drug delivery including well-resolved microstructural features. As a demonstration of the use of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed starting from the acquisition of the electron microscopy images. Considering that the white matter structure is mainly composed of elongated and parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analysed, thus demonstrating their anisotropic behaviour. This is in line with experimental results obtained using perfusion of brain matter [3]. Moreover, we find a significant difference between permeability in the corpus callosum and the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only for drug delivery modelling but also for shedding light on the interstitial transport mechanisms in the extracellular space. These and many other discoveries will be discussed during the talk.
[1] https://www.researchsquare.com/article/rs-686577/v1
[2] https://www.pnas.org/content/118/36/e2105328118
[3] https://ieeexplore.ieee.org/abstract/document/9198110
Networking—the key to success… especially in the brain
In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to elicit efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular scale) brain networks inferred from spontaneous activity show similar complex topology to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulation of model parameters as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for treatment of neurodevelopmental disorders like Rett Syndrome.
Understanding the Invisibility of Scotomas: Novel Simulations
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
Deriving local synaptic learning rules for efficient representations in networks of spiking neurons
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
Neuropunk revolution and its implementation via real-time neurosimulations and their integrations
In this talk I present perspectives on "neuropunk revolution" technologies. One can understand the "neuropunk revolution" as the integration of real-time neurosimulations into biological nervous/motor systems via neurostimulation, or into artificial robotic systems via integration with actuators. I see the added value of real-time neurosimulations as a bridge technology for a set of already developed technologies (BCI, neuroprosthetics, AI, robotics), providing bio-compatible integration with biological or artificial limbs. I present three types of integration of "neuropunk revolution" technologies: inbound, outbound and closed-loop in-outbound systems. The proposed concept shifts the perspective on this set of technologies; for example, a simulated part of the nervous system, running outside the body, could be integrated back into the biological nervous system or muscles.
Beyond the binding problem: From basic affordances to symbolic thought
Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.
How polymer-loop-extruding motors shape chromosomes
Chromosomes are extremely long, active polymers that are spatially organized across multiple scales to promote cellular functions, such as gene transcription and genetic inheritance. During each cell cycle, chromosomes are dramatically compacted as cells divide and dynamically reorganized into less compact, spatiotemporally patterned structures after cell division. These activities are facilitated by DNA/chromatin-binding protein motors called SMC complexes. Each of these motors can perform a unique activity known as “loop extrusion,” in which the motor binds the DNA/chromatin polymer, reels in the polymer fiber, and extrudes it as a loop. Using simulations and theory, I show how loop-extruding motors can collectively compact and spatially organize chromosomes in different scenarios. First, I show that loop-extruding complexes can generate sufficient compaction for cell division, provided that loop-extrusion satisfies stringent physical requirements. Second, while loop-extrusion alone does not uniquely spatially pattern the genome, interactions between SMC complexes and protein “boundary elements” can generate patterns that emerge in the genome after cell division. Intriguingly, these “boundary elements” are not necessarily stationary, which can generate a variety of patterns in the neighborhood of transcriptionally active genes. These predictions, along with supporting experiments, show how SMC complexes and other molecular machinery, such as RNA polymerase, can spatially organize the genome. More generally, this work demonstrates both the versatility of the loop extrusion mechanism for chromosome functional organization and how seemingly subtle microscopic effects can emerge in the spatiotemporal structure of nonequilibrium polymers.
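A minimal 1D lattice sketch of the mechanism (rates, sizes and boundary positions are illustrative only): motors with two legs step outward, stall at boundary elements or at other motors, and occasionally unbind and rebind elsewhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each motor occupies two lattice sites (its legs); the chromatin
# between the legs is the extruded loop. Legs step outward unless
# blocked by a boundary element or another motor's leg.
L, n_motors, steps = 1000, 20, 5000
unbind_p = 0.001
boundaries = {250, 500, 750}          # e.g. stationary boundary elements

legs = np.array([[s, s + 1] for s in rng.integers(0, L - 1, n_motors)])

def blocked(pos, occupied):
    return pos in occupied or pos in boundaries or pos < 0 or pos >= L

for _ in range(steps):
    occupied = set(legs.ravel())
    for m in range(n_motors):
        if rng.random() < unbind_p:   # unbind and rebind at a random site
            s = rng.integers(0, L - 1)
            legs[m] = (s, s + 1)
            continue
        l, r = legs[m]
        if not blocked(l - 1, occupied):
            legs[m, 0] = l - 1        # left leg reels in polymer
        if not blocked(r + 1, occupied):
            legs[m, 1] = r + 1        # right leg reels in polymer

loop_sizes = legs[:, 1] - legs[:, 0]
print("mean loop size:", loop_sizes.mean())
```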
Coordinated motion of active filaments on spherical surfaces
Filaments (slender, microscopic elastic bodies) are prevalent in biological and industrial settings. In the biological case, the filaments are often active, in that they are driven internally by motor proteins, the prime examples being cilia and flagella. For cilia in particular, which can appear in dense arrays, the resulting motions are coupled through the surrounding fluid, as well as through the surfaces to which they are attached. In this talk, I present numerical simulations exploring the coordinated motion of active filaments and how it depends on the driving force, the density of filaments, and the attached surface. In particular, we find that when the surface is spherical, its topology introduces local defects in coordinated motion which can then feed back and alter the global state. This is particularly true when the surface is not held fixed and is free to move in the surrounding fluid. These simulations take advantage of a computational framework we developed for fully 3D filament motion that combines unit quaternions, implicit geometric time integration, quasi-Newton methods, and fast, matrix-free methods for hydrodynamic interactions; this framework will also be presented.
An in-silico framework to study the cholinergic modulation of the neocortex
Neuromodulators control information processing in cortical microcircuits by regulating the cellular and synaptic physiology of neurons. Computational models and detailed simulations of neocortical microcircuitry offer a unifying framework to analyze the role of neuromodulators in network activity. In the present study, to gain deeper insight into the organization of the cortical neuropil for modeling purposes, we quantify the fiber length per cortical volume and the density of varicosities for the catecholaminergic, serotonergic and cholinergic systems using immunocytochemical staining and stereological techniques. The data obtained are integrated into a biologically detailed digital reconstruction of the rodent neocortex (Markram et al., 2015) in order to model the influence of modulatory systems on the activity of a somatosensory neocortical column. Simulations of ascending modulation of network activity in our model predict the effects of increasing levels of neuromodulators on diverse neuron types and synapses and reveal a spectrum of activity states. Low levels of neuromodulation drive microcircuit activity into slow oscillations and network synchrony, whereas high neuromodulator concentrations govern fast oscillations and network asynchrony. The models and simulations thus provide a unifying in silico framework to study the role of neuromodulators in reconfiguring network activity.
GED: A flexible family of versatile methods for hypothesis-driven multivariate decompositions
Does that title put you to sleep or pique your interest? The goal of my presentation is to introduce a powerful yet under-utilized mathematical equation that is surprisingly effective at uncovering spatiotemporal patterns embedded in data that might be inaccessible to traditional analysis methods due to low SNR or sparse spatial distribution. If you flunked calculus, don't worry: the math is really easy, and I'll spend most of the time discussing intuition, simulations, and applications to real data. I will also spend some time at the beginning of the talk providing a bird's-eye view of the empirical research in my lab, which focuses on mesoscale brain dynamics associated with error monitoring and response competition.
Combining two mechanisms to produce neural firing rate homeostasis
The typical goal of homeostatic mechanisms is to ensure a system operates at or in the vicinity of a stable set point, where a particular measure is relatively constant and stable. Neural firing rate homeostasis is unusual in that a set point of fixed firing rate is at odds with the goal of a neuron to convey information, or produce timed motor responses, which require temporal variations in firing rate. Therefore, for a neuron, a range of firing rates is required for optimal function, which could, for example, be set by a dual system that controls both mean and variance of firing rate. We explore, both via simulations and analysis, how two experimentally measured mechanisms for firing rate homeostasis can cooperate to improve information processing and avoid the pitfall of pulling in different directions when their set points do not appear to match.
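A toy illustration of the dual-controller idea, with entirely hypothetical parameters: one slow process regulates the mean rate via a threshold while a second regulates the rate variance via a gain, so the neuron maintains a functional range of firing rates rather than a single fixed set point.

```python
import numpy as np

rng = np.random.default_rng(0)

target_mean, target_var = 5.0, 4.0        # Hz, Hz^2
theta, gain = 0.0, 1.0                    # the two controlled variables
eta_m, eta_v = 5e-4, 1e-4                 # slow homeostatic rates
m_est, v_est = target_mean, target_var    # running estimates

rates = []
for step in range(200_000):
    drive = rng.normal(loc=2.0, scale=3.0)          # fluctuating input
    r = gain * np.log1p(np.exp(drive - theta))      # softplus f-I curve
    rates.append(r)
    m_est += 1e-3 * (r - m_est)                     # track mean rate
    v_est += 1e-3 * ((r - m_est) ** 2 - v_est)      # track rate variance
    theta += eta_m * (m_est - target_mean)          # mean controller
    gain = max(gain + eta_v * (target_var - v_est), 0.01)  # variance controller

tail = np.array(rates[-20_000:])
print(f"mean {tail.mean():.1f} Hz (target {target_mean}), "
      f"variance {tail.var():.1f} Hz^2 (target {target_var})")
```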
Capacitance clamp - artificial capacitance in biological neurons via dynamic clamp
A basic time scale in neural dynamics, from single cells to the network level, is the membrane time constant, set by a neuron’s input resistance and its capacitance. Interestingly, the membrane capacitance appears to be more dynamic than previously assumed, with implications for neural function and pathology. Indeed, altered membrane capacitance has been observed in reaction to physiological changes like neural swelling, but also in ageing and Alzheimer's disease. Importantly, according to theory, even small changes of the capacitance can affect neuronal signal processing, e.g. increase network synchronization or facilitate transmission of high frequencies. Experimentally, however, robust methods to modify the capacitance of a neuron have been missing. Here, we present the capacitance clamp, an electrophysiological method for capacitance control based on an unconventional application of the dynamic clamp. In its original form, dynamic clamp mimics additional synaptic or ionic conductances by injecting their respective currents. Whereas a conductance directly governs a current, the membrane capacitance determines how fast the voltage responds to a current. Accordingly, capacitance clamp mimics an altered capacitance by injecting a dynamic current that slows down or speeds up the voltage response. For the required dynamic current, the experimenter only has to specify the original cell capacitance and the desired target capacitance. In particular, capacitance clamp requires no detailed model of the present conductances and can thus be applied in any excitable cell. To validate the capacitance clamp, we performed numerical simulations of the protocol and applied it to modify the capacitance of cultured neurons. First, we simulated capacitance clamp in conductance-based neuron models and analysed impedance and firing frequency to verify the altered capacitance. Second, in dentate gyrus granule cells from rats, we could reliably control the capacitance in a range of 75 to 200% of the original value and observed pronounced changes in the shape of action potentials: increasing the capacitance reduced after-hyperpolarization amplitudes and slowed down repolarization. To conclude, we present a novel tool for electrophysiology: the capacitance clamp provides reliable control over the capacitance of a neuron and thereby opens a new way to study the temporal dynamics of excitable cells.
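The logic can be sketched in simulation, where, unlike in the real experiment, the ionic current is known exactly (the published method instead recovers it from the sampled voltage and past injections); all parameters below are arbitrary:

```python
import numpy as np

# Passive membrane with true capacitance C_c, clamped to mimic C_t.
C_c, C_t = 100.0, 200.0        # pF: true and target capacitance
g_L, E_L = 10.0, -70.0         # nS, mV
dt = 0.01                      # ms

def simulate(clamp, T=100.0, I_step=200.0):
    """Derivation: we want C_t dV/dt = I_m + I_ext, while the real cell
    obeys C_c dV/dt = I_m + I_ext + I_cc.  Equating the two gives
    I_cc = (C_c/C_t - 1) * (I_m + I_ext)."""
    v, trace = E_L, []
    for t in np.arange(0, T, dt):
        I_ext = I_step if t > 10.0 else 0.0            # pA current step
        I_m = -g_L * (v - E_L)                         # known in simulation
        I_cc = (C_c / C_t - 1.0) * (I_m + I_ext) if clamp else 0.0
        v += dt * (I_m + I_ext + I_cc) / C_c
        trace.append(v)
    return np.array(trace)

# Effective time constant doubles from C_c/g_L = 10 ms to C_t/g_L = 20 ms.
for clamp in (False, True):
    v = simulate(clamp)
    step = v[int(10 / dt):]                            # after step onset
    tau = np.argmax(step > E_L + 0.63 * (step[-1] - E_L)) * dt
    print(f"clamp={clamp}: tau = {tau:.1f} ms")
```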
From 1D to 5D: Data-driven Discovery of Whole-brain Dynamic Connectivity in fMRI Data
The analysis of functional magnetic resonance imaging (fMRI) data can greatly benefit from flexible analytic approaches. In particular, the advent of data-driven approaches to identify whole-brain time-varying connectivity and activity has revealed a number of interesting and relevant variations in the data which, when ignored, can provide misleading information. In this lecture I will provide a comparative introduction to a range of data-driven approaches to estimating time-varying connectivity. I will also present detailed examples where studies of both brain health and disorder have been advanced by approaches designed to capture and estimate time-varying information in resting fMRI data. I will review several exemplar data sets analyzed in different ways to demonstrate the complementarity as well as the trade-offs of various modeling approaches to answer questions about brain function. Finally, I will review and provide examples of strategies for validating time-varying connectivity, including simulations, multimodal imaging, and comparative prediction within clinical populations, among others. As part of the interactive aspect, I will provide a hands-on guide to the dynamic functional network connectivity toolbox within the GIFT software, including an online didactic analytic decision tree to introduce the various concepts and decisions that need to be made when using such tools.
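As a concrete entry point, the simplest data-driven estimator discussed is sliding-window correlation; a minimal sketch is below (the GIFT dFNC pipeline adds ICA-derived time courses, tapered windows, and clustering of windowed matrices into recurring states):

```python
import numpy as np

def sliding_window_fnc(ts, width=30, step=5):
    """Time-varying functional network connectivity by sliding-window
    correlation.

    ts: (T, n) array of component/region time courses.
    Returns an array of (n, n) correlation matrices, one per window.
    """
    T, n = ts.shape
    starts = range(0, T - width + 1, step)
    return np.stack([np.corrcoef(ts[s:s + width].T) for s in starts])

# Toy demo: two regions couple only in the second half of the scan --
# a static correlation over the full run would blur this away.
rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=(T, 2))
x[T // 2:, 1] += x[T // 2:, 0]          # shared signal after t = T/2
mats = sliding_window_fnc(x, width=30, step=5)
print("first-window r = %.2f, last-window r = %.2f"
      % (mats[0, 0, 1], mats[-1, 0, 1]))
```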
Energy landscapes, order and disorder, and protein sequence coevolution: From proteins to chromosome structure
In vivo, the human genome folds into a characteristic ensemble of 3D structures. The mechanism driving the folding process remains unknown. A theoretical model for chromatin (the minimal chromatin model) that explains the folding of interphase chromosomes and generates chromosome conformations consistent with experimental data is presented. The energy landscape of the model was derived using the maximum entropy principle and relies on two experimentally derived inputs: a classification of loci into chromatin types and a catalog of the positions of chromatin loops. This model was generalized by utilizing a neural network to infer these chromatin types from the epigenetic marks present at a locus, as assayed by ChIP-Seq. The ensemble of structures resulting from these simulations is in full agreement with Hi-C data and exhibits unknotted chromosomes, phase separation of chromatin types, and a tendency for open chromatin to lie at the periphery of chromosome territories. Although this theoretical methodology was trained on one cell line, the human GM12878 lymphoblastoid cell line, it has successfully predicted the structural ensembles of multiple human cell lines. Finally, going beyond Hi-C, our predicted structures are also consistent with microscopy measurements. Analysis of structures from both simulation and microscopy reveals that short segments of chromatin make two-state transitions between closed conformations and open dumbbell conformations. For gene-active segments, the vast majority of genes appear clustered in the linker region of the chromatin segment, allowing us to speculate about possible mechanisms by which chromatin structure and dynamics may be involved in controlling gene expression. * Supported by the NSF
Microorganism locomotion in viscoelastic fluids
Many microorganisms and cells function in complex (non-Newtonian) fluids, which are mixtures of different materials and exhibit both viscous and elastic stresses. For example, mammalian sperm swim through cervical mucus on their journey through the female reproductive tract, and they must penetrate the viscoelastic gel outside the ovum to fertilize it. In micro-scale swimming, the dynamics emerge from the coupled interactions between the complex rheology of the surrounding media and the passive and active body dynamics of the swimmer. We use computational models of swimmers in viscoelastic fluids to investigate and provide mechanistic explanations for emergent swimming behaviors. I will discuss how flexible filaments (such as flagella) can store energy from a viscoelastic fluid to gain stroke boosts from fluid elasticity. I will also describe 3D simulations of model organisms such as C. reinhardtii and mammalian sperm, in which we use experimentally measured stroke data to separate naturally coupled stroke and fluid effects. We explore why strokes that are adapted to Newtonian fluid environments might not do well in viscoelastic environments.
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
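The counterfactual simulation idea can be caricatured in a few lines: re-simulate the scene without the candidate cause, inject noise, and read out the probability that the outcome would have differed (the 1D "physics" and all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Question: "did ball A cause ball B to go through the gate?"
def outcome(b_velocity):
    """B goes through the gate iff it travels past x = 10 in 5 s."""
    return b_velocity * 5.0 > 10.0

actual_v = 3.0          # B's speed in the actual world, after A hit it
v_without_a = 1.5       # B's speed in the counterfactual world without A
actual = outcome(actual_v)

# Counterfactual simulations: noisy re-runs of the world with the
# candidate cause removed; the causal judgment tracks how often the
# outcome would have been different.
n_sims, noise = 1000, 0.8
counterfactuals = outcome(v_without_a + rng.normal(0, noise, n_sims))
causal_strength = np.mean(counterfactuals != actual)
print(f"P(outcome would have differed without A) = {causal_strength:.2f}")
```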
Procedural connectivity and other recent advances for efficient spiking neural network simulations
A generative network model of neurodevelopment
The emergence of large-scale brain networks, and their continual refinement, represent crucial developmental processes that can drive individual differences in cognition and which are associated with multiple neurodevelopmental conditions. But how does this organization arise, and what mechanisms govern the diversity of these developmental processes? There are many existing descriptive theories, but to date none are computationally formalized. We provide a mathematical framework that specifies the growth of a brain network over developmental time. Within this framework, macroscopic brain organization, complete with its spatial embedding, is an emergent property of a generative wiring equation that optimizes connectivity by continuously renegotiating biological costs against topological value over development. The rules that govern these iterative wiring properties are controlled by a set of tightly framed parameters, with subtle differences in these parameters steering network growth towards different neurodiverse outcomes. Regional expression of genes associated with the developmental simulations converges on biological processes and cellular components predominantly involved in synaptic signaling, neuronal projection, catabolic intracellular processes and protein transport. Together, this provides a unifying computational framework for conceptualizing the mechanisms and diversity of childhood brain development, capable of integrating different levels of analysis, from genes to cognition. (Pre-print: https://www.biorxiv.org/content/10.1101/2020.08.13.249391v1)
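The abstract does not spell out the wiring equation; a common parameterization in this family of generative models (e.g. Betzel et al., 2016) trades a distance penalty against a topological affinity via two exponents. The sketch below is that generic form, not the authors' specific equation; all names and values are illustrative.

```python
# Generic generative wiring rule: P(edge i,j) ~ d_ij**eta * k_ij**gamma,
# where d is spatial distance (cost) and k a topological affinity (value).
import numpy as np

def grow_network(coords, n_edges, eta=-2.0, gamma=0.3, rng=None):
    rng = rng or np.random.default_rng(0)
    n = len(coords)
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = np.zeros((n, n))
    iu = np.triu_indices(n, k=1)
    for _ in range(n_edges):
        # Topological value: neighbourhood overlap (matching-index-like), floored
        k = (A @ A) / np.maximum(A.sum(0)[:, None] + A.sum(0)[None, :], 1) + 1e-3
        p = (d[iu] ** eta) * (k[iu] ** gamma)
        p[A[iu] > 0] = 0                      # no duplicate edges
        p /= p.sum()
        e = rng.choice(len(p), p=p)
        i, j = iu[0][e], iu[1][e]
        A[i, j] = A[j, i] = 1
    return A

A = grow_network(np.random.default_rng(1).random((30, 3)), n_edges=60)
print("degrees:", A.sum(0).astype(int))
```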
Mixed active-passive suspensions: from particle entrainment to spontaneous demixing
Understanding the properties of active matter is a challenge which is currently driving a rapid growth in soft- and bio-physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility to extract net work from active suspensions. Here I will present our work on a paradigmatic example of a mixed active-passive system, where the activity is provided by swimming microalgae. Macro- and microscopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. The entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in the swimmers’ distribution and orientation has a dramatic effect on the spatial distribution of passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container. We show that this can be used to induce the system to de-mix spontaneously.
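The jump-diffusion picture described above is easy to reproduce in a few lines; the sketch below (with illustrative, unfitted parameters) checks the simulated effective diffusivity against the 1D prediction D_eff = D + (jump rate) x (jump length)^2 / 2.

```python
# Jump-diffusion: Brownian motion punctuated by Poisson-distributed jumps.
import numpy as np

rng = np.random.default_rng(0)
D, rate, length = 0.1, 0.05, 5.0      # thermal diffusivity, jumps/s, jump size
dt, n_steps, n_particles = 0.01, 50_000, 500

x = np.zeros(n_particles)
for _ in range(n_steps):
    x += np.sqrt(2 * D * dt) * rng.normal(size=n_particles)     # diffusion
    jumps = rng.random(n_particles) < rate * dt                 # Poisson encounters
    x += jumps * length * rng.choice([-1.0, 1.0], n_particles)  # entrainment jumps

T = n_steps * dt
print("measured  D_eff:", x.var() / (2 * T))
print("predicted D_eff:", D + rate * length**2 / 2)
```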
The physics of cement cohesion
Cement is the main binding agent in concrete, literally gluing together rocks and sand into the most-used synthetic material on Earth. However, cement production is responsible for significant amounts of man-made greenhouse gases; in fact, if the cement industry were a country, it would be the third largest emitter in the world. Alternatives to the current, environmentally harmful cement production process are not available, essentially because gaps in fundamental understanding hamper the development of smarter and more sustainable solutions. The ultimate challenge is to link the chemical composition of cement grains to the nanoscale physics of the cohesive forces that emerge when mixing cement with water. Cement nanoscale cohesion originates from the electrostatics of ions accumulated in a water-based solution between like-charged surfaces, but it is not captured by existing theories because of the nature of the ions involved and the high surface charges. Surprisingly enough, this is also the case for unexplained cohesion in a range of colloidal and biological matter. About one century after the early studies of cement hydration, we have quantitatively solved this notoriously hard problem and discovered how cement cohesion develops during hydration. I will discuss how 3D numerical simulations that feature a simple but molecular description of ions and water, together with an analytical theory that goes beyond the traditional continuum approximations, helped us demonstrate that the optimized interlocking of ion-water structures determines the net cohesive forces and their evolution. These findings open the path to scientifically grounded strategies of material design for cements and have implications for a much wider range of materials and systems where ionic water-based solutions feature both strong Coulombic and confinement effects, ranging from biological membranes to soils. Construction materials are central to our society and to our life as humans on this planet, but usually far removed from fundamental science. We can now start to understand how cement physical chemistry determines performance, durability and sustainability.
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed for robustness, and which depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
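As a toy demonstration of the lazy/rich distinction (our construction, not the speaker's simulations), one can train the same two-layer linear network from large versus small initial weights and compare how far the first-layer weights travel: in the lazy regime they barely move relative to their size, in the rich regime they reorganise substantially.

```python
# Lazy vs rich learning as a function of initial weight scale (toy example).
import numpy as np

def train(scale, steps=5000, lr=0.01, seed=0):
    """Two-layer linear net on a linear teacher; returns how far the
    first-layer weights moved, relative to their initial norm."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(100, 10))
    y = X @ rng.normal(size=10)                       # linear teacher task
    W1 = scale * rng.normal(size=(10, 50)) / np.sqrt(10)
    W2 = scale * rng.normal(size=(50, 1)) / np.sqrt(50)
    W1_init = W1.copy()
    for _ in range(steps):
        h = X @ W1
        err = h @ W2 - y[:, None]
        W2 -= lr * h.T @ err / len(X)                 # gradient of 0.5 * MSE
        W1 -= lr * X.T @ (err @ W2.T) / len(X)
    return np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init)

for scale in (3.0, 0.1):                              # large init: lazy; small: rich
    print(f"init scale {scale}: relative weight change = {train(scale):.2f}")
```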
The emergence of contrast invariance in cortical circuits
Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations orthogonal to the preferred one. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balanced limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, or supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity. Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in both the excitatory-to-excitatory and the excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
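A schematic way to see why a power law is singled out (our paraphrase of the standard balance argument, not the authors' derivation): in the balanced limit, the net recurrent input must cancel the feedforward drive, so the synaptically transmitted activity satisfies

$J\, f(\nu(\theta)) + c\, I(\theta) \approx 0 \;\Rightarrow\; f(\nu(\theta)) \propto c\, I(\theta).$

If release is a power law of the presynaptic rate, $f(\nu) = \nu^{\alpha}$, then $\nu(\theta) \propto c^{1/\alpha}\, I(\theta)^{1/\alpha}$: the contrast $c$ rescales the tuning curve without changing its shape, which is exactly contrast invariance, while any non-power-law $f$ warps the shape as $c$ varies.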
1. Binding pathway of a proline-rich SH3 partner peptide from simulations and NMR, 2. The Role of LLPS in the Diverse Functions of the Nucleolus
A Connectionist Account of Analogy-Making
Analogy-making is considered to be one of the cognitive processes that are hard to account for in connectionist terms. A number of models have been proposed, but they are either tailored to specific analogical tasks or require complicated mechanisms that do not fit into the mainstream connectionist modelling paradigm. In this talk I will present a new connectionist account of analogy-making based on the vector approach to representing symbols (VARS). This approach allows representing relational structures of varying complexity by numeric vectors with fixed dimensionality. I will also present a simple and computationally efficient mechanism for aligning VARS representations, which integrates both semantic similarity and structural constraints. The results of a series of simulations will demonstrate that VARS can account for basic analogical phenomena.
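The abstract does not detail the VARS encoding itself; as a flavour of how relational structure can be packed into fixed-dimensional vectors, the sketch below uses a related, better-known scheme, holographic reduced representations (Plate), which bind role/filler pairs by circular convolution. This is explicitly not VARS, only a neighbouring technique.

```python
# Holographic reduced representations: role/filler binding in fixed dimension.
import numpy as np

rng = np.random.default_rng(0)
dim = 2048

def vec():
    return rng.normal(0, 1 / np.sqrt(dim), dim)       # random unit-ish vector

def bind(a, b):                                        # circular convolution
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, a):                                      # circular correlation
    return np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(a).conj()))

AGENT, PATIENT, CHASE = vec(), vec(), vec()
dog, cat = vec(), vec()
# "dog chases cat" packed into one fixed-dimensional vector:
sentence = bind(AGENT, dog) + bind(PATIENT, cat) + CHASE
# Query the agent role; the noisy result resembles `dog` far more than `cat`:
probe = unbind(sentence, AGENT)
print("dog:", round(probe @ dog, 2), " cat:", round(probe @ cat, 2))
```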
Transport and dispersion of active particles in complex porous media
Understanding the transport of microorganisms and self-propelled particles in porous media has important consequences in human health as well as for microbial ecology. In this work, we explore models for the dispersion of active particles in both periodic and random porous media. In a first problem, we analyze the long-time transport properties in a dilute system of active Brownian particles swimming in a periodic lattice in the presence of an external flow. Using generalized Taylor dispersion theory, we calculate the mean transport velocity and dispersion dyadic and explain their dependence on flow strength, swimming activity and geometry. In a second approach, we address the case of run-and-tumble particles swimming through unstructured porous media composed of randomly distributed circular pillars. There, we show that the long-time dispersion is described by a universal hindrance function that depends on the medium porosity and ratio of the swimmer run length to the pillar size. An asymptotic expression for the hindrance function is derived in dilute media, and its extension to semi-dilute and dense media is obtained using stochastic simulations. We conclude by discussing the role of hydrodynamic interactions and swimmer concentration effects.
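A stripped-down version of the run-and-tumble-among-pillars setup fits in a few lines (our sketch: one noisy trajectory with illustrative parameters; real estimates of the hindrance function would average many runs and sweep porosity).

```python
# Run-and-tumble swimmer in a random array of circular pillars (2D, periodic).
import numpy as np

rng = np.random.default_rng(2)
L, n_pillars, R = 100.0, 200, 2.0            # box size, pillar count, pillar radius
pillars = rng.random((n_pillars, 2)) * L
v, tau, dt, T = 1.0, 1.0, 0.01, 200.0        # speed, mean run time, step, duration

def blocked(p):
    """Is point p inside any pillar (minimum-image periodic distances)?"""
    d = np.linalg.norm((pillars - p + L / 2) % L - L / 2, axis=1)
    return bool((d < R).any())

x = np.array([L / 2, L / 2])
while blocked(x):                            # start in the pore space
    x = rng.random(2) * L
theta, traj = 0.0, [x.copy()]
for _ in range(int(T / dt)):
    if rng.random() < dt / tau:              # Poisson tumbling
        theta = rng.uniform(0, 2 * np.pi)
    step = v * dt * np.array([np.cos(theta), np.sin(theta)])
    if not blocked(x + step):                # runs are truncated at pillar walls
        x = x + step
    traj.append(x.copy())

msd = np.sum((np.array(traj) - traj[0]) ** 2, axis=1)
# One noisy trajectory; average many runs for a real hindrance estimate.
print("effective D ~", msd[-1] / (4 * T), " free-space D =", v**2 * tau / 2)
```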
Models for Liquid-liquid Phase Separation of Intrinsically Disordered Proteins
Intrinsically disordered proteins (IDPs), which lack a well-defined folded structure, have recently been shown to be critical for forming membrane-less organelles via liquid-liquid phase separation (LLPS). Because of the flexible conformations of IDPs, it can be challenging to investigate them with experimental techniques alone. Computational models can therefore provide complementary views on several aspects, including the fundamental physics underlying LLPS and the sequence determinants contributing to LLPS. In this presentation, I will start with our coarse-grained computational framework that can help generate sequence-dependent phase diagrams. The coarse-grained model further led to the development of a polymer model with empirical parameters to quickly predict LLPS of IDPs. Finally, I will show our preliminary efforts to address molecular interactions within LLPS of IDPs using all-atom explicit-solvent simulations.
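Sequence-specific coarse-grained models are beyond a short sketch, but the simplest mean-field picture behind such phase diagrams, Flory-Huggins theory, fits in a few lines (our illustration, not the speaker's model): demixing appears once the interaction parameter chi exceeds chi_c = (1/2)(1 + 1/sqrt(N))^2.

```python
# Flory-Huggins spinodal for a polymer of length N in solvent.
import numpy as np

N = 100
chi_c = 0.5 * (1 + 1 / np.sqrt(N)) ** 2            # critical interaction strength
phi = np.linspace(1e-4, 0.9, 1000)                 # dilute-to-dense volume fraction

def spinodal_region(chi):
    """Where the mixing free energy f(phi) = phi/N*ln(phi) + (1-phi)*ln(1-phi)
    + chi*phi*(1-phi) has negative curvature, the mixture demixes spontaneously;
    f''(phi) has the closed form used below."""
    curvature = 1 / (N * phi) + 1 / (1 - phi) - 2 * chi
    return phi[curvature < 0]

for chi in (0.8 * chi_c, 1.2 * chi_c):
    unstable = spinodal_region(chi)
    status = (f"demixes for phi in [{unstable.min():.3f}, {unstable.max():.3f}]"
              if unstable.size else "stays mixed")
    print(f"chi = {chi:.3f}: {status}")
```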
Towards multipurpose biophysics-based mathematical models of cortical circuits
Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al, Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al, Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.
Adhering, wrapping, and bursting of lipid bilayer membranes: understanding effects of membrane-binding particles and polymers
Proteins and membranes form remarkably complex structures that are key to intracellular compartmentalization, cargo transport, and cell morphology. Despite this wealth of examples in living systems, we still lack design principles for controlling membrane morphology in synthetic systems. With experiments and simulations, we show that even the simple case of spherical or rod-shaped nanoparticles binding to lipid-bilayer membrane vesicles results in a remarkably rich set of morphologies that can be reliably controlled via the particle binding energy. When the binding energy is weak relative to a characteristic membrane-bending energy, vesicles adhere to one another and form a soft solid gel, which is a useful platform for controlled release. With larger binding energy, a transition from partial to complete wrapping of the nanoparticles causes a remarkable vesicle destruction process culminating in rupture, nanoparticle-membrane tubules, and vesicle inversion. We have explored the behavior across a wide range of parameter space. These findings help unify the wide range of effects observed when vesicles or cells are exposed to nanoparticles, and they open the door to a new class of vesicle-based, closed-cell gels that are more than 99% water and can encapsulate and release on demand. I will discuss how triggering membrane remodeling could lead to shape-responsive systems in the future.
Unraveling Protein's Structural Dynamics: from Configurational Dynamics to Ensemble Switching Guides Functional Mesoscale Assemblies
Evidence regarding protein structure and function manifests the imperative role that dynamics play in proteins, prompting reconsideration of the static sequence-to-structure-to-function paradigm. Structural dynamics portray a heterogeneous energy landscape described by conformational ensembles, where each structural representation can be responsible for unique functions or enable macromolecular assemblies. Using the human p27/Cdk2/Cyclin A ternary complex as an example, we highlight the vital role of intra- and intermolecular dynamics in target recognition, binding, and inhibition as a critical modulator of cell division. Rapid sampling of configurations is critical for populating the different conformational ensembles that encode functional roles. To garner this knowledge, we present how the integration of (sub)ensemble and single-molecule fluorescence spectroscopy with molecular dynamics simulations can characterize structural dynamics, linking the heterogeneous ensembles to function. The incorporation of dynamics into the sequence-to-structure-to-function paradigm promises to assist in tackling various challenges, including understanding the formation and regulation of mesoscale assemblies inside cells.
Fast and deep neuromorphic learning with time-to-first-spike coding
Engineered pattern-recognition systems strive for short time-to-solution and low energy-to-solution characteristics. This represents one of the main driving forces behind the development of neuromorphic devices. For both them and their biological archetypes, this corresponds to using as few spikes as early as possible. The concept of few and early spikes is used as the founding principle in the time-to-first-spike coding scheme. Within this framework, we have developed a spike-timing-based learning algorithm, which we used to train neuronal networks on the mixed-signal neuromorphic platform BrainScaleS-2. We derive, from first principles, error-backpropagation-based learning in networks of leaky integrate-and-fire (LIF) neurons relying only on spike times, for specific configurations of neuronal and synaptic time constants. We explicitly examine applicability to neuromorphic substrates by studying the effects of reduced weight precision and range, as well as of parameter noise. We demonstrate the feasibility of our approach on continuous and discrete data spaces, both in software simulations and on BrainScaleS-2. This narrows the gap between previous models of first-spike-time learning and biological neuronal dynamics and paves the way for fast and energy-efficient neuromorphic applications.
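The speaker's learning rule rests on an analytical expression for first-spike times; the monotone mapping from input strength to first-spike latency that underlies the coding scheme can be seen with a plain numerical LIF integration (our sketch, illustrative constants, no synaptic kernel).

```python
# Time-to-first-spike coding: stronger drive -> earlier first spike.
import numpy as np

def first_spike_time(I, tau_m=10.0, v_th=1.0, dt=0.01, t_max=100.0):
    """Integrate dv/dt = (-v + I)/tau_m from v=0; return first threshold crossing."""
    v = 0.0
    for step in range(int(t_max / dt)):
        v += dt * (-v + I) / tau_m
        if v >= v_th:
            return step * dt
    return np.inf                      # input too weak to ever reach threshold

for I in (1.1, 1.5, 3.0):
    print(f"I = {I}: first spike at {first_spike_time(I):.2f} ms")
```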
Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits
Synaptic plasticity is believed to be a key physiological mechanism for learning. It is well-established that it depends on pre- and postsynaptic activity. However, models that rely solely on pre- and postsynaptic activity for synaptic changes have, to date, not been able to account for learning complex tasks that demand hierarchical networks. Here, we show that if synaptic plasticity is regulated by high-frequency bursts of spikes, then neurons higher in the hierarchy can coordinate the plasticity of lower-level connections. Using simulations and mathematical analyses, we demonstrate that, when paired with short-term synaptic dynamics, regenerative activity in the apical dendrites, and synaptic plasticity in feedback pathways, a burst-dependent learning rule can solve challenging tasks that require deep network architectures. Our results demonstrate that well-known properties of dendrites, synapses, and synaptic plasticity are sufficient to enable sophisticated learning in hierarchical circuits.
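A rate-level caricature of such a rule (our simplification, not the paper's spiking model) already shows the key property: a top-down signal that controls bursting decides which feedforward synapses strengthen. All variables below are invented for illustration.

```python
# Burst-gated plasticity toy: weight change = eta * pre * (burst_prob - average).
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, 0.5])              # two synapses onto the same neuron
burst_avg, eta = 0.5, 0.02
for trial in range(500):
    x = rng.random(2)                 # presynaptic rates
    teacher = x[0] > 0.5              # top-down feedback "likes" input 0
    burst_prob = 0.8 if teacher else 0.2           # feedback drives apical bursts
    w += eta * x * (burst_prob - burst_avg)        # burst-gated Hebbian term
    burst_avg += 0.01 * (burst_prob - burst_avg)   # slow running average
print("weights:", w.round(2))         # synapse 0 potentiates; synapse 1 stays ~0.5
```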
Swimming in the third domain: archaeal extremophiles
Archaea have evolved to survive in some of the most extreme environments on Earth. Life in extreme, nutrient-poor conditions gives the opportunity to probe fundamental energy limitations on movement and response to stimuli, two essential markers of living systems. Here we use three-dimensional holographic microscopy and computer simulations to show that halophilic archaea achieve chemotaxis with power requirements one hundred-fold lower than common eubacterial model systems. Their swimming direction is stabilised by their flagella (archaella), enhancing directional persistence in a manner similar to that displayed by eubacteria, albeit with a different motility apparatus. Our experiments and simulations reveal that the cells are capable of slow but deterministic chemotaxis up a chemical gradient, in a biased random walk at the thermodynamic limit.
Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus
The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are disclosing a so-far-unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Although CA1 has traditionally been viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
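The GA loop itself is generic; a minimal sketch (with a stand-in quadratic fitness, since the real pipeline scores NEURON simulations against electrophysiological features) looks like this. Population sizes, mutation scale, and the target vector are illustrative assumptions.

```python
# Minimal genetic algorithm for fitting a conductance vector (toy fitness).
import numpy as np

rng = np.random.default_rng(3)
target = np.array([120.0, 36.0, 0.3])      # hypothetical gNa, gK, gL targets

def fitness(g):
    # Stand-in for "run the simulation, compare electrophysiological features"
    return -np.sum((g - target) ** 2)

pop = rng.uniform(0, 200, size=(64, 3))    # initial population of conductances
for gen in range(100):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-16:]]                   # selection
    children = parents[rng.integers(0, 16, 48)]               # reproduction
    children = children + rng.normal(0, 2.0, children.shape)  # mutation
    pop = np.vstack([parents, children])
best = pop[np.argmax([fitness(g) for g in pop])]
print("best individual:", best.round(2))
```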
Neural Engineering: Building large-scale cognitive models of the brain
The Neural Engineering Framework has been used to create a wide variety of biologically realistic brain simulations that are capable of performing simple cognitive tasks (remembering a list, counting, etc.). This includes the largest existing functional brain model. This talk will describe this method, and show some examples of using it to take high-level cognitive algorithms and convert them into a neural network that implements those algorithms. Overall, this approach gives us new ways of thinking about how the brain works and what sorts of algorithms it is capable of performing.
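The core NEF recipe, heterogeneous nonlinear encoding plus least-squares linear decoding, fits in a few lines (our sketch; full NEF models, e.g. in Nengo, add dynamics and spiking neurons on top of this principle).

```python
# NEF principle 1 and 2: encode a scalar in heterogeneous tuning curves,
# then recover it with least-squares linear decoders.
import numpy as np

rng = np.random.default_rng(0)
n, x = 50, np.linspace(-1, 1, 200)
gains = rng.uniform(0.5, 2, n)
biases = rng.uniform(-1, 1, n)
encoders = rng.choice([-1, 1], n)                                # preferred direction
rates = np.maximum(0, gains * (encoders * x[:, None]) + biases)  # rectified tuning
decoders = np.linalg.lstsq(rates, x, rcond=None)[0]              # linear decode
print("decode RMSE:", np.sqrt(np.mean((rates @ decoders - x) ** 2)))
```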
Accelerating bio-plausible spiking simulations on the Graphcore IPU
Bernstein Conference 2024
A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations
Bernstein Conference 2024
Enhanced simulations of whole-brain dynamics using hybrid resting-state structural connectomes
Bernstein Conference 2024
OpenEyeSim 2.1: Rendering Depth-of-Field and Chromatic Aberration Faster than Real-Time Simulations of Visual Accommodation
Bernstein Conference 2024
Single-cell morphological data provide refined simulations of resting-state
Bernstein Conference 2024
Tracking the provenance of data generation and analysis in NEST simulations
Bernstein Conference 2024
Connectome simulations reveal a putative central pattern generator microcircuit for fly walking
COSYNE 2025
Functional connectivity constrained simulations of visuomotor circuits in zebrafish
COSYNE 2025
Computation with neuronal cultures: Effects of connectivity modularity on response separation and generalisation in simulations and experiments
FENS Forum 2024
A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations
FENS Forum 2024
Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods
FENS Forum 2024
Evaluating the spread of excitation with different types of optogenetic cochlear stimulation through computer simulations and in vivo electrophysiology
FENS Forum 2024
Exploiting network topology in brain-scale multi-area model simulations
FENS Forum 2024
Eyes on the future: Unveiling mental simulations as a deliberative decision-making mechanism
FENS Forum 2024
A novel technique for dramatically reducing computational burden in electrophysiological axon simulations
FENS Forum 2024
Neural simulations in the Brian ecosystem
Neuromatch 5