Simulation
N/A
The Institute of Robotics and Cognitive Systems at the University of Lübeck has a vacancy for an Assistant Professorship (Juniorprofessur, Tenure Track W2) in Robotics for an initial period of three years, with an option to extend for a further three years. The future holder of the position should represent the field of robotics in research and teaching. Furthermore, the holder of the professorship shall establish their own working group at the Institute of Robotics and Cognitive Systems. The future holder of the position should have a very good doctorate and demonstrable scientific experience in one or more of the following research areas: modelling, simulation, and control of robots; robot kinematics and dynamics; robot sensor technology, e.g., force and moment sensor technology; robotic systems, e.g., telerobotic systems, humanoid robots, etc.; soft robotics and continuum robotics; AI and machine learning methods in robotics; human-robot collaboration and safe autonomous robot systems; AR/VR in robotics; applications of AI and robotics in medicine. The range of tasks also includes the acquisition of third-party funds and the assumption of project management. The applicant is expected to be scientifically involved in the research focus areas of the institute and the profile areas of the university, especially in the context of projects acquired by the institute itself (public funding, industrial cooperations, etc.). The position holder is expected to be willing to cooperate with the “Lübeck Innovation Hub for Robotic Surgery” (LIROS), the “Center for Doctoral Studies Lübeck”, and the “Open Lab for Robotics and Imaging in Industry and Medicine” (OLRIM). In teaching, participation in the degree programme “Robotics and Autonomous Systems” (German-language Bachelor’s, English-language Master’s) as well as the other degree programmes of the university’s STEM sections is expected.
Prof. Sacha Jennifer van Albada
PhD and postdoc opportunities with a focus on the simulation of large-scale biological neural networks are available in the Theoretical Neuroanatomy group at Jülich Research Center, Germany. The projects will advance a research program that centers on the full-scale simulation of thalamocortical networks using the simulator NEST. The postdoc position is available in the context of the Henriette Herz Scouting Program of the Humboldt Foundation, and will be offered to a female candidate. The program is particularly aimed at candidates from countries underrepresented in the Humboldt Foundation. We will jointly define a research project and the selected candidate will receive a Humboldt Research Fellowship. The position is available for 24 months for postdocs up to 4 years after the PhD defense and for 18 months for experienced researchers 4-12 years after the PhD defense. The PhD defense should have been no more than 12 years ago, and candidates should not have previous or existing links to Germany in terms of study, research stays, or citizenship. Due consideration will be given to any gaps in the CV due to family care or other personal circumstances. The PhD position is open to candidates regardless of gender. The candidate should have a background in physics, mathematics, computer science, biology (or specifically neuroscience), or engineering. Excellent quantitative and analytical skills are highly valued. We offer a structured program guiding doctoral researchers through the PhD work and plenty of opportunities for local and international collaboration. The researchers will be embedded in a vibrant research institute and have links to the University of Cologne, so that candidates can gain teaching/tutoring experience.
N/A
We are looking for a motivated research assistant / engineer (“ingénieur d’étude” – IE) with expertise in neuromorphic engineering to join the team of Drs. Timothée Levi, Fabien Wagner, and Amélie Aussel at the University of Bordeaux (Institut du Matériau au Système – IMS – and Institut des Maladies Neurodégénératives – IMN). The goal of the project is to expand our current efforts towards performing large-scale simulations of conductance-based neuronal models on FPGAs, with an application to neurostimulation of the hippocampal formation. The initial contract would be for a period of 1 year with an expected starting date on Oct 1st, 2024.
SceniX
At SceniX, we’re building a “game engine” for rapid development and deployment of robotic learning systems: a world model that captures object geometry, appearance, and dynamics with an eye toward facilitating robotic training and evaluation. Existing AI systems can generate pixels, but often lack a robust understanding of physics and object interactions. We’re on a mission to advance the science of 3D generative models so they can handle the real-world complexity of how objects look, move, and interact. SceniX was founded by William O'Farrell (SpeechWorks, Body Labs), Changxi Zheng (Columbia professor), Yunzhu Li (Columbia professor), and Sonny Hu (Body Labs, Amazon), who combine unique backgrounds in serial entrepreneurship, computer graphics, physical simulation, and robotics. In-person role in New York preferred. Hybrid or remote considered.
AutoMIND: Deep inverse models for revealing neural circuit invariances
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs. group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' time percepts. Here, we investigated how such time manipulation techniques applied to an action affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimations of the presented scenes' perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated as compared to that of compressed and real-time scenes, as was the duration of real-time scenes as compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that the effects of scene type and editing technique on time estimates interacted with the characteristics and the action of the scene presented. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration.
Our findings are discussed by considering current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making
How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art examples, particularly cave art where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow the light to go through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics) and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both, as mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
Movement planning as a window into hierarchical motor control
The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When compromised, e.g. in stroke, neurodegenerative and developmental disorders, the individuals’ study, work and day-to-day living are impacted with high societal costs. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of serial elements of the upcoming sequence, rather than a preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.
Euclidean coordinates are the wrong prior for primate vision
The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the dropoff of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the inverted face effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
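The coordinate claim above (scale becomes a shift in log-polar coordinates) can be verified in a few lines of NumPy. This is a minimal sketch of the log-polar mapping only, not the authors' network model; the test pattern, grid sizes, and axis convention (rows index log-radius) are arbitrary choices for the example:

```python
import numpy as np

def log_polar_sample(f, n_rho=64, n_theta=64, rho_min=-3.0, rho_max=1.0):
    """Sample a 2-D image function f(x, y) on a log-polar grid.
    Rows index log-radius rho = log(r); columns index angle theta."""
    rho = np.linspace(rho_min, rho_max, n_rho)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(np.exp(rho), theta, indexing="ij")
    return f(R * np.cos(T), R * np.sin(T))

# A smooth test pattern centred on the fixation point (the "fovea").
f = lambda x, y: np.cos(3 * np.arctan2(y, x)) * np.exp(-np.hypot(x, y))

base = log_polar_sample(f)
# Scaling the image by e**step (one grid spacing in rho) ...
step = (1.0 - (-3.0)) / (64 - 1)
scaled = log_polar_sample(lambda x, y: f(np.exp(step) * x, np.exp(step) * y))
# ... appears as a pure one-row shift along the log-radius axis.
# (A rotation by one angular step would likewise be a column roll.)
assert np.allclose(scaled[:-1, :], base[1:, :], atol=1e-6)
```

Feeding such a log-polar image into a shift-invariant convolutional network then turns shift invariance into scale and rotation invariance, as the abstract describes.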
Quasicriticality and the quest for a framework of neuronal dynamics
Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input. But biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and I will describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
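The amplified/preserved/damped trichotomy can be illustrated with a toy branching process, a standard stand-in for avalanche dynamics (this sketches the textbook criticality picture only, not the quasicriticality theory of the talk; the population size, step count, and branching parameters are arbitrary):

```python
import numpy as np

def branching_process(sigma, n_start=1000, n_steps=20, seed=0):
    """Propagate activity where each active unit triggers, on average,
    sigma units at the next time step (Poisson offspring)."""
    rng = np.random.default_rng(seed)
    active = [n_start]
    for _ in range(n_steps):
        active.append(rng.poisson(sigma * active[-1]))
    return np.array(active)

sub = branching_process(0.8)    # subcritical: activity is damped away
crit = branching_process(1.0)   # critical: activity ~ preserved on average
sup = branching_process(1.2)    # supercritical: activity is amplified
```

At sigma = 1 the expected activity is exactly preserved from step to step, which is the "neither amplified nor damped" regime the abstract refers to; external input effectively pushes the operating point away from this line, which is the tension quasicriticality addresses.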
Relations and Predictions in Brains and Machines
Humans and animals learn and plan with flexibility and efficiency well beyond those of modern machine learning methods. This is hypothesized to be due in part to the ability of animals to build structured representations of their environments, and to modulate these representations to rapidly adapt to new settings. In the first part of this talk, I will discuss theoretical work describing how learned representations in hippocampus enable rapid adaptation to new goals by learning predictive representations, while entorhinal cortex compresses these predictive representations with spectral methods that support smooth generalization among related states. I will also cover recent work extending this account, in which we show how the predictive model can be adapted to the probabilistic setting to describe a broader array of generalization results in humans and animals, and how entorhinal representations can be modulated to support sample generation optimized for different behavioral states. In the second part of the talk, I will overview some of the ways in which we have combined many of the same mathematical concepts with state-of-the-art deep learning methods to improve efficiency and performance in machine learning applications like physical simulation, relational reasoning, and design.
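The predictive-representation idea in the first part has a compact standard formulation, the successor representation M = (I - gamma*T)^(-1), sketched here on a toy ring environment (the specific environments and spectral analyses of the talk are not reproduced; the ring size, discount, and reward placement are invented for the example):

```python
import numpy as np

# Random-walk transition matrix T on a 5-state ring environment.
n, gamma = 5, 0.9
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = T[s, (s + 1) % n] = 0.5

# Successor representation: expected discounted future state occupancy,
# M = I + gamma*T + gamma^2*T^2 + ... = inv(I - gamma*T).
M = np.linalg.inv(np.eye(n) - gamma * T)

# Rapid adaptation to a new goal: value is simply V = M @ r for any
# reward vector r, so changing the goal requires no re-learning of M.
r = np.zeros(n)
r[2] = 1.0
V = M @ r
assert np.argmax(V) == 2          # value peaks at the rewarded state

# M is a matrix function of T, so it shares T's eigenvectors, which is
# the kind of spectral compression attributed to entorhinal cortex.
assert np.allclose(M @ T, T @ M)
```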
Crescent Loom: a flexible neurophysiology online simulation for teaching neuroethology
A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures
The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception study. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods, such as QUEST, require the a priori choice of a family of functions (e.g., Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk, we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e., it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's internal parameters are self-tuned and thus do not need to be manually chosen using heuristics (e.g., step size in the staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain’s life-long learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data and/or prior knowledge about the network's computation. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
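In its simplest form, "deducing a plasticity rule from data" reduces to fitting the coefficients of a parameterized rule to observed weight changes. This toy version uses ordinary least squares rather than the adversarial or simulation-based-inference machinery of the talk, and the hidden rule and its coefficients are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden ground-truth rule: dw = a * pre * post + b * post, plus noise.
a_true, b_true = 0.3, -0.1
pre, post = rng.random(1000), rng.random(1000)
dw = a_true * pre * post + b_true * post + 0.01 * rng.standard_normal(1000)

# "Meta-learning" in miniature: regress the observed weight changes
# onto a basis of candidate plasticity terms to recover the rule.
F = np.column_stack([pre * post, post])
coef, *_ = np.linalg.lstsq(F, dw, rcond=None)
assert np.allclose(coef, [a_true, b_true], atol=0.02)
```

The approach in the talk generalizes this idea: the rule is parameterized more flexibly, and the fit is driven by network-level activity or task loss rather than direct observations of dw.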
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
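The prototype decision rule itself is easy to state concretely. A minimal NumPy sketch with synthetic Gaussian "concept manifolds" standing in for IT or DNN representations (the dimensionality, noise level, and shot count are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k_shot, n_test = 50, 5, 200

# Two novel "concept manifolds": Gaussian clouds around random centres
# in a d-dimensional feature space.
mu_a, mu_b = rng.standard_normal(d), rng.standard_normal(d)
sample = lambda mu, n: mu + 0.5 * rng.standard_normal((n, d))

# Prototype rule: average the few training examples of each concept ...
proto_a = sample(mu_a, k_shot).mean(axis=0)
proto_b = sample(mu_b, k_shot).mean(axis=0)

# ... then classify each test point by its nearest prototype.
test = np.vstack([sample(mu_a, n_test), sample(mu_b, n_test)])
labels = np.array([0] * n_test + [1] * n_test)
pred = (np.linalg.norm(test - proto_b, axis=1)
        < np.linalg.norm(test - proto_a, axis=1)).astype(int)
accuracy = (pred == labels).mean()
```

The theory in the paper links the accuracy of exactly this rule to geometric properties of the concept manifolds (radius, dimensionality, and separation of the clouds in this toy picture).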
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
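The basic ingredient, scoring a candidate edge j -> i by the lag-1 correlation, can be sketched on a toy directed ring. This shows only the naive lagged-correlation score; the paper's corrected PEMs for confounding and reverse causation are not reproduced, and the network, coupling strengths, and series length are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T_steps = 6, 20000

# Ground-truth directed ring: A[i, j] = 1 encodes an edge j -> i.
A = np.zeros((n, n))
for j in range(n):
    A[(j + 1) % n, j] = 1.0

# Linear stochastic process with slow mean-reversion (diagonal 0.9).
W = 0.9 * np.eye(n) + 0.05 * A
X = np.zeros((T_steps, n))
for t in range(T_steps - 1):
    X[t + 1] = W @ X[t] + rng.standard_normal(n)

# Lag-1 correlation matrix: entry (i, j) scores the candidate edge j -> i.
Xc = X - X.mean(axis=0)
C0 = Xc[:-1].T @ Xc[:-1] / (T_steps - 1)
C1 = Xc[1:].T @ Xc[:-1] / (T_steps - 1)
score = C1 / np.sqrt(np.outer(np.diag(C0), np.diag(C0)))

# True edges should score higher on average than absent ones.
off = ~np.eye(n, dtype=bool)
true_edges = A.astype(bool)
assert score[true_edges].mean() > score[off & ~true_edges].mean()
```

The slow mean-reversion makes the naive score vulnerable to confounding and reverse causation, which is precisely what the paper's two corrected PEMs are designed to remove.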
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
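The role-filler binding at the heart of a VSA can be sketched with random bipolar vectors and elementwise (Hadamard) binding, one common VSA scheme. The paper's specific scheme, plasticity rules, and recurrent dynamics are not reproduced, and the example relation is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024  # dimension of the distributed activity patterns

# Random bipolar codebook vectors for roles (attributes) and fillers.
code = {name: rng.choice([-1, 1], d) for name in
        ["agent", "action", "patient", "dog", "chases", "cat"]}

# Bind each role to its filler (elementwise product), superpose the
# bound pairs, and binarize to get one distributed structure pattern.
s = np.sign(code["agent"] * code["dog"]
            + code["action"] * code["chases"]
            + code["patient"] * code["cat"])

# Unbinding: multiplying by a role vector yields a noisy filler ...
probe = s * code["agent"]
# ... which a nearest-neighbour (associative memory) lookup cleans up.
sims = {name: probe @ vec / d for name, vec in code.items()}
assert max(sims, key=sims.get) == "dog"
```

In the full model, such binarized structure patterns are stored as fixed points of a recurrent network, so both the whole structure and its individual building blocks can be recovered from partial cues.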
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
Online Training of Spiking Recurrent Neural Networks With Memristive Synapses
Spiking recurrent neural networks (RNNs) are a promising tool for solving a wide variety of complex cognitive and motor tasks, due to their rich temporal dynamics and sparse processing. However, training spiking RNNs on dedicated neuromorphic hardware is still an open challenge. This is due mainly to the lack of local, hardware-friendly learning mechanisms that can solve the temporal credit assignment problem and ensure stable network dynamics, even when the weight resolution is limited. These challenges are further accentuated if one resorts to using memristive devices for in-memory computing to resolve the von Neumann bottleneck problem, at the expense of a substantial increase in variability in both the computation and the working memory of the spiking RNNs. In this talk, I will present our recent work in which we introduced a PyTorch simulation framework of memristive crossbar arrays that enables accurate investigation of such challenges. I will show that the recently proposed e-prop learning rule can be used to train spiking RNNs whose weights are emulated in the presented simulation framework. Although e-prop locally approximates the ideal synaptic updates, it is difficult to implement the updates on the memristive substrate due to substantial device non-idealities. I will discuss several widely adopted weight update schemes that primarily aim to cope with these device non-idealities, and demonstrate that accumulating gradients can enable online and efficient training of spiking RNNs on memristive substrates.
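The final point, accumulating small gradients until they amount to a full low-resolution device step, can be illustrated on a toy online-learning problem. This uses plain logistic regression rather than e-prop or a spiking RNN, and the weight resolution and learning rate are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linearly separable task standing in for the spiking RNN task.
X = rng.standard_normal((500, 10))
w_true = rng.standard_normal(10)
y = (X @ w_true > 0).astype(float)

lr, step = 0.05, 0.1     # 'step' mimics a coarse device weight resolution
w = np.zeros(10)         # low-resolution weights held on the device
acc = np.zeros(10)       # high-resolution gradient accumulator

for x_t, y_t in zip(X, y):               # online, one sample at a time
    p = 1 / (1 + np.exp(-x_t @ w))
    acc += lr * (y_t - p) * x_t          # accumulate small updates ...
    pulses = np.round(acc / step)        # ... apply only full device steps
    w += pulses * step
    acc -= pulses * step                 # keep the unapplied remainder

accuracy = ((X @ w > 0).astype(float) == y).mean()
```

Without the accumulator, every update smaller than half a device step would be rounded away and learning would stall; accumulating the residuals preserves the information in sub-resolution gradients, which is the essence of the schemes discussed in the talk.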
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
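A standard normative form of such a non-linear accumulator (cf. Glaze et al., 2015) makes the stability-flexibility tradeoff concrete; the hazard rate and evidence values below are arbitrary illustrations:

```python
import math

def update_belief(L, llr, hazard):
    """One update of the log-posterior odds L about a hidden state that
    changes with probability `hazard` per sample; as hazard -> 0 this
    reduces to perfect (linear) integration of the evidence llr."""
    k = (1 - hazard) / hazard
    prior = L + math.log(k + math.exp(-L)) - math.log(k + math.exp(L))
    return prior + llr

# in a volatile environment (high hazard) the accumulator saturates, so a
# few strong samples after a change-point can still reverse the belief
L = 0.0
for _ in range(50):
    L = update_belief(L, llr=1.0, hazard=0.3)
# L stays bounded near its fixed point (~1.5) instead of growing to 50
```

The saturation is exactly the non-linearity the abstract refers to: it sacrifices some stability (beliefs never become unshakeable) to preserve flexibility after hidden environmental changes.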
Optimal information loading into working memory in prefrontal cortex
Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. Using mathematical analysis, numerical simulations, and recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading, and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.
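Why orthogonal inputs can load information optimally can be glimpsed in a two-unit toy linear circuit with non-normal dynamics; the numbers are illustrative and not from the monkey PFC analysis:

```python
import math

def respond(load, t, w=5.0, slow=-0.01, fast=-1.0):
    """Activity of the slow ('persistent') unit at time t after a unit
    input pulse along `load`, in a toy non-normal circuit
    dx1/dt = slow*x1 + w*x2,  dx2/dt = fast*x2 (illustrative numbers)."""
    u1, u2 = load
    c = w * u2 / (fast - slow)   # closed-form solution of the linear system
    return (u1 - c) * math.exp(slow * t) + c * math.exp(fast * t)

# loading along the persistent mode itself (1, 0) versus along the
# orthogonal direction (0, 1), read out well after the input is gone
persistent_load = respond((1.0, 0.0), t=5.0)
orthogonal_load = respond((0.0, 1.0), t=5.0)
# the orthogonal input is transiently amplified through the feedforward
# weight w into a much larger persistent response
```

In this sketch the input direction that maximizes later persistent activity is not the persistent pattern itself, mirroring the abstract's central claim about information loading.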
The Problem of Testimony
The talk will detail work drawing on behavioural results, formal analysis, and computational modelling with agent-based simulations to unpack the scale of the challenge humans face when trying to work out and factor in the reliability of their sources. In particular, it is shown how and why this task admits of no easy solution in the context of wider communication networks, and how this will affect the accuracy of our beliefs. The implications of this for the shift in the size and topology of our communication networks through the uncontrolled rise of social media are discussed.
Non-regular behavior during the coalescence of liquid-like cellular aggregates
The fusion of cell aggregates occurs widely in biological processes such as development, tissue regeneration, and tumor invasion. Cellular spheroids (spherical cell aggregates) are commonly used to study this phenomenon. In previous studies, with approximate assumptions and measurements, researchers found that the fusion of two spheroids of certain cell types is similar to the coalescence of two liquid droplets. However, with more accurate measurements focusing on the overall shape evolution in this process, we find that even in the previously regarded liquid-like regime, the fusion process of spheroids can be very different from regular liquid coalescence. We conduct numerical simulations using both standard particulate models and vertex models, with both Molecular Dynamics and Brownian Dynamics. The simulation results show that the difference between spheroids and regular liquid droplets is caused by the microscopic overdamped dynamics of each cell rather than by the topological cell-cell interactions in the vertex model. Our research reveals the necessity of a new continuum theory for "liquids" with microscopically overdamped components, such as cellular and colloidal systems. Detailed analysis of our simulation results for different system sizes provides the basis for developing the new theory.
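The microscopic distinction between Molecular Dynamics and Brownian Dynamics that the result turns on is the overdamped update rule, sketched here in one dimension with illustrative parameters:

```python
import math
import random

def brownian_step(x, force, dt=1e-3, mobility=1.0, diffusion=1.0):
    """One overdamped (Brownian dynamics) update: the velocity is slaved
    to the instantaneous force (no inertial term), plus thermal noise
    with variance 2*D*dt. All parameters are illustrative."""
    noise = math.sqrt(2.0 * diffusion * dt) * random.gauss(0.0, 1.0)
    return x + mobility * force * dt + noise

# a cell-like overdamped particle relaxing in a harmonic trap: it creeps
# toward the origin with no ballistic transient, unlike an inertial
# (molecular dynamics) particle, which would overshoot and oscillate
random.seed(0)
x = 5.0
for _ in range(5000):
    x = brownian_step(x, force=-x)
```

The absence of the inertial term, rather than any topological rule for cell-cell contacts, is the ingredient the abstract identifies as driving the departure from regular liquid coalescence.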
Multiscale modeling of brain states, from spiking networks to the whole brain
Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).
Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations
To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations with analytic derivations of its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
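In one dimension with Gaussian noise, the Bayesian filtering step of such an ideal observer reduces to a Kalman update; this toy version (with assumed noise values) shows how self-motion and visual inputs are fused by precision weighting:

```python
def kalman_localize(mu, var, motion, motion_var, obs, obs_var):
    """One cycle of Bayesian filtering for a 1D allocentric position:
    predict with a noisy self-motion estimate, then correct with a
    noisy visual observation. A toy stand-in for the ideal observer."""
    # prediction: uncertainty grows with self-motion noise
    mu, var = mu + motion, var + motion_var
    # correction: precision-weighted fusion with the visual input
    k = var / (var + obs_var)        # Kalman gain
    mu = mu + k * (obs - mu)
    var = (1 - k) * var
    return mu, var

mu, var = 0.0, 1.0
mu, var = kalman_localize(mu, var, motion=1.0, motion_var=0.5,
                          obs=1.2, obs_var=1.5)
# posterior mean (1.1) lies between the dead-reckoned estimate (1.0)
# and the visual fix (1.2), with reduced uncertainty (0.75)
```

Manipulating environmental geometry in the full model effectively changes `obs_var` across locations, which is how uncertainty comes to shape both homing responses and grid field deformations.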
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulation. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library, but we have subsequently added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other open-source frameworks such as Brian2GeNN and PyNN.
Cognitive Maps
Ample evidence suggests that the brain generates internal simulations of the outside world to guide our thoughts and actions. These mental representations, or cognitive maps, are thought to be essential for our very comprehension of reality. I will discuss what is known about the informational structure of cognitive maps, their neural underpinnings, and how they relate to behavior, evolution, disease, and the current revolution in artificial intelligence.
NaV Long-term Inactivation Regulates Adaptation in Place Cells and Depolarization Block in Dopamine Neurons
In behaving rodents, CA1 pyramidal neurons receive spatially tuned depolarizing synaptic input while traversing a specific location within an environment, called the cell's place field. Midbrain dopamine neurons participate in reinforcement learning, and bursts of action potentials riding a depolarizing wave of synaptic input signal rewards and reward expectation. Interestingly, slice electrophysiology in vitro shows that both types of cells exhibit a pronounced reduction in firing rate (adaptation) and even cessation of firing during sustained depolarization. We included a five-state Markov model of NaV1.6 (for CA1) and NaV1.2 (for dopamine neurons), respectively, in computational models of these two types of neurons. Our simulations suggest that long-term inactivation of this channel is responsible for the adaptation in CA1 pyramidal neurons in response to triangular depolarizing current ramps. We also show that the differential contribution of slow inactivation in two subpopulations of midbrain dopamine neurons can account for their different dynamic ranges, as assessed by their responses to similar depolarizing ramps. These results suggest that long-term inactivation of the sodium channel is a general mechanism for adaptation.
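The occupancy dynamics behind this mechanism can be illustrated with a reduced three-state channel scheme; the talk's models use five-state schemes with fitted NaV1.2/NaV1.6 rates, whereas the rates below are made up for illustration:

```python
def propagate(p, rates, dt=1e-4, steps=20000):
    """Forward-Euler evolution of state occupancies in a small channel
    Markov model: closed C <-> open O <-> long-term inactivated I.
    A 3-state reduction for illustration only."""
    c, o, i = p
    k_co, k_oc, k_oi, k_io = rates   # per-second transition rates
    for _ in range(steps):
        dc = k_oc * o - k_co * c
        do = k_co * c - (k_oc + k_oi) * o + k_io * i
        di = k_oi * o - k_io * i
        c, o, i = c + dt * dc, o + dt * do, i + dt * di
    return c, o, i

# sustained depolarization: fast C<->O gating, slow entry into I, and
# very slow recovery from I
c, o, i = propagate((1.0, 0.0, 0.0), rates=(500.0, 100.0, 5.0, 0.1))
# over 2 s, occupancy drains into the long-term inactivated state, so the
# open probability (and with it the spiking drive) adapts away
```

The key asymmetry is that recovery from the long-term inactivated state is orders of magnitude slower than gating, so inactivation accumulates over seconds of depolarization, which is the timescale of the observed firing-rate adaptation and depolarization block.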
An economic decision-making model of anticipated surprise with dynamic expectation
When making decisions under risk, people often exhibit behaviours that classical economic theories cannot explain. Newer models that attempt to account for these 'irrational' behaviours often lack a basis in neuroscience and require the introduction of subjective and problem-specific constructs. Here, we present a decision-making model inspired by the prediction error signals and introspective neuronal replay reported in the brain. In the model, decisions are chosen based on 'anticipated surprise', defined by a nonlinear average of the differences between individual outcomes and a reference point. The reference point is determined by the expected value of the possible outcomes, which can change dynamically during the mental simulation of decision-making problems involving sequential stages. Our model elucidates the contribution of each stage to the appeal of the available options in a decision-making problem. This allows us to explain several economic paradoxes and gambling behaviours. Our work could help bridge the gap between decision-making theories in economics and neuroscience.
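A minimal sketch of valuation by anticipated surprise, assuming a simple asymmetric weighting of negative surprises (the specific nonlinearity here is an illustration, not the paper's fitted form):

```python
def anticipated_surprise_value(outcomes, probs, loss_aversion=2.0):
    """Score a gamble by its expected value minus a nonlinear average of
    surprises around the reference point (the expected value itself).
    Negative surprises are weighted more heavily, an assumed asymmetry."""
    ev = sum(p * x for p, x in zip(probs, outcomes))
    surprise = sum(p * (loss_aversion if x < ev else 1.0) * abs(x - ev)
                   for p, x in zip(probs, outcomes))
    return ev - surprise

safe = anticipated_surprise_value([50.0], [1.0])
risky = anticipated_surprise_value([100.0, 0.0], [0.5, 0.5])
# both gambles have expected value 50, but the risky one carries
# anticipated surprise, so the model prefers the certain outcome
```

Because the reference point is recomputed at each stage of a sequential problem, the same machinery assigns different values to the same outcome depending on when in the mental simulation it is considered, which is how the model decomposes each stage's contribution to an option's appeal.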
NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule
Spike-Timing-Dependent Plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian, and symmetrical LTP/LTD. In this work, we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate these distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models, it retains these characteristics across different neuron models, which allows for comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependence experiments, and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it allows combination with more complex, biophysical neuron models, yet is computationally efficient enough to be used in network simulations. It therefore offers the possibility to explore, in future work, the functional role of the different kernel types and electrophysiological differences in heterogeneous networks.
NMC4 Short Talk: Brain-inspired spiking neural network controller for a neurorobotic whisker system
It is common for animals to use self-generated movements to actively sense the surrounding environment. For instance, rodents rhythmically move their whiskers to explore the space close to their body. The mouse whisker system has become a standard model for studying active sensing and sensorimotor integration through feedback loops. In this work, we developed a bioinspired spiking neural network model of the sensorimotor peripheral whisker system, modelling the trigeminal ganglion, trigeminal nuclei, facial nuclei, and central pattern generator neuronal populations. This network was embedded in a virtual mouse robot using the Neurorobotics Platform, a simulation platform offering a virtual environment in which to develop and test robots driven by brain-inspired controllers. Finally, the peripheral whisker system was connected to an adaptive cerebellar network controller. The whole system was able to drive active whisking with learning capability, matching neural correlates of behaviour experimentally recorded in mice.
NMC4 Short Talk: A mechanism for inter-areal coherence through communication based on connectivity and oscillatory power
Inter-areal coherence between cortical field-potentials is a widespread phenomenon and depends on numerous behavioral and cognitive factors. It has been hypothesized that inter-areal coherence reflects phase-synchronization between local oscillations and flexibly gates communication. We reveal an alternative mechanism, where coherence results from and is not the cause of communication, and naturally emerges as a consequence of the fact that spiking activity in a sending area causes post-synaptic inputs both in the same area and in other areas. Consequently, coherence depends in a lawful manner on oscillatory power and phase-locking in a sending area and inter-areal connectivity. We show that changes in oscillatory power explain prominent changes in fronto-parietal beta-coherence with movement and memory, and LGN-V1 gamma-coherence with arousal and visual stimulation. Optogenetic silencing of a receiving area and E/I network simulations demonstrate that afferent synaptic inputs rather than spiking entrainment are the main determinant of inter-areal coherence. These findings suggest that the unique spectral profiles of different brain areas automatically give rise to large-scale inter-areal coherence patterns that follow anatomical connectivity and continuously reconfigure as a function of behavior and cognition.
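The core claim, that coherence follows from sender power and connectivity rather than from phase-synchronized gating, can be illustrated with a toy closed form at a single frequency (a simplification under strong assumptions, not the paper's network model):

```python
def interareal_coherence(sender_power, coupling, local_power):
    """Coherence magnitude at one frequency when the receiver's field
    potential is coupling * (sender signal) + independent local activity.
    Under these assumptions coherence tracks sender oscillatory power and
    anatomical connectivity, with no phase-synchronization 'gate'."""
    transmitted = coupling ** 2 * sender_power
    return (transmitted / (transmitted + local_power)) ** 0.5

weak = interareal_coherence(sender_power=1.0, coupling=0.2, local_power=1.0)
strong = interareal_coherence(sender_power=5.0, coupling=0.2, local_power=1.0)
# raising only the sender's oscillatory power raises inter-areal
# coherence, mirroring the beta- and gamma-band effects in the abstract
```

In this picture, the lawful dependence on sender power and connectivity is automatic: any behavioral state that changes an area's oscillatory power reconfigures its large-scale coherence pattern without any dedicated synchronization mechanism.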
The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models
Brain microstructure plays a key role in driving the transport of drug molecules directly administered to the brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long association, and projection fibers, namely the corpus callosum, the fornix, and the corona radiata. Ovine samples from three different subjects were imaged using a scanning electron microscope combined with focused ion beam milling. Particular focus has been given to the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) was performed, and the outer axonal ellipticity, outer axonal cross-sectional area, and its relative perimeter were measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material containing elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery incorporating well-resolved microstructural features. As a demonstration of the use of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed from the acquired electron microscopy images. Considering that the white matter structure is mainly composed of elongated and parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both of the white matter structures analysed, thus demonstrating their anisotropic behaviour. This is in line with experimental results obtained using perfusion of brain matter [3].
Moreover, we find a significant difference between the permeability of the corpus callosum and that of the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only to drug delivery modelling but also to shedding light on the interstitial transport mechanisms of the extracellular space. These and many other discoveries will be discussed during the talk.
References: [1] https://www.researchsquare.com/article/rs-686577/v1; [2] https://www.pnas.org/content/118/36/e2105328118; [3] https://ieeexplore.ieee.org/abstract/document/9198110
Networking—the key to success… especially in the brain
In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to elicit efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular scale) brain networks inferred from spontaneous activity show similar complex topology to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulation of model parameters as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for treatment of neurodevelopmental disorders like Rett Syndrome.
Understanding the Invisibility of Scotomas: Novel Simulations
Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models
The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
Deriving local synaptic learning rules for efficient representations in networks of spiking neurons
How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
Neuropunk revolution and its implementation via real-time neurosimulations and their integrations
In this talk I present perspectives on "neuropunk revolution" technologies: the integration of real-time neurosimulations into biological nervous and motor systems via neurostimulation, or into artificial robotic systems via actuators. I see the added value of real-time neurosimulation as a bridge technology linking a set of already developed technologies (BCI, neuroprosthetics, AI, robotics) to provide bio-compatible integration with biological or artificial limbs. I present three types of integration: inbound, outbound, and closed-loop in-outbound systems. The proposed concept shifts how we view these technologies, for example through the integration of a simulated part of the nervous system, external to the body, back into the biological nervous system or muscles.
Beyond the binding problem: From basic affordances to symbolic thought
Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.
How polymer-loop-extruding motors shape chromosomes
Chromosomes are extremely long, active polymers that are spatially organized across multiple scales to promote cellular functions, such as gene transcription and genetic inheritance. During each cell cycle, chromosomes are dramatically compacted as cells divide and dynamically reorganized into less compact, spatiotemporally patterned structures after cell division. These activities are facilitated by DNA/chromatin-binding protein motors called SMC complexes. Each of these motors can perform a unique activity known as “loop extrusion,” in which the motor binds the DNA/chromatin polymer, reels in the polymer fiber, and extrudes it as a loop. Using simulations and theory, I show how loop-extruding motors can collectively compact and spatially organize chromosomes in different scenarios. First, I show that loop-extruding complexes can generate sufficient compaction for cell division, provided that loop-extrusion satisfies stringent physical requirements. Second, while loop-extrusion alone does not uniquely spatially pattern the genome, interactions between SMC complexes and protein “boundary elements” can generate patterns that emerge in the genome after cell division. Intriguingly, these “boundary elements” are not necessarily stationary, which can generate a variety of patterns in the neighborhood of transcriptionally active genes. These predictions, along with supporting experiments, show how SMC complexes and other molecular machinery, such as RNA polymerase, can spatially organize the genome. More generally, this work demonstrates both the versatility of the loop extrusion mechanism for chromosome functional organization and how seemingly subtle microscopic effects can emerge in the spatiotemporal structure of nonequilibrium polymers.
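A one-dimensional toy version of loop extrusion with stationary boundary elements can be written in a few lines; the stepping and stalling rules below are deliberately minimal and only illustrate the mechanism, not the talk's polymer simulations:

```python
def extrude(n_sites, motors, boundaries, steps):
    """Toy 1D loop extrusion: each motor holds two legs on a lattice
    polymer and moves them apart one site per step, stalling at boundary
    elements, at the polymer ends, or at another motor's leg."""
    occupied = {leg for m in motors for leg in m}
    for _ in range(steps):
        for m in motors:
            left, right = m
            if left - 1 >= 0 and left - 1 not in boundaries and left - 1 not in occupied:
                occupied.discard(left)
                left -= 1
                occupied.add(left)
            if right + 1 < n_sites and right + 1 not in boundaries and right + 1 not in occupied:
                occupied.discard(right)
                right += 1
                occupied.add(right)
            m[0], m[1] = left, right
    return motors

# one motor loading mid-polymer on a 21-site lattice, with boundary
# elements at sites 3 and 17: the loop grows until both legs stall,
# spanning the boundary-delimited domain (legs at sites 4 and 16)
motors = extrude(21, [[10, 11]], boundaries={3, 17}, steps=20)
```

Making the boundary set time-dependent, as the abstract suggests for moving boundary elements near active genes, is a one-line change to this rule set and already produces qualitatively different loop patterns.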
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN's model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being constructed with workflows to study some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power 'events'. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial ("rate"), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory of the circuit origin of such beta events and suggested that beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans.
In total, HNN provides an unprecedented, biophysically principled tool to link mechanism to meaning in human E/MEG signals.
Do leader cells drive collective behavior in Dictyostelium discoideum amoeba colonies?
Dictyostelium discoideum (DD) is a fascinating single-celled organism. When nutrients are plentiful, DD cells act as autonomous individuals foraging their local vicinity. At the onset of starvation, a few (<0.1%) cells begin communicating with others by emitting a spike of the chemoattractant cyclic AMP. Nearby cells sense the chemical gradient and respond by moving toward it and emitting a cyclic-AMP spike of their own. Cyclic-AMP activity increases over time, and eventually a spiral wave emerges, attracting hundreds of thousands of cells to an aggregation center. How DD cells go from autonomous individuals to a collective entity has remained an open question for more than 60 years, a question whose answer would shed light on the emergence of multicellular life. Recently, trans-scale imaging has made it possible to sense cyclic-AMP activity at both the cell and colony levels. Using both these images and toy simulation models, this research aims to clarify whether the activity at the colony level is in fact initiated by a few cells, which may be deemed "leader" or "pacemaker" cells. In this talk, I will demonstrate the use of information-theoretic techniques to classify leaders and followers based on trajectory data, as well as to infer the domain of interaction of leader cells. We validate the techniques on toy models where leaders and followers are known, and then try to answer the question in real data: do leader cells drive collective behavior in DD colonies?
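As a stand-in for the information-theoretic analysis, a simple time-lagged correlation already captures the directional logic of leader-follower classification (a full analysis would use transfer entropy or similar measures):

```python
def lagged_correlation(x, y, lag):
    """Correlation between x(t) and y(t+lag): a crude directional
    coupling proxy for 'does x's activity predict y's future?'."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

# a "leader" signal and a "follower" that copies it two frames later
leader = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
follower = [0, 0] + leader[:-2]
fwd = lagged_correlation(leader, follower, lag=2)   # near-perfect
rev = lagged_correlation(follower, leader, lag=2)   # much weaker
```

The asymmetry between the forward and reverse lagged dependencies is what lets the analysis label cells as leaders or followers from trajectory data alone, without observing the cyclic-AMP signalling directly.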
Coordinated motion of active filaments on spherical surfaces
Filaments (slender, microscopic elastic bodies) are prevalent in biological and industrial settings. In the biological case, the filaments are often active, in that they are driven internally by motor proteins, the prime examples being cilia and flagella. For cilia in particular, which can appear in dense arrays, the resulting motions are coupled through the surrounding fluid, as well as through the surfaces to which they are attached. In this talk, I present numerical simulations exploring the coordinated motion of active filaments and how it depends on the driving force, the density of filaments, and the attached surface. In particular, we find that when the surface is spherical, its topology introduces local defects in coordinated motion which can then feed back and alter the global state. This is particularly true when the surface is not held fixed and is free to move in the surrounding fluid. These simulations take advantage of a computational framework we developed for fully 3D filament motion, combining unit quaternions, implicit geometric time integration, quasi-Newton methods, and fast, matrix-free methods for hydrodynamic interactions; this framework will also be presented.
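As a toy illustration of the quaternion-based orientation handling (the actual framework uses implicit geometric time integration; this sketch uses explicit Euler steps with renormalization), a segment frame can be evolved by integrating dq/dt = (1/2) omega ⊗ q for a prescribed angular velocity:

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate(q, v):
    """Rotate vector v by unit quaternion q (q ⊗ v ⊗ q*)."""
    qv = np.concatenate([[0.0], v])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(quat_mult(q, qv), q_conj)[1:]

# spin a segment frame about the z-axis at angular velocity pi rad/time
omega = np.array([0.0, 0.0, np.pi])
q = np.array([1.0, 0.0, 0.0, 0.0])    # identity orientation
dt = 1e-4
for _ in range(10000):                # total time 1.0 -> rotation by pi
    dq = 0.5 * quat_mult(np.concatenate([[0.0], omega]), q)
    q = q + dt * dq
    q /= np.linalg.norm(q)            # project back onto the unit sphere

tangent = rotate(q, np.array([1.0, 0.0, 0.0]))
# after a half turn about z, the x-axis tangent points along -x
assert np.allclose(tangent, [-1.0, 0.0, 0.0], atol=1e-3)
```

The renormalization step stands in for the geometric structure preservation that the implicit integrator in the talk provides natively.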
Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations
Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.
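The binding-pool idea can be sketched in a few lines (a simplified stand-in, not the MLR implementation): each token owns a random binding matrix that projects an item's latent vector into a shared pool of neurons, storage is superposition, and retrieval un-binds one token with the pseudoinverse, with other items contributing only low-amplitude cross-talk.

```python
import numpy as np

rng = np.random.default_rng(1)
N_POOL, D_LATENT, N_ITEMS = 2000, 10, 3

# one fixed random binding matrix per token: it projects an item's
# latent vector into the shared pool of binding neurons
bindings = [rng.standard_normal((N_POOL, D_LATENT)) for _ in range(N_ITEMS)]
latents = [rng.standard_normal(D_LATENT) for _ in range(N_ITEMS)]

# encoding: pool activity is the superposition of all bound items
pool = sum(B @ z for B, z in zip(bindings, latents))

# retrieval: un-bind one token via the pseudoinverse of its binding matrix;
# the other items only add low-amplitude cross-talk noise
z_hat = np.linalg.pinv(bindings[0]) @ pool

r = np.corrcoef(latents[0], z_hat)[0, 1]
assert r > 0.95  # item 0 is recovered despite interference
```

The fidelity of recovery grows with pool size relative to the number and dimensionality of stored items, mirroring the capacity limits the model predicts for WM.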
Digitization as a driving force for collaboration in neuroscience
Many of the collaborations we encounter in our scientific careers are centered on a common idea that can be associated with certain resources, such as a dataset, an algorithm, or a model. All partners in a collaboration need to develop a common understanding of these resources, and need to be able to access them in a simple and unambiguous manner in order to avoid incorrect conclusions, especially in highly cross-disciplinary contexts. While digital computers have assisted scientific workflows in experiment and simulation for many decades, the high degree of heterogeneity in the field has led to a scattered landscape of highly customized, lab-internal solutions for organizing and managing resources on a project-by-project basis. Only with the availability of modern technologies such as the semantic web, platforms for collaborative coding, and data standards spanning different disciplines do we have tools at our disposal to make resources increasingly more accessible, understandable, and usable. However, without overarching standardization efforts and adaptation of such technologies to the workflows and needs of individual researchers, their adoption by the neuroscience community will be impeded. From the perspective of computational neuroscience, which is inherently dependent on leveraging data and methods from across neuroscience for inspiration and validation, I will outline my view on past and present developments towards a more rigorous use of digital resources and how they have improved collaboration, and introduce emerging initiatives to support this process in the future (e.g., EBRAINS http://ebrains.eu, NFDI-Neuro http://www.nfdi-neuro.de).
An in-silico framework to study the cholinergic modulation of the neocortex
Neuromodulators control information processing in cortical microcircuits by regulating the cellular and synaptic physiology of neurons. Computational models and detailed simulations of neocortical microcircuitry offer a unifying framework to analyze the role of neuromodulators in network activity. In the present study, to gain deeper insight into the organization of the cortical neuropil for modeling purposes, we quantify the fiber length per cortical volume and the density of varicosities for the catecholaminergic, serotonergic, and cholinergic systems using immunocytochemical staining and stereological techniques. The data obtained are integrated into a biologically detailed digital reconstruction of the rodent neocortex (Markram et al., 2015) in order to model the influence of modulatory systems on the activity of the somatosensory neocortical column. Simulations of ascending modulation of network activity in our model predict the effects of increasing levels of neuromodulators on diverse neuron types and synapses and reveal a spectrum of activity states. Low levels of neuromodulation drive microcircuit activity into slow oscillations and network synchrony, whereas high neuromodulator concentrations govern fast oscillations and network asynchrony. The models and simulations thus provide a unifying in silico framework to study the role of neuromodulators in reconfiguring network activity.
GED: A flexible family of versatile methods for hypothesis-driven multivariate decompositions
Does that title put you to sleep or pique your interest? The goal of my presentation is to introduce a powerful yet under-utilized mathematical equation that is surprisingly effective at uncovering spatiotemporal patterns embedded in data -- patterns that might be inaccessible to traditional analysis methods due to low SNR or sparse spatial distribution. If you flunked calculus, don't worry: the math is really easy, and I'll spend most of the time discussing intuition, simulations, and applications in real data. I will also spend some time at the beginning of the talk providing a bird's-eye view of the empirical research in my lab, which focuses on mesoscale brain dynamics associated with error monitoring and response competition.
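Assuming GED here stands for generalized eigendecomposition (the usual reading in this context), the "one equation" is Sw = lambda*Rw for two covariance matrices: S from the data of interest and R from reference data. A minimal simulation sketch of how the top generalized eigenvector acts as a spatial filter:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n_ch, n_t = 16, 20000
fs = 1000.0
t = np.arange(n_t) / fs

# ground-truth spatial pattern of a 10 Hz source buried in channel noise
pattern = rng.standard_normal(n_ch)
source = np.sin(2 * np.pi * 10 * t)
noise = rng.standard_normal((n_ch, n_t))
baseline = rng.standard_normal((n_ch, n_t))
data = np.outer(pattern, source) + noise

# GED: maximize w' S w / w' R w, with S the covariance of the data of
# interest and R the covariance of the reference (baseline) data
S = np.cov(data)
R = np.cov(baseline)
evals, evecs = eigh(S, R)          # generalized eigenpairs, ascending order
w = evecs[:, -1]                   # filter with the largest eigenvalue

# the recovered spatial pattern (up to sign and scale) is S @ w;
# compare it with the ground truth via absolute correlation
recovered = S @ w
r = abs(np.corrcoef(pattern, recovered)[0, 1])
assert r > 0.9
```

The same two-covariance recipe covers many named methods (CSP, spatio-spectral decomposition, and so on), which differ only in how S and R are constructed.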
Combining two mechanisms to produce neural firing rate homeostasis
The typical goal of homeostatic mechanisms is to ensure a system operates at or in the vicinity of a stable set point, where a particular measure is relatively constant and stable. Neural firing rate homeostasis is unusual in that a set point of fixed firing rate is at odds with the goal of a neuron to convey information, or produce timed motor responses, which require temporal variations in firing rate. Therefore, for a neuron, a range of firing rates is required for optimal function, which could, for example, be set by a dual system that controls both mean and variance of firing rate. We explore, both via simulations and analysis, how two experimentally measured mechanisms for firing rate homeostasis can cooperate to improve information processing and avoid the pitfall of pulling in different directions when their set points do not appear to match.
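A minimal sketch of such a dual system (hypothetical update rules, not the experimentally measured mechanisms from the talk): one slow controller adjusts a threshold to hold the mean firing rate at its set point, while a second adjusts the gain to hold rate variability, so the neuron settles on a range of rates rather than a single fixed rate.

```python
import numpy as np

rng = np.random.default_rng(3)
TARGET_MEAN, TARGET_STD = 5.0, 2.0   # desired firing-rate set points (Hz)
gain, thresh = 1.0, 0.0
eta = 1e-3                            # homeostatic learning rate

means, stds = [], []
for step in range(20000):
    # fluctuating drive stands in for time-varying synaptic input
    drive = rng.normal(3.0, 4.0, size=200)
    rate = np.clip(gain * (drive - thresh), 0.0, None)
    m, s = rate.mean(), rate.std()
    # controller 1: the threshold nudges the mean rate to its set point
    thresh += eta * (m - TARGET_MEAN)
    # controller 2: the gain nudges rate variability to its set point
    gain += eta * (TARGET_STD - s)
    means.append(m)
    stds.append(s)

assert abs(np.mean(means[-1000:]) - TARGET_MEAN) < 0.5
assert abs(np.mean(stds[-1000:]) - TARGET_STD) < 0.5
```

Because the two controllers act on different parameters (offset versus gain), they can reach their set points simultaneously; mismatched set points acting on the same parameter would instead pull against each other, the pitfall the abstract mentions.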
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emerging evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity. We evaluate which elements of the dendritic computational repertoire are captured by these models. We show that many canonical elements of the dendritic computational repertoire can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite.
Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automatises the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular and network level neuroscience.
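The idea of least-squares parameter fits that preserve responses at chosen dendro-somatic locations can be sketched on a passive chain (a toy stand-in for the NEAT procedure, steady state only): reduce a three-compartment model to soma and tip while preserving their mutual impedance matrix.

```python
import numpy as np

# "full" passive model: 3 compartments in a chain (soma - trunk - tip),
# each with leak conductance g_l, coupled by axial conductance g_a (nS)
g_l, g_a = 1.0, 5.0
G_full = np.array([
    [g_l + g_a, -g_a,          0.0],
    [-g_a,      g_l + 2 * g_a, -g_a],
    [0.0,       -g_a,          g_l + g_a],
])
R_full = np.linalg.inv(G_full)          # steady-state impedance matrix

# keep only the locations we care about: soma (0) and dendritic tip (2)
R_target = R_full[np.ix_([0, 2], [0, 2])]
G_target = np.linalg.inv(R_target)

# reduced 2-compartment model: unknowns (g_soma, g_dend, g_coup);
# match its conductance-matrix entries to G_target in the least-squares sense
A = np.array([
    [1.0, 0.0, 1.0],    # G[0,0] = g_soma + g_coup
    [0.0, 1.0, 1.0],    # G[1,1] = g_dend + g_coup
    [0.0, 0.0, -1.0],   # G[0,1] = -g_coup
    [0.0, 0.0, -1.0],   # G[1,0] = -g_coup
])
b = np.array([G_target[0, 0], G_target[1, 1], G_target[0, 1], G_target[1, 0]])
(g_soma, g_dend, g_coup), *_ = np.linalg.lstsq(A, b, rcond=None)

G_red = np.array([[g_soma + g_coup, -g_coup], [-g_coup, g_dend + g_coup]])
R_red = np.linalg.inv(G_red)
assert np.allclose(R_red, R_target)  # soma-tip responses preserved exactly
```

The full method fits quasi-active impedance kernels across frequencies rather than a single steady-state matrix, but the least-squares structure is the same.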
Capacitance clamp - artificial capacitance in biological neurons via dynamic clamp
A basic time scale in neural dynamics from single cells to the network level is the membrane time constant - set by a neuron’s input resistance and its capacitance. Interestingly, the membrane capacitance appears to be more dynamic than previously assumed, with implications for neural function and pathology. Indeed, altered membrane capacitance has been observed in reaction to physiological changes like neural swelling, but also in ageing and Alzheimer's disease. Importantly, according to theory, even small changes of the capacitance can affect neuronal signal processing, e.g. increase network synchronization or facilitate transmission of high frequencies. Experimentally, robust methods to modify the capacitance of a neuron have been missing. Here, we present the capacitance clamp - an electrophysiological method for capacitance control based on an unconventional application of the dynamic clamp. In its original form, dynamic clamp mimics additional synaptic or ionic conductances by injecting their respective currents. Whereas a conductance directly governs a current, the membrane capacitance determines how fast the voltage responds to a current. Accordingly, capacitance clamp mimics an altered capacitance by injecting a dynamic current that slows down or speeds up the voltage response (Fig 1 A). For the required dynamic current, the experimenter only has to specify the original cell capacitance and the desired target capacitance. In particular, capacitance clamp requires no detailed model of the present conductances and thus can be applied in any excitable cell. To validate the capacitance clamp, we performed numerical simulations of the protocol and applied it to modify the capacitance of cultured neurons. First, we simulated capacitance clamp in conductance based neuron models and analysed impedance and firing frequency to verify the altered capacitance.
Second, in dentate gyrus granule cells from rats, we could reliably control the capacitance in a range of 75 to 200% of the original capacitance and observed pronounced changes in the shape of the action potentials: increasing the capacitance reduced after-hyperpolarization amplitudes and slowed down repolarization. To conclude, we present a novel tool for electrophysiology: the capacitance clamp provides reliable control over the capacitance of a neuron and thereby opens a new way to study the temporal dynamics of excitable cells.
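The core of the method can be sketched in a toy passive-membrane simulation (simplified from the actual protocol): the injected current scales the net membrane current by C_cell/C_target - 1, so the voltage integrates as if the capacitance were C_target.

```python
import numpy as np

C_CELL, C_TARGET = 100e-12, 200e-12    # F: mimic a doubled capacitance
R_M = 100e6                             # Ohm, membrane resistance
I_EXT = 50e-12                          # A, step current
dt, n_steps = 1e-5, 60000               # 10 us steps, 0.6 s total

v = 0.0
trace = np.empty(n_steps)
for i in range(n_steps):
    i_m = I_EXT - v / R_M                        # net membrane current
    # dynamic-clamp current that emulates the target capacitance:
    # rescale the net current so the voltage integrates as if C = C_TARGET
    i_dyn = (C_CELL / C_TARGET - 1.0) * i_m
    v += dt * (i_m + i_dyn) / C_CELL
    trace[i] = v

# the step response should now relax with tau = R_M * C_TARGET (20 ms),
# not the cell's intrinsic tau of 10 ms
v_inf = I_EXT * R_M                     # 5 mV steady state
tau_idx = int(R_M * C_TARGET / dt)      # one emulated time constant
assert abs(trace[tau_idx - 1] / v_inf - (1 - np.exp(-1))) < 0.01
```

In a real cell, i_m is not known directly; the clamp estimates it online from the measured voltage and the specified cell capacitance, which is why only C_cell and C_target need to be supplied.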
From 1D to 5D: Data-driven Discovery of Whole-brain Dynamic Connectivity in fMRI Data
The analysis of functional magnetic resonance imaging (fMRI) data can greatly benefit from flexible analytic approaches. In particular, the advent of data-driven approaches to identify whole-brain time-varying connectivity and activity has revealed a number of interesting and relevant variations in the data which, when ignored, can provide misleading information. In this lecture I will provide a comparative introduction to a range of data-driven approaches to estimating time-varying connectivity. I will also present detailed examples where studies of both brain health and disorder have been advanced by approaches designed to capture and estimate time-varying information in resting fMRI data. I will review several exemplar data sets analyzed in different ways to demonstrate the complementarity as well as the trade-offs of various modeling approaches to answer questions about brain function. Finally, I will review and provide examples of strategies for validating time-varying connectivity, including simulations, multimodal imaging, and comparative prediction within clinical populations, among others. As part of the interactive aspect I will provide a hands-on guide to the dynamic functional network connectivity toolbox within the GIFT software, including an online didactic analytic decision tree to introduce the various concepts and decisions that need to be made when using such tools.
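A minimal sliding-window example (illustrative only, far simpler than the GIFT dFNC pipeline) shows why time-varying estimates matter: when the coupling between two regions switches mid-scan, windowed correlations track the two states while the static whole-scan correlation averages them away.

```python
import numpy as np

rng = np.random.default_rng(4)
n_t, win = 600, 60          # time points and window length (in samples)

# two regions: correlated in the first half of the scan, decoupled after
shared = rng.standard_normal(n_t)
roi_a = shared + 0.3 * rng.standard_normal(n_t)
roi_b = np.where(np.arange(n_t) < n_t // 2,
                 shared + 0.3 * rng.standard_normal(n_t),
                 rng.standard_normal(n_t))

# sliding-window functional connectivity: correlate within each window
dfnc = np.array([
    np.corrcoef(roi_a[i:i + win], roi_b[i:i + win])[0, 1]
    for i in range(n_t - win)
])

early = dfnc[:100].mean()          # windows fully inside the coupled half
late = dfnc[-100:].mean()          # windows fully inside the decoupled half
assert early > 0.7 and abs(late) < 0.3

# a static (whole-scan) correlation averages the two states away
static = np.corrcoef(roi_a, roi_b)[0, 1]
assert late < static < early
```

Real analyses add window tapering, regularized covariance estimation, and clustering of windowed connectivity matrices into recurring states; this sketch isolates only the core estimator.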
Energy landscapes, order and disorder, and protein sequence coevolution: From proteins to chromosome structure
In vivo, the human genome folds into a characteristic ensemble of 3D structures. The mechanism driving the folding process remains unknown. A theoretical model for chromatin (the minimal chromatin model) that explains the folding of interphase chromosomes and generates chromosome conformations consistent with experimental data is presented. The energy landscape of the model was derived by using the maximum entropy principle and relies on two experimentally derived inputs: a classification of loci into chromatin types and a catalog of the positions of chromatin loops. This model was generalized by utilizing a neural network to infer these chromatin types from the epigenetic marks present at a locus, as assayed by ChIP-Seq. The ensemble of structures resulting from these simulations agrees completely with Hi-C data and exhibits unknotted chromosomes, phase separation of chromatin types, and a tendency for open chromatin to lie at the periphery of chromosome territories. Although this theoretical methodology was trained on one cell line, the human GM12878 lymphoblastoid line, it has successfully predicted the structural ensembles of multiple human cell lines. Finally, going beyond Hi-C, our predicted structures are also consistent with microscopy measurements. Analysis of structures from both simulation and microscopy reveals that short segments of chromatin make two-state transitions between closed conformations and open dumbbell conformations. For gene-active segments, the vast majority of genes appear clustered in the linker region of the chromatin segment, allowing us to speculate about possible mechanisms by which chromatin structure and dynamics may be involved in controlling gene expression. * Supported by the NSF
Microorganism locomotion in viscoelastic fluids
Many microorganisms and cells function in complex (non-Newtonian) fluids, which are mixtures of different materials and exhibit both viscous and elastic stresses. For example, mammalian sperm swim through cervical mucus on their journey through the female reproductive tract, and they must penetrate the viscoelastic gel outside the ovum to fertilize. In micro-scale swimming, the dynamics emerge from the coupled interactions between the complex rheology of the surrounding media and the passive and active body dynamics of the swimmer. We use computational models of swimmers in viscoelastic fluids to investigate and provide mechanistic explanations for emergent swimming behaviors. I will discuss how flexible filaments (such as flagella) can store energy from a viscoelastic fluid to gain stroke boosts due to fluid elasticity. I will also describe 3D simulations of model organisms such as C. reinhardtii and mammalian sperm, where we use experimentally measured stroke data to separate naturally coupled stroke and fluid effects. We explore why strokes that are adapted to Newtonian fluid environments might not do well in viscoelastic environments.
A macaque connectome for simulating large-scale network dynamics in The VirtualBrain
The Virtual Brain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
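The counterfactual account can be sketched with a one-dimensional toy world (hypothetical numbers, not the paper's physics engine): the causal strength of A's kick is the fraction of noisy counterfactual simulations, with A removed, in which the outcome would have differed.

```python
import numpy as np

rng = np.random.default_rng(5)

def reaches_goal(v0, kicked, noise):
    """Toy physics: ball B reaches the goal if its (noisy) velocity,
    plus the kick it received from ball A, carries it past x = 10."""
    v = v0 + noise + (2.0 if kicked else 0.0)
    return v * 5.0 > 10.0            # position after 5 time units

# actual world: B moved slowly (v0 = 0.5) and A's kick sent it in
v0 = 0.5
outcome_actual = reaches_goal(v0, kicked=True, noise=0.0)

# counterfactual test of "A caused B to go in": re-simulate many noisy
# worlds with A removed and ask how often the outcome would have differed
n_sim = 5000
noise = rng.normal(0.0, 0.3, n_sim)
outcomes_cf = np.array([reaches_goal(v0, False, n) for n in noise])
causal_strength = np.mean(outcomes_cf != outcome_actual)

assert outcome_actual            # B did reach the goal
assert causal_strength > 0.9     # without A, it almost never would have
```

The injected noise is what makes the judgment graded rather than binary: for borderline collisions, counterfactual worlds split between outcomes and the inferred causal strength falls between 0 and 1, matching graded human judgments.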
Mental Simulation, Imagination, and Model-Based Deep RL
Mental simulation—the capacity to imagine what will or what could be—is a salient feature of human cognition, playing a key role in a wide range of cognitive abilities. In artificial intelligence, the last few years have seen the development of methods which are analogous to mental models and mental simulation. In this talk, I will discuss recent methods in deep learning for constructing such models from data and learning to use them via reinforcement learning, and compare such approaches to human mental simulation. While a number of challenges remain in matching the capacity of human mental simulation, I will highlight some recent progress on developing more compositional and efficient model-based algorithms through the use of graph neural networks and tree search.
Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models
The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.
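The role of autoregressive and cross-lagged strength can be illustrated with a two-node VAR(1) toy example (a sketch of the underlying model class, not GIMME itself): when a genuine x-to-y cross-lagged effect is present, simple lagged regressions recover the directionality, and weaker lagged relations widen the estimates' error bars.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_var(beta_cross, n_t=500, ar=0.5):
    """Two-node VAR(1): both nodes are autoregressive, and node x
    drives node y with cross-lagged weight beta_cross."""
    x = np.zeros(n_t)
    y = np.zeros(n_t)
    for t in range(1, n_t):
        x[t] = ar * x[t - 1] + rng.standard_normal()
        y[t] = ar * y[t - 1] + beta_cross * x[t - 1] + rng.standard_normal()
    return x, y

def lagged_fit(src, dst):
    """Regress dst[t] on dst[t-1] and src[t-1]; return the cross weight."""
    X = np.column_stack([dst[:-1], src[:-1]])
    coef, *_ = np.linalg.lstsq(X, dst[1:], rcond=None)
    return coef[1]

x, y = simulate_var(beta_cross=0.4)
b_xy = lagged_fit(x, y)   # x -> y: should be near the true 0.4
b_yx = lagged_fit(y, x)   # y -> x: should be near 0
assert abs(b_xy - 0.4) < 0.2
assert abs(b_yx) < 0.2
```

GIMME additionally estimates contemporaneous paths and decides per path whether it belongs at the group or individual level, which is where the directionality-precision questions studied in the talk arise.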
Computations in the basal ganglia
Procedural connectivity and other recent advances for efficient spiking neural network simulations
A generative network model of neurodevelopment
The emergence of large-scale brain networks, and their continual refinement, represent crucial developmental processes that can drive individual differences in cognition and which are associated with multiple neurodevelopmental conditions. But how does this organization arise, and what mechanisms govern the diversity of these developmental processes? There are many existing descriptive theories, but to date none are computationally formalized. We provide a mathematical framework that specifies the growth of a brain network over developmental time. Within this framework macroscopic brain organization, complete with spatial embedding of its organization, is an emergent property of a generative wiring equation that optimizes its connectivity by renegotiating its biological costs and topological values continuously over development. The rules that govern these iterative wiring properties are controlled by a set of tightly framed parameters, with subtle differences in these parameters steering network growth towards different neurodiverse outcomes. Regional expression of genes associated with the developmental simulations converge on biological processes and cellular components predominantly involved in synaptic signaling, neuronal projection, catabolic intracellular processes and protein transport. Together, this provides a unifying computational framework for conceptualizing the mechanisms and diversity of childhood brain development, capable of integrating different levels of analysis – from genes to cognition. (Pre-print: https://www.biorxiv.org/content/10.1101/2020.08.13.249391v1)
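The generative wiring equation can be sketched as follows (hypothetical parameter values, and a simplified topology term; the model in the pre-print uses richer topological statistics): at each step an edge is sampled with probability proportional to a wiring-cost term d^eta and a topological-value term, and a stronger cost penalty yields markedly shorter wiring.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, n_edges = 60, 200
pos = rng.random((n_nodes, 2))                      # spatial embedding
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

def grow_network(eta, gamma):
    """Iteratively wire edges with probability proportional to a cost
    term d^eta (eta < 0 penalizes long wires) and a value term
    (k_i + k_j + 1)^gamma rewarding already well-connected nodes."""
    adj = np.zeros((n_nodes, n_nodes), dtype=bool)
    deg = np.zeros(n_nodes)
    iu = np.triu_indices(n_nodes, k=1)
    for _ in range(n_edges):
        cost = dist[iu] ** eta
        value = (deg[iu[0]] + deg[iu[1]] + 1.0) ** gamma
        p = np.where(adj[iu], 0.0, cost * value)    # no duplicate edges
        p /= p.sum()
        e = rng.choice(len(p), p=p)
        i, j = iu[0][e], iu[1][e]
        adj[i, j] = adj[j, i] = True
        deg[i] += 1
        deg[j] += 1
    return adj

adj_cheap = grow_network(eta=-3.0, gamma=1.0)   # strong wiring-cost penalty
adj_free = grow_network(eta=0.0, gamma=1.0)     # cost-blind wiring
mean_len = lambda a: dist[np.triu(a, k=1)].mean()
assert mean_len(adj_cheap) < mean_len(adj_free)
```

Sweeping eta and gamma and comparing the resulting graphs to observed connectomes is, in essence, how such frameworks map parameter differences onto neurodiverse developmental outcomes.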
Accelerating bio-plausible spiking simulations on the Graphcore IPU
Bernstein Conference 2024
A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations
Bernstein Conference 2024
Building mechanistic models of neural computations with simulation-based machine learning
Bernstein Conference 2024
Bridging biophysics and computation with differentiable simulation
Bernstein Conference 2024
Enhanced simulations of whole-brain dynamics using hybrid resting-state structural connectomes
Bernstein Conference 2024
OpenEyeSim 2.1: Rendering Depth-of-Field and Chromatic Aberration Faster than Real-Time Simulations of Visual Accommodation
Bernstein Conference 2024
Parameter specification in spiking neural networks using simulation-based inference
Bernstein Conference 2024
Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons
Bernstein Conference 2024
Single-cell morphological data provide refined simulations of resting-state
Bernstein Conference 2024
Super-Oscillators: Simulation-based inference for estimating alpha rhythm model parameters for high SNR recordings
Bernstein Conference 2024
Tracking the provenance of data generation and analysis in NEST simulations
Bernstein Conference 2024
Connectome simulations reveal a putative central pattern generator microcircuit for fly walking
COSYNE 2025
FARMS: Framework for Animal and Robot Modeling and Simulation
COSYNE 2025
Functional connectivity constrained simulations of visuomotor circuits in zebrafish
COSYNE 2025
A musculoskeletal simulation of Drosophila to study the biomechanics of limb movements
COSYNE 2025
Cerebellum and emotions: A journey from evidence to computational modeling and simulation
FENS Forum 2024
Computation with neuronal cultures: Effects of connectivity modularity on response separation and generalisation in simulations and experiments
FENS Forum 2024
A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations
FENS Forum 2024
Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods
FENS Forum 2024
Evaluating the spread of excitation with different types of optogenetic cochlear stimulation through computer simulations and in vivo electrophysiology
FENS Forum 2024
Exploiting network topology in brain-scale multi-area model simulations
FENS Forum 2024
Eyes on the future: Unveiling mental simulations as a deliberative decision-making mechanism
FENS Forum 2024
Hypatia Health: A new open-source, online platform for computational simulation and fitting
FENS Forum 2024
Investigating visual-guided behaviour in mice by 3D simulation
FENS Forum 2024
Local field potential simulation across a V1 cortical model
FENS Forum 2024
Modulation of brain commands and spinal pathways in human upper limb control in various gravity conditions; insights from neuromusculoskeletal simulation
FENS Forum 2024
A novel technique for dramatically reducing computational burden in electrophysiological axon simulations
FENS Forum 2024
Patient-specific EEG simulation of focal and generalized epilepsy with a virtual human brain based on neurophysiology
FENS Forum 2024
Cleo: a simulation testbed for bridging model and experiment in mesoscale neuroscience
Neuromatch 5
Intelligence Offloading and the Neurosimulation of Developmental Agents
Neuromatch 5
Neural simulations in the Brian ecosystem
Neuromatch 5