Simulations

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with simulations across World Wide.
76 curated items: 59 seminars, 16 ePosters, 1 position
Updated 1 day ago

76 results
Position

Prof. Jakob Macke

University of Tübingen
Tübingen, Germany
Dec 5, 2025

The Mackelab (Prof. Jakob Macke, University of Tübingen) is looking for PhD, Postdoc and Scientific Programmer applicants interested in working with us on using deep learning to build, optimize and study mechanistic models of neural computations! In a first project, funded by the ERC Grant DeepCoMechTome, we want to use connectomic reconstructions of the fruit fly to build large-scale simulations of the fly brain that can explain visually driven behavior; see, e.g., our prior work with Srinivas Turaga’s group, described in Lappalainen et al., Nature, 2024. In a second project, funded by the DFG through the CRC Robust Vision, we want to use differentiable simulators of biophysical models (Deistler et al., 2024) to build data-driven models of visual processing in the retina. We are open to candidates who are more interested in the neurobiological questions, as well as to those more interested in the machine learning aspects of these projects (e.g. training large-scale mechanistic neural networks, learning efficient emulators, coding frameworks for collaborative modelling, automated model discovery for mechanistic models, …).

Seminar · Neuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 1, 2025
Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 25, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment, where participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Neuroscience

Movement planning as a window into hierarchical motor control

Katja Kornysheva
Centre for Human Brain Health (CHBH) at the University of Birmingham, UK
Jun 14, 2023

The ability to organise one's body for action without having to think about it is taken for granted, whether it is handwriting, typing on a smartphone or computer keyboard, tying a shoelace or playing the piano. When compromised, e.g. in stroke, neurodegenerative and developmental disorders, individuals’ study, work and day-to-day living are impacted, with high societal costs. Until recently, indirect methods such as invasive recordings in animal models, computer simulations, and behavioural markers during sequence execution have been used to study covert motor sequence planning in humans. In this talk, I will demonstrate how multivariate pattern analyses of non-invasive neurophysiological recordings (MEG/EEG), fMRI, and muscular recordings, combined with a new behavioural paradigm, can help us investigate the structure and dynamics of motor sequence control before and after movement execution. Across paradigms, participants learned to retrieve and produce sequences of finger presses from long-term memory. Our findings suggest that sequence planning involves parallel pre-ordering of the serial elements of the upcoming sequence, rather than preparation of a serial trajectory of activation states. Additionally, we observed that the human neocortex automatically reorganizes the order and timing of well-trained movement sequences retrieved from memory into lower- and higher-level representations on a trial-by-trial basis. This echoes behavioural transfer across task contexts and flexibility in the final hundreds of milliseconds before movement execution. These findings strongly support a hierarchical and dynamic model of skilled sequence control across the peri-movement phase, which may have implications for clinical interventions.

Seminar · Neuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 2, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input, yet biological neural networks are constantly bombarded by external input. How, then, is the brain able to homeostatically adapt near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions, and describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
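The sub-/super-/critical distinction above can be illustrated with a toy branching process: each active unit drives sigma downstream units on average, so expected activity scales as sigma^t. A minimal deterministic sketch (numbers and names are illustrative, not from the talk):

```python
# Expected activity of a branching process with branching ratio sigma:
# a_t = a0 * sigma**t. Subcritical damps, critical preserves, supercritical
# amplifies incoming activity -- the picture behind (quasi)criticality.

def expected_activity(sigma, a0, steps):
    """Expected activity trace a_t = a0 * sigma**t."""
    return [a0 * sigma**t for t in range(steps + 1)]

sub = expected_activity(0.8, 100.0, 10)   # subcritical: activity is damped
crit = expected_activity(1.0, 100.0, 10)  # critical: activity preserved
sup = expected_activity(1.2, 100.0, 10)   # supercritical: activity amplified

print(sub[-1] < crit[-1] < sup[-1])  # True
```

In the quasicritical picture, external drive shifts the effective branching ratio away from exactly 1 while the network homeostatically tracks the optimal operating point.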

Seminar · Psychology

A Better Method to Quantify Perceptual Thresholds: Parameter-free, Model-free, Adaptive procedures

Julien Audiffren
University of Fribourg
Feb 28, 2023

The ‘quantification’ of perception is arguably both one of the most important and most difficult aspects of perception study. This is particularly true in visual perception, in which the evaluation of the perceptual threshold is a pillar of the experimental process. The choice of the correct adaptive psychometric procedure, as well as the selection of the proper parameters, is a difficult but key aspect of the experimental protocol. For instance, Bayesian methods such as QUEST require the a priori choice of a family of functions (e.g. Gaussian), which is rarely known before the experiment, as well as the specification of multiple parameters. Importantly, the choice of an ill-fitted function or parameters will induce costly mistakes and errors in the experimental process. In this talk we discuss the existing methods and introduce a new adaptive procedure to solve this problem, named ZOOM (Zooming Optimistic Optimization of Models), based on recent advances in optimization and statistical learning. Compared to existing approaches, ZOOM is completely parameter-free and model-free, i.e. it can be applied to any arbitrary psychometric problem. Moreover, ZOOM's parameters are self-tuned, and thus do not need to be chosen manually using heuristics (e.g. the step size in the Staircase method), preventing further errors. Finally, ZOOM is based on state-of-the-art optimization theory, providing strong mathematical guarantees that are missing from many of its alternatives, while being the most accurate and robust in real-life conditions. In our experiments and simulations, ZOOM was found to be significantly better than its alternatives, in particular for difficult psychometric functions or when the parameters were not properly chosen. ZOOM is open source, and its implementation is freely available on the web. Given these advantages and its ease of use, we argue that ZOOM can improve the process of many psychophysics experiments.
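For contrast with ZOOM, the kind of heuristic the abstract argues against can be written in a few lines: a classic 1-up/1-down staircase with a hand-picked step size, run here against a noiseless simulated observer. Everything below (function name, parameters, observer model) is an illustrative sketch, not ZOOM or any published implementation:

```python
# A 1-up/1-down staircase: lower the stimulus after a correct response,
# raise it after an error. The step size must be chosen manually -- the
# kind of heuristic parameter that ZOOM is designed to eliminate.

def staircase(true_threshold, start, step, n_trials):
    """Track stimulus levels for a deterministic toy observer."""
    level, history = start, []
    for _ in range(n_trials):
        history.append(level)
        correct = level >= true_threshold  # noiseless simulated observer
        level += -step if correct else step
    return history

levels = staircase(true_threshold=0.5, start=1.0, step=0.05, n_trials=60)
# after convergence the staircase oscillates around the true threshold
estimate = sum(levels[-10:]) / 10
```

With a real (noisy) observer the step size trades convergence speed against precision, which is exactly the manual tuning problem the talk highlights.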

Seminar · Neuroscience · Recording

Geometry of concept learning

Haim Sompolinsky
The Hebrew University of Jerusalem and Harvard University
Jan 3, 2023

Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts without any prior visual experience of these concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
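The ‘prototype’ decision rule described above is simple enough to state in code: average the few training examples of each concept into a prototype, then classify a test point by its nearest prototype. A toy 2-D sketch (the data and labels are illustrative; the actual work uses high-dimensional neural representations):

```python
# Nearest-prototype few-shot classification: one prototype per concept,
# built from a handful of training examples; test points go to the
# nearest prototype (squared Euclidean distance).

def prototype(examples):
    """Mean of a few training examples of one concept."""
    n = len(examples)
    return tuple(sum(x[i] for x in examples) / n for i in range(len(examples[0])))

def classify(point, prototypes):
    """Label of the nearest prototype."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(prototypes, key=lambda label: dist2(point, prototypes[label]))

protos = {
    "cat": prototype([(0.0, 1.0), (0.2, 0.8)]),  # two-shot examples
    "dog": prototype([(1.0, 0.0), (0.8, 0.2)]),
}
print(classify((0.1, 0.9), protos))  # cat
```

The theory in the talk relates the accuracy of exactly this rule to geometric properties (radius, dimension, separation) of the concept manifolds.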

Seminar · Neuroscience · Recording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 16, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by the contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
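The raw ingredient of these PEMs is the lagged correlation (or covariance) matrix. A minimal sketch of the lag-1 term for a pair of series, omitting the motif-based corrections for confounding and reverse causation that the talk describes (function name and data are illustrative):

```python
# Lag-1 covariance between x[t] and y[t+1] -- the basic quantity from
# which pairwise edge measures for directed network inference are built.

def lag1_cov(x, y):
    """Sample covariance between x[t] and y[t+1]."""
    xs, ys = x[:-1], y[1:]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)

# y copies x with one step of delay, so the x -> y direction dominates
x = [0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0]
y = [0.5] + x[:-1]
print(lag1_cov(x, y) > lag1_cov(y, x))  # True
```

Because the full lagged correlation matrix is cheap to compute, edge measures built from it scale to much larger networks than, e.g., pairwise Granger causality fits.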

Seminar · Neuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while typically being slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
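The "numerical integration of neuronal states" that such backends generate can be sketched by hand for a single leaky integrate-and-fire neuron. This is a toy, hand-written Euler loop with illustrative parameters, not generated Brian or Brian2CUDA code; on a GPU, one such update would typically run per neuron per thread:

```python
# Forward-Euler integration of a leaky integrate-and-fire neuron:
# dv/dt = (i_ext - v) / tau, with threshold crossing and reset.

def simulate_lif(i_ext=1.5, tau=0.01, v_thresh=1.0, v_reset=0.0,
                 dt=1e-4, steps=2000):
    """Return spike times (seconds) over the simulated interval."""
    v, spikes = 0.0, []
    for step in range(steps):
        v += dt * (i_ext - v) / tau   # Euler update of the membrane state
        if v >= v_thresh:             # threshold crossing
            spikes.append(step * dt)
            v = v_reset               # reset after a spike
    return spikes

spikes = simulate_lif()  # regular spiking, roughly every 11 ms here
```

Code-generating simulators emit loops like this (plus synaptic event propagation) from the high-level model equations, so the user never writes the low-level update by hand.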

Seminar · Neuroscience · Recording

Introducing dendritic computations to SNNs with Dendrify

Michalis Pagkalos
IMBB FORTH
Sep 6, 2022

Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Dendrify, through simple commands, can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
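The middle ground between point neurons and detailed biophysics can be illustrated with the simplest reduced compartmental model: a passive dendrite coupled to a soma by a coupling conductance. This is a hand-written toy with illustrative parameters, not the Dendrify API:

```python
# Two-compartment neuron: input current arrives at the dendrite and
# reaches the soma only through a coupling conductance, so somatic
# depolarization is an attenuated version of the dendritic one.

def two_compartment(i_dend=2.0, g_couple=0.3, tau=0.02, dt=1e-4, steps=3000):
    """Euler-integrate passive soma/dendrite voltages; returns (v_soma, v_dend)."""
    v_soma = v_dend = 0.0
    for _ in range(steps):
        i_axial = g_couple * (v_dend - v_soma)   # dendrite -> soma current
        v_dend += dt * (i_dend - v_dend - i_axial) / tau
        v_soma += dt * (i_axial - v_soma) / tau
    return v_soma, v_dend

v_soma, v_dend = two_compartment()
print(0.0 < v_soma < v_dend)  # True: dendritic input is attenuated at the soma
```

Packages like Dendrify add the biologically relevant pieces on top of this skeleton (dendritic spikes, synaptic models) while keeping the compartment count, and hence the cost, low.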

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
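The stability/flexibility tradeoff above can be made concrete by comparing a perfect (lossless, linear) integrator to a leaky one when the environment's hidden state flips mid-stream. A toy sketch with an illustrative evidence stream (leak is a stand-in for the non-linear accumulation in the talk, not the actual model):

```python
# Perfect vs. leaky evidence accumulation across a change-point: the
# perfect integrator is stable but inflexible; the leaky one discounts
# old evidence and tracks the change.

def accumulate(evidence, leak=0.0):
    """Decision variable after integrating with the given leak in [0, 1)."""
    dv = 0.0
    for e in evidence:
        dv = (1.0 - leak) * dv + e
    return dv

# 20 samples favoring state A (+1), then a change-point, then 10 favoring B (-1)
evidence = [1.0] * 20 + [-1.0] * 10

perfect = accumulate(evidence, leak=0.0)  # 20 - 10 = +10: still reports "A"
leaky = accumulate(evidence, leak=0.5)    # forgets old evidence: reports "B"
print(perfect > 0 > leaky)  # True
```

A normative change-point model adjusts the effective leak to the environmental volatility, which is the adaptive computation whose neural signatures the talk identifies.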

Seminar · Neuroscience

Optimal information loading into working memory in prefrontal cortex

Máté Lengyel
University of Cambridge, UK
Jun 21, 2022

Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit dynamics underlying working memory remain poorly understood, with different aspects of prefrontal cortical (PFC) responses explained by different putative mechanisms. By mathematical analysis, numerical simulations, and using recordings from monkey PFC, we investigate a critical but hitherto ignored aspect of working memory dynamics: information loading. We find that, contrary to common assumptions, optimal information loading involves inputs that are largely orthogonal, rather than similar, to the persistent activities observed during memory maintenance. Using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading and we find that such dynamics emerge naturally as a dynamical strategy in task-optimized recurrent neural networks. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC.

Seminar · Neuroscience

The Problem of Testimony

Ulrike Hahn
Birkbeck, University of London
May 3, 2022

The talk will detail work drawing on behavioural results, formal analysis, and computational modelling with agent-based simulations to unpack the scale of the challenge humans face when trying to work out and factor in the reliability of their sources. In particular, it is shown how and why this task admits of no easy solution in the context of wider communication networks, and how this will affect the accuracy of our beliefs. The implications of this for the shift in the size and topology of our communication networks through the uncontrolled rise of social media are discussed.

Seminar · Physics of Life · Recording

Non-regular behavior during the coalescence of liquid-like cellular aggregates

Haicen Yue
Emory University
Apr 24, 2022

The fusion of cell aggregates widely exists during biological processes such as development, tissue regeneration, and tumor invasion. Cellular spheroids (spherical cell aggregates) are commonly used to study this phenomenon. In previous studies, with approximate assumptions and measurements, researchers found that for some cell types the fusion of two spheroids is similar to the coalescence of two liquid droplets. However, with more accurate measurements focusing on the overall shape evolution in this process, we find that even in the regime previously regarded as liquid-like, the fusion of spheroids can be very different from regular liquid coalescence. We conduct numerical simulations using both standard particulate models and vertex models, with both Molecular Dynamics and Brownian Dynamics. The simulation results show that the difference between spheroids and regular liquid droplets is caused by the microscopic overdamped dynamics of each cell rather than the topological cell-cell interactions in the vertex model. Our research reveals the necessity of a new continuum theory for “liquid” with microscopically overdamped components, such as cellular and colloidal systems. Detailed analysis of our simulation results for different system sizes provides the basis for developing the new theory.

Seminar · Neuroscience · Recording

Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations

Yul Kang
Lengyel lab, Cambridge University
Apr 5, 2022

To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
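The Bayesian filtering at the heart of this ideal observer can be sketched in one dimension as a Kalman filter that fuses a noisy self-motion estimate with a noisy visual cue. All numbers and names below are illustrative; the actual model is allocentric and operates in 2-D environments:

```python
# One Kalman-filter step: predict the location belief from self-motion,
# then update it with a visual observation, weighting each input by its
# reliability (precision-weighted fusion).

def kalman_step(mu, var, motion, motion_var, obs, obs_var):
    """Return the updated Gaussian belief (mean, variance) over location."""
    mu, var = mu + motion, var + motion_var  # predict: move belief, inflate uncertainty
    k = var / (var + obs_var)                # Kalman gain: trust in the visual cue
    mu = mu + k * (obs - mu)                 # update toward the observation
    var = (1.0 - k) * var                    # the cue shrinks the uncertainty
    return mu, var

mu, var = 0.0, 1.0
mu, var = kalman_step(mu, var, motion=1.0, motion_var=0.5, obs=1.2, obs_var=0.5)
print(var < 1.0)  # True: the visual cue reduced location uncertainty
```

Manipulating environmental geometry changes `obs_var` in effect, which is how geometry propagates into both homing behavior and the predicted grid field deformations.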

Seminar · Open Source · Recording

GeNN

James Knight
University of Sussex
Mar 22, 2022

Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.

Seminar · Neuroscience

Cognitive Maps

Kauê M. Costa
National Institute on Drug Abuse
Mar 2, 2022

Ample evidence suggests that the brain generates internal simulations of the outside world to guide our thoughts and actions. These mental representations, or cognitive maps, are thought to be essential for our very comprehension of reality. I will discuss what is known about the informational structure of cognitive maps, their neural underpinnings, and how they relate to behavior, evolution, disease, and the current revolution in artificial intelligence.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule

Patricia Rubisch (she/her)
University of Edinburgh
Dec 1, 2021

Spike Timing Dependent Plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate these distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models it retains these characteristics for different neuron models, which allows for comparison of plasticity in different neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependency experiments and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it allows not just for combination with more complex, biophysical neuron models, but is also computationally efficient enough to be used in network simulations. It therefore offers the possibility of exploring the functional role of the different kernel types and of electrophysiological differences in heterogeneous networks in future work.
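The three kernel families named in the abstract (Hebbian, anti-Hebbian, symmetric) can be written as simple exponential STDP windows. The amplitudes and time constants below are illustrative, and this is the standard phenomenological kernel picture, not the VDP model itself:

```python
# Exponential STDP windows: the weight change as a function of the
# pre->post spike-time difference dt (dt > 0 means pre fires first).

import math

def stdp_kernel(dt, a_plus=1.0, a_minus=1.0, tau=0.02, kind="hebbian"):
    """Weight change for one pre/post spike pairing with interval dt."""
    if kind == "hebbian":
        # pre-before-post potentiates, post-before-pre depresses
        return a_plus * math.exp(-dt / tau) if dt > 0 else -a_minus * math.exp(dt / tau)
    if kind == "anti-hebbian":
        # sign-flipped Hebbian kernel
        return -stdp_kernel(dt, a_plus, a_minus, tau, "hebbian")
    if kind == "symmetric":
        # LTP for small |dt| regardless of spike order
        return a_plus * math.exp(-abs(dt) / tau)
    raise ValueError(kind)

print(stdp_kernel(0.01, kind="hebbian") > 0)   # True: LTP
print(stdp_kernel(-0.01, kind="hebbian") < 0)  # True: LTD
```

The point of the VDP is that such kernels are not imposed but emerge from the interaction of the LTP/LTD pathways with the postsynaptic voltage, so the effective kernel can differ across neuron types.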

Seminar · Neuroscience · Recording

NMC4 Short Talk: A mechanism for inter-areal coherence through communication based on connectivity and oscillatory power

Marius Schneider
Ernst Strüngmann Institute for Neuroscience
Nov 30, 2021

Inter-areal coherence between cortical field-potentials is a widespread phenomenon and depends on numerous behavioral and cognitive factors. It has been hypothesized that inter-areal coherence reflects phase-synchronization between local oscillations and flexibly gates communication. We reveal an alternative mechanism, where coherence results from and is not the cause of communication, and naturally emerges as a consequence of the fact that spiking activity in a sending area causes post-synaptic inputs both in the same area and in other areas. Consequently, coherence depends in a lawful manner on oscillatory power and phase-locking in a sending area and inter-areal connectivity. We show that changes in oscillatory power explain prominent changes in fronto-parietal beta-coherence with movement and memory, and LGN-V1 gamma-coherence with arousal and visual stimulation. Optogenetic silencing of a receiving area and E/I network simulations demonstrate that afferent synaptic inputs rather than spiking entrainment are the main determinant of inter-areal coherence. These findings suggest that the unique spectral profiles of different brain areas automatically give rise to large-scale inter-areal coherence patterns that follow anatomical connectivity and continuously reconfigure as a function of behavior and cognition.

Seminar · Neuroscience · Recording

The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models

Daniele Dini
Imperial College London
Nov 22, 2021

Brain microstructure plays a key role in driving the transport of drug molecules directly administered to the brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long association and projection fibers, namely the corpus callosum, the fornix and the corona radiata. Ovine samples from three different subjects were imaged using a scanning electron microscope combined with focused ion beam milling. Particular focus has been given to the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) has been performed. Specifically, outer axonal ellipticity, outer axonal cross-sectional area and its relative perimeter have been measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material presenting elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery which include well-resolved microstructural features. As a demonstration of the use of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed starting from the acquisition of the electron microscopy images. Considering that the white matter structure is mainly composed of elongated and parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analysed, thus demonstrating their anisotropic behaviour. This is in line with the experimental results obtained using perfusion of brain matter [3].
Moreover, we find a significant difference between permeability in the corpus callosum and the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only for drug delivery modelling but also for shedding light on the interstitial transport mechanisms in the extracellular space. These and many other discoveries will be discussed during the talk.
References: 1. https://www.researchsquare.com/article/rs-686577/v1; 2. https://www.pnas.org/content/118/36/e2105328118; 3. https://ieeexplore.ieee.org/abstract/document/9198110

Seminar · Neuroscience

Networking—the key to success… especially in the brain

Alexander Dunn
University of Cambridge, DAMTP
Nov 16, 2021

In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to elicit efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular scale) brain networks inferred from spontaneous activity show similar complex topology to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulation of model parameters as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for treatment of neurodevelopmental disorders like Rett Syndrome.

SeminarNeuroscienceRecording

Understanding the Invisibility of Scotomas: Novel Simulations

Eli Peli
Harvard
Nov 15, 2021
SeminarNeuroscience

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

Tatjana Tchumatchenko
University of Bonn
Nov 9, 2021

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
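The facilitating-versus-depressing dichotomy described above can be illustrated with the standard Tsodyks-Markram model of short-term plasticity. This is a toy sketch with hypothetical parameter values, not the speaker's code: the steady-state synaptic drive grows supralinearly with input rate for a facilitation-dominated synapse and sublinearly for a depression-dominated one.

```python
import math

def steady_state_release(rate_hz, U, tau_f, tau_rec, n_spikes=500):
    """Per-spike release u*x at steady state for a regular spike train
    (Tsodyks-Markram model: u facilitates at spikes, x depletes and recovers)."""
    isi = 1.0 / rate_hz
    u, x = U, 1.0
    for _ in range(n_spikes):
        u += U * (1.0 - u)                              # facilitation jump at spike
        release = u * x
        x -= release                                    # resources consumed by release
        u = U + (u - U) * math.exp(-isi / tau_f)        # decay back to baseline U
        x = 1.0 - (1.0 - x) * math.exp(-isi / tau_rec)  # recovery toward 1
    return release

def efficacy(rate_hz, **syn):
    # average synaptic drive = spike rate x per-spike release
    return rate_hz * steady_state_release(rate_hz, **syn)

facil = dict(U=0.1, tau_f=0.5, tau_rec=0.02)   # facilitation-dominated (times in s)
depr = dict(U=0.5, tau_f=0.001, tau_rec=0.5)   # depression-dominated

for name, syn in [("facilitating", facil), ("depressing", depr)]:
    gain = efficacy(40.0, **syn) / efficacy(10.0, **syn)
    print(f"{name}: 4x rate increase -> {gain:.2f}x drive")
```

A gain above 4 for a fourfold rate increase indicates a supralinear (sharpening) transfer, below 4 a sublinear (broadening) one.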

SeminarNeuroscience

Neuropunk revolution and its implementation via real-time neurosimulations and their integrations

Maxim Talanov
B-Rain Labs LLC, ITIS KFU
Oct 20, 2021

In this talk I present the perspectives of the "neuropunk revolution" technologies. The "neuropunk revolution" can be understood as the integration of real-time neurosimulations into biological nervous/motor systems via neurostimulation, or into artificial robotic systems via integration with actuators. I see the added value of real-time neurosimulations as a bridge technology for an already developed set of technologies (BCI, neuroprosthetics, AI, robotics), providing bio-compatible integration into biological or artificial limbs. I present three types of integration of "neuropunk revolution" technologies: inbound, outbound and closed-loop in-outbound systems. The proposed concept shifts how we currently view these technologies: for example, a part of the nervous system simulated outside the body could be integrated back into the biological nervous system or muscles.

SeminarNeuroscienceRecording

Beyond the binding problem: From basic affordances to symbolic thought

John E. Hummel
University of Illinois
Sep 29, 2021

Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.

SeminarPhysics of LifeRecording

How polymer-loop-extruding motors shape chromosomes

Ed Banigan
MIT
Sep 12, 2021

Chromosomes are extremely long, active polymers that are spatially organized across multiple scales to promote cellular functions, such as gene transcription and genetic inheritance. During each cell cycle, chromosomes are dramatically compacted as cells divide and dynamically reorganized into less compact, spatiotemporally patterned structures after cell division. These activities are facilitated by DNA/chromatin-binding protein motors called SMC complexes. Each of these motors can perform a unique activity known as “loop extrusion,” in which the motor binds the DNA/chromatin polymer, reels in the polymer fiber, and extrudes it as a loop. Using simulations and theory, I show how loop-extruding motors can collectively compact and spatially organize chromosomes in different scenarios. First, I show that loop-extruding complexes can generate sufficient compaction for cell division, provided that loop-extrusion satisfies stringent physical requirements. Second, while loop-extrusion alone does not uniquely spatially pattern the genome, interactions between SMC complexes and protein “boundary elements” can generate patterns that emerge in the genome after cell division. Intriguingly, these “boundary elements” are not necessarily stationary, which can generate a variety of patterns in the neighborhood of transcriptionally active genes. These predictions, along with supporting experiments, show how SMC complexes and other molecular machinery, such as RNA polymerase, can spatially organize the genome. More generally, this work demonstrates both the versatility of the loop extrusion mechanism for chromosome functional organization and how seemingly subtle microscopic effects can emerge in the spatiotemporal structure of nonequilibrium polymers.
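As a toy illustration of how loop extrusion compacts a fiber, one can place extruders at random loading sites and measure how much of the polymer ends up inside loops. This is a deliberately minimal sketch with made-up numbers (fixed loop size, no blocking between extruders), not the simulation framework used in the talk:

```python
import random

def loop_coverage(genome_len, load_sites, loop_size):
    """Fraction of the fiber gathered into loops: each extruder reels in up to
    `loop_size` of polymer centered on its loading site (no motor-motor blocking)."""
    covered = [False] * genome_len
    for p in load_sites:
        for i in range(max(0, p - loop_size // 2), min(genome_len, p + loop_size // 2)):
            covered[i] = True
    return sum(covered) / genome_len

random.seed(0)
N = 10_000
sites = random.sample(range(N), 200)   # candidate loading positions
fracs = {n: loop_coverage(N, sites[:n], loop_size=100) for n in (10, 50, 200)}
for n, frac in fracs.items():
    print(f"{n:4d} extruders -> {frac:.0%} of fiber in loops")
```

More extruders gather more of the fiber into loops, shortening the unlooped backbone and hence compacting the chromosome.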

SeminarPhysics of Life

Coordinated motion of active filaments on spherical surfaces

Eric Keaveny
Imperial College London
Jul 6, 2021

Filaments (slender, microscopic elastic bodies) are prevalent in biological and industrial settings. In the biological case, the filaments are often active, in that they are driven internally by motor proteins, with the prime examples being cilia and flagella. For cilia in particular, which can appear in dense arrays, their resulting motions are coupled through the surrounding fluid, as well as through the surfaces to which they are attached. In this talk, I present numerical simulations exploring the coordinated motion of active filaments and how it depends on the driving force, the density of filaments, and the attached surface. In particular, we find that when the surface is spherical, its topology introduces local defects in coordinated motion which can then feed back and alter the global state. This is particularly true when the surface is not held fixed and is free to move in the surrounding fluid. These simulations take advantage of a computational framework we developed for fully 3D filament motion that combines unit quaternions, implicit geometric time integration, quasi-Newton methods, and fast, matrix-free methods for hydrodynamic interactions; this framework will also be presented.

SeminarNeuroscienceRecording

An in-silico framework to study the cholinergic modulation of the neocortex

Cristina Colangelo
EPFL, Blue Brain Project
Jun 29, 2021

Neuromodulators control information processing in cortical microcircuits by regulating the cellular and synaptic physiology of neurons. Computational models and detailed simulations of neocortical microcircuitry offer a unifying framework to analyze the role of neuromodulators on network activity. In the present study, to get a deeper insight into the organization of the cortical neuropil for modeling purposes, we quantify the fiber length per cortical volume and the density of varicosities for the catecholaminergic, serotonergic and cholinergic systems using immunocytochemical staining and stereological techniques. The data obtained are integrated into a biologically detailed digital reconstruction of the rodent neocortex (Markram et al, 2015) in order to model the influence of modulatory systems on the activity of a neocortical column of the somatosensory cortex. Simulations of ascending modulation of network activity in our model predict the effects of increasing levels of neuromodulators on diverse neuron types and synapses and reveal a spectrum of activity states. Low levels of neuromodulation drive microcircuit activity into slow oscillations and network synchrony, whereas high neuromodulator concentrations govern fast oscillations and network asynchrony. The models and simulations thus provide a unifying in silico framework to study the role of neuromodulators in reconfiguring network activity.

SeminarNeuroscience

Capacitance clamp - artificial capacitance in biological neurons via dynamic clamp

Paul Pfeiffer
Schreiber lab, Humboldt University Berlin, Germany
Jun 9, 2021

A basic time scale in neural dynamics from single cells to the network level is the membrane time constant - set by a neuron’s input resistance and its capacitance. Interestingly, the membrane capacitance appears to be more dynamic than previously assumed with implications for neural function and pathology. Indeed, altered membrane capacitance has been observed in reaction to physiological changes like neural swelling, but also in ageing and Alzheimer's disease. Importantly, according to theory, even small changes of the capacitance can affect neuronal signal processing, e.g. increase network synchronization or facilitate transmission of high frequencies. In experiment, robust methods to modify the capacitance of a neuron have been missing. Here, we present the capacitance clamp - an electrophysiological method for capacitance control based on an unconventional application of the dynamic clamp. In its original form, dynamic clamp mimics additional synaptic or ionic conductances by injecting their respective currents. Whereas a conductance directly governs a current, the membrane capacitance determines how fast the voltage responds to a current. Accordingly, capacitance clamp mimics an altered capacitance by injecting a dynamic current that slows down or speeds up the voltage response (Fig 1 A). For the required dynamic current, the experimenter only has to specify the original cell and the desired target capacitance. In particular, capacitance clamp requires no detailed model of present conductances and thus can be applied in every excitable cell. To validate the capacitance clamp, we performed numerical simulations of the protocol and applied it to modify the capacitance of cultured neurons. First, we simulated capacitance clamp in conductance based neuron models and analysed impedance and firing frequency to verify the altered capacitance. 
Second, in dentate gyrus granule cells from rats, we could reliably control the capacitance in a range of 75 to 200% of the original capacitance and observed pronounced changes in the shape of the action potentials: increasing the capacitance reduced after-hyperpolarization amplitudes and slowed down repolarization. To conclude, we present a novel tool for electrophysiology: the capacitance clamp provides reliable control over the capacitance of a neuron and thereby opens a new way to study the temporal dynamics of excitable cells.
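The core idea of the clamp can be sketched in a few lines: to make a cell of capacitance C_cell behave as if it had capacitance C_target, inject I_clamp = (C_cell/C_target - 1) * I_m, where I_m is the total membrane current. In the toy simulation below I_m is known exactly because the model is passive; the experimental method instead has to estimate it online from the recorded voltage and injected currents, which this sketch deliberately sidesteps:

```python
# Passive membrane: C dV/dt = g_L*(E_L - V) + I_in + I_clamp
# Units: pF, nS, mV, pA, ms (so pF*mV/ms = pA). All values are illustrative.

def simulate(C, E_L=-70.0, g_L=10.0, I_in=100.0, T=200.0, dt=0.01, C_cell=None):
    """Euler integration. If C_cell is given, a cell of physical capacitance
    C_cell is clamped so that it emulates target capacitance C."""
    V, trace = E_L, []
    C_phys = C if C_cell is None else C_cell
    for _ in range(int(T / dt)):
        I_m = g_L * (E_L - V) + I_in
        # clamp current that re-scales the voltage response speed
        I_clamp = 0.0 if C_cell is None else (C_cell / C - 1.0) * I_m
        V += dt * (I_m + I_clamp) / C_phys
        trace.append(V)
    return trace

ref = simulate(C=200.0)                    # cell that truly has 200 pF
clamped = simulate(C=200.0, C_cell=100.0)  # 100 pF cell clamped to emulate 200 pF
err = max(abs(a - b) for a, b in zip(ref, clamped))
print(f"max deviation from a true 200 pF cell: {err:.2e} mV")
```

The clamped 100 pF cell reproduces the doubled membrane time constant of a genuine 200 pF cell, which is exactly the "slowing down of the voltage response" the abstract describes.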

SeminarNeuroscience

From 1D to 5D: Data-driven Discovery of Whole-brain Dynamic Connectivity in fMRI Data

Vince Calhoun
Founding Director, Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA
May 19, 2021

The analysis of functional magnetic resonance imaging (fMRI) data can greatly benefit from flexible analytic approaches. In particular, the advent of data-driven approaches to identify whole-brain time-varying connectivity and activity has revealed a number of interesting and relevant variations in the data which, when ignored, can yield misleading results. In this lecture I will provide a comparative introduction to a range of data-driven approaches to estimating time-varying connectivity. I will also present detailed examples where studies of both brain health and disorder have been advanced by approaches designed to capture and estimate time-varying information in resting fMRI data. I will review several exemplar data sets analyzed in different ways to demonstrate the complementarity as well as trade-offs of various modeling approaches to answer questions about brain function. Finally, I will review and provide examples of strategies for validating time-varying connectivity, including simulations, multimodal imaging, and comparative prediction within clinical populations, among others. As part of the interactive aspect I will provide a hands-on guide to the dynamic functional network connectivity toolbox within the GIFT software, including an online didactic analytic decision tree to introduce the various concepts and decisions that need to be made when using such tools.
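Sliding-window correlation, the simplest of the time-varying connectivity estimators discussed here, can be demonstrated on synthetic data. This is illustrative only (the GIFT toolbox implements far more sophisticated pipelines): two "regions" whose coupling flips sign halfway through the scan look uncorrelated to a static analysis, while windowed estimates recover the switch.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Synthetic time series: coupling flips from +1 to -1 at the midpoint
T = 400
x = [math.sin(0.2 * t) for t in range(T)]
y = [x[t] if t < T // 2 else -x[t] for t in range(T)]

win = 50
windowed_r = [pearson(x[s:s + win], y[s:s + win]) for s in range(0, T - win, 10)]
print(f"first window r = {windowed_r[0]:+.2f}, last window r = {windowed_r[-1]:+.2f}")
print(f"full-scan (static) r = {pearson(x, y):+.2f}")   # the switch averages away
```

The static correlation is near zero, hiding a connectivity change that the windowed estimates expose clearly.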

SeminarPhysics of LifeRecording

Energy landscapes, order and disorder, and protein sequence coevolution: From proteins to chromosome structure

Jose Onuchic
Rice University
May 13, 2021

In vivo, the human genome folds into a characteristic ensemble of 3D structures. The mechanism driving the folding process remains unknown. A theoretical model for chromatin (the minimal chromatin model) is presented that explains the folding of interphase chromosomes and generates chromosome conformations consistent with experimental data. The energy landscape of the model was derived by using the maximum entropy principle and relies on two experimentally derived inputs: a classification of loci into chromatin types and a catalog of the positions of chromatin loops. This model was generalized by utilizing a neural network to infer these chromatin types using epigenetic marks present at a locus, as assayed by ChIP-Seq. The ensemble of structures resulting from these simulations agrees completely with Hi-C data and exhibits unknotted chromosomes, phase separation of chromatin types, and a tendency for open chromatin to lie at the periphery of chromosome territories. Although this theoretical methodology was trained on one cell line, the human GM12878 lymphoblastoid cells, it has successfully predicted the structural ensembles of multiple human cell lines. Finally, going beyond Hi-C, our predicted structures are also consistent with microscopy measurements. Analysis of structures from both simulation and microscopy reveals that short segments of chromatin make two-state transitions between closed conformations and open dumbbell conformations. For gene-active segments, the vast majority of genes appear clustered in the linker region of the chromatin segment, allowing us to speculate on possible mechanisms by which chromatin structure and dynamics may be involved in controlling gene expression. * Supported by the NSF

SeminarNeuroscience

Understanding "why": The role of causality in cognition

Tobias Gerstenberg
Stanford University
Apr 27, 2021

Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
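The counterfactual logic can be caricatured in a toy 1D physics world (entirely hypothetical, not the model from the talk): a candidate cause is credited when removing it from the re-run simulation flips the outcome.

```python
def simulate(ball_a_present, steps=200, dt=0.1):
    """Toy 1D world: ball A moves right and can collide with resting ball B,
    transferring its motion; B 'scores' if it ends up past x = 10."""
    xa, va = 0.0, 1.0
    xb, vb = 5.0, 0.0
    for _ in range(steps):
        if ball_a_present:
            xa += va * dt
            if va > 0 and xa >= xb:   # collision: A hands its velocity to B
                vb, va = va, 0.0
        xb += vb * dt
    return xb > 10.0

actual = simulate(ball_a_present=True)
counterfactual = simulate(ball_a_present=False)
# causal judgment: the outcome depends counterfactually on A
caused = actual and not counterfactual
print(f"B scored: {actual}; without A: {counterfactual}; A caused it: {caused}")
```

In the full framework such counterfactual contrasts are computed over noisy physical simulations, yielding graded rather than all-or-none causal judgments.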

SeminarNeuroscienceRecording

Procedural connectivity and other recent advances for efficient spiking neural network simulations

Thomas Nowotny
University of Sussex
Mar 10, 2021
SeminarNeuroscience

A generative network model of neurodevelopment

Danyal Akarca
University of Cambridge, MRC Cognition and Brain Sciences Unit
Feb 23, 2021

The emergence of large-scale brain networks, and their continual refinement, represent crucial developmental processes that can drive individual differences in cognition and which are associated with multiple neurodevelopmental conditions. But how does this organization arise, and what mechanisms govern the diversity of these developmental processes? There are many existing descriptive theories, but to date none are computationally formalized. We provide a mathematical framework that specifies the growth of a brain network over developmental time. Within this framework, macroscopic brain organization, complete with the spatial embedding of its organization, is an emergent property of a generative wiring equation that optimizes connectivity by continuously renegotiating biological costs and topological value over development. The rules that govern these iterative wiring properties are controlled by a set of tightly framed parameters, with subtle differences in these parameters steering network growth towards different neurodiverse outcomes. Regional expression of genes associated with the developmental simulations converges on biological processes and cellular components predominantly involved in synaptic signaling, neuronal projection, catabolic intracellular processes and protein transport. Together, this provides a unifying computational framework for conceptualizing the mechanisms and diversity of childhood brain development, capable of integrating different levels of analysis – from genes to cognition. (Pre-print: https://www.biorxiv.org/content/10.1101/2020.08.13.249391v1)
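In published generative network models of this kind, the wiring equation typically scores each candidate connection as a product of a distance penalty and a topological value term, e.g. P_ij proportional to d_ij^eta * K_ij^gamma with K the matching index. The sketch below uses that general form with hypothetical parameter values; it is not the authors' implementation.

```python
import itertools
import math
import random

def matching_index(adj, i, j, n):
    """Normalized overlap of i's and j's neighborhoods (excluding i and j)."""
    ni = {k for k in range(n) if adj[i][k] and k != j}
    nj = {k for k in range(n) if adj[j][k] and k != i}
    union = ni | nj
    return len(ni & nj) / len(union) if union else 0.0

def grow_network(coords, n_edges, eta=-3.0, gamma=2.0, eps=1e-6, seed=1):
    """Add edges one at a time, sampled with probability proportional to
    (wiring cost)^eta * (topological value)^gamma."""
    rng = random.Random(seed)
    n = len(coords)
    adj = [[0] * n for _ in range(n)]
    pairs = list(itertools.combinations(range(n), 2))
    for _ in range(n_edges):
        free = [(i, j) for i, j in pairs if not adj[i][j]]
        w = [math.dist(coords[i], coords[j]) ** eta *
             (matching_index(adj, i, j, n) + eps) ** gamma
             for i, j in free]
        i, j = rng.choices(free, weights=w)[0]
        adj[i][j] = adj[j][i] = 1
    return adj

rng = random.Random(0)
coords = [(rng.random(), rng.random()) for _ in range(30)]   # spatial embedding
adj = grow_network(coords, n_edges=60)
print("edges grown:", sum(map(sum, adj)) // 2)
```

Negative eta penalizes long (costly) connections while positive gamma favors pairs with shared neighbors; sweeping these two parameters steers growth towards different network topologies, mirroring the individual-differences idea in the abstract.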

SeminarPhysics of LifeRecording

Mixed active-passive suspensions: from particle entrainment to spontaneous demixing

Marco Polin
University of Warwick
Feb 16, 2021

Understanding the properties of active matter is a challenge which is currently driving a rapid growth in soft- and bio-physics. Some of the most important examples of active matter are at the microscale, and include active colloids and suspensions of microorganisms, both as a simple active fluid (single species) and as mixed suspensions of active and passive elements. In this last class of systems, recent experimental and theoretical work has started to provide a window into new phenomena including activity-induced depletion interactions, phase separation, and the possibility to extract net work from active suspensions. Here I will present our work on a paradigmatic example of a mixed active-passive system, where the activity is provided by swimming microalgae. Macro- and microscopic experiments reveal that microorganism-colloid interactions are dominated by rare close encounters leading to large displacements through direct entrainment. Simulations and theoretical modelling show that the ensuing particle dynamics can be understood in terms of a simple jump-diffusion process, combining standard diffusion with Poisson-distributed jumps. The entrainment length can be understood within the framework of Taylor dispersion as a competition between advection by the no-slip surface of the cell body and microparticle diffusion. Building on these results, we then ask how external control of the dynamics of the active component (e.g. induced microswimmer anisotropy/inhomogeneity) can be used to alter the transport of passive cargo. As a first step in this direction, we study the behaviour of mixed active-passive systems in confinement. The resulting spatial inhomogeneity in the swimmers’ distribution and orientation has a dramatic effect on the spatial distribution of passive particles, with the colloids accumulating either towards the boundaries or towards the bulk of the sample depending on the size of the container.
We show that this can be used to induce the system to de-mix spontaneously.
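The jump-diffusion picture is easy to reproduce numerically: displacements combine ordinary diffusion with Poisson-distributed entrainment jumps, giving an effective diffusivity D_eff = D0 + f*L^2/2 in one dimension (jump rate f, jump length L). The parameter values below are made up for illustration, not fitted to the experiments:

```python
import random

def jump_diffusion_samples(n, T, D0, jump_rate, jump_len, seed=42):
    """1D displacements after time T: a Brownian part N(0, 2*D0*T) plus a
    Poisson number of entrainment jumps of length jump_len with random sign."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x = rng.gauss(0.0, (2.0 * D0 * T) ** 0.5)
        t = rng.expovariate(jump_rate)          # exponential waiting times
        while t < T:
            x += rng.choice((-1.0, 1.0)) * jump_len
            t += rng.expovariate(jump_rate)
        out.append(x)
    return out

D0, f, L, T = 1.0, 0.5, 2.0, 10.0
xs = jump_diffusion_samples(20_000, T, D0, f, L)
msd = sum(x * x for x in xs) / len(xs)
D_eff = msd / (2.0 * T)
print(f"measured D_eff = {D_eff:.2f}  (theory: D0 + f*L^2/2 = {D0 + f * L * L / 2:.2f})")
```

Even rare jumps (here one every two time units on average) can dominate the effective transport when the jump length is large compared to the diffusive step, which is the regime the entrainment experiments probe.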

SeminarPhysics of LifeRecording

The physics of cement cohesion

Emanuela Del Gado
Georgetown University
Jan 26, 2021

Cement is the main binding agent in concrete, literally gluing together rocks and sand into the most-used synthetic material on Earth. However, cement production is responsible for significant amounts of man-made greenhouse gases; in fact, if the cement industry were a country, it would be the third largest emitter in the world. Alternatives to the current, environmentally harmful cement production process are not available essentially because gaps in fundamental understanding hamper the development of smarter and more sustainable solutions. The ultimate challenge is to link the chemical composition of cement grains to the nanoscale physics of the cohesive forces that emerge when mixing cement with water. Cement nanoscale cohesion originates from the electrostatics of ions accumulated in a water-based solution between like-charged surfaces, but it is not captured by existing theories because of the nature of the ions involved and the high surface charges. Surprisingly enough, this is also the case for unexplained cohesion in a range of colloidal and biological matter. About one century after the early studies of cement hydration, we have quantitatively solved this notoriously hard problem and discovered how cement cohesion develops during hydration. I will discuss how 3D numerical simulations that feature a simple but molecular description of ions and water, together with an analytical theory that goes beyond the traditional continuum approximations, helped us demonstrate that the optimized interlocking of ion-water structures determines the net cohesive forces and their evolution. These findings open the path to scientifically grounded strategies of material design for cements and have implications for a much wider range of materials and systems where ionic water-based solutions feature both strong Coulombic and confinement effects, ranging from biological membranes to soils.
Construction materials are central to our society and to our life as humans on this planet, but usually far removed from fundamental science. We can now start to understand how cement physical-chemistry determines performance, durability and sustainability.

SeminarNeuroscienceRecording

Multitask performance in humans and deep neural networks

Christopher Summerfield
University of Oxford
Nov 24, 2020

Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed for robustness, and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.

SeminarNeuroscienceRecording

The emergence of contrast invariance in cortical circuits

Tatjana Tchumatchenko
Max Planck Institute for Brain Research
Nov 12, 2020

Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations orthogonal to the preferred one. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balanced limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways – linearly, sublinearly, supralinearly – depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity.
Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (at excitatory-to-excitatory and excitatory-to-inhibitory connections). We validate this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
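The special role of the power law follows from scale invariance: f(c*r) = c^alpha * f(r) holds only for f(r) = r^alpha, so only then does the normalized tuning shape survive a change of contrast. A small numerical check with illustrative transfer functions (not the paper's network model):

```python
import math

thetas = [math.radians(d) for d in range(-90, 91, 5)]

def tuning(theta, contrast):
    # population input tuning curve, re-scaled multiplicatively by contrast
    return contrast * math.exp(-theta ** 2 / (2 * 0.4 ** 2))

def shape(transfer, contrast):
    """Output tuning curve normalized to its peak, so only the shape remains."""
    out = [transfer(tuning(t, contrast)) for t in thetas]
    peak = max(out)
    return [o / peak for o in out]

def powerlaw(r):
    return r ** 1.5

def saturating(r):
    return math.tanh(r)

d_power = max(abs(a - b) for a, b in zip(shape(powerlaw, 0.5), shape(powerlaw, 4.0)))
d_sat = max(abs(a - b) for a, b in zip(shape(saturating, 0.5), shape(saturating, 4.0)))
print(f"power law: shape change {d_power:.3f};  saturating: shape change {d_sat:.3f}")
```

The power-law transfer leaves the normalized shape untouched across an eightfold contrast change, while the saturating transfer broadens it markedly, echoing the narrowing/broadening effects described in the abstract.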

SeminarPhysics of Life

1. Binding pathway of a proline-rich SH3 partner peptide from simulations and NMR, 2. The Role of LLPS in the Diverse Functions of the Nucleolus

1. Lia Ball, 2. Richard Kriwacki
1. Skidmore College, 2. St Jude Children's Research Hospital
Nov 4, 2020
SeminarNeuroscience

Towards multipurpose biophysics-based mathematical models of cortical circuits

Gaute Einevoll
Norwegian University of Life Sciences
Oct 13, 2020

Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al, Neuron, 2019). Further, a large-scale biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al, Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials or two-photon calcium imaging signals. Further, I will discuss how such validated comprehensive network models can be used to gain insights into the functioning of cortical circuits.

SeminarPhysics of LifeRecording

Adhering, wrapping, and bursting of lipid bilayer membranes: understanding effects of membrane-binding particles and polymers

Anthony Dinsmore
University of Massachusetts Amherst
Sep 29, 2020

Proteins and membranes form remarkably complex structures that are key to intracellular compartmentalization, cargo transport, and cell morphology. Despite this wealth of examples in living systems, we still lack design principles for controlling membrane morphology in synthetic systems. With experiments and simulations, we show that even the simple case of spherical or rod-shaped nanoparticles binding to lipid-bilayer membrane vesicles results in a remarkably rich set of morphologies that can be reliably controlled via the particle binding energy. When the binding energy is weak relative to a characteristic membrane-bending energy, vesicles adhere to one another and form a soft solid gel, which is a useful platform for controlled release. With larger binding energy, a transition from partial to complete wrapping of the nanoparticles causes a remarkable vesicle destruction process culminating in rupture, nanoparticle-membrane tubules, and vesicle inversion. We have explored the behavior across a wide range of parameter space. These findings help unify the wide range of effects observed when vesicles or cells are exposed to nanoparticles. They also open the door to a new class of vesicle-based, closed-cell gels that are more than 99% water and can encapsulate and release on demand. I will discuss how triggering membrane remodeling could lead to shape-responsive systems in the future.

SeminarPhysics of LifeRecording

Swimming in the third domain: archaeal extremophiles

Laurence Wilson
University of York
Aug 17, 2020

Archaea have evolved to survive in some of the most extreme environments on earth. Life in extreme, nutrient-poor conditions gives the opportunity to probe fundamental energy limitations on movement and response to stimuli, two essential markers of living systems. Here we use three-dimensional holographic microscopy and computer simulations to show that halophilic archaea achieve chemotaxis with power requirements one hundred-fold lower than common eubacterial model systems. Their swimming direction is stabilised by their flagella (archaella), enhancing directional persistence in a manner similar to that displayed by eubacteria, albeit with a different motility apparatus. Our experiments and simulations reveal that the cells are capable of slow but deterministic chemotaxis up a chemical gradient, in a biased random walk at the thermodynamic limit.

SeminarNeuroscience

Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus

Andrea Navas-Olive
Instituto Cajal CSIC
Jul 18, 2020

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies from RNAseq to electrophysiology are disclosing a so far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the NEURON+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
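The fit-by-evolution loop described here can be sketched without NEURON: the toy below replaces the multi-compartment simulation with a closed-form passive step response (hypothetical parameters `g_leak` and `c_m`, illustrative units) and runs a simple genetic algorithm over it, following the same select-mutate structure the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a NEURON simulation: voltage step-response of a passive
# compartment, parameterised by leak conductance g_leak and capacitance c_m.
# (Illustrative values and units, not from the presented model.)
def simulate(params, t):
    g_leak, c_m = params
    tau = c_m / g_leak                                 # membrane time constant
    return (1.0 / g_leak) * (1.0 - np.exp(-t / tau))   # unit current step

t = np.linspace(0.0, 100.0, 200)          # ms
target = simulate((0.05, 1.0), t)         # synthetic "experimental" trace

def fitness(params):
    """Negative mean squared error against the target trace."""
    return -np.mean((simulate(params, t) - target) ** 2)

def evolve(pop_size=40, n_gen=40, bounds=((0.01, 0.2), (0.2, 3.0))):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    sigma = 0.05 * (hi - lo)              # mutation scale per parameter
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[-pop_size // 4:]]        # best quarter
        parents = elite[rng.integers(len(elite), size=pop_size)]
        pop = np.clip(parents + rng.normal(0.0, sigma, parents.shape), lo, hi)
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

best = evolve()
```

In the actual study each fitness evaluation is a full multi-compartment NEURON simulation scored against electrophysiological features rather than a single trace.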

SeminarNeuroscienceRecording

Neural Engineering: Building large-scale cognitive models of the brain

Terry Stewart
National Research Council of Canada and University of Waterloo Collaboration Centre
Jun 30, 2020

The Neural Engineering Framework has been used to create a wide variety of biologically realistic brain simulations that are capable of performing simple cognitive tasks (remembering a list, counting, etc.). This includes the largest existing functional brain model. This talk will describe this method, and show some examples of using it to take high-level cognitive algorithms and convert them into a neural network that implements those algorithms. Overall, this approach gives us new ways of thinking about how the brain works and what sorts of algorithms it is capable of performing.
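The core NEF step of representing a value with heterogeneous tuning curves and solving for linear decoders that compute a function can be sketched in plain NumPy. This is a toy rectified-linear population with made-up gains, biases, and encoders, not the spiking implementation used in the large-scale models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100

# Random encoders, gains, and biases give each neuron a distinct tuning curve.
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear population response to a scalar input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Sample the represented range and solve for decoders that compute x**2
# via regularised least squares (the NEF decoding step).
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])      # activity matrix, samples x neurons
target = xs ** 2
reg = 0.1 * n_neurons                     # ridge regularisation strength
decoders = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ target)

def decode(x):
    """Decoded estimate of x**2 from the population activity."""
    return rates(x) @ decoders
```

Chaining such encode/decode steps between populations is how the framework turns a high-level algorithm into connection weights of a neural network.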

ePoster

Accelerating bio-plausible spiking simulations on the Graphcore IPU

Catherine Schöfmann, Jan Finkbeiner, Susanne Kunkel

Bernstein Conference 2024

ePoster

A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations

Christoph Pokorny, Omar Awile, James Isbister, Kerem Kurban, Matthias Wolf, Michael Reimann

Bernstein Conference 2024

ePoster

Enhanced simulations of whole-brain dynamics using hybrid resting-state structural connectomes

Thanos Manos, Sandra Diaz-Pier, Igor Fortel, Ira Driscoll, Liang Zhan, Alex Leow

Bernstein Conference 2024

ePoster

OpenEyeSim 2.1: Rendering Depth-of-Field and Chromatic Aberration Faster than Real-Time Simulations of Visual Accommodation

Judith Massmann, Alexander Lichtenstein, Francisco López, Bertram Shi, Jochen Triesch

Bernstein Conference 2024

ePoster

Single-cell morphological data provide refined simulations of resting-state

Penghao Qian, Linus Manubens-Gil, Hanchuan Peng

Bernstein Conference 2024

ePoster

Tracking the provenance of data generation and analysis in NEST simulations

Cristiano Köhler, Moritz Kern, Sonja Grün, Michael Denker

Bernstein Conference 2024

ePoster

Connectome simulations reveal a putative central pattern generator microcircuit for fly walking

Sarah Pugliese, John Tuthill, Bing Brunton

COSYNE 2025

ePoster

Functional connectivity constrained simulations of visuomotor circuits in zebrafish

Kaitlyn Fouke, Jacob Morra, Auke Ijspeert, Eva Naumann

COSYNE 2025

ePoster

Computation with neuronal cultures: Effects of connectivity modularity on response separation and generalisation in simulations and experiments

Akke Mats Houben, Anna-Christina Haeb, Jordi Garcia-Ojalvo, Jordi Soriano

FENS Forum 2024

ePoster

A connectome manipulation framework for the systematic and reproducible study of structure-function relationships through simulations

Christoph Pokorny, Omar Awile, James B. Isbister, Matthias Wolf, Michael W. Reimann

FENS Forum 2024

ePoster

Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods

Dániel Terbe, Balázs Szabó, Szabolcs Káli

FENS Forum 2024

ePoster

Evaluating the spread of excitation with different types of optogenetic cochlear stimulation through computer simulations and in vivo electrophysiology

Elisabeth Koert, Jonathan Götz, Bettina Wolf, Tobias Moser

FENS Forum 2024

ePoster

Exploiting network topology in brain-scale multi-area model simulations

Melissa Lober, Markus Diesmann, Susanne Kunkel

FENS Forum 2024

ePoster

Eyes on the future: Unveiling mental simulations as a deliberative decision-making mechanism

Karla Padilla, Samuel Madariaga, Catalina Murúa, Pedro Maldonado

FENS Forum 2024

ePoster

A novel technique for dramatically reducing computational burden in electrophysiological axon simulations

Javier Garcia Ordonez, Taylor Newton, Esra Neufeld, Niels Kuster

FENS Forum 2024

ePoster

Neural simulations in the Brian ecosystem

Marcel Stimberg

Neuromatch 5