
Biophysical

Topic spotlight
Topic · World Wide

biophysical

Discover seminars, jobs, and research tagged with biophysical across World Wide.
64 curated items · 41 Seminars · 23 ePosters
Updated about 2 years ago
Seminar · Neuroscience

Learning to Express Reward Prediction Error-like Dopaminergic Activity Requires Plastic Representations of Time

Harel Shouval
The University of Texas at Houston
Jun 13, 2023

The dominant theoretical framework to account for reinforcement learning in the brain is temporal difference (TD) reinforcement learning. The TD framework predicts that some neuronal elements should represent the reward prediction error (RPE), meaning they signal the difference between the expected future rewards and the actual rewards. The prominence of the TD theory arises from the observation that firing properties of dopaminergic neurons in the ventral tegmental area appear similar to those of RPE model-neurons in TD learning. Previous implementations of TD learning assume a fixed temporal basis for each stimulus that might eventually predict a reward. Here we show that such a fixed temporal basis is implausible and that certain predictions of TD learning are inconsistent with experiments. We propose instead an alternative theoretical framework, termed FLEX (Flexibly Learned Errors in Expected Reward). In FLEX, feature-specific representations of time are learned, allowing neural representations of stimuli to adjust their timing and relation to rewards in an online manner. In FLEX, dopamine acts as an instructive signal that helps build temporal models of the environment. FLEX is a general theoretical framework with many possible biophysical implementations. To show that FLEX is a feasible approach, we present a specific biophysically plausible model that implements its principles. We show that this implementation can account for various reinforcement learning paradigms, and that its results and predictions are consistent with a preponderance of both existing and reanalyzed experimental data.
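The TD(0) update at the heart of this framework fits in a few lines. Below is a minimal tabular sketch (an illustration of the standard TD/RPE account with a fixed temporal basis, not the FLEX model; all parameters are arbitrary): a cue at the first time step is followed by reward at the last, and the RPE delta = r + gamma*V(s') - V(s) shrinks at reward time as the value function is learned.

```python
import numpy as np

# Tabular TD(0) on a 6-step trial: cue at t=0, reward at t=5.
# Each time step is its own state, i.e. the fixed temporal basis
# that the FLEX framework argues against.
T, alpha, gamma = 6, 0.1, 1.0
V = np.zeros(T + 1)                       # V[T] is the terminal state

def run_trial(V):
    """Run one trial and return the RPE at every time step."""
    rpes = []
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0    # reward only at the last step
        delta = r + gamma * V[t + 1] - V[t]   # reward prediction error
        V[t] += alpha * delta             # TD(0) value update
        rpes.append(delta)
    return rpes

first = run_trial(V)                      # naive trial: big RPE at reward
for _ in range(500):
    last = run_trial(V)                   # trained: RPE at reward vanishes

print(first[-1], last[-1], V[0])
```

After training, the value at the cue (`V[0]`) carries the full reward prediction, so the reward itself no longer generates an error, the signature behavior attributed to dopaminergic neurons.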

Seminar · Neuroscience · Recording

Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties

Ilenna Jones
University of Pennsylvania
Dec 6, 2022

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
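The "dendritic tree as a multi-layer network" idea can be made concrete with a toy cell whose branches act as nonlinear subunits (hand-set weights, purely illustrative): with branch nonlinearities the cell computes XOR, which no single linear-threshold point neuron can.

```python
import numpy as np

# Toy "dendritic" neuron: each branch applies a rectifying nonlinearity
# to its own synapses, and the soma thresholds the summed branch output.
def branch(x, w):
    return np.maximum(w @ x, 0.0)        # rectified branch response

def soma(x):
    # Two branches with hand-set weights that together implement XOR.
    b1 = branch(x, np.array([1.0, -1.0]))
    b2 = branch(x, np.array([-1.0, 1.0]))
    return 1.0 if b1 + b2 > 0.5 else 0.0

outputs = {xy: soma(np.array(xy, dtype=float))
           for xy in [(0, 0), (0, 1), (1, 0), (1, 1)]}
print(outputs)
```

A single linear threshold on the raw inputs cannot produce this input-output table; the branch rectifications are doing real computational work, which is the intuition behind treating a dendritic tree as a multi-layer network.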

Seminar · Neuroscience · Recording

A biologically plausible inhibitory plasticity rule for world-model learning in SNNs

Z. Liao
Columbia
Nov 9, 2022

Memory consolidation is the process by which recent experiences are assimilated into long-term memory. In animals, this process requires the offline replay, in the hippocampus, of sequences observed during online exploration. Recent experimental work has found that salient but task-irrelevant stimuli are systematically excluded from these replay epochs, suggesting that replay samples from an abstracted model of the world rather than from verbatim previous experiences. We find that this phenomenon can be explained parsimoniously and biologically plausibly by a Hebbian spike-timing-dependent plasticity rule at inhibitory synapses. Using spiking networks at three levels of abstraction (leaky integrate-and-fire, biophysically detailed, and abstract binary), we show that this rule enables efficient inference of a model of the structure of the world. While plasticity has previously been studied mainly at excitatory synapses, we find that plasticity at excitatory synapses alone is insufficient to accomplish this type of structural learning. We present theoretical results in a simplified model showing that in the presence of Hebbian excitatory and inhibitory plasticity, the replayed sequences form a statistical estimator of a latent sequence, which converges asymptotically to the ground truth. Our work outlines a direct link between the synaptic and cognitive levels of memory consolidation, and highlights a potentially distinct conceptual role for inhibition in computing with SNNs.
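A pair-based Hebbian window at an inhibitory synapse, of the kind the abstract invokes, can be sketched as follows (the time constant and learning rate are invented for illustration; the talk's actual rule may differ):

```python
import numpy as np

# Symmetric Hebbian STDP window at an inhibitory synapse: inhibition is
# strengthened whenever pre- and postsynaptic spikes are near-coincident,
# which suppresses replay of co-active but task-irrelevant patterns.
tau, eta = 20.0, 0.01        # window time constant (ms), learning rate

def dw(delta_t):
    """Weight change for one spike pair, delta_t = t_post - t_pre (ms)."""
    return eta * np.exp(-abs(delta_t) / tau)

print(dw(0.0), dw(10.0), dw(50.0))   # coincident pairs potentiate most
```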

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.

Seminar · Neuroscience · Recording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 20, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
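The qualitative behavior of such an attractor model can be sketched with two mutually inhibiting rate units (a generic winner-take-all sketch with invented parameters, not the specific network from the talk): the stimulus biases the race, the dynamics fall into one of two attractors, and the separation between the populations provides a confidence-like readout.

```python
import numpy as np

def f(x):                        # rectified firing-rate nonlinearity
    return np.tanh(np.maximum(x, 0.0))

def decide(evidence, steps=2000, dt=0.1):
    """Two selective populations with self-excitation and mutual
    inhibition; evidence > 0 favors population 0."""
    r = np.zeros(2)
    w_self, w_inh = 2.0, 1.6     # assumed coupling strengths
    inp = np.array([0.5 + evidence, 0.5 - evidence])
    for _ in range(steps):
        drive = w_self * f(r) - w_inh * f(r[::-1]) + inp
        r += dt * (-r + drive)
    return r

r_pos, r_neg = decide(0.2), decide(-0.2)
print(r_pos, r_neg)              # winner's rate far exceeds the loser's
```

The gap between the winning and losing population rates at decision time is one simple candidate for the "neural signal" underlying confidence mentioned in the abstract.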

Seminar · Neuroscience · Recording

Introducing dendritic computations to SNNs with Dendrify

Michalis Pagkalos
IMBB FORTH
Sep 6, 2022

Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Through simple commands, Dendrify can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
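The kind of reduced compartmental model Dendrify builds can be caricatured in plain NumPy (this is a generic two-compartment sketch with made-up constants, deliberately not Dendrify's own API): a dendritic compartment filters a brief synaptic input and passes an attenuated, delayed copy to the soma through an axial coupling conductance.

```python
import numpy as np

# Soma + dendrite as two coupled leaky compartments.
dt, T = 0.1, 500                         # time step (ms), number of steps
tau_s, tau_d, g_c = 10.0, 7.0, 0.3       # membrane taus (ms), coupling
v_s, v_d = np.zeros(T), np.zeros(T)      # somatic / dendritic voltages
I_d = np.zeros(T)
I_d[100:110] = 5.0                       # 1 ms input pulse onto the dendrite

for t in range(1, T):
    dv_d = (-v_d[t-1] + I_d[t-1] + g_c * (v_s[t-1] - v_d[t-1])) / tau_d
    dv_s = (-v_s[t-1] + g_c * (v_d[t-1] - v_s[t-1])) / tau_s
    v_d[t] = v_d[t-1] + dt * dv_d
    v_s[t] = v_s[t-1] + dt * dv_s

print(v_d.max(), v_s.max())   # somatic response is smaller and later
```

Even this minimal pair of compartments reproduces the basic dendritic filtering that a point neuron cannot: the soma sees a low-pass-filtered, attenuated version of the dendritic event.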

Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 28, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
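The stability/flexibility tradeoff between non-linear and perfect accumulation can be seen in a one-line comparison (a schematic of the computational point, not the talk's circuit model): after a change-point, a leaky accumulator revises its belief far sooner than a perfect integrator.

```python
# Evidence favors state A for 40 samples, then switches to state B.
evidence = [1.0] * 40 + [-1.0] * 60

def first_flip(leak):
    """Index of the first sample at which the accumulated belief
    changes sign; leak = 1.0 is a perfect (lossless) integrator."""
    x = 0.0
    for t, e in enumerate(evidence):
        x = leak * x + e
        if x < 0:
            return t
    return len(evidence)

perfect_flip = first_flip(1.0)   # must cancel all accumulated evidence
leaky_flip = first_flip(0.9)     # discounts old evidence, adapts quickly
print(perfect_flip, leaky_flip)
```

The perfect integrator is maximally stable but inflexible; discounting old evidence trades a little stability for a much faster response to the hidden change.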

Seminar · Neuroscience · Recording

Retinal responses to natural inputs

Fred Rieke
University of Washington
Apr 17, 2022

The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity, single photon signals in photoreceptors, which are only 0.03 – 0.1 mV, must be reliably transmitted to second-order cells in the retina, and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
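The physical limit referred to above is Poisson photon noise; a quick simulation shows why a dim flash is irreducibly variable (the flash strength here is arbitrary):

```python
import numpy as np

# At low light, photon absorptions per flash are Poisson distributed,
# so the trial-to-trial variance of an ideal counter equals its mean;
# no downstream processing can remove this variability.
rng = np.random.default_rng(0)
mean_photons = 3.0                   # mean absorptions per dim flash
counts = rng.poisson(mean_photons, size=100_000)
print(counts.mean(), counts.var())   # variance tracks the mean
```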

Seminar · Neuroscience

Multiscale modeling of brain states, from spiking networks to the whole brain

Alain Destexhe
Centre National de la Recherche Scientifique and Paris-Saclay University
Apr 5, 2022

Modeling brain mechanisms is often confined to a given scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate these models. Here, we show an approach to build models across scales, starting from the level of circuits to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can be later integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by the simulation of asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex, and at the macroscale (centimeters), the models reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on the emergent behavior at the whole-brain level. This is illustrated with simulations of the effect of anesthetics. The program codes are open source and run in open-access platforms (such as EBRAINS).
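The population unit being described can be caricatured by a classic Wilson-Cowan pair of excitatory and inhibitory rates (generic couplings chosen for illustration, not the mean-field derived in the talk); tiling many such units with long-range connectivity gives the large-scale models.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_E_rate(I_ext, steps=5000, dt=0.01):
    """Simulate an E-I rate pair and return the excitatory rate
    averaged over the last 1000 steps."""
    rE = rI = 0.0
    wEE, wEI, wIE, wII = 12.0, 10.0, 10.0, 2.0   # assumed couplings
    trace = []
    for _ in range(steps):
        rE += dt * (-rE + sigmoid(wEE * rE - wEI * rI + I_ext))
        rI += dt * (-rI + sigmoid(wIE * rE - wII * rI))
        trace.append(rE)
    return sum(trace[-1000:]) / 1000.0

low, high = mean_E_rate(0.5), mean_E_rate(2.0)
print(low, high)      # stronger external drive, higher population rate
```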

Seminar · Neuroscience · Recording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 22, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing-rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Systematic exploration of neuron type differences in standard plasticity protocols employing a novel pathway based plasticity rule

Patricia Rubisch (she/her)
University of Edinburgh
Dec 1, 2021

Spike-timing-dependent plasticity (STDP) is argued to modulate synaptic strength depending on the timing of pre- and postsynaptic spikes. Physiological experiments have identified a variety of temporal kernels: Hebbian, anti-Hebbian and symmetrical LTP/LTD. In this work we present a novel plasticity model, the Voltage-Dependent Pathway Model (VDP), which is able to replicate those distinct kernel types and intermediate versions with varying LTP/LTD ratios and symmetry features. In addition, unlike previous models it retains these characteristics for different neuron models, which allows for comparison of plasticity across neuron types. The plastic updates depend on the relative strength and activation of separately modeled LTP and LTD pathways, which are modulated by glutamate release and postsynaptic voltage. We used the 15 neuron-type parametrizations of the GLIF5 model presented by Teeter et al. (2018) in combination with the VDP to simulate a range of standard plasticity protocols, including standard STDP experiments, frequency-dependency experiments and low-frequency stimulation protocols. Slight variations in kernel stability and frequency effects can be identified between the neuron types, suggesting that the neuron type may have an effect on the effective learning rule. This plasticity model occupies a middle ground between biophysical and phenomenological models: it can be combined with more complex, biophysical neuron models, yet is computationally efficient enough for network simulations. It therefore offers the possibility to explore the functional role of the different kernel types and electrophysiological differences in heterogeneous networks in future work.
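The three kernel families named above differ only in how the sign of the update depends on delta_t = t_post - t_pre. The constants below are invented for illustration; this is a sketch of the kernel shapes, not the VDP model.

```python
import numpy as np

tau, A_plus, A_minus = 20.0, 1.0, 0.5    # illustrative constants (ms, a.u.)

def hebbian(dt):
    """Pre-before-post (dt > 0) potentiates; the reverse depresses."""
    return np.where(dt > 0, A_plus * np.exp(-dt / tau),
                    -A_minus * np.exp(dt / tau))

def anti_hebbian(dt):
    """Mirror image of the Hebbian kernel."""
    return -hebbian(dt)

def symmetric(dt):
    """LTP for any near-coincidence, LTD for large |dt|."""
    return A_plus * np.exp(-np.abs(dt) / tau) - 0.2

print(float(hebbian(10.0)), float(anti_hebbian(10.0)), float(symmetric(0.0)))
```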

Seminar · Neuroscience · Recording

NMC4 Short Talk: Resilience through diversity: Loss of neuronal heterogeneity in epileptogenic human tissue impairs network resilience to sudden changes in synchrony

Scott Rich
Krembil Brain Institute
Nov 30, 2021

A myriad of pathological changes associated with epilepsy, including the loss of specific cell types, improper expression of individual ion channels, and synaptic sprouting, can be recast as decreases in cell and circuit heterogeneity. In recent experimental work, we demonstrated that biophysical diversity is a key characteristic of human cortical pyramidal cells, and past theoretical work has shown that neuronal heterogeneity improves a neural circuit’s ability to encode information. Viewed alongside the fact that seizure is an information-poor brain state, these findings motivate the hypothesis that epileptogenesis can be recontextualized as a process where reduction in cellular heterogeneity renders neural circuits less resilient to seizure onset. By comparing whole-cell patch clamp recordings from layer 5 (L5) human cortical pyramidal neurons from epileptogenic and non-epileptogenic tissue, we present the first direct experimental evidence that a significant reduction in neural heterogeneity accompanies epilepsy. We directly implement experimentally-obtained heterogeneity levels in cortical excitatory-inhibitory (E-I) stochastic spiking network models. Low heterogeneity networks display unique dynamics typified by a sudden transition into a hyper-active and synchronous state paralleling ictogenesis. Mean-field analysis reveals a distinct mathematical structure in these networks distinguished by multi-stability. Furthermore, the mathematically characterized linearizing effect of heterogeneity on input-output response functions explains the counter-intuitive experimentally observed reduction in single-cell excitability in epileptogenic neurons. 
This joint experimental, computational, and mathematical study showcases that decreased neuronal heterogeneity exists in epileptogenic human cortical tissue, that this difference yields dynamical changes in neural networks paralleling ictogenesis, and that there is a fundamental explanation for these dynamics based in mathematically characterized effects of heterogeneity. These interdisciplinary results provide convincing evidence that biophysical diversity imbues neural circuits with resilience to seizure and a new lens through which to view epilepsy, the most common serious neurological disorder in the world, that could reveal new targets for clinical treatment.

Seminar · Neuroscience · Recording

NMC4 Keynote: A network perspective on cognitive effort

Dani Bassett
University of Pennsylvania
Nov 30, 2021

Cognitive effort has long been an important explanatory factor in the study of human behavior in health and disease. Yet, the biophysical nature of cognitive effort remains far from understood. In this talk, I will offer a network perspective on cognitive effort. I will begin by canvassing a recent perspective that casts cognitive effort in the framework of network control theory, developed and frequently used in systems engineering. The theory describes how much energy is required to move the brain from one activity state to another, when activity is constrained to pass along physical pathways in a connectome. I will then turn to empirical studies that link this theoretical notion of energy with cognitive effort in a behaviorally demanding task, and with a metabolic notion of energy as accessible to FDG-PET imaging. Finally, I will ask how this structurally-constrained activity flow can provide us with insights about the brain’s non-equilibrium nature. Using a general tool for quantifying entropy production in macroscopic systems, I will provide evidence to suggest that states of marked cognitive effort are also states of greater entropy production. Collectively, the work I discuss offers a complementary view of cognitive effort as a dynamical process occurring atop a complex network.
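The control-theoretic notion of energy in this framework has a compact closed form for linear dynamics. The toy below (a hypothetical 4-node chain, not a connectome) computes the minimum input energy to reach a target state through the controllability Gramian.

```python
import numpy as np

# Discrete-time linear dynamics x[t+1] = A x[t] + B u[t] on a 4-node
# chain, with control input injected at node 0 only. The minimum energy
# to drive x from 0 to x_f in T steps is x_f^T W^{-1} x_f, where W is
# the T-step controllability Gramian.
A = 0.3 * np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
B = np.eye(4)[:, [0]]                    # actuate node 0

def control_energy(x_f, T=20):
    W = np.zeros((4, 4))
    Ak = np.eye(4)
    for _ in range(T):
        W += Ak @ B @ B.T @ Ak.T         # accumulate A^k B B^T (A^k)^T
        Ak = Ak @ A
    return float(x_f @ np.linalg.solve(W, x_f))

e_near = control_energy(np.array([1.0, 0.0, 0.0, 0.0]))   # actuated node
e_far = control_energy(np.array([0.0, 0.0, 0.0, 1.0]))    # 3 hops away
print(e_near, e_far)    # distant states cost far more energy to reach
```

States "close" to the control site along the network are cheap to reach, while states several hops away are expensive, the basic intuition linking connectome structure to the energetic cost of a state transition.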

Seminar · Neuroscience · Recording

Generative models of brain function: Inference, networks, and mechanisms

Adeel Razi
Monash University
Nov 25, 2021

This talk will focus on the generative modelling of resting state time series or endogenous neuronal activity. I will survey developments in modelling distributed neuronal fluctuations – spectral dynamic causal modelling (DCM) for functional MRI – and how this modelling rests upon functional connectivity. The dynamics of brain connectivity has recently attracted a lot of attention among brain mappers. I will also show a novel method to identify dynamic effective connectivity using spectral DCM. Further, I will summarise the development of the next generation of DCMs towards large-scale, whole-brain schemes which are computationally inexpensive, to the other extreme of the development using more sophisticated and biophysically detailed generative models based on the canonical microcircuits.

Seminar · Neuroscience · Recording

Self-organized formation of discrete grid cell modules from smooth gradients

Sarthak Chandra
Fiete lab, MIT
Nov 2, 2021

Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods cluster into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models.
Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.

Seminar · Neuroscience · Recording

Neural dynamics of probabilistic information processing in humans and recurrent neural networks

Nuttida Rungratsameetaweemana
Sejnowski lab, The Salk Institute
Oct 5, 2021

In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Through a recurrent neural network (RNN) model and human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.

Seminar · Neuroscience · Recording

Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software

Stephanie Jones
Brown University
Sep 7, 2021

Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular and circuit level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being constructed with workflows to study some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. 
In total, HNN provides an unprecedented, biophysically principled tool to link mechanism to meaning in human E/MEG signals.

Seminar · Neuroscience · Recording

Disinhibitory and neuromodulatory regulation of hippocampal synaptic plasticity

Inês Guerreiro
Gutkin lab, Ecole Normale Superieure
Jul 27, 2021

The CA1 pyramidal neurons are embedded in an intricate local circuitry that contains a variety of interneurons. The roles these interneurons play in regulating excitatory synaptic plasticity remain largely understudied. Recent experiments showed that repeated cholinergic activation of α7 nACh receptors expressed in oriens-lacunosum-moleculare (OLMα2) interneurons could induce LTP in SC-CA1 synapses. We used a biophysically realistic computational model to examine mechanistically how cholinergic activation of OLMα2 interneurons increases SC to CA1 transmission. Our results suggest that, when properly timed, activation of OLMα2 interneurons cancels the feedforward inhibition onto CA1 pyramidal cells by inhibiting fast-spiking interneurons that synapse on the same dendritic compartment as the SC, i.e., by disinhibiting the pyramidal cell dendritic compartment. Our work further describes the pairing of disinhibition with SC stimulation as a general mechanism for the induction of synaptic plasticity. We found that locally reduced GABA release (disinhibition) paired with SC stimulation could lead to increased NMDAR activation and intracellular calcium concentration sufficient to upregulate AMPAR permeability and potentiate the excitatory synapse. Our work suggests that inhibitory synapses critically modulate excitatory neurotransmission and induction of plasticity at excitatory synapses. Our work also shows how cholinergic action on OLM interneurons, a mechanism whose disruption is associated with memory impairment, can down-regulate GABAergic signaling into CA1 pyramidal cells and facilitate potentiation of the SC-CA1 synapse.

SeminarPhysics of LifeRecording

3D Printing Cellular Communities: Mammalian Cells, Bacteria, And Beyond

Tapomoy Bhattacharjee
Princeton University
Jun 20, 2021

While the motion and collective behavior of cells are well studied on flat surfaces or in unconfined liquid media, in most natural settings cells thrive in complex 3D environments. Bioprinting processes are capable of structuring cells in 3D, and conventional bioprinting approaches address this challenge by embedding cells in biodegradable polymer networks. However, heterogeneity in network structure and biodegradation often preclude quantitative studies of cell behavior in specified 3D architectures. Here, I will present a new approach to 3D bioprinting of cellular communities that utilizes jammed, granular polyelectrolyte microgels as a support medium. The self-healing nature of this medium allows the creation of highly precise cellular communities and tissue-like structures by direct injection of cells inside the 3D medium. Further, the transparent nature of this medium enables precise characterization of cellular behavior. I will describe two examples of my work using this platform to study the behavior of two different classes of cells in 3D. First, I will describe how we interrogate the growth, viability, and migration of mammalian cells, including epithelial cells, cancer cells, and T cells, in the 3D pore space. Second, I will describe how we interrogate the migration of E. coli bacteria through the 3D pore space. Direct visualization reveals a new mode of motility exhibited by individual cells: in stark contrast to the run-and-tumble paradigm, cells are intermittently and transiently trapped as they navigate the pore space; further, analysis of these dynamics enables prediction of single-cell transport over large length and time scales. Moreover, we show that concentrated populations of E. coli can collectively migrate through a porous medium, despite being strongly confined, by chemotactically “surfing” a self-generated nutrient gradient.
Together, these studies highlight how the jammed microgel medium provides a powerful platform to design and interrogate complex cellular communities in 3D—with implications for tissue engineering, microtissue mechanics, studies of cellular interactions, and biophysical studies of active matter.

SeminarOpen SourceRecording

A macaque connectome for simulating large-scale network dynamics in The VirtualBrain

Kelly Shen
University of Toronto
Apr 29, 2021

TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
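At its core, a TVB-style simulation couples regional neural-mass equations through a structural connectome matrix. The sketch below illustrates that pattern only: it uses a generic rate model and a random toy connectome, not TVB's actual model library or the macaque weights described in the talk, and all parameter names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                    # number of cortical regions (toy connectome)
W = rng.random((N, N))
np.fill_diagonal(W, 0.0)                 # no self-connections
W /= W.sum(axis=1, keepdims=True)        # row-normalized structural weights

def simulate(W, steps=2000, dt=0.01, tau=1.0, k=0.9, noise=0.05):
    """Euler integration of a generic coupled rate model:
    tau * dx/dt = -x + tanh(k * W @ x), plus additive noise."""
    local_rng = np.random.default_rng(1)
    x = local_rng.standard_normal(W.shape[0]) * 0.1
    traj = np.empty((steps, W.shape[0]))
    for t in range(steps):
        dx = (-x + np.tanh(k * (W @ x))) / tau
        x = x + dt * dx + noise * np.sqrt(dt) * local_rng.standard_normal(W.shape[0])
        traj[t] = x
    return traj

traj = simulate(W)
fc = np.corrcoef(traj.T)                 # "functional connectivity" of simulated signals
```

The last line hints at the validation step mentioned in the talk: structural connectivity constrains the simulation, and the simulated functional connectivity can then be compared against empirical resting-state data.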

SeminarNeuroscience

Synchrony and Synaptic Signaling in Cerebellar Circuits

Indira Raman
Northwestern University
Apr 29, 2021

The cerebellum permits a wide range of behaviors that involve sensorimotor integration. We have been investigating how specific cellular and synaptic specializations of cerebellar neurons, measured in vitro, give rise to circuit activity in vivo. We have investigated these issues by studying Purkinje neurons as well as the large neurons of the mouse cerebellar nuclei (CbN), which form the major excitatory premotor projection from the cerebellum. Large CbN cells have ion channels that favor spontaneous action potential firing and GABAA receptors that generate ultra-fast inhibitory synaptic currents, raising the possibility that these biophysical attributes may permit CbN cells to respond differently to different degrees of temporal coherence among their Purkinje cell inputs. In vivo, self-initiated motor programs associated with whisking correlate with changes in Purkinje cell simple spiking that are asynchronous across the population. The resulting inhibition converges with mossy fiber excitation to yield little change in CbN cell firing, such that cerebellar output is low or cancelled. In contrast, externally applied sensory stimuli elicit a transient, synchronous inhibition of Purkinje cell simple spiking. During the resulting strong disinhibition of CbN cells, sensory-induced excitation from mossy fibers effectively drives cerebellar outputs that increase the magnitude of reflexive whisking. Purkinje cell synchrony, therefore, may be a key variable contributing to the “positive effort” hypothesized by David Marr in 1969 to be necessary for cerebellar control of movement.

SeminarPhysics of Life

“Biophysics of Structural Plasticity in Postsynaptic Spines”

Padmini Rangamani
University of California, San Diego
Oct 26, 2020

The ability of the brain to encode and store information depends on the plastic nature of individual synapses. The increases and decreases in synaptic strength, mediated through the structural plasticity of the spine, are important for learning, memory, and cognitive function. Dendritic spines are small protrusions from the dendrite that contain the synapse. They come in a variety of shapes (stubby, thin, or mushroom-shaped) and a wide range of sizes. These spines are the regions where the postsynaptic biochemical machinery responds to neurotransmitters. Spines are dynamic structures, changing in size, shape, and number during development and aging. While spines and synapses have inspired neuromorphic engineering, the biophysical events underlying the synaptic and structural plasticity of single spines remain poorly understood. Our current focus is on understanding the biophysical events underlying structural plasticity. I will discuss recent efforts from my group: first, a systems biology approach to construct a mathematical model of biochemical signaling and actin-mediated transient spine expansion in response to calcium influx caused by NMDA receptor activation, along with a series of spatial models to study the role of spine geometry and organelle location within the spine for calcium and cyclic AMP signaling. Second, I will discuss how the mechanics of membrane-cytoskeleton interactions can give insight into the regulation of spine shape. I will then describe new efforts in using reconstructions from electron microscopy to inform computational domains, and conclude with how geometry and mechanics play an important role in our understanding of fundamental biological phenomena, along with some general ideas on bio-inspired engineering.

SeminarNeuroscience

K+ Channel Gain of Function in Epilepsy, from Currents to Networks

Matthew Weston
University of Vermont
Oct 20, 2020

Recent human gene discovery efforts show that gain-of-function (GOF) variants in the KCNT1 gene, which encodes a Na+-activated K+ channel subunit, cause severe epilepsies and other neurodevelopmental disorders. Although the impact of these variants on the biophysical properties of the channels is well characterized, the mechanisms that link channel dysfunction to cellular and network hyperexcitability and human disease are unknown. Furthermore, precision therapies that correct channel biophysics in non-neuronal cells have had limited success in treating human disease, highlighting the need for a deeper understanding of how these variants affect neurons and networks. To address this gap, we developed a new mouse model with a pathogenic human variant knocked into the mouse Kcnt1 gene. I will discuss our findings on the in vivo phenotypes of this mouse, focusing on our characterization of epileptiform neural activity using electrophysiology and widefield Ca2+ imaging. I will also talk about our investigations at the synaptic, cellular, and circuit levels, including the main finding that cortical inhibitory neurons in this model show a reduction in intrinsic excitability and action potential generation. Finally, I will discuss future directions to better understand the mechanisms underlying the cell-type-specific effects, as well as the link between the cellular- and network-level effects of KCNT1 GOF.

SeminarNeuroscience

Towards multipurpose biophysics-based mathematical models of cortical circuits

Gaute Einevoll
Norwegian University of Life Sciences
Oct 13, 2020

Starting with the work of Hodgkin and Huxley in the 1950s, we now have a fairly good understanding of how the spiking activity of neurons can be modelled mathematically. For cortical circuits the understanding is much more limited. Most network studies have considered stylized models with a single or a handful of neuronal populations consisting of identical neurons with statistically identical connection properties. However, real cortical networks have heterogeneous neural populations and much more structured synaptic connections. Unlike typical simplified cortical network models, real networks are also “multipurpose” in that they perform multiple functions. Historically, the lack of computational resources has hampered the mathematical exploration of cortical networks. With the advent of modern supercomputers, however, simulations of networks comprising hundreds of thousands of biologically detailed neurons are becoming feasible (Einevoll et al., Neuron, 2019). Further, a large-scale, biologically detailed network model of the mouse primary visual cortex comprising 230,000 neurons has recently been developed at the Allen Institute for Brain Science (Billeh et al., Neuron, 2020). Using this model as a starting point, I will discuss how we can move towards multipurpose models that incorporate the true biological complexity of cortical circuits and faithfully reproduce multiple experimental observables such as spiking activity, local field potentials, or two-photon calcium imaging signals. Further, I will discuss how such validated, comprehensive network models can be used to gain insights into the functioning of cortical circuits.
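The single-neuron starting point of this talk, the Hodgkin-Huxley formalism, fits in a few dozen lines. Below is a minimal forward-Euler implementation of the classic squid-axon model with standard textbook parameters; it is an illustrative sketch, not code from the Allen Institute model, whose 230,000 neurons carry far richer biophysical detail.

```python
import numpy as np

def hh_sim(I_ext=10.0, T=100.0, dt=0.01):
    """Single-compartment Hodgkin-Huxley neuron (classic squid-axon
    parameters), forward-Euler integration. I_ext in uA/cm^2, time in ms."""
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.387
    # voltage-dependent opening/closing rates (1/ms)
    an = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = lambda V: 0.125 * np.exp(-(V + 65) / 80)
    am = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = lambda V: 4.0 * np.exp(-(V + 65) / 18)
    ah = lambda V: 0.07 * np.exp(-(V + 65) / 20)
    bh = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
    V = -65.0                                  # start at rest
    m = am(V) / (am(V) + bm(V))                # gating variables at steady state
    h = ah(V) / (ah(V) + bh(V))
    n = an(V) / (an(V) + bn(V))
    Vs = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I_ext - INa - IK - IL) / C
        m += dt * (am(V) * (1 - m) - bm(V) * m)
        h += dt * (ah(V) * (1 - h) - bh(V) * h)
        n += dt * (an(V) * (1 - n) - bn(V) * n)
        Vs.append(V)
    Vs = np.array(Vs)
    spikes = int(np.sum((Vs[1:] > 0) & (Vs[:-1] <= 0)))  # upward zero crossings
    return Vs, spikes

Vs, n_spikes = hh_sim()
```

A sustained 10 uA/cm^2 current drives the model into tonic firing; counting upward zero crossings of the membrane potential gives the spike count for the 100 ms trace.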

SeminarNeuroscienceRecording

Sensing Light for Sight and Physiological Control

Michael Tri Do
Harvard Medical School and Boston Children's Hospital
Aug 10, 2020

Organisms sense light for purposes that range from recognizing objects to synchronizing activity with environmental cycles. What mechanisms serve these diverse tasks? This seminar will examine the specializations of two cell types. First are the foveal cone photoreceptors. These neurons are used by primates to see far greater detail than other mammals, which lack them. How do the biophysical properties of foveal cones support high-acuity vision? Second are the melanopsin retinal ganglion cells, which are conserved among mammals and essential for processes that include regulation of the circadian clock, sleep, and hormone levels. How do these neurons encode light, and is encoding customized for animals of different niches? In pursuing these questions, a broad goal is to learn how various levels of biological organization are shaped to behavioural needs.

SeminarPhysics of Life

Untitled Seminar

Multiple Speakers
Multiple
Jul 30, 2020

The symposium provides an opportunity for ECRs working in biophysical research to get together and share their research. Although the symposium is primarily aimed at ECRs, we welcome everyone with an interest in the biophysical sciences to join in the lively discussions and questions. This half-day symposium will feature short talks and flash talks from a range of ECRs around the biophysics theme. Afterwards there will be a virtual poster session with open discussions. We warmly invite both domestic and international ECRs to present at or attend this event.

SeminarPhysics of Life

Dynamics of microbiota communities during physical perturbation

Carolina Tropini
UBC – Vancouver BC – Canada
Jul 28, 2020

The consortium of microbes living in and on our bodies is intimately connected with human biology and deeply influenced by physical forces. Despite incredible gains in describing this community, and emerging knowledge of the mechanisms linking it to human health, understanding the basic physical properties and responses of this ecosystem has been comparatively neglected. Most diseases have significant physical effects on the gut: diarrhea alters osmolality, fever and cancer increase temperature, and bowel diseases affect pH. Furthermore, the gut itself comprises localized niches that differ significantly in their physical environment and are inhabited by different commensal microbes. Understanding the impact of common physical factors is necessary for engineering robust microbiota members and communities; however, our knowledge of how they affect the gut ecosystem is poor. We are investigating how changes in osmolality affect the host and the microbial community and lead to mechanical shifts in the cellular environment. Osmotic perturbation is extremely prevalent in humans, caused by the use of laxatives, lactose intolerance, or celiac disease. In our studies, we monitored osmotic shock to the microbiota using a comprehensive and novel approach that combined in vivo experiments with imaging, physical measurements, computational analysis, and highly controlled microfluidic experiments. By bridging several disciplines, we developed a mechanistic understanding of the processes involved in osmotic diarrhea, linking single-cell biophysical changes to large-scale community dynamics. Our results indicate that physical perturbations can profoundly and permanently change the competitive and ecological landscape of the gut, and affect the cell wall of bacteria differentially, depending on their mechanical characteristics.

SeminarPhysics of LifeRecording

Chromatin transcription: cryo-EM structures of Pol II-nucleosome and nucleosome-CHD complexes

Lucas Farnung
Max Planck Institute for Biophysical Chemistry
Jul 28, 2020
SeminarNeuroscience

Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus

Andrea Navas-Olive
Instituto Cajal CSIC
Jul 18, 2020

The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are disclosing a so-far-unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological, and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical, and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the Neuron+Python platform and the morphological database Neuromorpho.org. We adopted genetic algorithms (GA) to identify passive, active, and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic, and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up-/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g., phase-locking preference) that matches non-fitted experimental data.
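The fitting strategy described here — a genetic algorithm searching conductance space so that model output matches electrophysiological targets — follows a simple loop of evaluation, selection, and mutation. The toy sketch below replaces the Neuron+Python multi-compartment simulation with a made-up two-parameter f-I curve so the GA skeleton stands out; the parameter names, target values, and hyperparameters are all invented for illustration.

```python
import random

random.seed(0)

def firing_rate(I, gain, theta):
    """Toy threshold-linear f-I curve standing in for a full
    conductance-based simulation (rate = gain * (I - theta) above threshold)."""
    return max(0.0, gain * (I - theta))

# "experimental" target rates produced by hidden parameters gain=2.0, theta=1.0
currents = [0.5, 1.0, 1.5, 2.0, 2.5]
target = [firing_rate(I, 2.0, 1.0) for I in currents]

def fitness(params):
    """Negative squared error between model and target rates (higher is better)."""
    gain, theta = params
    return -sum((firing_rate(I, gain, theta) - t) ** 2
                for I, t in zip(currents, target))

def evolve(pop_size=40, generations=60, sigma=0.1):
    """Minimal GA: truncation selection plus Gaussian mutation."""
    pop = [(random.uniform(0, 5), random.uniform(0, 3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 4]               # keep the fittest quarter
        pop = [(g + random.gauss(0, sigma), t + random.gauss(0, sigma))
               for g, t in parents for _ in range(4)]
    return max(pop, key=fitness)

best = evolve()
```

Real pipelines (e.g., multi-objective GAs over dozens of conductances) follow the same skeleton, but score each candidate by running the full compartmental simulation and comparing extracted electrophysiological features rather than raw rates.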

SeminarNeuroscience

How the brain comes to balance: Development of postural stability and its neural architecture in larval zebrafish

David Schoppik
New York University Grossman School of Medicine
Jul 1, 2020

Maintaining posture is a vital challenge for all freely-moving organisms. As animals grow, their relationship to destabilizing physical forces changes. How does the nervous system deal with this ongoing challenge? Vertebrates use highly conserved vestibular reflexes to stabilize the body. We established the larval zebrafish as a new model system to understand the development of the vestibular reflexes responsible for balance. In this talk, I will begin with the biophysical challenges facing baby fish as they learn to swim. I’ll briefly review published work by David Ehrlich, Ph.D., establishing a fundamental relationship between postural stability and locomotion. The bulk of the talk will highlight unpublished work by Kyla Hamling. She discovered that a small (~50-neuron) population of molecularly defined brainstem neurons called vestibulo-spinal cells acts as a nexus for postural development. Her loss-of-function experiments show that these neurons contribute more to postural stability as animals grow older. I’ll end with brief highlights from her ongoing work examining tilt-evoked responses of these neurons using 2-photon imaging and the consequences of downstream activity in the spinal cord using single-objective light-sheet (SCAPE) microscopy.

SeminarNeuroscienceRecording

Mean-field models for finite-size populations of spiking neurons

Tilo Schwalger
TU Berlin
Jun 7, 2020

Firing-rate (FR) or neural-mass models are widely used for studying computations performed by neural populations. Despite their success, classical firing-rate models do not capture spike-timing effects at the microscopic level, such as spike synchronization, and are difficult to link to spiking data in experimental recordings. For large neuronal populations, the gap between the spiking-neuron dynamics at the microscopic level and coarse-grained FR models at the population level can be bridged by mean-field theory that is formally valid for infinitely many neurons. It remains challenging, however, to extend the resulting mean-field models to finite-size populations with biologically realistic neuron numbers per cell type (the mesoscopic scale). In this talk, I present a mathematical framework for mesoscopic populations of generalized integrate-and-fire neuron models that accounts for fluctuations caused by the finite number of neurons. To this end, I will introduce the refractory density method for quasi-renewal processes and show how this method can be generalized to finite-size populations. To demonstrate the flexibility of this approach, I will show how synaptic short-term plasticity can be incorporated into the mesoscopic mean-field framework. On the other hand, the framework permits a systematic reduction to low-dimensional FR equations using the eigenfunction method. Our modeling framework enables a re-examination of classical FR models in computational neuroscience under biophysically more realistic conditions.
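The finite-size effect at the heart of this talk is easy to demonstrate numerically: the population activity of N neurons fluctuates with a standard deviation that shrinks as 1/sqrt(N), so the fluctuations vanish in the infinite-N mean-field limit. The sketch below uses independent Poisson neurons for simplicity, not the generalized integrate-and-fire populations of the actual framework; rates, bin widths, and population sizes are illustrative choices.

```python
import numpy as np

def population_activity(N, rate=20.0, dt=0.001, steps=20000, seed=0):
    """Population activity A(t) = n_spikes(t) / (N * dt), in Hz, for N
    independent Poisson neurons firing at `rate` Hz (bins of width dt s)."""
    rng = np.random.default_rng(seed)
    p = rate * dt                                   # spike probability per bin
    spikes = rng.random((steps, N)) < p             # Bernoulli approximation
    return spikes.sum(axis=1) / (N * dt)

small = population_activity(N=50)                   # mesoscopic population
large = population_activity(N=5000)                 # approaching the mean-field limit
# both are unbiased estimates of the 20 Hz rate, but the
# finite-size fluctuations around it scale as 1/sqrt(N)
```

A mesoscopic model must reproduce the fluctuations visible in `small`, which a deterministic mean-field (infinite-N) description discards by construction.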

SeminarNeuroscienceRecording

Spanning the arc between optimality theories and data

Gasper Tkacik
Institute of Science and Technology Austria
Jun 1, 2020

Ideas about optimization are at the core of how we approach biological complexity. Quantitative predictions about biological systems have been successfully derived from first principles in the context of efficient coding, metabolic and transport networks, evolution, reinforcement learning, and decision making, by postulating that a system has evolved to optimize some utility function under biophysical constraints. Yet as normative theories become increasingly high-dimensional and optimal solutions stop being unique, it gets progressively hard to judge whether theoretical predictions are consistent with, or "close to", data. I will illustrate these issues using efficient coding applied to simple neuronal models as well as to a complex and realistic biochemical reaction network. As a solution, we developed a statistical framework which smoothly interpolates between ab initio optimality predictions and Bayesian parameter inference from data, while also permitting statistically rigorous tests of optimality hypotheses.

SeminarNeuroscience

Algorithms and circuits for olfactory navigation in walking Drosophila

Katherine Nagel
New York University
May 5, 2020

Olfactory navigation provides a tractable model for studying the circuit basis of sensori-motor transformations and goal-directed behaviour. Macroscopic organisms typically navigate in odor plumes that provide a noisy and uncertain signal about the location of an odor source. Work in many species has suggested that animals accomplish this task by combining temporal processing of dynamic odor information with an estimate of wind direction. Our lab has been using adult walking Drosophila to understand both the computational algorithms and the neural circuits that support navigation in a plume of attractive food odor. We developed a high-throughput paradigm to study behavioural responses to temporally controlled odor and wind stimuli. Using this paradigm we found that flies respond to a food odor (apple cider vinegar) with two behaviours: during the odor they run upwind, while after odor loss they perform a local search. A simple computational model based on these two responses is sufficient to replicate many aspects of fly behaviour in a natural turbulent plume. In ongoing work, we are seeking to identify the neural circuits and biophysical mechanisms that perform the computations delineated by our model. Using electrophysiology, we have identified mechanosensory neurons that compute wind direction from movements of the two antennae, and central mechanosensory neurons that encode wind direction and are involved in generating a stable downwind orientation. Using optogenetic activation, we have traced olfactory circuits capable of evoking upwind orientation and offset search from the periphery, through the mushroom body and lateral horn, to the central complex. Finally, we have used optogenetic activation, in combination with molecular manipulation of specific synapses, to localize temporal computations performed on the odor signal to olfactory transduction and transmission at specific synapses.
Our work illustrates how the tools available in fruit fly can be applied to dissect the mechanisms underlying a complex goal-directed behaviour.
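The two-response model — run upwind while the odor is on, switch to a local search after odor loss — can be captured by a minimal agent simulation. The sketch below is a hypothetical caricature of the computational model described in the abstract, not the authors' published model: the fixed upwind direction, turning gain, and search noise are all invented parameters.

```python
import math
import random

def navigate(odor_on_until=100, steps=200, seed=1):
    """Toy walking agent: steer toward the upwind direction (assumed to be
    heading pi/2) while odor is present, then make large random turns
    (local search) after odor loss. Unit step length per time step."""
    random.seed(seed)
    x = y = 0.0
    heading = random.uniform(0, 2 * math.pi)
    path = []
    for t in range(steps):
        if t < odor_on_until:                         # odor ON: run upwind
            upwind = math.pi / 2
            heading += 0.3 * math.sin(upwind - heading)   # gradual turn upwind
        else:                                         # odor OFF: local search
            heading += random.gauss(0, 1.2)           # large erratic turns
        x += math.cos(heading)
        y += math.sin(heading)
        path.append((x, y))
    return path

path = navigate()
```

Despite its simplicity, the agent reproduces the qualitative signature in the abstract: sustained upwind displacement during the odor period, followed by a search that stays near the point of odor loss.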

SeminarNeuroscienceRecording

Diverse synaptic mechanisms underlie visual signaling in the retina

Jeffrey Diamond
NIH Bethesda
Apr 23, 2020

Our laboratory seeks to understand how neural circuits receive, compute, encode and transmit information. More specifically, we’d like to learn what biophysical and morphological features equip synapses, neurons and networks to perform these tasks. The retina is a model system for the study of neuronal information processing: We can deliver precisely defined physiological stimuli and record responses from many different cell types at various points within the network; in addition, retinal circuitry is particularly well understood, enabling us to interpret more directly the impact of synaptic and cellular mechanisms on circuit function. I will present recent experiments in the lab that exploit these advantages to examine how synapses and neurons within retinal amacrine cell circuits perform specific visual computations.

ePoster

Adolescent maturation of cortical excitation-inhibition balance based on individualized biophysical network modeling

Amin Saberi, Kevin Wischnewski, Kyesam Jung, Leon Lotter, H. Schaare, Tobias Banaschweski, Gareth Barker, Arun Bokde, Sylvane Desrivières, Herta Flor, Antoine Grigis, Hugh Garavan, Penny Gowland, Andreas Heinz, Rüdiger Brühl, Jean-Luc Martinot, Marie-Laure Paillère Martinot, Eric Artiges, Frauke Nees, Dimitri Papadopoulos Orfanos, Herve Lemaitre, Luise Poustka, Sarah Hohmann, Nathalie Holz, Christian Baeuchl, Michael Smolka, Nilakshi Vaidya, Henrik Walter, Robert Whelan, Gunther Schumann, Tomas Paus, Juergen Dukart, Boris Bernhardt, Oleksandr Popovych, Simon Eickhoff, Sofie Valk

Bernstein Conference 2024

ePoster

cuBNM: GPU-Accelerated Biophysical Network Modeling

Amin Saberi, Kevin Wischnewski, Kyesam Jung, Leonard Sasse, Felix Hoffstaedter, Oleksandr Popovych, Boris Bernhardt, Simon Eickhoff, Sofie Valk

Bernstein Conference 2024

ePoster

Sequence learning under biophysical constraints: a re-evaluation of prominent models

Barna Zajzon, Younes Bouhadjar, Tom Tetzlaff, Renato Duarte, Abigail Morrison

Bernstein Conference 2024

ePoster

Toward a biophysically-detailed, fully-differentiable model of the mouse retina

Kyra Kadhim, Ziwei Huang, Michael Deistler, Jonas Beck, Thomas Euler, Jakob Macke, Philipp Berens

Bernstein Conference 2024

ePoster

A biophysically detailed model of retinal degeneration

COSYNE 2022

ePoster

A biophysical account of multiplication by a single neuron

COSYNE 2022

ePoster

A biophysical counting mechanism for keeping time

COSYNE 2022

ePoster

A Biophysical Mechanism for Changing the Threat Sensitivity of Escape Behavior

Yaara Lefler, Yeqing Wang, Goncalo Ferreira, Tiago Branco

COSYNE 2023

ePoster

A biophysically detailed model of retinal degeneration

Aiwen Xu & Michael Beyeler

COSYNE 2023

ePoster

The thermal adjustment used in neuronal biophysical models is wrong: Here is how to fix it

Bahram Pahlavan, Nicolas Buitrago, Fidel Santamaria

COSYNE 2023

ePoster

A mechanism for selective attention in biophysically realistic Daleian spiking neural networks

Martin Vinck, Marius Schneider

COSYNE 2025

ePoster

Predictive coding in a biophysically detailed Continuous attractor model of grid cells

Inayath Shaikh, Collins Assisi

COSYNE 2025

ePoster

The biophysical mechanism underlying epigenetically inherited stress response/unpredictability learning

Alaa Saleh, Barkai Edi, Gaisler-Salomon Inna

FENS Forum 2024

ePoster

Biophysical basis of ultrafast population encoding

Andreas Neef, Konstantin Möller

FENS Forum 2024

ePoster

A biophysical mechanism for changing the threat sensitivity of escape behaviour

Yaara Lefler, Yeqing Wang, Goncalo Ferreira, Tiago Branco

FENS Forum 2024

ePoster

Biophysically detailed cortical neuron models with genetically-defined ion channels

Darshan Mandge, Yann Roussel, Stijn van Dorp, Tanguy Damart, Aurélien Jaquier, Henry Markram, Daniel Keller, Lida Kanari, Werner Van Geit, Rajnish Ranjan

FENS Forum 2024

ePoster

Comparative analysis of biophysical properties of ON-alpha sustained RGCs in wild-type and rd10 retina

Viktoria Kiraly, Molis Yunzab, Francisco Nadal-Nicolas, Steven Stasheff, Shelley Fried, Günther Zeck, Paul Werginz

FENS Forum 2024

ePoster

Controlling morpho-electrophysiological variability of neurons with detailed biophysical models

Alexis Arnaudon, Maria Reva, Mickael Zbili, Henry Markram, Werner Van Geit, Lida Kanari

FENS Forum 2024

ePoster

Estimation of neuronal biophysical parameters in the presence of experimental noise using computer simulations and probabilistic inference methods

Dániel Terbe, Balázs Szabó, Szabolcs Káli

FENS Forum 2024

ePoster

Exploring biophysical and biochemical mechanisms of neuron-astrocyte network models

Tiina Manninen, Jugoslava Aćimović, Marja-Leena Linne

FENS Forum 2024

ePoster

Impact of inter-areal connectivity on sensory processing in a biophysically-detailed model of two interacting cortical areas

Sirio Bolaños Puchet, András Ecker, Daniela Egas Santander, James B. Isbister, Christoph Pokorny, Michael W. Reimann

FENS Forum 2024

ePoster

Intrinsic biophysical properties and extrinsic spatial experience collaboratively prime CA1 pyramidal cells to replay during sharp-wave ripples

Xiaomin Zhang, Jules Auguste Lubetzki, Peter Jonas, Fritjof Helmchen

FENS Forum 2024

ePoster

Signal integration and competition in a biophysical model of the substantia nigra pars reticulata

William Scott Thompson, J. J. Johannes Hjorth, Alex Kozlov, Gilad Silberberg, Jeanette Hellgren Kotaleski, Sten Grillner

FENS Forum 2024