Neuronal Networks
Arvind Kumar
Postdoctoral researcher positions are available in computational neuroscience. The projects will entail modelling of biological neural networks (either reduced rate models or data-driven biophysical models) or analysis of neural data. Each selected candidate will work in close collaboration with other PIs in the dBRAIN consortium. dBRAIN is an interdisciplinary initiative to better understand neurodegenerative diseases such as Parkinson’s disease and Alzheimer’s disease. We combine computational modeling, machine learning and topological data analysis to identify causal links among disease biomarkers and disease symptoms. This understanding should improve diagnosis, improve prediction of disease progression, and suggest better therapies. There are 3 positions available, and the selected candidates will work with Arvind Kumar [https://www.kth.se/profile/arvindku?l=en], Jeanette Hellgren Kotaleski [https://www.kth.se/profile/jeanette?l=en] and Erik Fransen [https://www.kth.se/profile/erikf?l=en]. Apply: https://www.kth.se/en/om/work-at-kth/lediga-jobb/what:job/jobID:390546/where:4/
More than a beast growing in a passive brain: excitation and inhibition drive epilepsy and glioma progression
Gliomas are brain tumors formed by networks of connected tumor cells, nested in and interacting with neuronal networks. Neuronal activity interferes with tumor growth, and the occurrence of seizures affects glioma prognosis, while the developing tumor itself triggers seizures in the infiltrated cortex. Oncometabolites produced by tumor cells, together with neurotransmitters, affect both the generation of epileptic activity by neurons and the growth of glioma cells through synaptic mechanisms involving both GABAergic/chloride pathways and glutamatergic signaling. From a clinical standpoint, this constitutes a paradox: the occurrence of epilepsy is associated with a better glioma prognosis, yet growing tumors are epileptogenic. This lecture will review how inhibitory and excitatory signaling drives glioma growth and how epileptic and oncological processes interfere with each other, with a special focus on the human brain.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can subsequently be retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
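To make the two ingredients concrete, here is a minimal sketch (an illustration under simplifying assumptions, not the authors' model): attribute and event codes are bound by elementwise multiplication in a bipolar VSA, the binarized superposition is stored as a fixed point with a Hopfield outer-product rule, and a corrupted cue is cleaned up and unbound. All dimensions and names are illustrative.

```python
# Illustrative sketch: VSA binding + Hopfield associative memory.
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                  # dimensionality of distributed patterns

def rand_bipolar(n):
    return rng.choice([-1.0, 1.0], size=n)

# Building blocks: random bipolar codes for attributes and events
attrs = {k: rand_bipolar(N) for k in ("first", "second", "third")}
events = {k: rand_bipolar(N) for k in ("A", "B", "C")}

# A structure = binarized superposition of attribute (*) event bindings
def make_structure(pairs):
    return np.sign(sum(attrs[a] * events[e] for a, e in pairs))

structures = [make_structure([("first", "A"), ("second", "B"), ("third", "C")]),
              make_structure([("first", "C"), ("second", "A"), ("third", "B")])]

# Store the binarized patterns as fixed points (Hopfield outer-product rule)
W = sum(np.outer(p, p) for p in structures) / N
np.fill_diagonal(W, 0.0)

# Retrieve from a partial cue: corrupt 25% of entries, then iterate to the fixed point
cue = structures[0].copy()
flip = rng.choice(N, size=N // 4, replace=False)
cue[flip] *= -1
for _ in range(10):
    cue = np.sign(W @ cue)
print("overlap with stored structure:", cue @ structures[0] / N)   # ~1.0

# Unbinding with an attribute code recovers the building block (the event)
decoded = cue * attrs["first"]
print("best-matching event:", max(events, key=lambda e: decoded @ events[e]))  # "A"
```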
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single-neuron properties shape representation geometry in early sensory areas, and (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
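As a numerical companion to the capacity statement above, the following sketch (my addition, not the speaker's method) reproduces the classical point-cloud baseline that manifold capacity theory generalizes: random dichotomies of P random points in N dimensions stop being linearly separable around P = 2N.

```python
# Illustrative sketch: linear separability of random dichotomies vs. load P/N.
import numpy as np

rng = np.random.default_rng(1)

def separable(P, N, max_iter=2000):
    """Approximate separability test via the (batch) perceptron algorithm."""
    X = rng.standard_normal((P, N))
    y = rng.choice([-1.0, 1.0], size=P)
    w = np.zeros(N)
    for _ in range(max_iter):
        miss = y * (X @ w) <= 0            # currently misclassified points
        if not miss.any():
            return True                    # a separating hyperplane was found
        w += (y[miss][:, None] * X[miss]).sum(axis=0)
    return False                           # treated as non-separable

N = 30
for alpha in (1.0, 1.5, 2.0, 2.5, 3.0):
    frac = np.mean([separable(int(alpha * N), N) for _ in range(20)])
    print(f"P/N = {alpha:.1f}: fraction separable = {frac:.2f}")
# The fraction drops from ~1 to ~0 around P/N = 2, the perceptron capacity;
# manifold capacity theory replaces the points with object manifolds.
```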
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation, and computation by avalanche-supporting networks, is of interest to physics, computer science (computation theory as well as statistical and machine learning) and neuroscience. Here we show that computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), carried out by logic automata (motifs) organized into computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and their rank-ordering by function probability, and its relationship to symmetry in function space. We also show that the optimal fraction of inhibition observed here agrees with results in computational neuroscience on optimal information processing.
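A toy version of the sampling experiment reads as follows (a sketch under assumed parameters, not the authors' model): random two-layer threshold cascades with a given fraction of inhibitory connections are sampled, and the Boolean functions they compute on two inputs are tallied.

```python
# Illustrative sketch: tally Boolean functions computed by random threshold cascades.
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

def random_cascade_function(n_hidden=4, p_inh=0.3):
    """Truth table (4-bit tuple) of a random two-layer threshold cascade."""
    s_in = rng.choice([-1.0, 1.0], size=(n_hidden, 2), p=[p_inh, 1 - p_inh])
    w_in = s_in * rng.random((n_hidden, 2))          # signed input weights
    s_out = rng.choice([-1.0, 1.0], size=n_hidden, p=[p_inh, 1 - p_inh])
    w_out = s_out * rng.random(n_hidden)             # signed readout weights
    th_h, th_o = rng.random(n_hidden), rng.random()
    table = []
    for x in ((0, 0), (0, 1), (1, 0), (1, 1)):
        h = (w_in @ np.array(x, dtype=float) > th_h)   # hidden threshold units
        table.append(int(w_out @ h > th_o))            # readout threshold unit
    return tuple(table)

NAMES = {(0, 0, 0, 1): "AND", (0, 1, 1, 1): "OR", (0, 1, 1, 0): "XOR"}
counts = Counter(random_cascade_function() for _ in range(20000))
for fn, c in counts.most_common(8):
    print(NAMES.get(fn, fn), c / 20000)
# Linearly separable functions (AND, OR, ...) dominate the rank-ordering, while
# complex ones like XOR are rare, and XOR is only possible at all when some
# connections are inhibitory (an all-excitatory cascade is monotone).
```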
Molecular Logic of Synapse Organization and Plasticity
Connections between nerve cells, called synapses, are the fundamental units of communication and information processing in the brain. The accurate wiring of neurons through synapses into neural networks or circuits is essential for brain organization. Neuronal networks are sculpted and refined throughout life by constant, activity-dependent adjustment of the strength of synaptic communication, a process known as synaptic plasticity. Deficits in the development or plasticity of synapses underlie various neuropsychiatric disorders, including autism, schizophrenia and intellectual disability. The Siddiqui lab's research program comprises three major themes. First, to assess how biochemical switches control the activity of synapse-organizing proteins, how these switches act through their binding partners, and how these processes are regulated to correct impaired synaptic function in disease. Second, to investigate how synapse organizers regulate the specificity of neuronal circuit development and how defined circuits contribute to cognition and behaviour. Third, to address how synapses are formed in the developing brain and maintained in the mature brain, and how microcircuits formed by synapses are refined to fine-tune information processing in the brain. Together, these studies have generated fundamental new knowledge about neuronal circuit development and plasticity and have enabled us to identify targets for therapeutic intervention.
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single-neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated primarily by nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
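As a concrete example of the low-/high-pass interaction described above, the following sketch (a generic textbook model, not one of the talk's case studies) computes the impedance of a passive membrane augmented with one slow negative-feedback variable; the feedback acts as a high-pass filter and carves a band-pass resonance out of the membrane's low-pass response.

```python
# Illustrative sketch: band-pass impedance from a low-pass membrane + slow feedback.
import numpy as np

C, gL = 1.0, 0.1       # membrane capacitance (uF/cm^2) and leak conductance (mS/cm^2)
g1, tau = 0.5, 100.0   # strength and time constant (ms) of the slow negative feedback

# Linear system: C dV/dt = -gL*V - g1*x + I(t),   tau dx/dt = V - x
f = np.logspace(-1, 2, 400)                 # input frequency (Hz)
w = 2 * np.pi * f / 1000.0                  # angular frequency (rad/ms)
Z = 1.0 / (1j * w * C + gL + g1 / (1 + 1j * w * tau))
print(f"resonant frequency ~ {f[np.argmax(np.abs(Z))]:.1f} Hz")
# With g1 = 0 the impedance is monotonically low-pass; the slow negative
# feedback attenuates low frequencies (a high-pass effect), and their
# interaction produces the band-pass peak, i.e., subthreshold resonance.
```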
Astroglial modulation of the antidepressant action of deep brain and bright light stimulation
Although major depression is now the most common psychiatric disorder, successful antidepressant treatment is still difficult to achieve. A better understanding of the mechanisms of action of current antidepressant treatments is therefore needed to ultimately identify new targets and enhance beneficial effects. Given the intimate relationship between astrocytes and neurons at synapses, and the ability of astrocytes to "sense" neuronal communication and release gliotransmitters, an attractive hypothesis is emerging that the effects of antidepressants on brain function could be, at least in part, modulated by direct influences of astrocytes on neuronal networks. We will present two preclinical studies revealing a permissive role of glia in the antidepressant response: i) control of the antidepressant-like effects of rat prefrontal cortex Deep Brain Stimulation (DBS) by astroglia; ii) modulation of the antidepressant efficacy of Bright Light Stimulation (BLS) by lateral habenula astroglia. We therefore propose that an unaltered neuronal-glial system constitutes a major prerequisite for optimizing the antidepressant efficacy of DBS or BLS. Collectively, these results also pave the way to the development of safer and more effective antidepressant strategies.
2nd In-Vitro 2D & 3D Neuronal Networks Summit
The event is open to everyone interested in Neuroscience, Cell Biology, Drug Discovery, Disease Modeling, and Bio/Neuroengineering! This meeting is a platform bringing scientists from all over the world together and fostering scientific exchange and collaboration.
GeNN
Large-scale numerical simulations of brain circuit models are important for generating hypotheses about brain function and testing their consistency and plausibility. Similarly, spiking neural networks are gaining traction in machine learning, with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks, addressing the challenge of efficient simulation. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface that does not require in-depth technical knowledge from its users. GeNN was originally developed as a pure C++ and CUDA library, but we have subsequently added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN, and show some simple examples of how it is used and how it interacts with other open-source frameworks such as Brian2GeNN and PyNN.
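For a flavor of the interface, here is a minimal PyGeNN sketch; names follow the GeNN 4.x Python API as I understand it and may differ in other versions, so treat it as indicative rather than definitive.

```python
# Illustrative PyGeNN sketch (GeNN 4.x-style API; details may differ by version):
# 100 LIF neurons driven by a DC current source, simulated for 100 ms.
from pygenn.genn_model import GeNNModel

model = GeNNModel("float", "lif_demo")     # single-precision model
model.dT = 0.1                             # simulation timestep (ms)

lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
              "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 2.0}
lif_init = {"V": -65.0, "RefracTime": 0.0}
pop = model.add_neuron_population("pop", 100, "LIF", lif_params, lif_init)
model.add_current_source("drive", "DC", "pop", {"amp": 1.2}, {})

model.build()                              # generate and compile GPU code
model.load()
while model.t < 100.0:
    model.step_time()

pop.pull_var_from_device("V")              # copy membrane potentials to host
print(pop.vars["V"].view[:5])
```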
Structure, Function, and Learning in Distributed Neuronal Networks
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.
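As a concrete (and deliberately simple) example of a biologically plausible learning rule of the kind referred to above, though not the rule developed in the talk, Oja's rule updates each weight using only locally available pre- and postsynaptic activity, yet performs principal component extraction:

```python
# Illustrative sketch: Oja's rule, a local (biologically plausible) Hebbian rule.
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[3.0, 1.0],                 # inputs with one dominant
                [1.0, 1.0]])                # direction of variance
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w, eta = rng.standard_normal(2), 0.01
for x in X:
    y = w @ x                               # postsynaptic activity
    w += eta * y * (x - y * w)              # Hebbian term minus local decay

top_pc = np.linalg.eigh(cov)[1][:, -1]      # leading principal component
print("alignment with top PC:", abs(w @ top_pc) / np.linalg.norm(w))  # ~1.0
```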
Networking—the key to success… especially in the brain
In our everyday lives, we form connections and build up social networks that allow us to function successfully as individuals and as a society. Our social networks tend to include well-connected individuals who link us to other groups of people that we might otherwise have limited access to. In addition, we are more likely to befriend individuals who a) live nearby and b) have mutual friends. Interestingly, neurons tend to do the same…until development is perturbed. Just like social networks, neuronal networks require highly connected hubs to enable efficient communication at minimal cost (you can’t befriend everybody you meet, nor can every neuron wire with every other!). This talk will cover some of Alex’s work showing that microscopic (cellular-scale) brain networks inferred from spontaneous activity show complex topology similar to that previously described in macroscopic human brain scans. The talk will also discuss what happens when neurodevelopment is disrupted, in the case of a monogenic disorder called Rett Syndrome. This will include simulations of neuronal activity and the effects of manipulating model parameters, as well as what happens when we manipulate real developing networks using optogenetics. If functional development can be restored in atypical networks, this may have implications for the treatment of neurodevelopmental disorders like Rett Syndrome.
Gap Junction Coupling between Photoreceptors
Simply put, the goal of my research is to describe the neuronal circuitry of the retina. The organization of the mammalian retina is certainly complex, but it is not chaotic. Although there are many cell types, most adhere to a relatively constant morphology and are distributed in non-random mosaics. Furthermore, each cell type ramifies at a characteristic depth in the retina and makes a stereotyped set of synaptic connections. In other words, these neurons form a series of local circuits across the retina. The next step is to identify the simplest and commonest of these repeating neural circuits: they are the building blocks of retinal function. Thought of in this way, the retina is a fabulous model for the rest of the CNS. We are interested in identifying specific circuits and cell types that support the different functions of the retina. For example, there appear to be specific pathways for rod- and cone-mediated vision. Rods are used under low-light conditions, and rod circuitry is specialized for high sensitivity when photons are scarce (think starlight, when you’re out camping). The hallmark of the rod-mediated system is monochromatic vision. In contrast, the cone circuits are specialized for high acuity and color vision under relatively bright or daylight conditions. Individual neurons may be filled with fluorescent dyes under visual control, achieved by impaling the cell with a glass microelectrode using a 3D micromanipulator. We are also interested in the diffusion of dye through coupled neuronal networks in the retina. Dye filling is combined with antibody labeling to reveal neuronal connections and circuits, and this triple-labeled material may be viewed and reconstructed in three dimensions by multi-channel confocal microscopy. We have our own confocal microscope facility in the department, and timeslots are available to students in my lab.
Acetylcholine modulation of short-term plasticity is critical to reliable long-term plasticity in hippocampal synapses
CA3-CA1 synapses in the hippocampus are the initial locus of episodic memory. The action of acetylcholine alters cellular excitability, modifies neuronal networks, and triggers secondary signaling that directly affects long-term plasticity (LTP), the cellular underpinning of memory. It is therefore considered a critical regulator of learning and memory in the brain. Its action via M4 metabotropic receptors in the presynaptic terminals of CA3 neurons and M1 metabotropic receptors in the postsynaptic spines of CA1 neurons produces rich dynamics across multiple timescales. We developed a model that describes the activation of postsynaptic M1 receptors, which leads to IP3 production from membrane PIP2 molecules. The binding of IP3 to IP3 receptors in the endoplasmic reticulum (ER) ultimately causes calcium release. This calcium release from the ER activates potassium channels, such as the calcium-activated SK channels, and alters different aspects of synaptic signaling. In an independent signaling cascade, M1 receptors also directly suppress SK channels and the voltage-activated KCNQ2/3 channels, enhancing postsynaptic excitability. In the CA3 presynaptic terminal, we model the reduction of the voltage sensitivity of voltage-gated calcium channels (VGCCs) and the resulting suppression of neurotransmitter release by the action of M4 receptors. Our results show that the reduced initial release probability caused by acetylcholine alters short-term plasticity (STP) dynamics. We characterize the dichotomy between suppressed neurotransmitter release from CA3 neurons and the enhanced excitability of the postsynaptic CA1 spine. Mechanisms underlying STP operate over a few seconds, while those responsible for LTP last for hours, and the two forms of plasticity have been linked with very distinct functions in the brain. We show that the concurrent suppression of neurotransmitter release and increase in postsynaptic sensitivity conserves neurotransmitter vesicles and enhances the reliability of plasticity. Our work establishes a relationship between STP and LTP coordinated by neuromodulation with acetylcholine.
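The presynaptic effect can be caricatured with the standard Tsodyks-Markram short-term plasticity model (a generic sketch with assumed parameters, not the authors' receptor-level model): lowering the baseline release probability U, as presynaptic M4 activation does, shifts a synapse from depressing toward facilitating and leaves more vesicles in reserve across a spike train.

```python
# Illustrative sketch: Tsodyks-Markram synapse, high vs. ACh-reduced release probability.
import numpy as np

def tm_release(U, spike_times, tau_rec=100.0, tau_fac=500.0):
    """Fraction of resources released at each spike of a TM synapse."""
    R, u, last_t, rel = 1.0, 0.0, None, []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            R = 1.0 - (1.0 - R) * np.exp(-dt / tau_rec)   # resources recover
            u = U + (u - U) * np.exp(-dt / tau_fac)       # facilitation decays
        u = u + U * (1.0 - u)                             # each spike facilitates u
        rel.append(R * u)                                 # released fraction
        R -= R * u                                        # resource depletion
        last_t = t
    return np.array(rel)

train = np.arange(0.0, 500.0, 50.0)                       # 20 Hz spike train (ms)
for U in (0.6, 0.1):                                      # control vs. low (ACh-like) U
    r = tm_release(U, train)
    print(f"U = {U}: first release {r[0]:.2f}, last/first = {r[-1] / r[0]:.2f}")
# High U: strong initial release that depresses (ratio < 1).
# Low U: weak initial release that facilitates (ratio > 1), conserving vesicles.
```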
Robust Encoding of Abstract Rules by Distinct Neuronal Populations in Primate Visual Cortex
I will discuss our recent evidence showing that information about abstract rules can be decoded from neuronal activity in primate visual cortex even in the absence of sensory stimulation. Furthermore, I will show that rule information is greatest among neurons with the least visual activity and the weakest coupling to local neuronal networks. In addition, I will talk about recent developments in large-scale neurophysiological techniques in nonhuman primates.
Myelination: another form of brain plasticity
Studies of neural circuit plasticity focus almost exclusively on functional and structural changes of neuronal synapses. In recent years, however, myelin plasticity has emerged as a potential modulator of neuronal networks. Myelination of previously unmyelinated axons, and changes in the structure of already-myelinated axons, can have large effects on the function of neuronal networks. Yet myelination has mostly been studied in relation to its functional and metabolic roles. Myelin modifications are increasingly being implicated as a mechanism for sensory-motor learning, and unpublished data from our lab indicate that myelination also occurs during cognitive, non-motor learning. It is, however, unclear how specific these myelin changes are, and even less is known about the mechanisms underlying learning-evoked myelin plasticity. In this journal club, Dr Giulia Bonetto will provide a general overview of myelin plasticity. Additionally, she will present new data addressing the role of myelin plasticity in cognitive, non-motor learning.
Sex-Specific Brain Transcriptional Signatures in Human MDD and their Correlates in Mouse Models of Depression
Major depressive disorder (MDD) is a sexually dimorphic disease. This sexual dimorphism is believed to result from sex-specific molecular alterations in the functional pathways that regulate how men and women cope with daily life stress. Transcriptional changes associated with epigenetic alterations have been observed in the brains of men and women with depression, and similar changes have been reported in different animal models of stress-induced depressive-like behaviors. In fact, most of our knowledge of the biological basis of MDD is derived from studies of chronic stress models in rodents. However, while these models capture certain features of MDD, the extent to which they reproduce the molecular pathology of the human syndrome remains unknown, and the functional consequences of these changes on the neuronal networks controlling stress responses are poorly understood. In this presentation, we will first address the extent to which transcriptional signatures associated with MDD compare between men and women. We will then turn to the capacity of different mouse models of chronic stress to recapitulate some of the transcriptional alterations associated with the expression of MDD in both sexes. Finally, we will briefly elaborate on the functional consequences of these changes at the neuronal level and conclude with an integrative perspective on the contribution of sex-specific transcriptional profiles to the expression of stress responses and MDD in men and women.
Correlations, chaos, and criticality in neural networks
The remarkable information-processing properties of biological and artificial neuronal networks alike arise from the interaction of large numbers of neurons. A central quest is thus to characterize their collective states. Moreover, the directed coupling between pairs of neurons and their continuous dissipation of energy place the dynamics of neuronal networks outside thermodynamic equilibrium. Tools from non-equilibrium statistical mechanics and field theory are thus instrumental for obtaining a quantitative understanding. We here present progress with this recent approach [1]. On the experimental side, we show how correlations between pairs of neurons are informative about the dynamics of cortical networks: they are poised near a transition to chaos [2]. Close to this transition, we find prolonged sequential memory for past signals [3]. In the chaotic regime, networks offer representations of information whose dimensionality expands with time. We show how this mechanism aids classification performance [4]. Together these works illustrate the fruitful interplay between theoretical physics, neuronal networks, and neural information processing.
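For orientation, the transition to chaos referred to in [2] is the one exhibited by the classic random rate network of Sompolinsky, Crisanti and Sommers; the following minimal simulation (background illustration, not the paper's analysis) shows the change in collective state as the coupling gain g crosses 1.

```python
# Illustrative sketch: transition to chaos in a random rate network, dx/dt = -x + J*tanh(x).
import numpy as np

rng = np.random.default_rng(4)
N, dt, steps = 500, 0.05, 4000

def stationary_activity(g):
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random coupling, gain g
    x = 0.5 * rng.standard_normal(N)
    for _ in range(steps):
        x += dt * (-x + J @ np.tanh(x))                # Euler integration
    return np.std(np.tanh(x))

for g in (0.8, 1.5):
    print(f"g = {g}: stationary activity std = {stationary_activity(g):.3f}")
# g < 1: activity decays to the quiescent fixed point (std ~ 0).
# g > 1: self-sustained, chaotic fluctuations persist (std > 0).
```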
Bridging scales – combining functional ultrasound imaging, optogenetics, and electrophysiology to study neuronal networks underlying behavior
The interaction of sensory and motor information to shape neuronal representations in mouse cortical networks
The neurons in our brain never function in isolation; they are organized into complex circuits that perform highly specialized information-processing tasks and transfer information through large neuronal networks. The aim of Janelle Pakan's research group is to better understand how neural circuits function during the transformation of information from sensory perception to behavioural output. Importantly, they also aim to further understand the cell-type-specific processes that interrupt the flow of information through neural circuits in neurodegenerative disorders with dementia. The Pakan group uses innovative neuroanatomical tracing techniques, advanced in vivo two-photon imaging, and genetically targeted manipulations of neuronal activity to investigate the cell-type-specific microcircuitry of the cerebral cortex, the macrocircuitry of cortical output to subcortical structures, and the functional circuitry underlying processes of sensory perception and motor behaviour.
Fast and deep neuromorphic learning with time-to-first-spike coding
Engineered pattern-recognition systems strive for short time-to-solution and low energy-to-solution characteristics. This is one of the main driving forces behind the development of neuromorphic devices. For both these devices and their biological archetypes, it corresponds to using as few spikes as early as possible. Few and early spikes are the founding principle of the time-to-first-spike coding scheme. Within this framework, we have developed a spike-timing-based learning algorithm, which we used to train neuronal networks on the mixed-signal neuromorphic platform BrainScaleS-2. We derive, from first principles, error-backpropagation-based learning in networks of leaky integrate-and-fire (LIF) neurons relying only on spike times, for specific configurations of neuronal and synaptic time constants. We explicitly examine applicability to neuromorphic substrates by studying the effects of reduced weight precision and range, as well as of parameter noise. We demonstrate the feasibility of our approach on continuous and discrete data spaces, both in software simulations and on BrainScaleS-2. This narrows the gap between previous models of first-spike-time learning and biological neuronal dynamics, and paves the way for fast and energy-efficient neuromorphic applications.
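The coding scheme itself is easy to illustrate (a toy sketch, not the BrainScaleS-2 implementation): for a LIF neuron receiving a single weighted input spike, the latency of the first output spike decreases monotonically with the weight, so the spike time itself carries the information that the learning rule backpropagates through.

```python
# Illustrative sketch: time-to-first-spike coding with a current-based LIF neuron.
import numpy as np

def first_spike_time(w, tau_m=10.0, tau_s=5.0, v_th=1.0, dt=0.01, t_max=50.0):
    """Latency of the first output spike for a single input spike of weight w at t = 0."""
    v, i_syn = 0.0, w
    for step in range(int(t_max / dt)):
        i_syn += dt * (-i_syn / tau_s)     # exponentially decaying synaptic current
        v += dt * (-v / tau_m + i_syn)     # leaky integration of the current
        if v >= v_th:
            return step * dt               # first threshold crossing = the code
    return np.inf                          # too weak: no spike at all

for w in (0.3, 0.5, 1.0, 2.0):
    print(f"weight {w}: first spike at {first_spike_time(w):.2f} ms")
# Latency decreases monotonically with the weight (and w = 0.3 never spikes);
# gradients of these spike times with respect to the weights are what the
# error-backpropagation rule exploits.
```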
Computing in neuronal networks with plasticity via all-optical bidirectional interfacing
Bernstein Conference 2024
Local E/I Balance and Spontaneous Dynamics in Neuronal Networks
Bernstein Conference 2024
The role of gap junctions and clustered connectivity in emergent synchronisation patterns of spiking inhibitory neuronal networks
Bernstein Conference 2024
Achieving the three logic operations using engineered biological neuronal networks
FENS Forum 2024
Computing in neuronal networks with plasticity by all-optical interfacing
FENS Forum 2024
Dynamical complexity in engineered biological neuronal networks with directional and modular connections
FENS Forum 2024
Effect of optogenetic modulation of neuronal ensembles in cultured neuronal networks
FENS Forum 2024
Exploring the role of the primary cilium in homeostatic plasticity in hiPSC-derived neuronal networks
FENS Forum 2024
Exploring synergistic supraspinal and sensory effects on adaptive plasticity of the neuronal networks after spinal cord injury
FENS Forum 2024
Forgetting slows down reversal learning in behaving mice and artificial neuronal networks
FENS Forum 2024
Identification of kainic acid-mediated concentration-dependent responses on human cortical neuronal networks in vitro
FENS Forum 2024
Integrating network activity with transcriptomic profiling in hiPSCs-derived neuronal networks to understand the molecular drivers of functional heterogeneity in the context of neurodevelopmental disorders
FENS Forum 2024
Investigating information transfer at the single-cell level using ultra low-density neuronal networks
FENS Forum 2024
Microglia modulate complex neuronal networks in acute brain slices despite their rapid, ATP-related phenotypic transformation
FENS Forum 2024
Optimizing electrical stimulation parameters for human-derived neuronal networks: An investigation into the reliability of evoked responses
FENS Forum 2024
Plasticity in iPSC-derived 2D cortical neuronal networks
FENS Forum 2024
Reservoir computing using cultured neuronal networks with modular topology
FENS Forum 2024
Time and effect of drugs diffusion in neuronal networks derived from human induced pluripotent stem cells
FENS Forum 2024
Unraveling mTORopathies: mTOR hyperactivation induces mutation-specific functional phenotypes in human neuronal networks
FENS Forum 2024