Latest

SeminarNeuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
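
To make the core prediction-error computation concrete, here is a minimal numpy sketch of a generic predictive-coding update, in which an internal estimate is revised in proportion to the error between observed and predicted input. It illustrates the general principle only; the generative map W, the learning rate, and all sizes are invented, and nothing here is taken from the speaker's models.

```python
import numpy as np

# A minimal predictive-coding update, for intuition only: an internal
# estimate mu is revised by the prediction error between the observed
# input and the model's prediction. All sizes and rates are made up.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))        # assumed generative map: latent -> sensory
latent_true = rng.normal(size=3)    # the hidden cause to be inferred
x = W @ latent_true                 # sensory observation

mu = np.zeros(3)                    # internal estimate of the cause
lr = 0.01
for _ in range(2000):
    pred_error = x - W @ mu         # prediction-error signal
    mu += lr * (W.T @ pred_error)   # error-driven update of the internal model
print("residual error:", float(np.linalg.norm(x - W @ mu)))
```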

SeminarNeuroscience

Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism

Vasileios Zikopoulos
Boston University
Nov 3, 2025

Thalamic networks, at the core of thalamocortical and thalamosubcortical communications, underlie processes of perception, attention, memory, emotions, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation to understand the heterogeneous disruption of these networks in sleep disorders, attention deficits, and cognitive and affective impairments in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.

SeminarNeuroscience

Gene regulation networks in nervous system cancers: identification of novel drug targets

Panagiotis Politis
Center for Basic Research, Biomedical Research Foundation of the Academy of Athens
Jun 20, 2025
SeminarNeuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 12, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.

SeminarNeuroscienceRecording

Functional Plasticity in the Language Network – evidence from Neuroimaging and Neurostimulation

Gesa Hartwigsen
University of Leipzig, Germany
May 20, 2025

Efficient cognition requires flexible interactions between distributed neural networks in the human brain. These networks adapt to challenges by flexibly recruiting different regions and connections. In this talk, I will discuss how we study functional network plasticity and reorganization with combined neurostimulation and neuroimaging across the adult life span. I will argue that short-term plasticity enables flexible adaptation to challenges, via functional reorganization. My key hypothesis is that disruption of higher-level cognitive functions such as language can be compensated for by the recruitment of domain-general networks in our brain. Examples from healthy young brains illustrate how neurostimulation can be used to temporarily interfere with efficient processing, probing short-term network plasticity at the systems level. Examples from people with dyslexia help to better understand network disorders in the language domain and outline the potential of facilitatory neurostimulation for treatment. I will also discuss examples from aging brains where plasticity helps to compensate for loss of function. Finally, examples from lesioned brains after stroke provide insight into the brain’s potential for long-term reorganization and recovery of function. Collectively, these results challenge the view of a modular organization of the human brain and argue for a flexible redistribution of function via systems plasticity.

SeminarNeuroscienceRecording

Brain Emulation Challenge Workshop

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Feb 21, 2025

The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, as applied to the forthcoming Brain Emulation Challenge.

SeminarNeuroscience

Memory formation in hippocampal microcircuit

Andreakos Nikolaos
Visiting Scientist, School of Computer Science, University of Lincoln, Scientific Associate, National and Kapodistrian University of Athens
Feb 7, 2025

The centre of memory in the brain is the medial temporal lobe (MTL), and especially the hippocampus. In our research, we upgraded a flexible, brain-inspired computational microcircuit model of the CA1 region of the mammalian hippocampus and used it to examine how information retrieval is affected under different conditions. Six models were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of the feedforward excitation was the most effective way to recall memories; in other words, stronger feedforward excitation allows the system to access stored memories more accurately.
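
As a rough illustration of the kind of manipulation described (varying the strength of feedforward drive during recall), the following is a generic Hopfield-style sketch, assuming Hebbian pattern storage and a degraded cue. It is not the upgraded CA1 microcircuit model itself, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# A generic autoassociative-recall sketch (Hopfield-style), not the CA1
# microcircuit from the talk: it only illustrates how the balance between
# recurrent and feedforward (cue) drive shapes retrieval of a stored pattern.
N, P = 200, 30                                # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N               # Hebbian weight matrix
np.fill_diagonal(W, 0)

def recall_overlap(ff_gain, flip=0.2, steps=30):
    target = patterns[0]
    cue = target * np.where(rng.random(N) < flip, -1, 1)   # degraded cue
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s + ff_gain * cue)    # recurrent + feedforward drive
        s[s == 0] = 1
    return float(s @ target) / N              # 1.0 = perfect recall

for g in [0.0, 0.3, 1.0, 3.0]:
    print(f"feedforward gain {g}: overlap {recall_overlap(g):+.2f}")
```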

SeminarNeuroscience

Analyzing Network-Level Brain Processing and Plasticity Using Molecular Neuroimaging

Alan Jasanoff
Massachusetts Institute of Technology
Jan 28, 2025

Behavior and cognition depend on the integrated action of neural structures and populations distributed throughout the brain. We recently developed a set of molecular imaging tools that enable multiregional processing and plasticity in neural networks to be studied at a brain-wide scale in rodents and nonhuman primates. Here we will describe how a novel genetically encoded activity reporter enables information flow in virally labeled neural circuitry to be monitored by fMRI. Using the reporter to perform functional imaging of synaptically defined neural populations in the rat somatosensory system, we show how activity is transformed within brain regions to yield characteristics specific to distinct output projections. We also show how this approach enables regional activity to be modeled in terms of inputs, in a paradigm that we are extending to address circuit-level origins of functional specialization in marmoset brains. In the second part of the talk, we will discuss how another genetic tool for MRI enables systematic studies of the relationship between anatomical and functional connectivity in the mouse brain. We show that variations in physical and functional connectivity can be dissociated both across individual subjects and over experience. We also use the tool to examine brain-wide relationships between plasticity and activity during an opioid treatment. This work demonstrates the possibility of studying diverse brain-wide processing phenomena using molecular neuroimaging.

SeminarNeuroscience

The Brain Prize winners' webinar

Larry Abbott, Haim Sompolinsky, Terry Sejnowski
Columbia University; Harvard University / Hebrew University; Salk Institute
Nov 30, 2024

This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.

SeminarNeuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

SeminarNeuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 29, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus-independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic generalized linear models (PsyTrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
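
Gershman's policy-compression solution has a known fixed-point form, pi(a|s) proportional to p(a) * exp(beta * Q(s, a)), with p(a) the marginal "default" policy, and it can be solved by a Blahut-Arimoto-style iteration. The sketch below, with made-up Q-values, shows how a low information budget (small beta) collapses the policy toward a state-independent default, i.e., perseveration.

```python
import numpy as np

# Sketch of the policy-compression fixed point:
#   pi(a|s) ~ p(a) * exp(beta * Q(s, a)),
# where p(a) is the marginal ("default") policy. Q-values are invented.
rng = np.random.default_rng(2)
n_states, n_actions = 4, 3
Q = rng.random((n_states, n_actions))      # arbitrary action values
rho = np.full(n_states, 1 / n_states)      # state distribution

def compressed_policy(beta, iters=200):
    p = np.full(n_actions, 1 / n_actions)  # marginal (default) policy
    for _ in range(iters):
        logits = np.log(p) + beta * Q
        pi = np.exp(logits - logits.max(axis=1, keepdims=True))
        pi /= pi.sum(axis=1, keepdims=True)
        p = rho @ pi                        # re-estimate the default policy
    return pi

# Low beta: little state-dependence (a compressed, perseverative policy).
# High beta: the policy tracks Q more faithfully, at higher information cost.
for beta in [0.1, 10.0]:
    print(f"beta={beta}:\n{compressed_policy(beta).round(2)}")
```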

SeminarNeuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 16, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—and methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
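
To make the point that the comparison measure is itself a modeling choice, the sketch below applies two standard measures, linear CKA and ridge-regression predictivity, to the same synthetic systems. The data, candidate "models", and hyperparameters are placeholders, not the paper's framework, and the two scores need not rank candidates the same way.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two common systems-comparison measures applied to the same toy data.
def linear_cka(X, Y):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    hsic = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    return hsic / (np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro"))

def linear_predictivity(X, Y, lam=1e-3):
    # ridge regression from model features X to "brain" responses Y
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    B = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ Yc)
    resid = Yc - Xc @ B
    return 1 - resid.var() / Yc.var()

brain = rng.normal(size=(500, 30))                   # "target" system
model_a = brain @ rng.normal(size=(30, 30)) * 0.5 + rng.normal(size=(500, 30))
model_b = brain[:, :10] @ rng.normal(size=(10, 30))  # captures a subspace only

for name, m in [("model A", model_a), ("model B", model_b)]:
    print(name, "CKA:", round(linear_cka(m, brain), 3),
          "R^2:", round(linear_predictivity(m, brain), 3))
```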

SeminarNeuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 12, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems, a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning-based dynamical systems models.

SeminarNeuroscienceRecording

Characterizing the causal role of large-scale network interactions in supporting complex cognition

Michal Ramot
Weizmann Inst. of Science
May 7, 2024

Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.

SeminarNeuroscience

How are the epileptogenesis clocks ticking?

Cristina Reschke
RCSI
Apr 10, 2024

The epileptogenesis process is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks, permanently altering excitability. About 80% of protein-coding genes are under the influence of circadian rhythms. These are 24-hour endogenous rhythms that determine a large number of daily changes in physiology and behavior in our bodies. In the brain, the master clock regulates a large number of pathways that are important during epileptogenesis and established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.

SeminarNeuroscience

Roles of inhibition in stabilizing and shaping the response of cortical networks

Nicolas Brunel
Duke University
Apr 5, 2024

Inhibition has long been thought to stabilize the activity of cortical networks at low rates, and to significantly shape their response to sensory inputs. In this talk, I will describe three recent collaborative projects that shed light on these issues. (1) I will show how optogenetic excitation of inhibitory neurons is consistent with cortex being inhibition stabilized even in the absence of sensory inputs, and how these data can constrain the coupling strengths of E-I cortical network models. (2) Recent analysis of the effects of optogenetic excitation of pyramidal cells in V1 of mice and monkeys shows that in some cases this optogenetic input reshuffles the firing rates of neurons of the network, leaving the distribution of rates unaffected. I will show how this surprising effect can be reproduced in sufficiently strongly coupled E-I networks. (3) Another puzzle has been to understand the respective roles of different inhibitory subtypes in network stabilization. Recent data reveal a novel, state-dependent, paradoxical effect of weakening AMPAR-mediated synaptic currents onto SST cells. Mathematical analysis of a network model with multiple inhibitory cell types shows that this effect tells us in which conditions SST cells are required for network stabilization.
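
The paradoxical effect of input to inhibitory cells, the signature of an inhibition-stabilized network (ISN), can be reproduced in a two-population threshold-linear rate model. The sketch below uses hand-picked weights in the ISN regime (strong recurrent excitation), not parameters fitted to the data discussed in the talk.

```python
import numpy as np

# Minimal inhibition-stabilized network (ISN) sketch: a threshold-linear
# E-I rate model in which extra drive to the inhibitory population
# paradoxically *lowers* its steady-state rate. Weights are chosen by hand
# to sit in the ISN regime (w_ee > 1 with a stable fixed point).
w_ee, w_ei, w_ie, w_ii = 2.0, 2.5, 2.0, 1.0
g_e = 5.0                                    # fixed drive to E

def steady_state(g_i, dt=0.01, steps=20000):
    E = I = 0.0
    for _ in range(steps):                   # Euler integration to fixed point
        E += dt * (-E + max(0.0, w_ee * E - w_ei * I + g_e))
        I += dt * (-I + max(0.0, w_ie * E - w_ii * I + g_i))
    return E, I

for g_i in [1.0, 2.0]:
    E, I = steady_state(g_i)
    print(f"drive to I = {g_i}: E = {E:.2f}, I = {I:.2f}")
# Increasing g_i lowers the steady-state I rate: withdrawing excitatory
# activity removes recurrent drive to I faster than the extra input adds it.
```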

SeminarNeuroscience

The quest for brain identification

Enrico Amico
Aston University
Mar 21, 2024

In the 17th century, the physician Marcello Malpighi observed the existence of distinctive patterns of ridges and sweat glands on fingertips. This was a major breakthrough, and it originated a long and continuing quest for ways to uniquely identify individuals based on fingerprints, a technique still widely used today. It is only in the past few years that technologies and methodologies have achieved high-quality measures of an individual's brain to the extent that personality traits and behavior can be characterized. The concept of "fingerprints of the brain" is very novel and has been boosted by a seminal publication by Finn et al. in 2015. They were among the first to show that an individual's functional brain connectivity profile is both unique and reliable, similarly to a fingerprint, and that it is possible to identify an individual among a large group of subjects solely on the basis of his or her connectivity profile. Yet the discovery of brain fingerprints opened up a plethora of new questions. In particular, what exactly is the information encoded in brain connectivity patterns that ultimately leads to correctly differentiating someone's connectome from anybody else's? In other words, what makes our brains unique? In this talk I am going to partially address these open questions while keeping a personal viewpoint on the subject. I will outline the main findings, discuss potential issues, and propose future directions in the quest for the identifiability of human brain networks.
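
The identification procedure popularized by Finn et al. is easy to state in code: correlate each subject's connectivity profile from one session with every profile from a second session and pick the best match. The sketch below does this on synthetic profiles (a stable individual component plus session noise); all sizes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Finn-style identification sketch on synthetic connectivity profiles:
# each subject = stable individual component + independent session noise.
n_subjects, n_edges = 20, 500
individual = rng.normal(size=(n_subjects, n_edges))
session1 = individual + 0.8 * rng.normal(size=(n_subjects, n_edges))
session2 = individual + 0.8 * rng.normal(size=(n_subjects, n_edges))

# Correlate every session-1 profile with every session-2 profile.
z1 = (session1 - session1.mean(1, keepdims=True)) / session1.std(1, keepdims=True)
z2 = (session2 - session2.mean(1, keepdims=True)) / session2.std(1, keepdims=True)
corr = z1 @ z2.T / n_edges

# A subject is identified when their own retest profile is the best match.
accuracy = (corr.argmax(axis=1) == np.arange(n_subjects)).mean()
print(f"identification accuracy: {accuracy:.0%}")
```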

SeminarNeuroscience

Epileptic micronetworks and their clinical relevance

Michael Wenzel
Bonn University
Mar 13, 2024

A core aspect of clinical epileptology revolves around relating epileptic field potentials to underlying neural sources (e.g. an “epileptogenic focus”). Yet how neural population activity relates to epileptic field potentials, and ultimately to clinical phenomenology, remains far from understood. After a brief overview of this topic, this seminar will focus on unpublished work, with an emphasis on seizure-related focal spreading depression. The presented results will include hippocampal and neocortical chronic in vivo two-photon population imaging and local field potential recordings of epileptic micronetworks in mice, in the context of viral encephalitis or optogenetic stimulation. The findings are corroborated by invasive depth electrode recordings (macroelectrodes and BF microwires) in epilepsy patients during pre-surgical evaluation. The presented work carries general implications for clinical epileptology and basic epilepsy research.

SeminarNeuroscience

Maintaining Plasticity in Neural Networks

Clare Lyle
DeepMind
Mar 13, 2024

Nonstationarity presents a variety of challenges for machine learning systems. One surprising pathology which can arise in nonstationary learning problems is plasticity loss, whereby making progress on new learning objectives becomes more difficult as training progresses. Networks which are unable to adapt in response to changes in their environment experience plateaus or even declines in performance in highly non-stationary domains such as reinforcement learning, where the learner must quickly adapt to new information even after hundreds of millions of optimization steps. The loss of plasticity manifests in a cluster of related empirical phenomena which have been identified by a number of recent works, including the primacy bias, implicit under-parameterization, rank collapse, and capacity loss. While this phenomenon is widely observed, it is still not fully understood. This talk will present exciting recent results which shed light on the mechanisms driving the loss of plasticity in a variety of learning problems and survey methods to maintain network plasticity in non-stationary tasks, with a particular focus on deep reinforcement learning.
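
One diagnostic that recurs in this literature (e.g., for implicit under-parameterization and rank collapse) is the effective rank of a layer's activations, often summarized by the participation ratio of the activation covariance spectrum. The sketch below computes it on synthetic feature matrices standing in for a healthy and a collapsed network; no actual training is performed.

```python
import numpy as np

# Effective rank (participation ratio) of an activation matrix: a shrinking
# value over training is one reported symptom of plasticity/capacity loss.
# The two matrices below are synthetic stand-ins, not real network features.
def participation_ratio(acts):
    acts = acts - acts.mean(0)
    eig = np.linalg.eigvalsh(np.cov(acts.T))   # covariance spectrum
    eig = np.clip(eig, 0, None)
    return eig.sum() ** 2 / (eig ** 2).sum()

rng = np.random.default_rng(5)
healthy = rng.normal(size=(1000, 64))                    # full-rank features
collapsed = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 64))
collapsed += 0.05 * rng.normal(size=(1000, 64))          # nearly rank-3 features

print("effective rank, healthy features:   %.1f" % participation_ratio(healthy))
print("effective rank, collapsed features: %.1f" % participation_ratio(collapsed))
```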

SeminarNeuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 6, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals. The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.

SeminarNeuroscienceRecording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 8, 2023

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.

SeminarNeuroscience

Stroke : Brain networks and behavior

Maurizio Corbetta
Department of Neuroscience, University of Padova, Italy
Nov 2, 2023
SeminarNeuroscienceRecording

Neuroinflammation in Epilepsy: what have we learned from human brain tissue specimens ?

Eleonora Aronica
Amsterdam UMC
Oct 25, 2023

Epileptogenesis is a gradual and dynamic process leading to difficult-to-treat seizures. Several cellular, molecular, and pathophysiologic mechanisms are involved, including the activation of inflammatory processes. The use of human brain tissue represents a crucial strategy to advance our understanding of the underlying neuropathology and the molecular and cellular basis of epilepsy and related cognitive and behavioral comorbidities. The mounting evidence obtained during the past decade has emphasized the critical role of inflammation in the pathophysiological processes implicated in a large spectrum of genetic and acquired forms of focal epilepsy. Dissecting the cellular and molecular mediators of the pathological immune responses, and their convergent and divergent mechanisms, is a major prerequisite for delineating their role in the establishment of epileptogenic networks. The role of small regulatory molecules involved in the regulation of specific pro- and anti-inflammatory pathways, and the crosstalk between neuroinflammation and oxidative stress, will be addressed. The observations supporting the activation of both innate and adaptive immune responses in human focal epilepsy will be discussed and elaborated, highlighting specific inflammatory pathways as potential targets for antiepileptic, disease-modifying therapeutic strategies.

SeminarNeuroscience

Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks

Thomas Nowotny
University of Sussex
Oct 18, 2023
SeminarNeuroscienceRecording

Brain network communication: concepts, models and applications

Caio Seguin
Indiana University
Aug 25, 2023

Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
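
Two poles of the taxonomy can be contrasted in a few lines: routing via shortest paths versus diffusion-like spreading, here summarized by communicability, the matrix exponential of the adjacency matrix, which counts walks of all lengths with factorial attenuation. The toy graph below is a random symmetric network, not a real connectome, and may even come out disconnected (infinite hop count).

```python
import numpy as np

rng = np.random.default_rng(6)

# Shortest-path routing vs. diffusion-like communicability on a toy graph.
n = 12
A = (rng.random((n, n)) < 0.25).astype(float)
A = np.triu(A, 1); A = A + A.T                 # undirected, no self-loops

# Shortest path lengths (hops) via Floyd-Warshall; inf if disconnected.
D = np.where(A > 0, 1.0, np.inf)
np.fill_diagonal(D, 0)
for k in range(n):
    D = np.minimum(D, D[:, [k]] + D[[k], :])

# Communicability exp(A), computed by eigendecomposition of the symmetric A.
w, V = np.linalg.eigh(A)
G = V @ np.diag(np.exp(w)) @ V.T

i, j = 0, n - 1
print(f"hops {i}->{j}: {D[i, j]}, communicability: {G[i, j]:.2f}")
```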

SeminarNeuroscience

In vivo direct imaging of neuronal activity at high temporospatial resolution

Jang-Yeon Park
Sungkyunkwan University, Suwon, Korea
Jun 28, 2023

Advanced noninvasive neuroimaging methods provide valuable information on brain function, but they have obvious pros and cons in terms of temporal and spatial resolution. Functional magnetic resonance imaging (fMRI) using the blood-oxygenation-level-dependent (BOLD) effect provides good spatial resolution on the order of millimeters, but has a poor temporal resolution on the order of seconds due to slow hemodynamic responses to neuronal activation, providing only indirect information on neuronal activity. In contrast, electroencephalography (EEG) and magnetoencephalography (MEG) provide excellent temporal resolution in the millisecond range, but spatial information is limited to centimeter scales. Therefore, there has been a longstanding demand for noninvasive brain imaging methods capable of detecting neuronal activity at both high temporal and spatial resolution. In this talk, I will introduce a novel approach that enables Direct Imaging of Neuronal Activity (DIANA) using MRI, which can dynamically image neuronal spiking activity with millisecond precision, achieved by a data acquisition scheme of rapid 2D line scans synchronized with periodically applied functional stimuli. DIANA was demonstrated through in vivo mouse brain imaging on a 9.4T animal scanner during electrical whisker-pad stimulation. DIANA with millisecond temporal resolution had high correlations with neuronal spike activities, and could also be applied to capture the sequential propagation of neuronal activity along the thalamocortical pathway of brain networks. In terms of the contrast mechanism, DIANA was almost unaffected by hemodynamic responses, but was subject to changes in membrane-potential-associated tissue relaxation times such as the T2 relaxation time. DIANA is expected to break new ground in brain science by providing an in-depth understanding of the hierarchical functional organization of the brain, including the spatiotemporal dynamics of neural networks.

SeminarNeuroscience

The role of sub-population structure in computations through neural dynamics

Srdjan Ostojic
École normale supérieure
May 19, 2023

Neural computations are currently conceptualised using two separate approaches: sorting neurons into functional sub-populations or examining distributed collective dynamics. Whether and how these two aspects interact to shape computations is currently unclear. Using a novel approach to extract computational mechanisms from recurrent networks trained on neuroscience tasks, we show that the collective dynamics and sub-population structure play fundamentally complementary roles. Although various tasks can be implemented in networks with fully random population structure, we found that flexible input–output mappings instead require a non-random population structure that can be described in terms of multiple sub-populations. Our analyses revealed that such a sub-population organisation enables flexible computations through a mechanism based on gain-controlled modulations that flexibly shape the collective dynamics.

SeminarNeuroscienceRecording

Feedback control in the nervous system: from cells and circuits to behaviour

Timothy O'Leary
Department of Engineering, University of Cambridge
May 16, 2023

The nervous system is fundamentally a closed-loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single-cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher-level information about goals and intended actions is continually updated on the basis of current and past actions. Studying the brain in a closed-loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed-loop brain-machine interfaces.

SeminarNeuroscience

Quasicriticality and the quest for a framework of neuronal dynamics

Leandro Jonathan Fosque
Beggs lab, IU Bloomington
May 3, 2023

Critical phenomena abound in nature, from forest fires and earthquakes to avalanches in sand and neuronal activity. Since the 2003 publication by Beggs & Plenz on neuronal avalanches, a growing body of work suggests that the brain homeostatically regulates itself to operate near a critical point where information processing is optimal. At this critical point, incoming activity is neither amplified (supercritical) nor damped (subcritical), but approximately preserved as it passes through neural networks. Departures from the critical point have been associated with conditions of poor neurological health like epilepsy, Alzheimer's disease, and depression. One complication that arises from this picture is that the critical point assumes no external input. But biological neural networks are constantly bombarded by external input. How, then, is the brain able to adapt homeostatically near the critical point? We’ll see that the theory of quasicriticality, an organizing principle for brain dynamics, can account for this paradoxical situation. As external stimuli drive the cortex, quasicriticality predicts a departure from criticality while maintaining optimal properties for information transmission. We’ll see that simulations and experimental data confirm these predictions and describe new ones that could be tested soon. More importantly, we will see how this organizing principle could help in the search for biomarkers that could soon be tested in clinical studies.
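
The critical point referred to here is that of a branching process: each active unit activates on average m downstream units, and avalanche sizes become approximately power-law distributed as m approaches 1. The following is a minimal branching sketch, not the cortical models from the talk; adding the external drive discussed there would further shift the effective dynamics away from the zero-input critical point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Branching-process sketch of avalanche statistics: each active unit
# activates Poisson(m) units at the next step. Near m = 1 (critical),
# avalanche sizes are heavy-tailed; below it they are small and exponential.
def avalanche_sizes(m, n_avalanches=20000, max_steps=10000):
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        for _ in range(max_steps):
            active = rng.poisson(m * active)   # stochastic branching step
            size += active
            if active == 0:
                break
        sizes.append(size)
    return np.array(sizes)

for m in [0.8, 1.0]:
    s = avalanche_sizes(m)
    print(f"m={m}: mean size {s.mean():.1f}, P(size >= 100) = {(s >= 100).mean():.4f}")
```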

SeminarNeuroscienceRecording

Signatures of criticality in efficient coding networks

Shervin Safavi
Dayan lab, MPI for Biological Cybernetics
May 3, 2023

The critical brain hypothesis states that the brain can benefit from operating close to a second-order phase transition. While it has been shown that several computational aspects of sensory information processing (e.g., sensitivity to input) are optimal in this regime, it is still unclear whether these computational benefits of criticality can be leveraged by neural systems performing behaviorally relevant computations. To address this question, we investigate signatures of criticality in networks optimized to perform efficient encoding. We consider a network of leaky integrate-and-fire neurons with synaptic transmission delays and input noise. Previously, it was shown that the performance of such networks varies non-monotonically with the noise amplitude. Interestingly, we find that in the vicinity of the optimal noise level for efficient coding, the network dynamics exhibits signatures of criticality, namely, the distribution of avalanche sizes follows a power law. When the noise amplitude is too low or too high for efficient coding, the network appears either super-critical or sub-critical, respectively. This result suggests that two influential, and previously disparate, theories of neural processing optimization—efficient coding and criticality—may be intimately related.

SeminarNeuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Brian DePasquale
Princeton
May 3, 2023

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.

SeminarNeuroscienceRecording

Estimating repetitive spatiotemporal patterns from resting-state brain activity data

Yusuke Takeda
Computational Brain Dynamics Team, RIKEN Center for Advanced Intelligence Project, Japan; Department of Computational Brain Imaging, ATR Neural Information Analysis Laboratories, Japan
Apr 28, 2023

Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the brain activity evoked during preceding tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how this information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process to integrate information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a large dataset. - Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65. - Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.

SeminarNeuroscience

Precise spatio-temporal spike patterns in cortex and model

Sonia Gruen
Forschungszentrum Jülich, Germany
Apr 26, 2023

The cell assembly hypothesis postulates that groups of coordinated neurons form the basis of information processing. Here, we test this hypothesis by analyzing massively parallel spiking activity recorded in monkey motor cortex during a reach-to-grasp experiment for the presence of significant, ms-precise spatio-temporal spike patterns (STPs). For this purpose, the parallel spike trains were analyzed for STPs by the SPADE method (Stella et al, 2019, Biosystems), which detects, counts and evaluates spike patterns for their significance by the use of surrogates (Stella et al, 2022, eNeuro). As a result, we find STPs in 19 of 20 data sets (each of 15 min) from two monkeys, but only a small fraction of the recorded neurons are involved in STPs. To account for the different behavioral states during the task, we analyzed the data in a quasi-time-resolved fashion by dividing the data into behaviorally relevant time epochs. The STPs that occur in the various epochs are specific to the behavioral context, in terms of the neurons involved and the temporal lags between the spikes of the STP. Furthermore, we find that the STPs often share individual neurons across epochs. Since we interpret the occurrence of a particular STP as the signature of a particular active cell assembly, our interpretation is that the neurons multiplex their cell assembly membership. In a related study, we model these findings by networks with embedded synfire chains (Kleinjohann et al, 2022, bioRxiv 2022.08.02.502431).
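
The surrogate logic behind SPADE can be shown in miniature: count synchronous spikes between two trains, then build a null distribution by dithering spike times, which preserves firing rates but destroys ms-precise coordination. The sketch below uses synthetic trains with injected coincidences; SPADE itself mines higher-order patterns across many neurons and uses more refined surrogates.

```python
import numpy as np

rng = np.random.default_rng(8)

# Surrogate-based significance test for synchrony between two spike trains.
T, rate, n_inj = 10000, 0.02, 30    # time bins, background rate, injected syncs
a = rng.random(T) < rate
b = rng.random(T) < rate
sync = rng.choice(T, n_inj, replace=False)
a[sync] = b[sync] = True            # inject precisely coordinated spikes

def dither(train, width=50):
    # displace each spike by a random offset: rates kept, fine timing destroyed
    t = np.flatnonzero(train)
    t = np.clip(t + rng.integers(-width, width + 1, t.size), 0, T - 1)
    out = np.zeros(T, bool)
    out[t] = True
    return out

observed = np.sum(a & b)
null = np.array([np.sum(dither(a) & b) for _ in range(1000)])
p = (np.sum(null >= observed) + 1) / (null.size + 1)
print(f"observed coincidences: {observed}, surrogate mean: {null.mean():.1f}, p = {p:.4f}")
```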

SeminarNeuroscience

Learning through the eyes and ears of a child

Brenden Lake
NYU
Apr 21, 2023

Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child's first-person experience.

SeminarNeuroscienceRecording

Assigning credit through the "other” connectome

Eric Shea-Brown
University of Washington, Seattle
Apr 19, 2023

Learning in neural networks requires assigning the right values to anywhere from thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. This has suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed in greater and greater fidelity by ongoing molecular and genomics studies. This is the set of modulatory pathways local to a given circuit, which form an intriguing second type of connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.
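
A common way to make "eligibility traces plus a modulatory factor" concrete is the textbook three-factor rule: a Hebbian coincidence leaves a decaying trace at each synapse, and a later modulatory pulse converts the trace into an actual weight change. The sketch below is that generic construction, with invented rates and reward schedule; it is not the local-modulatory-connectome models under study in the talk.

```python
import numpy as np

rng = np.random.default_rng(9)

# Textbook three-factor learning rule: Hebbian coincidence -> eligibility
# trace; modulatory pulse -> trace converted into a weight change.
n_pre, n_post = 20, 10
W = 0.1 * rng.normal(size=(n_post, n_pre))
W0 = W.copy()
elig = np.zeros_like(W)
tau_e, lr = 20.0, 0.01

for t in range(500):
    pre = (rng.random(n_pre) < 0.1).astype(float)    # presynaptic spikes
    post = (rng.random(n_post) < 0.1).astype(float)  # postsynaptic spikes
    elig += -elig / tau_e + np.outer(post, pre)      # decaying Hebbian trace
    modulator = 1.0 if t % 100 == 99 else 0.0        # sparse modulatory pulses
    W += lr * modulator * elig                       # plasticity gated by the
                                                     # third factor only
print("mean |weight change|:", float(np.abs(W - W0).mean()))
```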

SeminarNeuroscience

Dynamic endocrine modulation of the nervous system

Emily Jacobs
UC Santa Barbara Neuroscience
Apr 18, 2023

Sex hormones are powerful neuromodulators of learning and memory. In rodents and nonhuman primates, estrogen and progesterone influence the central nervous system across a range of spatiotemporal scales. Yet their influence on the structural and functional architecture of the human brain is largely unknown. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my laboratory designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. These procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, resting-state fMRI evidence suggests that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further reveal that these transient changes in estrogen enhance within-network integration (i.e. global efficiency) in several large-scale brain networks, particularly the Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging, we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over short and protracted timescales.

SeminarNeuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 14, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
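
The ungated baseline of this framework, a deep linear network, already shows the characteristic race dynamics: with small random initialization and whitened inputs, each singular-value mode of the input-output map is learned on its own sigmoidal time course, strongest modes first. The sketch below simulates this with an arbitrary low-rank target map; the gating that makes the networks nonlinear is omitted.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two-layer *linear* network trained by gradient descent on a fixed target
# map (whitened inputs assumed). Stronger singular-value modes are learned
# first. The target map and all dimensions are arbitrary.
d = 8
U, _ = np.linalg.qr(rng.normal(size=(d, d)))
V, _ = np.linalg.qr(rng.normal(size=(d, d)))
s_target = np.array([4.0, 2.0, 1.0] + [0.0] * 5)
target = U @ np.diag(s_target) @ V.T            # task input-output map

W1 = 0.01 * rng.normal(size=(d, d))             # small random initialization
W2 = 0.01 * rng.normal(size=(d, d))
lr = 0.05
for step in range(1, 401):
    err = target - W2 @ W1                      # gradient of 0.5*||err||^2
    W2 += lr * err @ W1.T
    W1 += lr * W2.T @ err
    if step % 100 == 0:
        sv = np.linalg.svd(W2 @ W1, compute_uv=False)[:3]
        print(f"step {step}: leading singular values {sv.round(2)}")
# Modes with larger target singular values rise first: the "race" between
# pathways that the gated, nonlinear version of the framework builds on.
```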

SeminarNeuroscienceRecording

More than a beast growing in a passive brain: excitation and inhibition drive epilepsy and glioma progression

Gilles Huberfeld
Hôpital Fondation Adolphe de Rothschild
Apr 12, 2023

Gliomas are brain tumors formed by networks of connected tumor cells, nested in and interacting with neuronal networks. Neuronal activities interfere with tumor growth and the occurrence of seizures affects glioma prognosis, while the developing tumor triggers seizures in the infiltrated cortex. Oncometabolites produced by tumor cells and neurotransmitters affect both the generation of epileptic activities by neurons and the growth of glioma cells through synaptic-related mechanisms, involving both GABAergic/chloride pathways and glutamatergic signaling. From a clinical standpoint, the occurrence of epilepsy is beneficial to glioma prognosis, yet growing tumors are epileptogenic, which constitutes a paradox. This lecture will review how inhibitory and excitatory signaling drives glioma growth and how epileptic and oncological processes interfere, with a special focus on the human brain.

SeminarNeuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 6, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell-types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.

SeminarNeuroscienceRecording

Developmentally structured coactivity in the hippocampal trisynaptic loop

Roman Huszár
Buzsáki Lab, New York University
Apr 5, 2023

The hippocampus is a key player in learning and memory. Research into this brain structure has long emphasized its plasticity and flexibility, though recent reports have come to appreciate its remarkably stable firing patterns. How novel information incorporates itself into networks that maintain their ongoing dynamics remains an open question, largely due to a lack of experimental access points into network stability. Development may provide one such access point. To explore this hypothesis, we birthdated CA1 pyramidal neurons using in-utero electroporation and examined their functional features in freely moving, adult mice. We show that CA1 pyramidal neurons of the same embryonic birthdate exhibit prominent cofiring across different brain states, including behavior in the form of overlapping place fields. Spatial representations remapped across different environments in a manner that preserves the biased correlation patterns between same birthdate neurons. These features of CA1 activity could partially be explained by structured connectivity between pyramidal cells and local interneurons. These observations suggest the existence of developmentally installed circuit motifs that impose powerful constraints on the statistics of hippocampal output.

SeminarNeuroscienceRecording

The strongly recurrent regime of cortical networks

David Dahmen
Jülich Research Centre, Germany
Mar 29, 2023

Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons. These neurons exhibit highly complex coordination patterns. Where does this complexity stem from? One candidate is the ubiquitous heterogeneity in connectivity of local neural circuits. Studying neural network dynamics in the linearized regime and using tools from statistical field theory of disordered systems, we derive relations between structure and dynamics that are readily applicable to subsampled recordings of neural circuits: Measuring the statistics of pairwise covariances allows us to infer statistical properties of the underlying connectivity. Applying our results to spontaneous activity of macaque motor cortex, we find that the underlying network operates in a strongly recurrent regime. In this regime, network connectivity is highly heterogeneous, as quantified by a large radius of bulk connectivity eigenvalues. Being close to the point of linear instability, this dynamical regime predicts a rich correlation structure, a large dynamical repertoire, long-range interaction patterns, relatively low dimensionality and a sensitive control of neuronal coordination. These predictions are verified in analyses of spontaneous activity of macaque motor cortex and mouse visual cortex. Finally, we show that even microscopic features of connectivity, such as connection motifs, systematically scale up to determine the global organization of activity in neural circuits.
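
In such linear-response treatments, time-integrated covariances relate to connectivity through C = (I - W)^{-1} D (I - W)^{-T}, where D is the private-noise covariance, so the dispersion of pairwise covariances grows as the bulk eigenvalue radius of W approaches one. The sketch below illustrates this with an unstructured Gaussian matrix standing in for real connectivity; all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

# Covariances of linearized network fluctuations, x = W x + noise:
#   C = (I - W)^{-1} D (I - W)^{-T}.
# As the effective spectral radius of W approaches 1 (strongly recurrent
# regime), off-diagonal covariances become widely dispersed.
N = 300
J = rng.normal(size=(N, N)) / np.sqrt(N)   # random connectivity, radius ~ 1
D = np.eye(N)                              # private noise covariance

for g in [0.3, 0.9]:                       # scale sets the spectral radius
    A = np.linalg.inv(np.eye(N) - g * J)
    C = A @ D @ A.T
    off = C[~np.eye(N, dtype=bool)]        # off-diagonal covariances
    print(f"radius {g}: mean covariance {off.mean():.4f}, "
          f"std {off.std():.4f}")
```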

SeminarNeuroscienceRecording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 22, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and two-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation. These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.

SeminarNeuroscienceRecording

Autopoiesis and Enaction in the Game of Life

Randall Beer
Indiana University
Mar 17, 2023

Enaction plays a central role in the broader fabric of so-called 4E (embodied, embedded, extended, enactive) cognition. Although the origin of the enactive approach is widely dated to the 1991 publication of the book "The Embodied Mind" by Varela, Thompson and Rosch, many of the central ideas trace to much earlier work. Over 40 years ago, the Chilean biologists Humberto Maturana and Francisco Varela put forward the notion of autopoiesis as a way to understand living systems and the phenomena that they generate, including cognition. Varela and others subsequently extended this framework to an enactive approach that places biological autonomy at the foundation of situated and embodied behavior and cognition. I will describe an attempt to place Maturana and Varela's original ideas on a firmer foundation by studying them within the context of a toy model universe, John Conway's Game of Life (GoL) cellular automata. This work has both pedagogical and theoretical goals. Simple concrete models provide an excellent vehicle for introducing some of the core concepts of autopoiesis and enaction and explaining how these concepts fit together into a broader whole. In addition, a careful analysis of such toy models can hone our intuitions about these concepts, probe their strengths and weaknesses, and move the entire enterprise in the direction of a more mathematically rigorous theory. In particular, I will identify the primitive processes that can occur in GoL, show how these can be linked together into mutually-supporting networks that underlie persistent bounded entities, map the responses of such entities to environmental perturbations, and investigate the paths of mutual perturbation that these entities and their environments can undergo.
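For readers unfamiliar with the model universe, the sketch below implements the standard Game of Life update rule and propagates a glider, the simplest example of the kind of persistent bounded entity the talk analyzes: the pattern survives as a unit even though the cells realizing it turn over completely. Grid size and orientation here are arbitrary choices for illustration.

```python
# Conway's Game of Life (standard rules) on a toroidal grid, plus a glider.
import numpy as np

def gol_step(grid):
    """One synchronous update of the Game of Life."""
    # Sum the eight neighbors via shifted copies of the grid
    neighbors = sum(np.roll(np.roll(grid, di, axis=0), dj, axis=1)
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

grid = np.zeros((16, 16), dtype=int)
grid[1:4, 1:4] = [[0, 1, 0],   # the glider pattern
                  [0, 0, 1],
                  [1, 1, 1]]

for _ in range(4):
    grid = gol_step(grid)
# After 4 updates the glider has moved one cell down and one cell right:
# the same bounded pattern persists while its constituent cells turn over.
print(grid[2:5, 2:5])
```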

SeminarNeuroscienceRecording

Off the rails - how pathological patterns of whole brain activity emerge in epileptic seizures

Richard Rosch
King's College London
Mar 15, 2023

In most brains across the animal kingdom, neural dynamics can enter pathological states that are recognisable as epileptic seizures. Usually, however, brains operate within constraints, imposed by neuronal function and synaptic coupling, that prevent seizure dynamics from emerging. In this talk, I will bring together different approaches to identifying how networks, in the broadest sense, shape brain dynamics. Using illustrative examples ranging from intracranial EEG recordings and disorders characterised by molecular disruption of a single neurotransmitter receptor type to single-cell recordings of whole-brain activity in the larval zebrafish, I will address three key questions: (1) how does the regionally specific composition of synaptic receptors shape ongoing physiological brain activity; (2) how can disruption of this regionally specific balance result in abnormal brain dynamics; and (3) which cellular patterns underlie the transition into an epileptic seizure.
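As a hedged illustration of how a change in synaptic coupling can push dynamics off the rails, the sketch below uses a generic Wilson-Cowan-style excitatory-inhibitory rate model, assumed for exposition and not one of the speaker's models: weakening the inhibitory coupling onto the excitatory population switches the circuit from a quiet fixed point to large, seizure-like oscillations.

```python
# Toy Wilson-Cowan-style E/I rate model (assumed for illustration):
# weakening inhibition switches quiet dynamics into sustained oscillations.
import numpy as np

def simulate(w_ie, T=2000.0, dt=0.1):
    """E/I population rates; w_ie scales inhibition onto the excitatory pool."""
    f = lambda x: 1.0 / (1.0 + np.exp(-x))  # population gain function
    E, I, trace = 0.1, 0.1, []
    for _ in range(int(T / dt)):
        dE = (-E + f(12.0 * E - w_ie * I - 3.0)) / 10.0  # tau_E = 10
        dI = (-I + f(10.0 * E - 2.0 * I - 4.0)) / 20.0   # tau_I = 20
        E, I = E + dt * dE, I + dt * dI
        trace.append(E)
    return np.array(trace)

for w_ie in (12.0, 6.0):  # strong vs. weakened inhibition
    E = simulate(w_ie)[5000:]  # discard the initial transient
    print(f"w_ie = {w_ie:4.1f}: E-rate range = {E.max() - E.min():.3f}")
# With strong inhibition the excitatory rate settles to a fixed point
# (range near 0); with weakened inhibition the same circuit develops
# large sustained oscillations, a cartoon of the seizure transition.
```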

ePosterNeuroscience

Computing in neuronal networks with plasticity via all-optical bidirectional interfacing

Andrey Formozov, J. Simon Wiegert

Bernstein Conference 2024

ePosterNeuroscience

Cooperative coding of continuous variables in networks with sparsity constraint

Paul Züge, Raoul-Martin Memmesheimer

Bernstein Conference 2024

ePosterNeuroscience

Is the cortical dynamics ergodic? A numerical study in partially-symmetric networks of spiking neurons

Ferdinand Tixidre, Gianluigi Mongillo, Alessandro Torcini

Bernstein Conference 2024

ePosterNeuroscience

Critical organisation for complex temporal tasks in neural networks

Gayathri Ramesan, Akhilesh Nandan, Daniel Koch, Aneta Koseska

Bernstein Conference 2024

ePosterNeuroscience

Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans

Doris Voina, Steven Brunton, Jose Kutz

Bernstein Conference 2024

ePosterNeuroscience

Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning

Spyridon Chavlis, Panayiota Poirazi

Bernstein Conference 2024

ePosterNeuroscience

DelGrad: Exact gradients in spiking networks for learning transmission delays and weights

Julian Göltz, Jimmy Weber, Laura Kriener, Peter Lake, Melika Payvand, Mihai Petrovici

Bernstein Conference 2024

ePosterNeuroscience

Dynamical representations between biologically plausible and implausible task-trained neural networks

Matthew Getz, Julijana Gjorgjieva

Bernstein Conference 2024

ePosterNeuroscience

Effect of experience on context-dependent learning in recurrent networks

John Bowler, Hyunwoo Lee, James Heys

Bernstein Conference 2024

ePosterNeuroscience

Efficient cortical spike train decoding for brain-machine interface implants with recurrent spiking neural networks

Tengjun Liu, Julia Gygax, Julian Rossbroich, Yansong Chua, Shaomin Zhang, Friedemann Zenke

Bernstein Conference 2024

ePosterNeuroscience

Efficient learning of deep non-negative matrix factorisation networks

Mahbod Nouri, David Rotermund, Alberto García Ortiz, Klaus Pawelzik

Bernstein Conference 2024

ePosterNeuroscience

Emergence of Synfire Chains in Functional Multi-Layer Spiking Neural Networks

Jonas Oberste-Frielinghaus, Anno Kurth, Julian Göltz, Laura Kriener, Junji Ito, Mihai Petrovici, Sonja Grün

Bernstein Conference 2024

ePosterNeuroscience

Enhancing learning through neuromodulation-aware spiking neural networks

Alejandro Rodriguez-Garcia, Srikanth Ramaswamy

Bernstein Conference 2024

ePosterNeuroscience

Experiment-based Models to Study Local Learning Rules for Spiking Neural Networks

Giulia Amos, Maria Saramago, Alexandre Suter, Tim Schmid, Jens Duru, Sean Weaver, Benedikt Maurer, Stephan Ihle, Janos Vörös, Katarina Vulić

Bernstein Conference 2024

ePosterNeuroscience

A family of synaptic plasticity rules based on spike times produces a diversity of triplet motifs in recurrent networks

Claudia Cusseddu, Dylan Festa, Christoph Miehl, Julijana Gjorgjieva

Bernstein Conference 2024

ePosterNeuroscience

Finding spots despite disorder? Quantifying positional information in continuous attractor networks

Tobias Kühn, Rémi Monasson

Bernstein Conference 2024

ePosterNeuroscience

A feedback control algorithm for online learning in Spiking Neural Networks and Neuromorphic devices

Matteo Saponati, Chiara De Luca, Giacomo Indiveri, Benjamin Grewe

Bernstein Conference 2024

ePosterNeuroscience

Identifying task-specific dynamics in recurrent neural networks using Dynamical Similarity Analysis

Alireza Ghalambor, Mohammad Taha Fakharian, Roxana Zeraati, Shervin Safavi

Bernstein Conference 2024

ePosterNeuroscience

Inferring stochastic low-rank recurrent neural networks from neural data

Matthijs Pals, A Sağtekin, Felix Pei, Manuel Gloeckler, Jakob Macke

Bernstein Conference 2024

ePosterNeuroscience

Integrating Biological and Artificial Neural Networks for Solving Non-Linear Problems

Katarina Vulić, Joël Küchler, Jona Schulz, Haotian Yao, Christian Valmaggia, Sean Weaver, Stephan Ihle, Jens Duru, Janos Vörös

Bernstein Conference 2024

ePosterNeuroscience

Intrinsic dimension of neural activity: comparing artificial and biological neural networks

Jacopo Fadanni, Giacomo Gasparotto, Rosalba Pacelli, Marco Dal Maschio, Marco Salamanca, Marica Albanesi, Pietro Rotondo, Michele Allegra

Bernstein Conference 2024

ePosterNeuroscience

Knocking out co-active plasticity rules in neural networks reveals synapse type-specific contributions for learning and memory

Zoe Harrington, Basile Confavreux, Pedro Gonçalves, Jakob Macke, Tim Vogels

Bernstein Conference 2024

ePosterNeuroscience

Learning Hebbian/Anti-Hebbian networks in continuous time

Henrique Reis Aguiar, Matthias Hennig

Bernstein Conference 2024

ePosterNeuroscience

Linking causal and structural connectivity in nonlinear networks

Kai Chen, Songting Li, Douglas Zhou

Bernstein Conference 2024

ePosterNeuroscience

Linking Neural Manifolds to Circuit Structure in Recurrent Networks

Louis Pezon, Valentin Schmutz, Wulfram Gerstner

Bernstein Conference 2024

ePosterNeuroscience

Maximizing memory capacity in heterogeneous networks

Kaining Zhang, Gaia Tavoni

Bernstein Conference 2024

ePosterNeuroscience

Local E/I Balance and Spontaneous Dynamics in Neuronal Networks

Shreya Agarwal, Richmond Crisostomo, Ulrich Egert, Samora Okujeni

Bernstein Conference 2024

ePosterNeuroscience

Neuronal spike generation via a homoclinic orbit bifurcation increases irregularity and chaos in balanced networks

Moritz Drangmeister, Rainer Engelken, Jan-Hendrik Schleimer, Susanne Schreiber

Bernstein Conference 2024

ePosterNeuroscience

Parameter specification in spiking neural networks using simulation-based inference

Daniel Todt, Sandra Diaz, Abigail Morrison

Bernstein Conference 2024

ePosterNeuroscience

Origin and function of gamma oscillatory complexity in hippocampal networks

Matteo di Volo, Vincent Douchamps, Alessandro Torcini, Demian Battaglia, Romain Goutagny

Bernstein Conference 2024

ePosterNeuroscience

Plastic Arbor: a modern simulation framework for synaptic plasticity – from single synapses to networks of morphological neurons

Jannik Luboeinski, Sebastian Schmitt, Shirin Shafiee Kamalabad, Thorsten Hater, Fabian Bösch, Christian Tetzlaff

Bernstein Conference 2024

ePosterNeuroscience

Plasticity-driven circuit self-organization on spiking stabilized supralinear networks

Raul Adell Segarra, Dylan Festa, Dimitra Maoutsa, Julijana Gjorgjieva

Bernstein Conference 2024

ePosterNeuroscience

Response variability can accelerate learning in feedforward-recurrent networks

Sigrid Trägenap, Matthias Kaschube

Bernstein Conference 2024

ePosterNeuroscience

'Reusers' and 'Unlearners' display distinct effects of forgetting on reversal learning in neural networks

Jonas Elpelt, Jens-Bastian Eppler, Johannes Seiler, Simon Rumpel, Matthias Kaschube

Bernstein Conference 2024

ePosterNeuroscience

The role of gap junctions and clustered connectivity in emergent synchronisation patterns of spiking inhibitory neuronal networks

Helene Todd, Boris Gutkin, Alex Cayco-Gajic

Bernstein Conference 2024

ePosterNeuroscience

On The Role Of Temporal Hierarchy In Spiking Neural Networks

Filippo Moro, Pau Aceituno, Melika Payvand

Bernstein Conference 2024

ePosterNeuroscience

Shaping Low-Rank Recurrent Neural Networks with Biological Learning Rules

Pablo Crespo, Dimitra Maoutsa, Matthew Getz, Julijana Gjorgjieva

Bernstein Conference 2024

ePosterNeuroscience

Seamless Deployment of Pre-trained Spiking Neural Networks onto SpiNNaker2

Bernhard Vogginger, Francesco Negri, Mahmoud Akl, Hector Gonzalez

Bernstein Conference 2024

ePosterNeuroscience

Smooth exact gradient descent learning in spiking neural networks

Christian Klos, Raoul-Martin Memmesheimer

Bernstein Conference 2024

ePosterNeuroscience

Biological-plausible learning with a two compartment neuron model in recurrent neural networks

Timo Oess, Daniel Schmid, Heiko Neumann

Bernstein Conference 2024

networks coverage: 90 items (Seminars: 50, ePosters: 40)