Architecture

Discover seminars, jobs, and research tagged with architecture across World Wide.
97 curated items · 60 Seminars · 37 ePosters
Updated 2 months ago
Seminar · Neuroscience

Astrocytes: From Metabolism to Cognition

Juan P. Bolanos
Professor of Biochemistry and Molecular Biology, University of Salamanca
Oct 2, 2025

Different brain cell types exhibit distinct metabolic signatures that link energy economy to cellular function. Astrocytes and neurons, for instance, diverge dramatically in their reliance on glycolysis versus oxidative phosphorylation, underscoring that metabolic fuel efficiency is not uniform across cell types. A key factor shaping this divergence is the structural organization of the mitochondrial respiratory chain into supercomplexes. Specifically, complexes I (CI) and III (CIII) form a CI–CIII supercomplex, but the degree of this assembly varies by cell type. In neurons, CI is predominantly integrated into supercomplexes, resulting in highly efficient mitochondrial respiration and minimal reactive oxygen species (ROS) generation. Conversely, in astrocytes, a larger fraction of CI remains unassembled, freely existing apart from CIII, leading to reduced respiratory efficiency and elevated mitochondrial ROS production. Despite this apparent inefficiency, astrocytes boast a highly adaptable metabolism capable of responding to diverse stressors. Their looser CI–CIII organization allows for flexible ROS signaling, which activates antioxidant programs via transcription factors like Nrf2. This modular architecture enables astrocytes not only to balance energy production but also to support neuronal health and influence complex organismal behaviors.

Seminar · Neuroscience

How the presynapse forms and functions

Volker Haucke
Department of Molecular Pharmacology & Cell Biology, Leibniz Institute, Berlin, Germany
Aug 27, 2025

Nervous system function relies on the polarized architecture of neurons, established by directional transport of pre- and postsynaptic cargoes. While delivery of postsynaptic components depends on the secretory pathway, the identity of the membrane compartment(s) that supply presynaptic active zone (AZ) and synaptic vesicle (SV) proteins is largely unknown. I will discuss recent advances in our understanding of how key components of the presynaptic machinery for neurotransmitter release are transported and assembled, focusing on our studies in genome-engineered human induced pluripotent stem cell-derived neurons. Specifically, I will focus on the composition and cell biological identity of the axonal transport vesicles that shuttle key components of neurotransmission to nascent synapses, and on the machinery for axonal transport and its control by signaling lipids. Our studies identify a crucial mechanism mediating the delivery of SV and active zone proteins to developing synapses and reveal connections to neurological disorders. In the second part of my talk, I will discuss how exocytosis and endocytosis are coupled to maintain presynaptic membrane homeostasis. I will present unpublished data on the role of membrane tension in the coupling of exocytosis and endocytosis at synapses. We have identified an endocytic BAR domain protein that senses alterations in membrane tension caused by the exocytotic fusion of SVs and initiates compensatory endocytosis to restore plasma membrane area. Interference with this mechanism results in defects in the coupling of presynaptic exocytosis and SV recycling at human synapses.

Seminar · Neuroscience

Neural circuits underlying sleep structure and functions

Antoine Adamantidis
University of Bern
Jun 12, 2025

Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between (REM) sleep and emotion encoding, in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states, using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.

Seminar · Neuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
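
The core PCL idea, suppressing spikes that learned lateral connections can already predict and passing only the remainder upward, can be caricatured in a few lines. This is a toy sketch with made-up names and threshold, not the authors' implementation:

```python
import numpy as np

def pcl_step(x_spikes, W_lat, theta=0.5):
    """Suppress spikes that lateral connections predict; pass the rest upward.

    x_spikes: binary input spike vector; W_lat[i, j]: how strongly unit j's
    spike predicts unit i's spike; theta: suppression threshold (made up).
    """
    prediction = W_lat @ x_spikes             # lateral estimate of each unit's spike
    passed = x_spikes * (prediction < theta)  # only poorly predicted spikes survive
    return passed.astype(int)

# Two perfectly correlated units: unit 1's spike predicts unit 0's spike.
W = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(pcl_step(np.array([1, 1]), W))  # → [0 1]: the predictable spike is dropped
```

The transmitted code is thus a compressed version of the input: redundant spikes cost energy but carry little information, which is the tradeoff the abstract refers to.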

Seminar · Neuroscience

Neural architectures: what are they good for anyway?

Dan Goodman
Imperial College London
Feb 11, 2025

The brain has a highly complex structure in terms of cell types and wiring between different regions. What is it for, if anything? I'll start this talk by asking what an answer to this question might even look like, given that we can't run an alternative universe where our brains are structured differently. (Preview: we can do this with models!) I'll then talk about some of our work in two areas: (1) does the modular structure of the brain contribute to specialisation of function? (2) how do different cell types and architectures contribute to multimodal sensory processing?

Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 15, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—and methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
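
As general background on why the functional form of a comparison measure matters (this is not necessarily the measure used in this work): linear centered kernel alignment (CKA), a common systems-comparison measure, is invariant to rotations of a representation, so it treats a rotated copy of a network as identical while a neuron-by-neuron matching would not:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two representation
    matrices of shape (samples, features)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    den = np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro")
    return num / den

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))                  # fake "system A" responses
R = np.linalg.qr(rng.normal(size=(20, 20)))[0]  # random orthogonal matrix
Y_unrelated = rng.normal(size=(100, 20))        # fake unrelated system

print(linear_cka(X, X @ R))        # rotated copy: CKA says "same system" (≈ 1.0)
print(linear_cka(X, Y_unrelated))  # independent noise: CKA near 0
```

Whether rotation invariance is desirable depends on the use case, which is exactly the point the abstract makes about adjudicating comparisons.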

Seminar · Open Source · Recording

Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms

Priya Soneji
Georgia Institute of Technology
Oct 7, 2024

Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 μm to 2 mm across a 325-square-centimeter area for extended durations—from hours to days—at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.

Seminar · Neuroscience · Recording

Principles of Cognitive Control over Task Focus and Task

Tobias Egner
Duke University, USA
Sep 10, 2024

2024 BACN Mid-Career Prize Lecture

Adaptive behavior requires the ability to focus on a current task and protect it from distraction (cognitive stability), and to rapidly switch tasks when circumstances change (cognitive flexibility). How people control task focus and switch-readiness has therefore been the target of burgeoning research literatures. Here, I review and integrate these literatures to derive a cognitive architecture and functional rules underlying the regulation of stability and flexibility. I propose that task focus and switch-readiness are supported by independent mechanisms whose strategic regulation is nevertheless governed by shared principles: both stability and flexibility are matched to anticipated challenges via an incremental, online learner that nudges control up or down based on the recent history of task demands (a recency heuristic), as well as via episodic reinstatement when the current context matches a past experience (a recognition heuristic).

Seminar · Neuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 21, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
Apr 30, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Connectome-based models of neurodegenerative disease

Jacob Vogel
Lund University
Dec 4, 2023

Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.

Seminar · Neuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.

Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha's lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916
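
The "4D window" idea, restricting self-attention to local non-overlapping blocks of the (time, depth, height, width) volume, can be illustrated by the partitioning step alone. This is a background sketch under assumed shapes, not SwiFT's actual code:

```python
import numpy as np

def window_partition_4d(x, w):
    """Split a 4D volume sequence (T, D, H, W) into non-overlapping
    4D windows, each flattened into a token group for local attention."""
    T, D, H, W_ = x.shape
    t, d, h, ww = w
    x = x.reshape(T // t, t, D // d, d, H // h, h, W_ // ww, ww)
    x = x.transpose(0, 2, 4, 6, 1, 3, 5, 7)  # group the window axes together
    return x.reshape(-1, t * d * h * ww)

vol = np.arange(2 * 4 * 4 * 4).reshape(2, 4, 4, 4)  # tiny fake fMRI series
windows = window_partition_4d(vol, (2, 2, 2, 2))
print(windows.shape)  # → (8, 16): 8 windows of 16 voxels each
```

Attention computed within each 16-voxel window scales with the window size rather than the full volume, which is what makes the architecture memory- and computation-efficient.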

Seminar · Psychology

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Kim Uittenhove & Olivier Mucchiut
AFC Lab / University of Lausanne
Oct 15, 2023

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the Covid-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMa). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.

Seminar · Open Source · Recording

OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging

Darren Roblyer
Boston University
Jun 27, 2023

Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.

Seminar · Neuroscience

NOTE: DUE TO A CYBER ATTACK OUR UNIVERSITY WEB SYSTEM IS SHUT DOWN - TALK WILL BE RESCHEDULED

Susanne Schoch McGovern
Universität Bonn
Jun 6, 2023

The size and structure of the dendritic arbor play important roles in determining how synaptic inputs of neurons are converted to action potential output and how neurons are integrated in the surrounding neuronal network. Accordingly, neurons with aberrant morphology have been associated with neurological disorders. Dysmorphic, enlarged neurons are, for example, a hallmark of focal epileptogenic lesions like focal cortical dysplasia (FCDIIb) and gangliogliomas (GG). However, the regulatory mechanisms governing the development of dendrites are insufficiently understood. The evolutionarily conserved Ste20/Hippo kinase pathway has been proposed to play an important role in regulating the formation and maintenance of dendritic architecture. A key element of this pathway, Ste20-like kinase (SLK), regulates cytoskeletal dynamics in non-neuronal cells and is strongly expressed throughout neuronal development. Nevertheless, its function in neurons is unknown. We found that during development of mouse cortical neurons, SLK has a surprisingly specific role in the proper elaboration of higher-order (≥ 3rd order) dendrites, both in cultured neurons and in living mice. Moreover, SLK is required to maintain excitation-inhibition balance. Specifically, SLK knockdown causes a selective loss of inhibitory synapses and functional inhibition after postnatal day 15, while excitatory neurotransmission is unaffected. This mechanism may be relevant for human disease, as dysmorphic neurons within human cortical malformations exhibit significant loss of SLK expression. To uncover the signaling cascades underlying the action of SLK, we combined phosphoproteomics, protein interaction screens, and single-cell RNA-seq. Overall, our data identify SLK as a key regulator of both dendritic complexity during development and inhibitory synapse maintenance.

Seminar · Neuroscience

Dynamic endocrine modulation of the nervous system

Emily Jacobs
UC Santa Barbara Neuroscience
Apr 17, 2023

Sex hormones are powerful neuromodulators of learning and memory. In rodents and nonhuman primates, estrogen and progesterone influence the central nervous system across a range of spatiotemporal scales. Yet, their influence on the structural and functional architecture of the human brain is largely unknown. Here, I highlight findings from a series of dense-sampling neuroimaging studies from my laboratory designed to probe the dynamic interplay between the nervous and endocrine systems. Individuals underwent brain imaging and venipuncture every 12-24 hours for 30 consecutive days. These procedures were carried out under freely cycling conditions and again under a pharmacological regimen that chronically suppresses sex hormone production. First, resting state fMRI evidence suggests that transient increases in estrogen drive robust increases in functional connectivity across the brain. Time-lagged methods from dynamical systems analysis further reveal that these transient changes in estrogen enhance within-network integration (i.e. global efficiency) in several large-scale brain networks, particularly the Default Mode and Dorsal Attention Networks. Next, using high-resolution hippocampal subfield imaging, we found that intrinsic hormone fluctuations and exogenous hormone manipulations can rapidly and dynamically shape medial temporal lobe morphology. Together, these findings suggest that neuroendocrine factors influence the brain over short and protracted timescales.

Seminar · Neuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 13, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
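
The gating idea can be caricatured in a few lines: each pathway is a purely linear map, and gates switch pathways on or off, so the network is linear for a fixed gate pattern but piecewise (hence nonlinear) overall. This is a toy illustration with made-up weights, not the paper's framework code:

```python
import numpy as np

def gdln_forward(x, gates, pathways, W_out):
    """Forward pass of a toy gated linear network: gates select which
    linear pathways carry the input to the output."""
    h = sum(g * (W @ x) for g, W in zip(gates, pathways))
    return W_out @ h

W_a = np.array([[2.0, 0.0], [0.0, 2.0]])  # pathway A: scale by 2
W_b = np.array([[0.0, 1.0], [1.0, 0.0]])  # pathway B: swap coordinates
x = np.array([1.0, 3.0])

print(gdln_forward(x, [1, 0], [W_a, W_b], np.eye(2)))  # → [2. 6.]
print(gdln_forward(x, [0, 1], [W_a, W_b], np.eye(2)))  # → [3. 1.]
```

Because each gate pattern yields an exactly linear network, the learning dynamics of each pathway can be analyzed with deep-linear-network tools, which is what makes the exact reductions mentioned in the abstract tractable.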

Seminar · Neuroscience · Recording

Asymmetric signaling across the hierarchy of cytoarchitecture within the human connectome

Linden Parkes
Rutgers Brain Health Institute
Mar 21, 2023

Cortical variations in cytoarchitecture form a sensory-fugal axis that shapes regional profiles of extrinsic connectivity and is thought to guide signal propagation and integration across the cortical hierarchy. While neuroimaging work has shown that this axis constrains local properties of the human connectome, it remains unclear whether it also shapes the asymmetric signaling that arises from higher-order topology. Here, we used network control theory to examine the amount of energy required to propagate dynamics across the sensory-fugal axis. Our results revealed an asymmetry in this energy, indicating that bottom-up transitions were easier to complete compared to top-down. Supporting analyses demonstrated that asymmetries were underpinned by a connectome topology that is wired to support efficient bottom-up signaling. Lastly, we found that asymmetries correlated with differences in communicability and intrinsic neuronal time scales and lessened throughout youth. Our results show that cortical variation in cytoarchitecture may guide the formation of macroscopic connectome topology.

Seminar · Neuroscience · Recording

Predictive modeling, cortical hierarchy, and their computational implications

Choong-Wan Woo & Seok-Jun Hong
Sungkyunkwan University
Jan 16, 2023

Predictive modeling and dimensionality reduction of functional neuroimaging data have provided rich information about the representations and functional architectures of the human brain. While these approaches have been effective in many cases, we will discuss how neglecting the internal dynamics of the brain (e.g., spontaneous activity, global dynamics, effective connectivity) and its underlying computational principles may hinder our progress in understanding and modeling brain functions. By reexamining evidence from our previous and ongoing work, we will propose new hypotheses and directions for research that consider both internal dynamics and the computational principles that may govern brain processes.

Seminar · Neuroscience · Recording

Connecting performance benefits on visual tasks to neural mechanisms using convolutional neural networks

Grace Lindsay
New York University (NYU)
Dec 6, 2022

Behavioral studies have demonstrated that certain task features reliably enhance classification performance for challenging visual stimuli. These include extended image presentation time and the valid cueing of attention. Here, I will show how convolutional neural networks can be used as a model of the visual system that connects neural activity changes with such performance changes. Specifically, I will discuss how different anatomical forms of recurrence can account for better classification of noisy and degraded images with extended processing time. I will then show how experimentally-observed neural activity changes associated with feature attention lead to observed performance changes on detection tasks. I will also discuss the implications these results have for how we identify the neural mechanisms and architectures important for behavior.

Seminar · Neuroscience · Recording

Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties

Ilenna Jones
University of Pennsylvania
Dec 6, 2022

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
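
The "dendritic tree as a multi-layer network" picture can be sketched as a tree-constrained two-layer net, in which each branch subunit sees only its own synapses rather than the full input, unlike a dense hidden layer. Toy weights, purely illustrative:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def dendritic_tree(x, w_leaf, w_soma):
    """A binary dendritic tree as a sparse two-layer net: each nonlinear
    branch subunit integrates only its own synaptic inputs."""
    b1 = relu(w_leaf[0] * x[0] + w_leaf[1] * x[1])  # left branch subunit
    b2 = relu(w_leaf[2] * x[2] + w_leaf[3] * x[3])  # right branch subunit
    return float(w_soma @ np.array([b1, b2]))       # somatic summation

x = np.array([1.0, -1.0, 0.5, 0.5])
out = dendritic_tree(x, w_leaf=np.ones(4), w_soma=np.array([0.5, 0.5]))
print(out)  # → 0.5: the left branch's inputs cancel locally before reaching the soma
```

The question raised in the abstract is whether such tree constraints (plus channels, input repetition, and complex synapses) destroy the computational strength of the equivalent unconstrained multi-layer network; the simulations suggest they largely do not.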

Seminar · Neuroscience

The functional nano-architecture of axonal actin

Christophe Leterrier
Neuropathophysiology Institute (INP), University of Marseille
Nov 30, 2022

Seminar · Neuroscience · Recording

Bridging the gap between artificial models and cortical circuits

C. B. Currin
IST Austria
Nov 9, 2022

Artificial neural networks simplify complex biological circuits into tractable models for computational exploration and experimentation. However, the simplification of artificial models also undermines their applicability to real brain dynamics. Typical efforts to address this mismatch add complexity to increasingly unwieldy models. Here, we take a different approach; by reducing the complexity of a biological cortical culture, we aim to distil the essential factors of neuronal dynamics and plasticity. We leverage recent advances in growing neurons from human induced pluripotent stem cells (hiPSCs) to analyse ex vivo cortical cultures with only two distinct excitatory and inhibitory neuron populations. Over 6 weeks of development, we record from thousands of neurons using high-density microelectrode arrays (HD-MEAs) that allow access to individual neurons and the broader population dynamics. We compare these dynamics to two-population artificial networks of single-compartment neurons with random sparse connections and show that they produce similar dynamics. Specifically, our model captures the firing and bursting statistics of the cultures. Moreover, tightly integrating models and cultures allows us to evaluate the impact of changing architectures over weeks of development, with and without external stimuli. Broadly, the use of simplified cortical cultures enables us to use the repertoire of theoretical neuroscience techniques established over the past decades on artificial network models. Our approach of deriving neural networks from human cells also allows us, for the first time, to directly compare neural dynamics of disease and control. We found that cultures e.g. from epilepsy patients tended to have increasingly more avalanches of synchronous activity over weeks of development, in contrast to the control cultures. Next, we will test possible interventions, in silico and in vitro, in a drive for personalised approaches to medical care. 
This work starts bridging an important theoretical-experimental neuroscience gap for advancing our understanding of mammalian neuron dynamics.
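A two-population artificial network of the kind compared against the cultures above can be sketched in a few lines. All sizes, weights, and time constants below are illustrative assumptions, not the values used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two populations (excitatory, inhibitory) of single-compartment rate units
# with random sparse connectivity; parameters are hypothetical.
N_e, N_i, p = 80, 20, 0.1            # population sizes, connection probability
N = N_e + N_i
J = rng.normal(0.0, 1.0, (N, N)) * (rng.random((N, N)) < p)
J[:, :N_e] = np.abs(J[:, :N_e])      # excitatory columns: positive weights
J[:, N_e:] = -np.abs(J[:, N_e:])     # inhibitory columns: negative weights
J /= np.sqrt(p * N)                  # keep recurrent input O(1)

def simulate(T=500, dt=0.1, tau=1.0):
    """Euler-integrate tau * dr/dt = -r + tanh(J r + b)."""
    r = np.zeros(N)
    b = rng.normal(0.0, 0.5, N)      # static random background input
    rates = np.empty((T, N))
    for t in range(T):
        r += dt / tau * (-r + np.tanh(J @ r + b))
        rates[t] = r
    return rates

rates = simulate()
```

From the simulated rates one could then extract firing and bursting statistics for comparison against the HD-MEA recordings.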

SeminarNeuroscienceRecording

Training Dynamic Spiking Neural Network via Forward Propagation Through Time

B. Yin
CWI
Nov 9, 2022

With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance competitive with standard recurrent neural networks. Still, these learning algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models, and are incompatible with online learning. Taking inspiration from the concept of Liquid Time-Constants (LTCs), we introduce a novel class of spiking neurons, the Liquid Time-Constant Spiking Neuron (LTC-SN), resulting in functionality similar to the gating operation in LSTMs. We integrate these neurons in SNNs that are trained with FPTT and demonstrate that thus trained LTC-SNNs outperform various SNNs trained with BPTT on long sequences while enabling online learning and drastically reducing memory complexity. We show this for several classical benchmarks that can easily be varied in sequence length, like the Add Task and the DVS-gesture benchmark. We also show how FPTT-trained LTC-SNNs can be applied to large convolutional SNNs, where we demonstrate new state-of-the-art results for online learning in SNNs on a number of standard benchmarks (S-MNIST, R-MNIST, DVS-GESTURE) and also show that large feedforward SNNs can be trained successfully in an online manner to near (Fashion-MNIST, DVS-CIFAR10) or exceeding (PS-MNIST, R-MNIST) state-of-the-art performance as obtained with offline BPTT. Finally, the training and memory efficiency of FPTT enables us to directly train SNNs in an end-to-end manner at network sizes and complexity that were previously infeasible: we demonstrate this by training in an end-to-end fashion the first deep and performant spiking neural network for object localization and recognition. Taken together, our contributions enable, for the first time, training large-scale complex spiking neural network architectures online and on long temporal sequences.
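The core LTC-SN idea, a spiking neuron whose effective time constant is modulated by its input, can be sketched as follows. The sigmoidal gating form and all parameter values are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltc_sn_run(x, w_in=1.0, w_tau=0.5, tau_min=1.0, tau_max=20.0,
               v_th=1.0, dt=1.0):
    """Run one liquid-time-constant spiking neuron over input sequence x."""
    v, spikes = 0.0, []
    for x_t in x:
        # input-dependent ("liquid") membrane time constant,
        # loosely analogous to an LSTM gate
        tau = tau_min + (tau_max - tau_min) * sigmoid(w_tau * x_t)
        v += dt / tau * (-v + w_in * x_t)   # leaky integration
        s = float(v >= v_th)                # threshold crossing emits a spike
        v = v * (1.0 - s)                   # hard reset on spike
        spikes.append(s)
    return np.array(spikes)

spikes = ltc_sn_run(np.ones(100) * 2.0)     # constant drive -> regular spiking
```

Because the time constant is a differentiable function of the input, such a neuron can in principle be trained with gradient-based methods like FPTT.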

SeminarNeuroscienceRecording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 8, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
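The gist of a top-down-gated, BTSP-like rule can be sketched as follows: presynaptic activity leaves a slow eligibility trace, and a top-down gating signal (standing in for a dendritic plateau potential) converts that trace into a large, soft-bounded, one-shot weight change. The trace dynamics and parameter values below are illustrative assumptions, not the rule's published form.

```python
import numpy as np

def btsp_update(w, pre, gate, tau_e=10.0, eta=1.0, w_max=1.0, dt=1.0):
    """pre: (T, N) presynaptic rates; gate: (T,) plateau/gating signal."""
    e = np.zeros_like(w)
    for t in range(pre.shape[0]):
        e += dt / tau_e * (-e + pre[t])        # behavioral-timescale trace
        w += eta * gate[t] * e * (w_max - w)   # gated, soft-bounded jump
    return w

N = 5
w = np.zeros(N)
pre = np.zeros((50, N))
pre[10:30, :2] = 1.0      # only synapses 0 and 1 are active in this window
gate = np.zeros(50)
gate[25] = 1.0            # a single plateau event during the activity window
w = btsp_update(w, pre, gate)
# after one plateau, only the recently active synapses are strengthened
```

In a multi-layer network, feedback connections would decide where and when the gate fires, which is how such a rule could coordinate credit assignment across layers.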

SeminarNeuroscienceRecording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 8, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
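A toy version of the event-based, activity-sparse idea can be written as a GRU cell whose internal state updates densely but whose output to other units is nonzero only when the state crosses a threshold. The thresholding scheme below is a simplification of the actual EGRU, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ToyEGRUCell:
    """GRU-like cell: state c evolves as usual, but the visible output h
    is event-based, i.e. nonzero only above a threshold theta."""

    def __init__(self, n_in, n_hid, theta=0.5):
        s = 1.0 / np.sqrt(n_hid)
        self.Wz = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.Wr = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.Wc = rng.uniform(-s, s, (n_hid, n_in + n_hid))
        self.theta = theta
        self.c = np.zeros(n_hid)

    def step(self, x):
        h = self.c * (self.c > self.theta)   # event: emit only above threshold
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)            # update gate
        r = sigmoid(self.Wr @ xh)            # reset gate
        c_tilde = np.tanh(self.Wc @ np.concatenate([x, r * h]))
        self.c = (1 - z) * self.c + z * c_tilde
        return self.c * (self.c > self.theta)

cell = ToyEGRUCell(n_in=4, n_hid=16)
outputs = np.array([cell.step(rng.normal(size=4)) for _ in range(20)])
sparsity = np.mean(outputs == 0.0)           # fraction of silent unit-steps
```

Because most unit-steps emit nothing, both communication and (with a suitable surrogate gradient) backward-pass compute can scale with the number of events rather than the number of units.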

SeminarNeuroscienceRecording

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

Thomas Limbacher
TU Graz
Nov 8, 2022

Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task, and demonstrates superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy on a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built on this principle.
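The underlying mechanism, using Hebbian plasticity itself as the memory during a computation, can be sketched in a rate-based simplification (the talk's model uses spiking neurons): key-value pairs are written with a local outer-product update and later read back by a matrix-vector product. All dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hebbian key-value memory: write associations during the input sequence,
# then recall a stored value from its key.
D, n_pairs = 64, 3
keys = rng.choice([-1.0, 1.0], (n_pairs, D))     # random bipolar codes
values = rng.choice([-1.0, 1.0], (n_pairs, D))

W = np.zeros((D, D))
for k, v in zip(keys, values):
    W += np.outer(v, k) / D      # Hebbian write: dW proportional to post x pre

recalled = np.sign(W @ keys[0])  # Hebbian read: present the first key
```

The write rule is purely local (pre- and postsynaptic activity only), which is what makes it attractive for neuromorphic hardware; the cost is crosstalk between stored pairs, which stays small while the number of pairs is well below the code dimension.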

SeminarNeuroscienceRecording

Associative memory of structured knowledge

Julia Steinberg
Princeton University
Oct 25, 2022

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can be subsequently retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
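The VSA encoding step described above, binding attribute-event pairs and bundling them into one distributed pattern, can be sketched with binary codes. The XOR-binding / majority-bundling scheme and the dimension below are one standard choice, not necessarily the one used in this work.

```python
import numpy as np

rng = np.random.default_rng(3)

D = 1024  # code dimension; high D keeps bound pairs nearly orthogonal
codebook = {name: rng.integers(0, 2, D) for name in
            ["first", "second", "third", "wake", "eat", "work"]}

def bind(a, b):
    return a ^ b                 # XOR binding; XOR is its own inverse

def bundle(vs):
    # majority-vote bundling of an odd number of binary vectors
    return (np.sum(vs, axis=0) * 2 >= len(vs)).astype(int)

# one "structure": a set of attribute-event relations in a single vector
structure = bundle([bind(codebook["first"], codebook["wake"]),
                    bind(codebook["second"], codebook["eat"]),
                    bind(codebook["third"], codebook["work"])])

def recall(role):
    noisy = bind(structure, codebook[role])   # unbind with the attribute code
    # cleanup: nearest codebook entry by Hamming distance
    return min(codebook, key=lambda n: np.sum(noisy ^ codebook[n]))

answer = recall("second")
```

In the full model, the cleanup step is played by the recurrent attractor network: the noisy unbound pattern falls into the basin of the stored building block.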

SeminarNeuroscienceRecording

From Machine Learning to Autonomous Intelligence

Yann Le Cun
Meta-FAIR & Meta AI
Oct 18, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.

SeminarNeuroscience

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta Fair
Oct 9, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf

SeminarNeuroscienceRecording

General purpose event-based architectures for deep learning

Anand Subramoney
Institute for Neural Computation
Oct 4, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features -- event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 4, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

SeminarNeuroscienceRecording

The functional architecture of the human entorhinal-hippocampal circuitry

Xenia Grande
Düzel Lab, University Magdeburg & German Center for Neurodegenerative Diseases
Jul 5, 2022

Cognitive functions like episodic memory require the formation of cohesive representations. Critical for that process is the entorhinal-hippocampal circuitry’s interaction with cortical information streams and the circuitry’s inner communication. With ultra-high field functional imaging we investigated the functional architecture of the human entorhinal-hippocampal circuitry. We identified an organization that is consistent with convergence of information in anterior and lateral entorhinal subregions and the subiculum/CA1 border while keeping a second route specific for scene processing in a posterior-medial entorhinal subregion and the distal subiculum. Our findings agree with information flow along information processing routes which functionally split the entorhinal-hippocampal circuitry along its transversal axis. My talk will demonstrate how ultra-high field imaging in humans can bridge the gap between anatomical and electrophysiological findings in rodents and our understanding of human cognition. Moreover, I will point out the implications that basic research on functional architecture has for cognitive and clinical research perspectives.

SeminarNeuroscienceRecording

Exploring mechanisms of human brain expansion in cerebral organoids

Madeline Lancaster
MRC Laboratory of Molecular Biology, Cambridge
May 16, 2022

The human brain sets us apart as a species, with its size being one of its most striking features. Brain size is largely determined during development as vast numbers of neurons and supportive glia are generated. In an effort to better understand the events that determine the human brain’s cellular makeup, and its size, we use a human model system in a dish, called cerebral organoids. These 3D tissues are generated from pluripotent stem cells through neural differentiation and a supportive 3D microenvironment to generate organoids with the same tissue architecture as the early human fetal brain. Such organoids are allowing us to tackle questions previously impossible with more traditional approaches. Indeed, our recent findings provide insight into regulation of brain size and neuron number across ape species, identifying key stages of early neural stem cell expansion that set up a larger starting cell number to enable the production of increased numbers of neurons. We are also investigating the role of extrinsic regulators in determining numbers and types of neurons produced in the human cerebral cortex. Overall, our findings are pointing to key, human-specific aspects of brain development and function, that have important implications for neurological disease.

SeminarNeuroscienceRecording

A draft connectome for ganglion cell types of the mouse retina

David Berson
Brown University
May 15, 2022

The visual system of the brain is highly parallel in its architecture. This is clearly evident in the outputs of the retina, which arise from neurons called ganglion cells. Work in our lab has shown that mammalian retinas contain more than a dozen distinct types of ganglion cells. Each type appears to filter the retinal image in a unique way and to relay this processed signal to a specific set of targets in the brain. My students and I are working to understand the meaning of this parallel organization through electrophysiological and anatomical studies. We record from light-responsive ganglion cells in vitro using the whole-cell patch method. This allows us to correlate directly the visual response properties, intrinsic electrical behavior, synaptic pharmacology, dendritic morphology and axonal projections of single neurons. Other methods used in the lab include neuroanatomical tracing techniques, single-unit recording and immunohistochemistry. We seek to specify the total number of ganglion cell types, the distinguishing characteristics of each type, and the intraretinal mechanisms (structural, electrical, and synaptic) that shape their stimulus selectivities. Recent work in the lab has identified a bizarre new ganglion cell type that is also a photoreceptor, capable of responding to light even when it is synaptically uncoupled from conventional (rod and cone) photoreceptors. These ganglion cells appear to play a key role in resetting the biological clock. It is just this sort of link, between a specific cell type and a well-defined behavioral or perceptual function, that we seek to establish for the full range of ganglion cell types. My research concerns the structural and functional organization of retinal ganglion cells, the output cells of the retina whose axons make up the optic nerve. Ganglion cells exhibit great diversity both in their morphology and in their responses to light stimuli. On this basis, they are divisible into a large number of types (>15). 
Each ganglion-cell type appears to send its outputs to a specific set of central visual nuclei. This suggests that ganglion cell heterogeneity has evolved to provide each visual center in the brain with pre-processed representations of the visual scene tailored to its specific functional requirements. Though the outline of this story has been appreciated for some time, it has received little systematic exploration. My laboratory is addressing in parallel three sets of related questions: 1) How many types of ganglion cells are there in a typical mammalian retina and what are their structural and functional characteristics? 2) What combination of synaptic networks and intrinsic membrane properties are responsible for the characteristic light responses of individual types? 3) What do the functional specializations of individual classes contribute to perceptual function or to visually mediated behavior? To pursue these questions, we label retinal ganglion cells by retrograde transport from the brain; analyze in vitro their light responses, intrinsic membrane properties and synaptic pharmacology using the whole-cell patch clamp method; and reveal their morphology with intracellular dyes. Recently, we have discovered a novel ganglion cell in rat retina that is intrinsically photosensitive. These ganglion cells exhibit robust light responses even when all influences from classical photoreceptors (rods and cones) are blocked, either by applying pharmacological agents or by dissociating the ganglion cell from the retina. These photosensitive ganglion cells seem likely to serve as photoreceptors for the photic synchronization of circadian rhythms, the mechanism that allows us to overcome jet lag. They project to the circadian pacemaker of the brain, the suprachiasmatic nucleus of the hypothalamus. Their temporal kinetics, threshold, dynamic range, and spectral tuning all match known properties of the synchronization or "entrainment" mechanism. 
These photosensitive ganglion cells innervate various other brain targets, such as the midbrain pupillary control center, and apparently contribute to a host of behavioral responses to ambient lighting conditions. These findings help to explain why circadian and pupillary light responses persist in mammals, including humans, with profound disruption of rod and cone function. Ongoing experiments are designed to elucidate the phototransduction mechanism, including the identity of the photopigment and the nature of downstream signaling pathways. In other studies, we seek to provide a more detailed characterization of the photic responsiveness and both morphological and functional evidence concerning possible interactions with conventional rod- and cone-driven retinal circuits. These studies are of potential value in understanding and designing appropriate therapies for jet lag, the negative consequences of shift work, and seasonal affective disorder.

SeminarNeuroscience

The Synaptome Architecture of the Brain: Lifespan, disease, evolution and behavior

Seth Grant
Professor of Molecular Neuroscience, Centre for Clinical Brain Sciences, University of Edinburgh, UK
May 1, 2022

The overall aim of my research is to understand how the organisation of the synapse, with particular reference to the postsynaptic proteome (PSP) of excitatory synapses in the brain, informs the fundamental mechanisms of learning, memory and behaviour and how these mechanisms go awry in neurological dysfunction. The PSP indeed bears a remarkable burden of disease, with components being disrupted in disorders (synaptopathies) including schizophrenia, depression, autism and intellectual disability. Our work has been fundamental in revealing and then characterising the unprecedented complexity (>1000 highly conserved proteins) of the PSP in terms of the subsynaptic architecture of postsynaptic proteins such as PSD95 and how these proteins assemble into complexes and supercomplexes in different neurons and regions of the brain. Characterising the PSPs in multiple species, including human and mouse, has revealed differences in key sets of functionally important proteins, correlates with brain imaging and connectome data, and a differential distribution of disease-relevant proteins and pathways. Such studies have also provided important insight into synapse evolution, establishing that vertebrate behavioural complexity is a product of the evolutionary expansion in synapse proteomes that occurred ~500 million years ago. My lab has identified many mutations causing cognitive impairments in mice before they were found to cause human disorders. Our proteomic studies revealed that >130 brain diseases are caused by mutations affecting postsynaptic proteins. We uncovered mechanisms that explain the polygenic basis and age of onset of schizophrenia, with postsynaptic proteins, including PSD95 supercomplexes, carrying much of the polygenic burden. We discovered the “Genetic Lifespan Calendar”, a genomic programme controlling when genes are regulated. We showed that this could explain how schizophrenia susceptibility genes are timed to exert their effects in young adults. 
The Genes to Cognition programme is the largest genetic study so far undertaken into the synaptic molecular mechanisms underlying behaviour and physiology. We made important conceptual advances that inform how the repertoire of both innate and learned behaviours is built from unique combinations of postsynaptic proteins that either amplify or attenuate the behavioural response. This constitutes a key advance in understanding how the brain decodes information inherent in patterns of nerve impulses, and provides insight into why the PSP has evolved to be so complex, and consequently why the phenotypes of synaptopathies are so diverse. Our most recent work has opened a new phase, and scale, in understanding synapses with the first synaptome maps of the brain. We have developed next-generation methods (SYNMAP) that enable single-synapse resolution molecular mapping across the whole mouse brain and extensive regions of the human brain, revealing the molecular and morphological features of a billion synapses. This has already uncovered unprecedented spatiotemporal synapse diversity organised into an architecture that correlates with the structural and functional connectomes, and shown how mutations that cause cognitive disorders reorganise these synaptome maps; for example, by detecting vulnerable synapse subtypes and synapse loss in Alzheimer’s disease. This innovative synaptome mapping technology has huge potential to help characterise how the brain changes during normal development, including in specific cell types, and with degeneration, facilitating novel pathways to diagnosis and therapy.

SeminarNeuroscience

Revealing the molecular and cellular architecture of the nervous system

Gioele La Manno
EPFL, Lausanne, Switzerland
Apr 5, 2022
SeminarNeuroscience

An executive control approach to language production

Etienne Koechlin
École Normale Supérieure and INSERM, Paris, France
Apr 4, 2022

Language production is a form of behavior and as such involves executive control and the prefrontal function. The cognitive architecture of prefrontal executive function thus certainly plays an important role in shaping language production. In this talk, I will review the main features of the prefrontal executive function we have uncovered during the last two decades and I will discuss how these features may help understanding language production.

SeminarNeuroscience

Mapping the Dynamics of the Linear and 3D Genome of Single Cells in the Developing Brain

Longzhi Tan
Stanford
Mar 29, 2022

Three intimately related dimensions of the mammalian genome—linear DNA sequence, gene transcription, and 3D genome architecture—are crucial for the development of nervous systems. Changes in the linear genome (e.g., de novo mutations), transcriptome, and 3D genome structure lead to debilitating neurodevelopmental disorders, such as autism and schizophrenia. However, current technologies and data are severely limited: (1) 3D genome structures of single brain cells have not been solved; (2) little is known about the dynamics of single-cell transcriptome and 3D genome after birth; (3) true de novo mutations are extremely difficult to distinguish from false positives (DNA damage and/or amplification errors). Here, I filled in this longstanding technological and knowledge gap. I recently developed a high-resolution method—diploid chromatin conformation capture (Dip-C)—which resolved the first 3D structure of the human genome, tackling a longstanding problem dating back to the 1880s. Using Dip-C, I obtained the first 3D genome structure of a single brain cell, and created the first transcriptome and 3D genome atlas of the mouse brain during postnatal development. I found that in adults, 3D genome “structure types” delineate all major cell types, with high correlation between chromatin A/B compartments and gene expression. During development, both transcriptome and 3D genome are extensively transformed in the first month of life. In neurons, 3D genome is rewired across scales, correlated with gene expression modules, and independent of sensory experience. Finally, I examined allele-specific structure of imprinted genes, revealing local and chromosome-wide differences. More recently, I expanded my 3D genome atlas to the human and mouse cerebellum—the most consistently affected brain region in autism. I uncovered unique 3D genome rewiring throughout life, providing a structural basis for the cerebellum’s unique mode of development and aging. 
In addition, to accurately measure de novo mutations in a single cell, I developed a new method—multiplex end-tagging amplification of complementary strands (META-CS), which eliminates nearly all false positives by virtue of DNA complementarity. Using META-CS, I determined the true mutation spectrum of single human brain cells, free from chemical artifacts. Together, my findings uncovered an unknown dimension of neurodevelopment, and open up opportunities for new treatments for autism and other developmental disorders.

SeminarNeuroscience

The synaptic architecture of neuronal circuits underlying computation and cognition in the brain

Adrian Wanner
Paul Scherrer Institute, Switzerland
Mar 24, 2022
SeminarNeuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition is derived from low-level visual processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1 followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
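The spirit of a fixed V1-like front-end can be sketched as a bank of oriented Gabor filters followed by rectification. This is only the skeleton of the idea: the actual VOneNet front-end also includes simple and complex cells and neuronal stochasticity, and all filter parameters here are illustrative.

```python
import numpy as np

def gabor(size=9, theta=0.0, freq=0.25, sigma=2.0):
    """An oriented, zero-mean Gabor filter (illustrative parameters)."""
    r = np.arange(size) - size // 2
    X, Y = np.meshgrid(r, r)
    xr = X * np.cos(theta) + Y * np.sin(theta)
    g = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
    return g - g.mean()

def v1_front_end(img, n_orient=4, size=9):
    """Convolve an image with Gabors at several orientations, then rectify."""
    # sliding-window view gives a "valid" convolution without scipy
    windows = np.lib.stride_tricks.sliding_window_view(img, (size, size))
    out = []
    for k in range(n_orient):
        g = gabor(size=size, theta=k * np.pi / n_orient)
        resp = np.einsum("ijkl,kl->ij", windows, g)
        out.append(np.maximum(resp, 0.0))        # rectification
    return np.stack(out)                         # (n_orient, H-size+1, W-size+1)

rng = np.random.default_rng(4)
img = rng.random((32, 32))
features = v1_front_end(img)
```

A trainable back-end network would then consume these fixed, rectified orientation maps in place of the usual learned first convolutional layer.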

SeminarNeuroscienceRecording

Why Some Intelligent Agents are Conscious

Hakwan Lau
RIKEN CBS
Dec 2, 2021

In this talk I will present an account of how an agent designed or evolved to be intelligent may come to enjoy subjective experiences. First, the agent is stipulated to be capable of (meta)representing subjective ‘qualitative’ sensory information, in the sense that it can easily assess how exactly similar a sensory signal is to all other possible sensory signals. This information is subjective in the sense that it concerns how the different stimuli can be distinguished by the agent itself, rather than how physically similar they are. For this to happen, sensory coding needs to satisfy sparsity and smoothness constraints, which are known to facilitate metacognition and generalization. Second, this qualitative information can under some specific circumstances acquire an ‘assertoric force’. This happens when a certain self-monitoring mechanism decides that the qualitative information reliably tracks the current state of the world, and informs a general symbolic reasoning system of this fact. I will argue that having subjective conscious experiences amounts to nothing more than qualitative sensory information acquiring assertoric status within one’s belief system. When this happens, the perceptual content presents itself as reflecting the state of the world right now, in ways that seem undeniably rational to the agent. At the same time, without effort, the agent also knows what the perceptual content is like, in terms of how subjectively similar it is to all other possible percepts. I will discuss the computational benefits of this architecture, for which consciousness might have arisen as a byproduct.

SeminarNeuroscienceRecording

NMC4 Short Talk: Different hypotheses on the role of the PFC in solving simple cognitive tasks

Nathan Cloos (he/him)
Université Catholique de Louvain
Dec 1, 2021

Low-dimensional population dynamics can be observed in neural activity recorded from the prefrontal cortex (PFC) of subjects performing simple cognitive tasks. Many studies have shown that recurrent neural networks (RNNs) trained on the same tasks can qualitatively reproduce these state-space trajectories, and have used them as models of how neuronal dynamics implement task computations. The PFC is also viewed as a conductor that organizes communication between cortical areas and provides contextual information. It is therefore unclear what its role is in solving simple cognitive tasks. Do the low-dimensional trajectories observed in the PFC really correspond to the computations that it performs? Or do they indirectly reflect the computations occurring within the cortical areas projecting to the PFC? To address these questions, we modelled cortical areas with a modular RNN and equipped it with a PFC-like cognitive system. When trained on cognitive tasks, this multi-system brain model can reproduce the low-dimensional population responses observed in neuronal activity as well as classical RNNs do. Qualitatively different mechanisms can emerge from the training process when varying details of the architecture such as the time constants. In particular, there is one class of models in which the dynamics of the cognitive system implement the task computations, and another in which the cognitive system is only needed to provide contextual information about the task rule, since task performance is not impaired when the system is prevented from accessing the task inputs. These constitute two different hypotheses about the causal role of the PFC in solving simple cognitive tasks, which could motivate further experiments on the brain.

SeminarNeuroscience

Neurocognitive mechanisms of proactive temporal attention: challenging oscillatory and cortico-centered models

Assaf Breska
Max Planck Institute for Biological Cybernetics, Tübingen
Dec 1, 2021

To survive in a rapidly changing world, the brain predicts the future state of the world and proactively adjusts perception, attention and action. A key to efficient interaction is to predict and prepare not only for “where” and “what” things will happen, but also for “when”. I will present studies in healthy and neurological populations that investigated the cognitive architecture and neural basis of temporal anticipation. First, influential ‘entrainment’ models suggest that anticipation in rhythmic contexts, e.g. music or biological motion, uniquely relies on the alignment of attentional oscillations to external rhythms. Using computational modeling and EEG, I will show that cortical neural patterns previously associated with entrainment in fact overlap with interval timing mechanisms that are used in aperiodic contexts. Second, temporal prediction and attention have commonly been associated with cortical circuits. Studying neurological populations with subcortical degeneration, I will present data that point to a double dissociation between rhythm- and interval-based prediction in the cerebellum and basal ganglia, respectively, and will demonstrate a role for the cerebellum in attentional control of perceptual sensitivity in time. Finally, using EEG in neurodegenerative patients, I will demonstrate that the cerebellum controls temporal adjustment of cortico-striatal neural dynamics, and use computational modeling to identify cerebellar-controlled neural parameters. Altogether, these findings reveal functional and neural context-specificity and subcortical contributions to temporal anticipation, revising our understanding of dynamic cognition.

SeminarNeuroscienceRecording

NMC4 Keynote: An all-natural deep recurrent neural network architecture for flexible navigation

Vivek Jayaraman
Janelia Research Campus
Nov 30, 2021

A wide variety of animals and some artificial agents can adapt their behavior to changing cues, contexts, and goals. But what neural network architectures support such behavioral flexibility? Agents with loosely structured network architectures and random connections can be trained over millions of trials to display flexibility in specific tasks, but many animals must adapt and learn with much less experience just to survive. Further, it has been challenging to understand how the structure of trained deep neural networks relates to their functional properties, an important objective for neuroscience. In my talk, I will use a combination of behavioral, physiological and connectomic evidence from the fly to make the case that the built-in modularity and structure of its networks incorporate key aspects of the animal’s ecological niche, enabling rapid flexibility by constraining learning to operate on a restricted parameter set. It is not unlikely that this is also a feature of many biological neural networks across other animals, large and small, and with and without vertebrae.

SeminarNeuroscienceRecording

Synapses, Shadows and Stress Contagion

Jaideep Bains
Professor, University of Calgary, Hotchkiss Brain Institute, Department of Physiology and Pharmacology
Nov 28, 2021

Survival is predicated on the ability of an organism to respond to stress. The reliability of this response is ensured by a synaptic architecture that is relatively inflexible (i.e. hard-wired). Our work has shown that in naive animals, synapses on CRH neurons in the paraventricular nucleus of the hypothalamus are highly resistant to modification. If animals are stressed, however, these synapses become willing to learn. This seminar will focus on mechanisms linking acute stress to metaplastic changes at glutamate synapses, and will also show how stress, and these synaptic changes, can be transmitted from one individual to another.

SeminarNeuroscience

Causal Reasoning: Its role in the architecture and development of the mind

Andreas Demetriou
University of Nicosia
Nov 23, 2021

The seminar will first outline the architecture of the human mind, specifying general and domain-specific mental processes. The place of causal reasoning and its relations with the other processes will be specified. Experimental, psychometric, developmental, and brain-based evidence will be summarized. The main message of the talk is that causal thought involves domain-specific core processes rooted in perception and served by special brain networks which capture interactions between objects. With development, causal reasoning is increasingly associated with a general abstraction system which generates general principles underlying inductive, analogical, and deductive reasoning and also heuristics for specifying causal relations. These associations are discussed in some detail. Possible implications for artificial intelligence and educational implications are also discussed.

SeminarNeuroscienceRecording

The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models

Daniele Dini
Imperial College London
Nov 22, 2021

Brain microstructure plays a key role in driving the transport of drug molecules administered directly to brain tissue, as in Convection-Enhanced Delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long association and projection fibers, namely the corpus callosum, the fornix and the corona radiata. Ovine samples from three different subjects were imaged using scanning electron microscopy combined with focused ion beam milling, with a particular focus on the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) was performed, and outer axonal ellipticity, outer axonal cross-sectional area and its relative perimeter were measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material presenting elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery including well-resolved microstructural features. As a demonstration of the use of these imaging and reconstruction techniques, our research analyses the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed starting from the acquired electron microscopy images. Considering that white matter is mainly composed of elongated, parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analysed, demonstrating their anisotropic behaviour. This is in line with experimental results obtained using perfusion of brain matter [3].
Moreover, we find a significant difference between permeability in the corpus callosum and the fornix, which suggests that white matter heterogeneity should also be considered when modelling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only to drug delivery modelling but also to shedding light on interstitial transport mechanisms in the extracellular space. These and many other discoveries will be discussed during the talk.
[1] https://www.researchsquare.com/article/rs-686577/v1
[2] https://www.pnas.org/content/118/36/e2105328118
[3] https://ieeexplore.ieee.org/abstract/document/9198110

SeminarNeuroscienceRecording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 4, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but minimal effort has gone into developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback pass do not fully exploit the asynchronous event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on the development of approaches for SNNs that target specific problems like low response latency and continual learning.
References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).

SeminarNeuroscienceRecording

Norse: A library for gradient-based learning in Spiking Neural Networks

Jens Egholm Pedersen
KTH Royal Institute of Technology
Nov 2, 2021

We introduce Norse: An open-source library for gradient-based training of spiking neural networks. In contrast to neuron simulators which mainly target computational neuroscientists, our library seamlessly integrates with the existing PyTorch ecosystem using abstractions familiar to the machine learning community. This has immediate benefits in that it provides a familiar interface, hardware accelerator support and, most importantly, the ability to use gradient-based optimization. While many parallel efforts in this direction exist, Norse emphasizes flexibility and usability in three ways. Users can conveniently specify feed-forward (convolutional) architectures, as well as arbitrarily connected recurrent networks. We strictly adhere to a functional and class-based API such that neuron primitives and, for example, plasticity rules compose. Finally, the functional core API ensures compatibility with the PyTorch JIT and ONNX infrastructure. We have made progress toward supporting network execution on the SpiNNaker platform and plan to support other neuromorphic architectures in the future. While the library is useful in its present state, it also has limitations we will address in ongoing work. In particular, we aim to implement event-based gradient computation, using the EventProp algorithm, which will allow us to support sparse event-based data efficiently, as well as work towards support of more complex neuron models. With this library, we hope to contribute to a joint future of computational neuroscience and neuromorphic computing.

SeminarNeuroscienceRecording

Becoming what you smell: adaptive sensing in the olfactory system

Vijay Balasubramanian
University of Pennsylvania
Nov 2, 2021

I will argue that the circuit architecture of the early olfactory system provides an adaptive, efficient mechanism for compressing the vast space of odor mixtures into the responses of a small number of sensors. In this view, the olfactory sensory repertoire employs a disordered code to compress a high dimensional olfactory space into a low dimensional receptor response space while preserving distance relations between odors. The resulting representation is dynamically adapted to efficiently encode the changing environment of volatile molecules. I will show that this adaptive combinatorial code can be efficiently decoded by systematically eliminating candidate odorants that bind to silent receptors. The resulting algorithm for 'estimation by elimination' can be implemented by a neural network that is remarkably similar to the early olfactory pathway in the brain. Finally, I will discuss how diffuse feedback from the central brain to the bulb, followed by unstructured projections back to the cortex, can produce the convergence and divergence of the cortical representation of odors presented in shared or different contexts. Our theory predicts a relation between the diversity of olfactory receptors and the sparsity of their responses that matches animals from flies to humans. It also predicts specific deficits in olfactory behavior that should result from optogenetic manipulation of the olfactory bulb and cortex, and in some disease states.
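The 'estimation by elimination' decoding scheme described above has a simple set-based core: any candidate odorant that binds a receptor which stayed silent cannot be in the mixture. A sketch with a toy binding table (receptor and odorant names invented for illustration):

```python
# binds[r] = set of odorants that activate receptor r (toy data)
binds = {
    "R1": {"limonene", "octanol"},
    "R2": {"octanol", "geraniol"},
    "R3": {"limonene", "geraniol"},
}

def decode_by_elimination(active_receptors, binds):
    """Start from all odorants, then drop every ligand of a silent receptor."""
    candidates = set().union(*binds.values())
    for receptor, ligands in binds.items():
        if receptor not in active_receptors:
            candidates -= ligands    # silent receptor: its ligands are absent
    return candidates

# A mixture containing only limonene activates R1 and R3; R2 stays
# silent, which eliminates octanol and geraniol from the candidate set.
recovered = decode_by_elimination({"R1", "R3"}, binds)
```

Because each silent receptor rules out all of its ligands at once, a small combinatorial receptor repertoire can disambiguate a much larger odorant space, which is the intuition behind the compressed disordered code in the talk.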

ePoster

Non-feedforward architectures enable diverse multisensory computations

Marcus Ghosh, Dan Goodman

Bernstein Conference 2024

ePoster

Stochastic Process Model derived indicators of overfitting for deep architectures: Applicability to small sample recalibration of sEMG decoders

Stephan Lehmler, Muhammad Saif-Ur-Rehman, Ioannis Iossifidis

Bernstein Conference 2024

ePoster

How cerebellar architecture facilitates rapid online learning

COSYNE 2022

ePoster

Dual pathway architecture in songbirds boosts sensorimotor learning

COSYNE 2022

ePoster

Model architectures for choice-selective sequences in a navigation-based, evidence-accumulation task

COSYNE 2022

ePoster

Parallel functional architectures within a single dendritic tree

COSYNE 2022

ePoster

An emergent low-rank neural architecture for manual interception of moving targets

Yating Liu, Siqi Li, Yongxiang Xiao, He Cui, Ni Ji

COSYNE 2025

ePoster

Signal propagation dynamics across the Drosophila hemi-brain connectome reveal parallel-hierarchical sensory-cognitive-motor architecture.

Ankit Kumar, Yao Xu, Kristofer Bouchard

COSYNE 2025

ePoster

Chronic exposure to glucocorticoids during critical neurodevelopmental periods leads to lasting shifts in neuronal type distribution and overall brain architecture

Ilknur Safak Demirel, Pia Giraudet, Malgorzata Grochowicz, Anthi C. Krontira, Leander Dony, Tim Schäfer, Elisabeth Binder, Cristiana Cruceanu

FENS Forum 2024

ePoster

Combinatorial architecture of circuit neuromodulation

Nikolas Karalis, Andreas Lüthi

FENS Forum 2024

ePoster

Detailed 3D architecture of adult human organs through new tissue clearing techniques

Héloïse Policet-Betend, Tomás Jordá-Siquier, Maeva Badré, Céline Brockmann, Jasmine Abdulcadir, Christophe Lamy

FENS Forum 2024

ePoster

Effects of early life stress on mouse sleep architecture and spindle activity

Mohsin Mohammed, Malvika Sharma, Janine Micahella Contreras, Dipesh Chaudhury

FENS Forum 2024

ePoster

FOXG1 controls cellular function and tissue architecture in 2D neural rosettes and 3D cerebral organoid models of epilepsy

Oliver Davis, Dwaipayan Adhya, Wai Kit Chan, John Mason, Andras Lakatos, Srinjan Basu

FENS Forum 2024

ePoster

Functional architecture of the clitoris

Maeva Badre, Christophe Lamy, Priscilla Soulié, Jasmine Abdulcadir, Marie-Luce Bochaton-Piallat, Céline Brockmann

FENS Forum 2024

ePoster

Functional architecture of dopamine neurons driving fear extinction learning

Ximena Icaria Salinas Hernandez, Daphne Zafiri, Torfi Sigurdsson, Sevil Duvarci

FENS Forum 2024

ePoster

The impact of memory consolidation on REM sleep architecture in rodents: An insight into phasic and tonic substates

Abdelrahman (Abdel) Rayan, Irene Navarro-Lobato, Adrian Aleman Zapata, Anumita Samanta, Lisa Genzel

FENS Forum 2024

ePoster

Increased Semaphorin 3A expression levels affect axonal elongation and dendritic architecture in human neural progenitors during the early stages of differentiation

Gabriella Ferretti, Alessia Romano, Rossana Sirabella, Sara Serafini, Thorsten Jürgen Maier, Carmela Matrone

FENS Forum 2024

ePoster

Linking the microarchitecture of neurotransmitter systems to large-scale MEG resting state networks

Felix Siebenhühner, J Matias Palva, Satu Palva

FENS Forum 2024

ePoster

Navigating through the entorhinal cortex: Combining single-cell electrophysiology and RNA sequencing to advance our knowledge on the neuronal architecture

Eliška Waloschková, Attila Ozsvar, Wen-Hsien Hou, Konstantin Khodosevich, Martin Hemberg, Jan Gorodkin, Stefan Seemann, Vanessa Hall

FENS Forum 2024

ePoster

Obligatory trajectory between tractable components in the neuropsychological architecture

Suzana Gjeci, Aida Quka, Meri Papajani, Valmira Skendi, Eni Reka, Fatime Elezi, Florian Dashi

FENS Forum 2024

ePoster

The proteomic architecture of the synaptic engram supporting context memory

Biswajit Moharana, Panthea Nemat, Renee Pullen, Anna Gradl, Remco Klaassen, Cora Chadick, Rolinka van der Loo, Yvonne Gouwenberg, Frank Koopmans, Juan Garcia Vallejo, Michel van den Over, August Smit, Priyanka Rao-Ruiz

FENS Forum 2024

ePoster

Reconstructing the neural architecture of the cnidarian Nematostella vectensis to understand evolution of the nervous system

Abhishek Mishra, Alison Cole, Linda Kloẞ, Ulrich Technau

FENS Forum 2024

ePoster

Region-specific interneuron cytoarchitecture of the mouse cerebral cortex

Eleanor Paul, Elena Serafeimidou Pouliou, Giovanni Diana, Oscar Marin

FENS Forum 2024

ePoster

Rhythmicity of neuronal oscillations delineates their cortical and spectral architecture

Vladislav Myrov, Felix Siebenhühner, Joonas J Juvonen, Gabriele Arnulfo, Satu Palva, Matias Palva

FENS Forum 2024

ePoster

Rodent propionic acid model of autism: Synaptic architecture of the hippocampus and prefrontal cortex

Mzia Zhvania, Nadezhda Japaridze, Giorgi Lobzhanidze, Nino Pochkhidze, Pikria Khomasuridze

FENS Forum 2024

ePoster

A role for interoceptive vGluT2-expressing neurons in the jugular-nodose ganglion of the left vagus nerve in the regulation of sleep architecture and spectral composition

Najma Cherrad, Georgios Foustoukos, Alejandro Osorio-Forero, Romain Cardis, Nadine Eliasson, Yann Emmenegger, Laura Fernandez, Paul Franken, Anita Lüthi

FENS Forum 2024

ePoster

SATB2 organizes the 3D genome architecture of cognition in cortical neurons

Nico Wahl, Sergio Espeso-Gil, Paola Chietera, Amelie Nagel, Aodán Laighneach, Derek W. Morris, Prashanth Rajarajan, Schahram Akbarian, Georg Dechant*, Galina Apostolova*

FENS Forum 2024

ePoster

Sex-dependent BDNF-mediated effects of Fingolimod on the architecture of mouse hippocampal neurons

Aiswaria Lekshmi Kannan, Charlotte Tacke, Martin Korte, Marta Zagrebelsky

FENS Forum 2024

ePoster

Sleep architecture in C57BL/6 mice predicts anxious phenotypes: Towards a first robust animal model for postoperative delirium

Alp Altunkaya, Kim Michelle Mengel, Buket Solak, Alice Caterina Pasquini, Annabelle Bahmann, Matthias Kreuzer, Gerhard Schneider, Thomas Fenzl

FENS Forum 2024

ePoster

Solving cell-type specific 3-dimensional genome architecture in heterogeneous populations

Rikke Rejnholdt Jensen, Joaquim Ollé, Navneet A. Vasistha, Konstantin Khodosevich, Nils Krietenstein

FENS Forum 2024

ePoster

Spontaneous mesoscale calcium dynamics reflect the development of the modular functional architecture of the mouse cortex

Davide Warm, Davide Bassetti, Levente Gellèrt, Jenq-Wei Yang, Heiko J. Luhmann, Anne Sinning

FENS Forum 2024

ePoster

Synaptic and dendritic architecture of different types of hippocampal somatostatin interneurons

Áron Orosz, Virág Takács, Zsuzsanna Bardóczi, Abel Major, Luca Tar, Berki Péter, Márton I. Mayer, Hunor Sebők, Luca Zsolt, Katalin E. Sos, Szabolcs Káli, Tamás F. Freund

FENS Forum 2024

ePoster

Using cryo-electron tomography (cryo-ET) to study the molecular architecture of synapses

Thanh Thao Do, Arsen Petrovic, Rubén Fernández-Busnadiego

FENS Forum 2024

ePoster

Visualization of the intact cochlea and its architecture by newly refined light sheet fluorescence microscopy

Lennart Roos, Aleyna M. Diniz, Mostafa Aakthe, Anupriya Thirumalai, Koert Elisabeth, Jakob Neef, Bettina J. Wolf, Jan Huisken, Tobias Moser

FENS Forum 2024

ePoster

Where are the neural architectures? The curse of structural flatness in neural network modelling

Declan J Collins

Neuromatch 5