Robustness

Discover seminars, jobs, and research tagged with robustness across World Wide.
45 curated items: 30 Seminars · 8 ePosters · 7 Positions
Updated 1 day ago
45 results
Position

Pedro Goncalves

VIB-Neuroelectronics Research Flanders
Leuven
Dec 5, 2025

The Gonçalves lab is a recently founded research group at Neuro-Electronics Research Flanders (NERF), Belgium, co-affiliated with the VIB Center for AI & Computational Biology. We are currently exploring a range of exciting topics at the intersection of computational neuroscience and probabilistic machine learning. In particular, we develop machine learning methods to derive mechanistic insights from neuroscience data and apply them to challenging neuroscience problems: from the retrieval of complex input-output functions of biophysically-detailed single neurons to the full characterisation of mechanisms of compensation for perturbations in neural circuits. We work in an interdisciplinary, collaborative, and supportive work environment, which emphasizes diversity and inclusion. NERF is a joint research initiative by imec, VIB and KU Leuven. We are looking for PhD and postdoc candidates interested in developing machine learning methods and applying them to neuroscience problems. There will be flexibility to customise the project and ample opportunities to collaborate with top experimental and theoretical partners locally and internationally. More details about the positions and the lab can be found at https://jobso.id/hz2b and https://jobso.id/hz2e

Position

N/A

Donders Centre for Cognition, Donders Institute for Brain, Cognition and Behaviour, School of Artificial Intelligence at Radboud University Nijmegen
Radboud University Nijmegen
Dec 5, 2025

The AI Department of the Donders Centre for Cognition (DCC), embedded in the Donders Institute for Brain, Cognition and Behaviour, and the School of Artificial Intelligence at Radboud University Nijmegen are looking for a researcher in reinforcement learning with an emphasis on safety and robustness, an interest in natural computing, and an interest in applications in neurotechnology and other domains such as robotics, healthcare, and/or sustainability. You will be expected to perform top-quality research in (deep) reinforcement learning, actively contribute to the DBI2 consortium, interact and collaborate with other researchers and specialists in academia and/or industry, and be an inspiring member of our staff with excellent communication skills. You are also expected to engage with students through teaching and supervision of master's projects, not exceeding 20% of your time.

Position

Tina Eliassi-Rad

RADLAB, Northeastern University’s Network Science Institute
Northeastern University’s Network Science Institute
Dec 5, 2025

The RADLAB at Northeastern University’s Network Science Institute has two postdoctoral positions available. We are looking for exceptional candidates who are interested in the following programs: 1. Trustworthy Network Science: As the use of machine learning in network science grows, so do the issues of stability, robustness, explainability, transparency, and fairness, to name a few. We address issues of trustworthy ML in network science. 2. Just Machine Learning: Machine learning systems are not islands. They are part of broader complex systems. To understand and mitigate the risks and harms of using machine learning, we remove our optimization blinders and study the broader complex systems in which machine learning systems operate.

Position · Artificial Intelligence

N/A

Dalle Molle Institute for Artificial Intelligence (IDSIA)
Lugano, Switzerland
Dec 5, 2025

The PhD research focuses on the fairness, explainability, and robustness of machine learning systems within the framework of causal counterfactual analysis using formalisms from probabilistic graphical models, probabilistic circuits, and structural causal models.

Position

Thomas Krak

Uncertainty in Artificial Intelligence (UAI) group, Data and AI (DAI) cluster, Eindhoven University of Technology
Eindhoven University of Technology
Dec 5, 2025

The Uncertainty in Artificial Intelligence (UAI) group is looking for a highly motivated and skilled PhD candidate to work in the area of probabilistic machine learning. The position is fully funded for a term of four years. The research direction will be determined together with the successful candidate and in line with the NWO Perspectief Project Personalised Care in Oncology (www.personalisedcareinoncology.nl). The research topics may include, but are not restricted to: Probabilistic graphical models (Markov, Bayesian, credal networks), Causality: Theory and application, Cautious AI, including imprecise probabilities, Robust stochastic processes, Tractable models and decision-making, Online/continual learning with evolving data.

Position

Kerstin Ritter

Hertie Institute for AI in Brain Health, Medical Faculty of the University of Tübingen
Tübingen, Germany
Dec 5, 2025

The Department of Machine Learning for Clinical Neuroscience is currently recruiting PhD candidates and Postdocs. We develop advanced machine and deep learning models to analyze diverse clinical data, including neuroimaging, psychometric, clinical, smartphone, and omics datasets. While focusing on methodological challenges (explainability, robustness, multimodal data integration, causality etc.), the main goal is to enhance early diagnosis, predict disease progression, and personalize treatment for neurological and psychiatric diseases in diverse clinical settings. We offer an exciting and supportive environment with access to state-of-the-art compute facilities, mentoring and career advice through experienced faculty. Hertie AI closely collaborates with the world-class AI ecosystem in Tübingen (e.g. Cyber Valley, Cluster of Excellence “Machine Learning in Science”, Tübingen AI Center).

Seminar · Neuroscience

Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics

Shaul Druckmann
Stanford Department of Neurobiology and Department of Psychiatry and Behavioral Sciences
Apr 22, 2025

Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can be potentially addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit was set up to make informative dimensions stiff, i.e., resistive to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, we will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.

Seminar · Neuroscience

Exploring the cerebral mechanisms of acoustically-challenging speech comprehension - successes, failures and hope

Alexis Hervais-Adelman
University of Geneva
May 20, 2024

Comprehending speech under acoustically challenging conditions is an everyday task that we can often execute with ease. However, accomplishing this requires the engagement of cognitive resources, such as auditory attention and working memory. The mechanisms that contribute to the robustness of speech comprehension are of substantial interest in the context of mild to moderate hearing impairment, in which affected individuals typically report specific difficulties in understanding speech in background noise. Although hearing aids can help to mitigate this, they do not represent a universal solution; thus, finding alternative interventions is necessary. Given that age-related hearing loss (“presbycusis”) is inevitable, developing new approaches is all the more important in the context of aging populations. Moreover, untreated hearing loss in middle age has been identified as the most significant potentially modifiable predictor of dementia in later life. I will present research that has used a multi-methodological approach (fMRI, EEG, MEG and non-invasive brain stimulation) to try to elucidate the mechanisms that comprise the cognitive “last mile” of acoustically challenging speech comprehension and to find ways to enhance them.

Seminar · Neuroscience · Recording

Event-related frequency adjustment (ERFA): A methodology for investigating neural entrainment

Mattia Rosso
Ghent University, IPEM Institute for Systematic Musicology
Nov 28, 2023

Neural entrainment has become a phenomenon of exceptional interest to neuroscience, given its involvement in rhythm perception, production, and overt synchronized behavior. Yet, traditional methods fail to quantify neural entrainment due to a misalignment with its fundamental definition (e.g., see Novembre and Iannetti, 2018; Rajendran and Schnupp, 2019). The definition of entrainment assumes that endogenous oscillatory brain activity undergoes dynamic frequency adjustments to synchronize with environmental rhythms (Lakatos et al., 2019). Following this definition, we recently developed a method sensitive to this process. Our aim was to isolate from the electroencephalographic (EEG) signal an oscillatory component that is attuned to the frequency of a rhythmic stimulation, hypothesizing that the oscillation would adaptively speed up and slow down to achieve stable synchronization over time. To induce and measure these adaptive changes in a controlled fashion, we developed the event-related frequency adjustment (ERFA) paradigm (Rosso et al., 2023). A total of twenty healthy participants took part in our study. They were instructed to tap their finger synchronously with an isochronous auditory metronome, which was unpredictably perturbed by phase-shifts and tempo-changes in both positive and negative directions across different experimental conditions. EEG was recorded during the task, and ERFA responses were quantified as changes in instantaneous frequency of the entrained component. Our results indicate that ERFAs track the stimulus dynamics in accordance with the perturbation type and direction, preferentially for a sensorimotor component. The clear and consistent patterns confirm that our method is sensitive to the process of frequency adjustment that defines neural entrainment. In this Virtual Journal Club, the discussion of our findings will be complemented by methodological insights beneficial to researchers in the fields of rhythm perception and production, as well as timing in general. We discuss the dos and don’ts of using instantaneous frequency to quantify oscillatory dynamics, the advantages of adopting a multivariate approach to source separation, the robustness against the confounder of responses evoked by periodic stimulation, and provide an overview of domains and concrete examples where the methodological framework can be applied.
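
The central quantity here, the instantaneous frequency of an entrained component, can be estimated from the analytic signal. Below is a minimal sketch assuming a generic band-pass plus Hilbert-transform pipeline; the sampling rate, band edges, and toy signal are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Toy "entrained" signal: a 2 Hz oscillation that steps up to 2.2 Hz at
# t = 5 s, mimicking frequency adjustment after a tempo change.
freq = np.where(t < 5, 2.0, 2.2)
x = np.sin(2 * np.pi * np.cumsum(freq) / fs) + 0.1 * np.random.randn(t.size)

# Band-pass around the stimulation rate, then take the analytic signal.
b, a = butter(4, [1.0, 4.0], btype="bandpass", fs=fs)
analytic = hilbert(filtfilt(b, a, x))

# Instantaneous frequency = time derivative of unwrapped phase / (2 * pi).
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(inst_freq[: int(4 * fs)].mean())   # ~2.0 Hz before the change
print(inst_freq[int(6 * fs):].mean())    # ~2.2 Hz after the change
```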

Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as it requires additional non-linearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely implement higher-order polynomials with coupled networks that use only pair-wise multiplicative synapses, and provide expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
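
For orientation, here is a minimal sketch of the standard (linear) SCN that the mSCN extends, tracking a 1-D leaky integrator. Network size, decoder scale, and input drive are illustrative assumptions, and the multiplicative synapses themselves are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, lam = 20, 1e-3, 10.0            # neurons, time step (s), leak rate
D = rng.choice([-1.0, 1.0], N) * 0.1   # decoding weights (readout kicks)
T = D ** 2 / 2                         # SCN spike thresholds

steps = 2000
c = 20.0 * np.sin(2 * np.pi * np.arange(steps) * dt)  # command input c(t)
x, x_hat = 0.0, 0.0                    # target state and network estimate
V = np.zeros(N)                        # voltages encode V_i = D_i (x - x_hat)
err = np.zeros(steps)
for k in range(steps):
    x += dt * (-lam * x + c[k])        # target dynamics: x' = -lam x + c
    V += dt * (-lam * V + D * c[k])    # voltage dynamics from the derivation
    j = np.argmax(V - T)               # greedy spike rule
    if V[j] > T[j]:
        V -= D * D[j]                  # fast recurrent connection -D D^T
        x_hat += D[j]                  # readout jumps by the decoding weight
    x_hat += dt * (-lam * x_hat)       # readout leak
    err[k] = abs(x - x_hat)
print("mean tracking error:", err.mean())   # stays within ~|D| of the target
```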

Seminar · Neuroscience · Recording

Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics

Robert Guetig
Charité – Universitätsmedizin Berlin & BIH
Mar 8, 2022

Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Throughout the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky-integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses, and to the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter. Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron’s decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
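
As background, here is a minimal sketch of the classic static-synapse tempotron that the dynamic-synapse rules generalize: a leaky-integrate-and-fire voltage is built from PSP kernels, and on error trials the weights are nudged by the kernel values at the time of maximal voltage. The kernel constants and the toy task are assumptions, not the setup of this work.

```python
import numpy as np

tau, tau_s = 15.0, 3.75                       # membrane / synaptic times (ms)
t_peak = tau * tau_s / (tau - tau_s) * np.log(tau / tau_s)
V0 = 1.0 / (np.exp(-t_peak / tau) - np.exp(-t_peak / tau_s))  # peak = 1

def kernel(t):                                # causal PSP kernel K(t)
    return np.where(t > 0, V0 * (np.exp(-t / tau) - np.exp(-t / tau_s)), 0.0)

def voltage(w, spikes, t_grid):               # V(t) = sum_i w_i sum_s K(t - t_s)
    return sum(w[i] * kernel(t_grid[:, None] - s[None, :]).sum(axis=1)
               for i, s in enumerate(spikes))

rng = np.random.default_rng(1)
n_aff, T, theta, lr = 50, 500.0, 1.0, 1e-3    # afferents, window (ms), threshold
t_grid = np.arange(0.0, T, 1.0)
patterns = [[np.sort(rng.uniform(0, T, 5)) for _ in range(n_aff)]
            for _ in range(10)]               # 10 random input spike patterns
labels = rng.integers(0, 2, 10)               # binary target labels
w = rng.normal(0.0, 1e-2, n_aff)

for epoch in range(200):
    for spikes, y in zip(patterns, labels):
        V = voltage(w, spikes, t_grid)
        t_max = t_grid[np.argmax(V)]
        if (V.max() > theta) != bool(y):      # error trial: nudge the PSPs
            sign = 1.0 if y else -1.0         # potentiate misses, depress FAs
            for i, s in enumerate(spikes):
                w[i] += sign * lr * kernel(t_max - s).sum()

errors = sum((voltage(w, s, t_grid).max() > theta) != bool(y)
             for s, y in zip(patterns, labels))
print("training errors after learning:", errors)
```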

Seminar · Neuroscience

How does a neuron decide when and where to make a synapse?

Peter R. Hiesinger
Free University, Berlin, Germany
Feb 15, 2022

Precise synaptic connectivity is a prerequisite for the function of neural circuits, yet individual neurons, taken out of their developmental context, readily form unspecific synapses. How does genetically encoded brain wiring deal with this apparent contradiction? Brain wiring is a developmental growth process that is not only characterized by precision, but also by flexibility and robustness. As in any other growth process, cellular interactions are restricted in space and time. Correspondingly, molecular and cellular interactions are restricted to those that 'get to see' each other during development. This seminar will explore the question of how neurons decide when and where to make synapses, using the Drosophila visual system as a model. New findings reveal that pattern formation during growth and the kinetics of live neuronal interactions restrict synapse formation and partner choice for neurons that are not otherwise prevented from making incorrect synapses in this system. For example, cell biological mechanisms like autophagy as well as developmental temperature restrict inappropriate partner choice through a process of kinetic exclusion that critically contributes to wiring specificity. The seminar will explore these and other neuronal strategies for when and where to make synapses during developmental growth that contribute to precise, flexible and robust outcomes in brain wiring.

Seminar · Neuroscience · Recording

Structure, Function, and Learning in Distributed Neuronal Networks

SueYeon Chung
Flatiron Institute/NYU
Jan 25, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.

Seminar · Neuroscience

What does the primary visual cortex tell us about object recognition?

Tiago Marques
MIT
Jan 23, 2022

Object recognition relies on the complex visual representations in cortical areas at the top of the ventral stream hierarchy. While these are thought to be derived from low-level stages of visual processing, this has not yet been shown. Here, I describe the results of two projects exploring the contributions of primary visual cortex (V1) processing to object recognition using artificial neural networks (ANNs). First, we developed hundreds of ANN-based V1 models and evaluated how their single neurons approximate those in the macaque V1. We found that, for some models, single neurons in intermediate layers are similar to their biological counterparts, and that the distributions of their response properties approximately match those in V1. Furthermore, we observed that models that better matched macaque V1 were also more aligned with human behavior, suggesting that object recognition is derived from low-level processing. Motivated by these results, we then studied how an ANN’s robustness to image perturbations relates to its ability to predict V1 responses. Despite their high performance in object recognition tasks, ANNs can be fooled by imperceptibly small, explicitly crafted perturbations. We observed that ANNs that better predicted V1 neuronal activity were also more robust to adversarial attacks. Inspired by this, we developed VOneNets, a new class of hybrid ANN vision models. Each VOneNet contains a fixed neural network front-end that simulates primate V1 followed by a neural network back-end adapted from current computer vision models. After training, VOneNets were substantially more robust, outperforming state-of-the-art methods on a set of perturbations. While current neural network architectures are arguably brain-inspired, these results demonstrate that more precisely mimicking just one stage of the primate visual system leads to new gains in computer vision applications and results in better models of the primate ventral stream and object recognition behavior.
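
Adversarial evaluations of the kind mentioned here typically use small gradient-based image perturbations. Below is a minimal sketch of one such probe (FGSM), assuming a user-supplied PyTorch `model` and data `loader`; the epsilon value is an illustrative choice, not the one used in this work.

```python
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, loader, epsilon=2 / 255, device="cpu"):
    """Classification accuracy under a one-step FGSM attack."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(loss, x)
        # One signed-gradient step, clipped to the valid image range.
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()
        pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```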

Seminar · Neuroscience

Brain chart for the human lifespan

Richard Bethlehem
Director of Neuroimaging, Autism Research Centre, University of Cambridge, United Kingdom
Jan 18, 2022

Over the past few decades, neuroimaging has become a ubiquitous tool in basic research and clinical studies of the human brain. However, no reference standards currently exist to quantify individual differences in neuroimaging metrics over time, in contrast to growth charts for anthropometric traits such as height and weight. Here, we built an interactive resource to benchmark brain morphology, www.brainchart.io, derived from any current or future sample of magnetic resonance imaging (MRI) data. With the goal of basing these reference charts on the largest and most inclusive dataset available, we aggregated 123,984 MRI scans from 101,457 participants aged from 115 days post-conception through 100 postnatal years, across more than 100 primary research studies. Cerebrum tissue volumes and other global or regional MRI metrics were quantified by centile scores, relative to non-linear trajectories of brain structural changes, and rates of change, over the lifespan. Brain charts identified previously unreported neurodevelopmental milestones; showed high stability of individual centile scores over longitudinal assessments; and demonstrated robustness to technical and methodological differences between primary studies. Centile scores showed increased heritability compared to non-centiled MRI phenotypes, and provided a standardised measure of atypical brain structure that revealed patterns of neuroanatomical variation across neurological and psychiatric disorders. In sum, brain charts are an essential first step towards robust quantification of individual deviations from normative trajectories in multiple, commonly-used neuroimaging phenotypes. Our collaborative study proves the principle that brain charts are achievable on a global scale over the entire lifespan, and applicable to analysis of diverse developmental and clinical effects on human brain structure.
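
To make the centile idea concrete: an individual scan's metric is referred to the normative distribution at the same age. The sketch below assumes Gaussian centiles around arbitrary stand-in normative curves; the study itself fits flexible non-linear trajectories rather than the toy functions used here.

```python
import numpy as np
from scipy.stats import norm

def centile(value, age, median_fn, sd_fn):
    """Centile of an MRI metric relative to assumed normative age curves."""
    z = (value - median_fn(age)) / sd_fn(age)
    return 100.0 * norm.cdf(z)

# Arbitrary stand-in normative curves (NOT the study's fitted models).
median_fn = lambda age: 1200 - 2.0 * age      # e.g. cerebrum volume (cm^3)
sd_fn = lambda age: 80.0 + 0.2 * age

print(centile(1150, 40, median_fn, sd_fn))    # centile of one scan at age 40
```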

Seminar · Neuroscience · Recording

Self-organized formation of discrete grid cell modules from smooth gradients

Sarthak Chandra
Fiete lab, MIT
Nov 2, 2021

Modular structures in myriad forms — genetic, structural, functional — are ubiquitous in the brain. While modularization may be shaped by genetic instruction or extensive learning, the mechanisms of module emergence are poorly understood. Here, we explore complementary mechanisms in the form of bottom-up dynamics that push systems spontaneously toward modularization. As a paradigmatic example of modularity in the brain, we focus on the grid cell system. Grid cells of the mammalian medial entorhinal cortex (mEC) exhibit periodic lattice-like tuning curves in their encoding of space as animals navigate the world. Nearby grid cells have identical lattice periods, but at larger separations along the long axis of mEC the period jumps in discrete steps so that the full set of periods clusters into 5-7 discrete modules. These modules endow the grid code with many striking properties such as an exponential capacity to represent space and unprecedented robustness to noise. However, the formation of discrete modules is puzzling given that biophysical properties of mEC stellate cells (including inhibitory inputs from PV interneurons, time constants of EPSPs, intrinsic resonance frequency and differences in gene expression) vary smoothly in continuous topographic gradients along the mEC. How does discreteness in grid modules arise from continuous gradients? We propose a novel mechanism involving two simple types of lateral interaction that leads a continuous network to robustly decompose into discrete functional modules. We show analytically that this mechanism is a generic multi-scale linear instability that converts smooth gradients into discrete modules via a topological “peak selection” process. Further, this model generates detailed predictions about the sequence of adjacent period ratios, and explains existing grid cell data better than previous models. Thus, we contribute a robust new principle for bottom-up module formation in biology, and show that it might be leveraged by grid cells in the brain.

Seminar · Neuroscience · Recording

Understanding the role of neural heterogeneity in learning

Nicolas Perez-Nieves
Imperial College London
Nov 1, 2021

The brain has a hugely diverse and heterogeneous nature. The exact role of heterogeneity has been relatively little explored as most neural models tend to be largely homogeneous. We trained spiking neural networks with varying degrees of heterogeneity on complex real-world tasks and found that heterogeneity resulted in more stable and robust training and improved training performance, especially for tasks with a higher temporal structure. Moreover, the optimal distribution of parameters found by training was found to be similar to experimental observations. These findings suggest that heterogeneity is not simply a result of noisy biological processes, but it may play a crucial role for learning in complex, changing environments.

Seminar · Physics of Life · Recording

Making connections: how epithelial tissues guarantee folding

Hannah Yevick
MIT
Oct 24, 2021

Tissue folding is a ubiquitous shape change event during development whereby a cell sheet bends into a curved 3D structure. This mechanical process is remarkably robust, and the correct final form is almost always achieved despite internal fluctuations and external perturbations inherent in living systems. While many genetic and molecular strategies that lead to robust development have been established, much less is known about how mechanical patterns and movements are ensured at the population level. I will describe how quantitative imaging, physical modeling and concepts from network science can uncover collective interactions that govern tissue patterning and shape change. Actin and myosin are two important cytoskeletal proteins involved in the force generation and movement of cells. Both parts of this talk will be about the spontaneous organization of actomyosin networks and their role in collective tissue dynamics. First, I will present how out-of-plane curvature can trigger the global alignment of actin fibers and a novel transition from collective to individual cell migration in culture. I will then describe how tissue-scale cytoskeletal patterns can guide tissue folding in the early fruit fly embryo. I will show that actin and myosin organize into a network that spans a domain of the embryo that will fold. Redundancy in this supracellular network encodes the tissue’s intrinsic robustness to mechanical and molecular perturbations during folding.

Seminar · Neuroscience

Co-tuned, balanced excitation and inhibition in olfactory memory networks

Claire Meissner-Bernard
Friedrich lab, Friedrich Miescher Institute, Basel, Switzerland
May 19, 2021

Odor memories are exceptionally robust and essential for the survival of many species. In rodents, the olfactory cortex shows features of an autoassociative memory network and plays a key role in the retrieval of olfactory memories (Meissner-Bernard et al., 2019). Interestingly, the telencephalic area Dp, the zebrafish homolog of olfactory cortex, transiently enters a state of precise balance during the presentation of an odor (Rupprecht and Friedrich, 2018). This state is characterized by large synaptic conductances (relative to the resting conductance) and by co-tuning of excitation and inhibition in odor space and in time at the level of individual neurons. Our aim is to understand how this precise synaptic balance affects memory function. For this purpose, we build a simplified, yet biologically plausible spiking neural network model of Dp using experimental observations as constraints: besides precise balance, key features of Dp dynamics include low firing rates, odor-specific population activity and a dominance of recurrent inputs from Dp neurons relative to afferent inputs from neurons in the olfactory bulb. To achieve co-tuning of excitation and inhibition, we introduce structured connectivity by increasing connection probabilities and/or strength among ensembles of excitatory and inhibitory neurons. These ensembles are therefore structural memories of activity patterns representing specific odors. They form functional inhibitory-stabilized subnetworks, as identified by the “paradoxical effect” signature (Tsodyks et al., 1997): inhibition of inhibitory “memory” neurons leads to an increase of their activity. We investigate the benefits of co-tuning for olfactory and memory processing, by comparing inhibitory-stabilized networks with and without co-tuning. We find that co-tuned excitation and inhibition improves robustness to noise, pattern completion and pattern separation. In other words, retrieval of stored information from partial or degraded sensory inputs is enhanced, which is relevant in light of the instability of the olfactory environment. Furthermore, in co-tuned networks, odor-evoked activation of stored patterns does not persist after removal of the stimulus and may therefore subserve fast pattern classification. These findings provide valuable insights into the computations performed by the olfactory cortex, and into general effects of balanced state dynamics in associative memory networks.
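
A minimal sketch of the structured-connectivity step described above: starting from random E-I connectivity and boosting within-ensemble connection probabilities on the E-to-E, E-to-I, and I-to-E pathways, so that a stored pattern recruits matched excitation and inhibition. All sizes and probabilities are illustrative assumptions, and no network dynamics are simulated here.

```python
import numpy as np

rng = np.random.default_rng(0)
nE, nI, p, boost = 400, 100, 0.05, 5.0        # sizes and probabilities assumed
ensE, ensI = np.arange(40), np.arange(10)     # cells assigned to one "odor"

def rand_conn(n_post, n_pre, prob):
    """Random binary connectivity matrix, post x pre."""
    return (rng.random((n_post, n_pre)) < prob).astype(float)

W_EE = rand_conn(nE, nE, p)                   # E -> E
W_IE = rand_conn(nI, nE, p)                   # E -> I
W_EI = rand_conn(nE, nI, p)                   # I -> E

# Boost within-ensemble connection probability on all three pathways, so
# the stored pattern drives co-tuned excitation and inhibition.
W_EE[np.ix_(ensE, ensE)] = rand_conn(len(ensE), len(ensE), p * boost)
W_IE[np.ix_(ensI, ensE)] = rand_conn(len(ensI), len(ensE), p * boost)
W_EI[np.ix_(ensE, ensI)] = rand_conn(len(ensE), len(ensI), p * boost)

print("within-ensemble E->E density:", W_EE[np.ix_(ensE, ensE)].mean())
print("background E->E density:", W_EE[40:, 40:].mean())
```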

Seminar · Physics of Life · Recording

Frustrated Self-Assembly of Non-Euclidean Crystals of Nanoparticles

Xiaoming Mao
University of Michigan
Apr 13, 2021

Self-organized complex structures in nature, e.g., viral capsids, hierarchical biopolymers, and bacterial flagella, offer efficiency, adaptability, robustness, and multi-functionality. Can we program the self-assembly of three-dimensional (3D) complex structures using simple building blocks, and reach a similar or higher level of sophistication in engineered materials? Here we present an analytic theory for the self-assembly of polyhedral nanoparticles (NPs) based on their crystal structures in non-Euclidean space. We show that the unavoidable geometrical frustration of these particle shapes, combined with competing attractive and repulsive interparticle interactions, leads to controllable self-assembly of structures of complex order. Applying this theory to tetrahedral NPs, we find high-yield and enantiopure self-assembly of helicoidal ribbons, exhibiting qualitative agreement with experimental observations. We expect that this theory will offer a general framework for the self-assembly of simple polyhedral building blocks into rich complex morphologies with new material capabilities such as tunable optical activity, essential for multiple emerging technologies.

Seminar · Neuroscience

Neural circuit parameter variability, robustness, and homeostasis

Astrid Prinz
Emory University
Mar 11, 2021

Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on the underlying parameters, such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, ion channel expression levels across populations of neurons and animals, and computational parameter space explorations. A key finding centers on experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can be beneficial for constraining neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial parameter correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces to stably remain within their solution space and reliably produce functional output, or to return to their solution space after perturbations that temporarily disrupt proper neuron or network function.
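
A toy version of the computational parameter-space exploration described above: sample parameters at random, simulate, keep the subset producing functional (here, oscillatory) output as the "solution space", and then look for parameter correlations within it. The FitzHugh-Nagumo model and the crude oscillation criterion are stand-ins for the conductance-based models actually used in this line of work.

```python
import numpy as np

def simulate(b, I, dt=0.05, steps=4000):
    """Euler-integrate a FitzHugh-Nagumo neuron; return the voltage trace."""
    v, w = -1.0, 0.0
    vs = np.empty(steps)
    for k in range(steps):
        dv = v - v ** 3 / 3 - w + I
        dw = 0.08 * (v + 0.7 - b * w)
        v, w = v + dt * dv, w + dt * dw
        vs[k] = v
    return vs

rng = np.random.default_rng(0)
samples = rng.uniform([0.2, 0.0], [1.2, 1.0], size=(500, 2))   # (b, I) pairs
# "Functional" = sustained oscillation after discarding the transient.
solutions = np.array([(b, I) for b, I in samples
                      if np.ptp(simulate(b, I)[2000:]) > 1.0])
print(len(solutions), "of", len(samples), "parameter sets are functional")
print("corr(b, I) within the solution space:",
      np.corrcoef(solutions.T)[0, 1])
```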

Seminar · Physics of Life

Opposing motors provide mechanical and functional robustness in the human spindle

Sophie Dumont
University of California, San Francisco
Feb 11, 2021

Seminar · Neuroscience

Top-down Modulation in Human Visual Cortex

Mohamed Abdelhack
Washington University in St. Louis
Dec 16, 2020

Human vision flaunts a remarkable ability to recognize objects in the surrounding environment even in the absence of complete visual representation of these objects. This process is done almost intuitively, and it was not until scientists had to tackle this problem in computer vision that they noticed its complexity. While current artificial vision systems have made great strides, exceeding human level on normal vision tasks, they have yet to achieve a similar level of robustness. One cause of this robustness is the extensive connectivity that is not limited to a feedforward hierarchical pathway, similar to the current state-of-the-art deep convolutional neural networks, but also comprises recurrent and top-down connections. These connections allow the human brain to enhance the neural representations of degraded images in concordance with meaningful representations stored in memory. The mechanisms by which these different pathways interact are still not understood. In this seminar, studies concerning the effect of recurrent and top-down modulation on the neural representations resulting from viewing blurred images will be presented. Those studies attempted to uncover the role of recurrent and top-down connections in human vision. The results presented challenge the notion of predictive coding as a mechanism for top-down modulation of visual information during natural vision. They show that neural representation enhancement (sharpening) appears to be a more dominant process across different levels of the visual hierarchy. They also show that inference in visual recognition is achieved through a Bayesian process between incoming visual information and priors from deeper processing regions in the brain.

Seminar · Neuroscience · Recording

Motor Cortex in Theory and Practice

Mark Churchland
Columbia University, New York
Nov 29, 2020

A central question in motor physiology has been whether motor cortex activity resembles muscle activity, and if not, why not? Over fifty years, extensive observations have failed to provide a concise answer, and the topic remains much debated. To provide a different perspective, we employed a novel behavioral paradigm that affords extensive comparison between time-evolving neural and muscle activity. Single motor-cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Remarkably, we were able to predict motor cortex activity from muscle activity alone, by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling. Our results argue that motor cortex embeds descending commands in additional structure that ensures low tangling, and thus noise robustness. The dominant structure in motor cortex may thus serve not a representational function (encoding specific variables) but a computational function: ensuring that outgoing commands can be generated reliably. Our results establish the utility of an emerging approach: understanding the structure of neural activity based on properties of population geometry that flow from normative principles such as noise robustness.
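
Trajectory tangling in this line of work is commonly quantified as Q(t) = max over t' of ||x'(t) - x'(t')||^2 / (||x(t) - x(t')||^2 + eps), computed on a (time x neurons) matrix of population activity. A minimal sketch, with toy trajectories standing in for neural data and an assumed scaling for eps:

```python
import numpy as np

def tangling(X, dt, eps=None):
    """Q(t) for each time point of a (time x neurons) activity matrix X."""
    Xdot = np.gradient(X, dt, axis=0)
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()   # small constant scaled to the data
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((Xdot[:, None, :] - Xdot[None, :, :]) ** 2).sum(-1)
    return (d_deriv / (d_state + eps)).max(axis=1)

# A smooth circle (low tangling) vs. a figure-eight whose crossing point
# forces similar states to have dissimilar derivatives (high tangling).
t = np.linspace(0, 2 * np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
eight = np.stack([np.sin(t), np.sin(t) * np.cos(t)], axis=1)
dt = t[1] - t[0]
print(tangling(circle, dt).max(), tangling(eight, dt).max())
```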

Seminar · Neuroscience

Leveraging olfaction to understand how the brain and the body generate social behavior

Lisa Stowers
Scripps Research Institute
Nov 29, 2020

Courtship behavior is an innate model for many types of brain computations including sensory detection, learning and memory, and internal state modulation. Despite the robustness of the behavior, we have little understanding of the underlying neural circuits and mechanisms. The Stowers lab is leveraging the ability of specialized olfactory cues, pheromones, to specifically activate and therefore identify and study courtship circuits in the mouse. We are interested in identifying general circuit principles (specific brain nodes and information flow) that are common to all individuals, in order to additionally study how experience, gender, age, and internal state modulate and personalize behavior. We are solving two parallel sensory-to-motor courtship circuits, which promote social vocal calling and scent marking, to study information processing of behavior as a complete unit instead of restricting focus to a single brain region. We expect that comparing and contrasting the coding logic of two courtship motor behaviors will begin to shed light on general principles of how the brain senses context, weighs experience, and responds to internal state to ultimately decide appropriate action.

Seminar · Neuroscience · Recording

Multitask performance in humans and deep neural networks

Christopher Summerfield
University of Oxford
Nov 24, 2020

Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning), which trade off learning speed for robustness and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.

Seminar · Physics of Life · Recording

Building a synthetic cell: Understanding the clock design and function

Qiong Yang
U Michigan - Ann Arbor
Oct 19, 2020

Clock networks containing the same central architectures may vary drastically in their potential to oscillate, raising the question of what controls robustness, one of the essential functions of an oscillator. We computationally generated an atlas of oscillators and found that, while core topologies are critical for oscillations, local structures substantially modulate the degree of robustness. Strikingly, two local structures, incoherent and coherent inputs, can modify a core topology to promote and attenuate its robustness, additively. The findings underscore the importance of local modifications to the performance of the whole network, and may explain why auxiliary structures not required for oscillations are evolutionarily conserved. We also extend this computational framework to search hidden network motifs for other clock functions, such as tunability, which relates to the capability of a clock to adjust timing to external cues. Experimentally, we developed an artificial cell system in water-in-oil microemulsions, within which we reconstitute mitotic cell cycles that can perform self-sustained oscillations for 30 to 40 cycles over multiple days. The oscillation profiles, such as period, amplitude, and shape, can be quantitatively varied with the concentrations of clock regulators, energy levels, droplet sizes, and circuit design. Such innate flexibility makes this system well suited to studying the clock functions of tunability and stochasticity at the single-cell level. Combined with a pressure-driven multi-channel tuning setup and long-term time-lapse fluorescence microscopy, this system enables a high-throughput exploration of a multi-dimensional continuous parameter space and single-cell analysis of clock dynamics and functions. We integrate this experimental platform with mathematical modeling to elucidate the topology-function relation of biological clocks. With FRET and optogenetics, we also investigate spatiotemporal cell-cycle dynamics in both homogeneous and heterogeneous microenvironments by reconstructing subcellular compartments.
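
A toy estimate in the spirit of the oscillator atlas: score a topology's robustness as the fraction of random parameter sets for which it still oscillates. The three-node repressilator, parameter ranges, and oscillation test below are illustrative assumptions, not the networks screened in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

def repressilator(t, x, beta, n):
    """Three-gene ring of repressors with unit degradation rates."""
    f = lambda r: beta / (1 + r ** n)
    return [f(x[2]) - x[0], f(x[0]) - x[1], f(x[1]) - x[2]]

rng = np.random.default_rng(0)
trials, hits = 200, 0
for _ in range(trials):
    beta, n = rng.uniform(1, 20), rng.uniform(1, 4)   # random parameter set
    sol = solve_ivp(repressilator, (0, 200), [1.0, 1.5, 2.0],
                    args=(beta, n), t_eval=np.linspace(100, 200, 500))
    x0 = sol.y[0]                                     # post-transient trace
    if x0.max() - x0.min() > 0.1 * x0.mean():         # crude oscillation test
        hits += 1
print("robustness ~", hits / trials)   # fraction of oscillating parameter sets
```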

Seminar · Neuroscience

Neural mechanisms of proprioception and motor control in Drosophila

John Tuthill
University of Washington
May 12, 2020

Animals rely on an internal sense of body position and movement to effectively control motor behaviour. This sense of proprioception is mediated by diverse populations of internal mechanosensory neurons distributed throughout the body. My lab is trying to understand how proprioceptive stimuli are detected by sensory neurons, integrated and transformed in central circuits, and used to guide motor output. We approach these questions using genetic tools, in vivo two-photon imaging, and patch-clamp electrophysiology in Drosophila. We recently found that the axons of fly leg proprioceptors are organized into distinct functional projections that contain topographic representations of specific kinematic features: one group of axons encodes tibia position, another encodes movement direction, and a third encodes bidirectional movement and vibration frequency. Whole-cell recordings from downstream neurons reveal that position, movement, and directional information remain segregated in central circuits. These feedback signals then converge upon motor neurons that control leg muscles during walking. Overall, our findings reveal how a low-dimensional stimulus – the angle of a single leg joint – is encoded by a diverse population of mechanosensory neurons. Specific proprioceptive parameters are initially processed by parallel pathways, but are ultimately integrated to influence motor output. This architecture may help to maximize information transmission, processing speed, and robustness, which are critical for feedback control of the limbs during adaptive locomotion.

ePoster

Beyond accuracy: robustness and generalization properties of biologically plausible learning rules

COSYNE 2022

ePoster

Multiple bumps can enhance robustness to noise in continuous attractor networks

COSYNE 2022

ePoster

Predictive dynamics improve noise robustness in a deep network model of the human auditory system

Ching Fang, Erica Shook, Justin Buck, Guillermo Horga

COSYNE 2023

ePoster

Robustness of PFC networks under inter- and intra-hemispheric patterned microstimulation perturbations

Joana Soldado Magraner, Yuki Minai, Matthew Smith, Byron Yu

COSYNE 2023

ePoster

Enhancing Vision Robustness to Adversarial Attacks through Foveal-Peripheral and Saccadic Mechanisms

Jiayang Liu, Daniel Tso, Garrett Katz, Qinru Qiu

COSYNE 2025

ePoster

Robustness and evolvability in a model of a pattern recognition network

Daesung Cho, Jan Clemens

FENS Forum 2024

ePoster

Reconstruction-guided attention improves the robustness and shape processing of neural networks

Seoyoung Ahn

Neuromatch 5