SeminarNeuroscience

sensorimotor control, movement, touch, EEG

Marieva Vlachou
Institut des Sciences du Mouvement Etienne Jules Marey, Aix-Marseille Université/CNRS, France
Dec 19, 2025

Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.

SeminarNeuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
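
The core prediction-error loop the framework posits can be sketched in a few lines. The toy below is purely illustrative (a single-layer linear model with invented dimensions and learning rates, not the speaker's model or PredNet): a latent estimate and a generative weight matrix are both nudged to reduce the error between predicted and actual input.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # generative weights: 8-dim input, 4-dim latent
x = rng.normal(size=8)                   # a fixed sensory input to be "explained"
r = np.zeros(4)                          # latent estimate
lr_r, lr_W = 0.1, 0.005                  # fast inference, slow learning

for _ in range(300):
    e = x - W @ r                        # prediction error: actual minus predicted input
    r = r + lr_r * (W.T @ e)             # inference: reduce the error w.r.t. the latent
    W = W + lr_W * np.outer(e, r)        # learning: Hebbian-like, error-gated weight update

print(np.linalg.norm(x - W @ r))         # residual error shrinks below the initial ||x||
```

The same error signal drives both fast inference (updates to r) and slow learning (updates to W), which is the division of labor predictive processing proposes.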

SeminarNeuroscience

OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis

Michael Demidenko
Stanford University
Aug 1, 2025

In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry and cognitive neuroscience researchers without extensive computational expertise, this tool automates what is otherwise a largely manual patchwork of in-house scripts for data retrieval, validation, quality control, statistical modeling and reporting that, in some cases, may require weeks of effort. The workflow abides by open-science practices, enhances reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Stats Models specifications (with FitLins) to perform common mass univariate general linear model (GLM) analyses. To enhance and standardize reporting, it generates comprehensive reports which include design matrices, statistical maps and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computers, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting and reproducibility.
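
The statistical core of such a pipeline, a mass univariate GLM with a contrast on the betas, can be illustrated with a hand-rolled toy. Everything below (dimensions, the unconvolved boxcar regressor, voxel counts) is invented for illustration in plain NumPy; it is not the FitLins API.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_vox = 120, 500
task = np.zeros(n_scans)
task[::10] = 1.0                                    # toy task regressor (no HRF convolution)
X = np.column_stack([task, np.ones(n_scans)])       # design matrix: [task, intercept]

true_beta = np.zeros(n_vox)
true_beta[:50] = 2.0                                # 50 "active" voxels
Y = np.outer(task, true_beta) + rng.normal(size=(n_scans, n_vox))

B, _, rank, _ = np.linalg.lstsq(X, Y, rcond=None)   # betas for all voxels at once, shape (2, n_vox)
c = np.array([1.0, 0.0])                            # contrast: task effect
dof = n_scans - rank
sigma2 = np.sum((Y - X @ B) ** 2, axis=0) / dof     # residual variance per voxel
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ B) / np.sqrt(sigma2 * var_c)               # one t-value per voxel

print(t[:50].mean(), t[50:].mean())                 # active voxels stand out clearly
```

Real pipelines add HRF convolution, confound regressors, and multiple-comparison control, but the per-voxel fit-and-contrast step has this shape.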

SeminarNeuroscience

Development and application of gaze control models for active perception

Prof. Bert Shi
Professor of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST)
Jun 12, 2025

Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards relevant areas of the environment. Gaze can thus be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can be used to improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human’s task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipatory. We discuss example applications in gaze-typing, robotic teleoperation and human-robot interaction.

SeminarNeuroscience

Learning and Memory

Nicolas Brunel, Ashok Litwin-Kumar, Julijana Gjorgjieva
Duke University; Columbia University; Technical University Munich
Nov 29, 2024

This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgjieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgjieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.

SeminarNeuroscience

Feedback-induced dispositional changes in risk preferences

Stefano Palminteri
Institut National de la Santé et de la Recherche Médicale & École Normale Supérieure, Paris
Oct 29, 2024

Contrary to the original normative decision-making standpoint, empirical studies have repeatedly reported that risk preferences are affected by the disclosure of choice outcomes (feedback). Although no consensus has yet emerged regarding the properties and mechanisms of this effect, a widespread and intuitive hypothesis is that repeated feedback affects risk preferences by means of a learning effect, which alters the representation of subjective probabilities. Here, we ran a series of seven experiments (N = 538), tailored to decipher the effects of feedback on risk preferences. Our results indicate that the presence of feedback consistently increases risk-taking, even when the risky option is economically less advantageous. Crucially, risk-taking increases just after the instructions, before participants experience any feedback. These results challenge the learning account, and advocate for a dispositional effect, induced by the mere anticipation of feedback information. Epistemic curiosity and regret avoidance may drive this effect in partial and complete feedback conditions, respectively.

SeminarNeuroscience

Personalized medicine and predictive health and wellness: Adding the chemical component

Anne Andrews
University of California, Los Angeles
Jul 9, 2024

Wearable sensors that detect and quantify biomarkers in retrievable biofluids (e.g., interstitial fluid, sweat, tears) provide information on dynamic human physiological and psychological states. This information can transform health and wellness by providing actionable feedback. Due to outdated and insufficiently sensitive technologies, current on-body sensing systems are limited to pH and a few high-concentration electrolytes, metabolites, and nutrients. As such, wearable sensing systems cannot detect key low-concentration biomarkers indicative of stress, inflammation, or metabolic and reproductive status. We are revolutionizing sensing. Our electronic biosensors detect virtually any signaling molecule or metabolite at ultra-low levels. We have monitored serotonin, dopamine, cortisol, phenylalanine, estradiol, progesterone, and glucose in blood, sweat, interstitial fluid, and tears. The sensors are based on modern nanoscale semiconductor transistors that are straightforwardly scalable for manufacturing. We are developing sensors for >40 biomarkers for personalized continuous monitoring (e.g., smartwatch, wearable patch) that will provide feedback for treating chronic health conditions (e.g., perimenopause, stress disorders, phenylketonuria). Moreover, our sensors will enable female fertility monitoring and the adoption of healthier lifestyles to prevent disease and improve physical and cognitive performance.

SeminarNeuroscienceRecording

Characterizing the causal role of large-scale network interactions in supporting complex cognition

Michal Ramot
Weizmann Inst. of Science
May 7, 2024

Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior that can be observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance which explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.

SeminarNeuroscience

Thalamocortical feedback circuits selectively control pyramidal neuron excitability

Anthony Holtmaat
University of Geneva, Switzerland
Apr 10, 2024

SeminarNeuroscienceRecording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 5, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
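
The neuron-as-controller idea can be made concrete with a deliberately simple toy, which I constructed for illustration; it is not the speaker's data-driven control framework. An agent first identifies an unknown (and unstable) scalar environment from its own input-output data, then closes the loop with a feedback gain that stabilizes it.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, b_true = 1.1, 0.5            # unstable open-loop environment (|a| > 1)

# 1) Identify the unknown dynamics x[t+1] = a*x[t] + b*u[t] from probing data.
x, data, targets = 0.3, [], []
for _ in range(200):
    u = rng.normal()                 # random exploratory "output" of the neuron
    x_next = a_true * x + b_true * u
    data.append([x, u])
    targets.append(x_next)
    x = float(np.clip(x_next, -5, 5))   # keep the probing trajectory bounded
a_hat, b_hat = np.linalg.lstsq(np.array(data), np.array(targets), rcond=None)[0]

# 2) Close the loop: choose gain k so the closed-loop pole a - b*k sits at 0.5.
k = (a_hat - 0.5) / b_hat
x = 3.0
for _ in range(30):
    x = a_true * x + b_true * (-k * x)   # feedback u = -k*x stabilizes the plant
print(abs(x))                            # the initial perturbation decays toward zero
```

The point of the sketch is only the two-phase logic: the controller never needs the true (a, b), just enough of its own input-output history to act on.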

SeminarNeuroscienceRecording

Visual Monitoring of Visual Appearance as a Feedback System in Dynamic Camouflage

Lorian E. Schweikert
University of North Carolina Wilmington
Nov 13, 2023

SeminarNeuroscience

Doubting the neurofeedback double-blind: do participants have residual awareness of experimental purposes in neurofeedback studies?

Timo Kvamme
Aarhus University
Aug 8, 2023

Neurofeedback provides a feedback display linked with ongoing brain activity, allowing self-regulation of neural activity in specific brain regions associated with certain cognitive functions; it is considered a promising tool for clinical interventions. Recent reviews of neurofeedback have stressed the importance of applying the “double-blind” experimental design, where, critically, the patient is unaware of the neurofeedback treatment condition. An important question then becomes: is a double-blind even possible? Or are subjects aware of the purposes of the neurofeedback experiment? This question is related to the issue of how we assess awareness, or its absence, of certain information in human subjects. Fortunately, methods have been developed which employ neurofeedback implicitly, where the subject is claimed to have no awareness of experimental purposes when performing the neurofeedback. Implicit neurofeedback is intriguing and controversial because it runs counter to the first neurofeedback study, which showed a link between awareness of being in a certain brain state and control of the neurofeedback-derived brain activity. Claiming that humans are unaware of a specific type of mental content is a notoriously difficult endeavor. For instance, phenomena long held to be wholly unconscious, such as dreams or subliminal perception, have been overturned by more sensitive measures showing that degrees of awareness can be detected. In this talk, I will critically examine the claim that we can know for certain that a neurofeedback experiment was performed in an unconscious manner. I will present evidence that in certain neurofeedback experiments, such as manipulations of attention, participants display residual degrees of awareness of the experimental contingencies used to alter their cognition.

SeminarNeuroscience

Computational models of spinal locomotor circuitry

Simon Danner
Drexel University, Philadelphia, USA
Jun 14, 2023

To effectively move in complex and changing environments, animals must control locomotor speed and gait, while precisely coordinating and adapting limb movements to the terrain. The underlying neuronal control is facilitated by circuits in the spinal cord, which integrate supraspinal commands and afferent feedback signals to produce coordinated rhythmic muscle activations necessary for stable locomotion. I will present a series of computational models investigating dynamics of central neuronal interactions as well as a neuromechanical model that integrates neuronal circuits with a model of the musculoskeletal system. These models closely reproduce speed-dependent gait expression and experimentally observed changes following manipulation of multiple classes of genetically-identified neuronal populations. I will discuss the utility of these models in providing experimentally testable predictions for future studies.
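
One building block of such models, interlimb coordination, can be caricatured with a generic pair of coupled phase oscillators (my own toy, not one of the speaker's models): mutual coupling pulls two "limbs" toward antiphase, as in walking.

```python
import math

dt, omega, k = 0.01, 2 * math.pi, 2.0   # time step, intrinsic frequency (1 Hz), coupling
phi_l, phi_r = 0.0, 0.3                 # left/right limb phases, starting near in-phase

for _ in range(5000):
    d = phi_r - phi_l
    # Each oscillator is pulled toward a pi (antiphase) offset from the other.
    phi_l += dt * (omega + k * math.sin(d - math.pi))
    phi_r += dt * (omega + k * math.sin(-d - math.pi))

print((phi_r - phi_l) % (2 * math.pi))  # settles near pi: an alternating, walking-like gait
```

Spinal models of the kind described in the talk replace these abstract phases with interacting neuronal populations and add afferent feedback, but the emergence of a stable phase relationship is the same phenomenon.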

SeminarNeuroscienceRecording

The Effects of Movement Parameters on Time Perception

Keri Anne Gladhill
Florida State University, Tallahassee, Florida.
May 31, 2023

Mobile organisms must be capable of deciding both where and when to move in order to keep up with a changing environment; therefore, a strong sense of time is necessary, otherwise, we would fail in many of our movement goals. Despite this intrinsic link between movement and timing, only recently has research begun to investigate the interaction. Two primary effects have been observed: movements can bias time estimates (i.e., affect accuracy) and can make time estimates more precise. The goal of this presentation is to review this literature, discuss a Bayesian cue combination framework to explain these effects, and discuss the experiments I have conducted to test the framework. The experiments herein include: a motor timing task comparing the effects of movement vs non-movement with and without feedback (Exp. 1A & 1B), a transcranial magnetic stimulation (TMS) study on the role of the supplementary motor area (SMA) in transforming temporal information (Exp. 2), and a perceptual timing task investigating the effect of noisy movement on time perception with both visual and auditory modalities (Exp. 3A & 3B). Together, the results of these studies support the Bayesian cue combination framework, in that: movement improves the precision of time perception not only in perceptual timing tasks but also motor timing tasks (Exp. 1A & 1B), stimulating the SMA appears to disrupt the transformation of temporal information (Exp. 2), and when movement becomes unreliable or noisy there is no longer an improvement in precision of time perception (Exp. 3A & 3B). Although there is support for the proposed framework, more studies (i.e., fMRI, TMS, EEG, etc.) need to be conducted in order to better understand where and how this may be instantiated in the brain; however, this work provides a starting point to better understanding the intrinsic connection between time and movement.
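
The Bayesian cue combination framework invoked here has a standard computational core: two Gaussian cues are fused in proportion to their reliabilities (inverse variances), and the combined estimate is always more precise than either cue alone. The numbers below are invented for illustration, not the study's fitted parameters.

```python
def combine(mu1, var1, mu2, var2):
    """Reliability-weighted (inverse-variance) fusion of two Gaussian cues."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1 / (1 / var1 + 1 / var2)    # combined variance beats either cue alone
    return mu, var

clock = (1.2, 0.04)     # clock-only estimate of an interval: mean 1.2 s, variance 0.04
movement = (1.3, 0.01)  # movement-based cue: more reliable in this toy example
mu, var = combine(*clock, *movement)
print(mu, var)          # the estimate is pulled toward the reliable cue; variance drops
```

This also captures the framework's key prediction in the abstract: when the movement cue becomes noisy (its variance grows), its weight shrinks and the precision benefit disappears.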

SeminarNeuroscienceRecording

Internal representation of musical rhythm: transformation from sound to periodic beat

Tomas Lenc
Institute of Neuroscience, UCLouvain, Belgium
May 31, 2023

When listening to music, humans readily perceive and move along with a periodic beat. Critically, perception of a periodic beat is commonly elicited by rhythmic stimuli with physical features arranged in a way that is not strictly periodic. Hence, beat perception must capitalize on mechanisms that transform stimulus features into a temporally recurrent format with emphasized beat periodicity. Here, I will present a line of work that aims to clarify the nature and neural basis of this transformation. In these studies, electrophysiological activity was recorded as participants listened to rhythms known to induce perception of a consistent beat across healthy Western adults. The results show that the human brain selectively emphasizes beat representation when it is not acoustically prominent in the stimulus, and this transformation (i) can be captured non-invasively using surface EEG in adult participants, (ii) is already in place in 5- to 6-month-old infants, and (iii) cannot be fully explained by subcortical auditory nonlinearities. Moreover, as revealed by human intracerebral recordings, a prominent beat representation emerges already in the primary auditory cortex. Finally, electrophysiological recordings from the auditory cortex of a rhesus monkey show a significant enhancement of beat periodicities in this area, similar to humans. Taken together, these findings indicate an early, general auditory cortical stage of processing by which rhythmic inputs are rendered more temporally recurrent than they are in reality. Already present in non-human primates and human infants, this "periodized" default format could then be shaped by higher-level associative sensory-motor areas and guide movement in individuals with strongly coupled auditory and motor systems. 
Together, this highlights the multiplicity of neural processes supporting coordinated musical behaviors widely observed across human cultures.

SeminarNeuroscienceRecording

Feedback control in the nervous system: from cells and circuits to behaviour

Timothy O'Leary
Department of Engineering, University of Cambridge
May 16, 2023

The nervous system is fundamentally a closed loop control device: the output of actions continually influences the internal state and subsequent actions. This is true at the single cell and even the molecular level, where “actions” take the form of signals that are fed back to achieve a variety of functions, including homeostasis, excitability and various kinds of multistability that allow switching and storage of memory. It is also true at the behavioural level, where an animal’s motor actions directly influence sensory input on short timescales, and higher level information about goals and intended actions are continually updated on the basis of current and past actions. Studying the brain in a closed loop setting requires a multidisciplinary approach, leveraging engineering and theory as well as advances in measuring and manipulating the nervous system. I will describe our recent attempts to achieve this fusion of approaches at multiple levels in the nervous system, from synaptic signalling to closed loop brain machine interfaces.
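
The homeostasis example can be made concrete with the textbook integral-feedback motif (a generic controller sketch, not a model from the talk): a regulated quantity is adjusted by integrating the deviation of activity from a set point, so the set point is reached regardless of the external drive.

```python
target = 1.0       # set point for the cell's activity
g = 0.2            # regulated quantity, e.g. a membrane conductance
drive = 3.0        # fixed external drive
tau = 20.0         # integration time constant of the controller

for _ in range(2000):
    activity = drive * g                  # simplistic readout: activity scales with g
    g += (target - activity) / tau        # integral feedback on the set-point error

print(activity)    # settles at the set point regardless of the value of drive
```

Integral action is what gives homeostatic regulation zero steady-state error: any persistent deviation keeps accumulating a corrective adjustment until it vanishes.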

SeminarNeuroscienceRecording

Assigning credit through the "other” connectome

Eric Shea-Brown
University of Washington, Seattle
Apr 19, 2023

Learning in neural networks requires assigning the right values to anywhere from thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. This has suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed in greater and greater fidelity by ongoing molecular and genomics studies: the set of modulatory pathways local to a given circuit, which form an intriguing second type of connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.

SeminarNeuroscienceRecording

Effect of Different Influences on Temporal Error Monitoring

Tutku Öztel
Koç University, Istanbul
Mar 29, 2023

Metacognition has long been defined as “cognition about cognition”. One of its aspects is error monitoring: the ability to be aware of one’s own errors without external feedback. This ability is mostly investigated in two-alternative forced-choice tasks, where performance is all-or-none in terms of accuracy. The previous literature documents the effect of different influences on error monitoring, such as working memory, feedback and sensorimotor involvement. However, these demonstrations fall short of generalizing to real-life scenarios, where errors often have a magnitude and a direction. To bridge this gap, recent studies showed that humans can keep track of the magnitude and direction of their errors in temporal, spatial and numerical domains via two metrics: confidence and short-long/few-more judgements. This talk will cover how the effects documented in two-alternative forced-choice tasks apply to temporal error monitoring. Finally, how magnitude and direction monitoring (i.e., confidence and short-long judgements) can be differentiated as two indices of temporal error monitoring will be discussed.

SeminarNeuroscienceRecording

Implications of Vector-space models of Relational Concepts

Priya Kalra
Western University
Jan 26, 2023

Vector-space models are used frequently to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models apply to relational concepts at all? If we use such a model, one implication is that maximizing surface-feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students’ focus on superficial features of the source and target exemplars. However, in contrast to the prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to different targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data that support the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgements, but also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a group discussion on the utility of vector-space models for relational concepts in general.
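
One simple way to cast relations into a vector space, used here purely as an illustration of the idea (the feature vectors are hand-built by me, not the study's stimuli), is to encode a relation as the difference between two entity vectors and compare relations by cosine similarity.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Hand-built entity vectors over features [size, animacy, ferocity].
cat    = np.array([0.30, 1.0, 0.40])
kitten = np.array([0.10, 1.0, 0.10])
dog    = np.array([0.40, 1.0, 0.50])
puppy  = np.array([0.15, 1.0, 0.25])
rock   = np.array([0.50, 0.0, 0.00])

rel_cat = cat - kitten    # encode a relation as a difference vector ("adult-of")
rel_dog = dog - puppy
rel_odd = rock - kitten

print(cosine(rel_cat, rel_dog))   # same relation across different surface features: high
print(cosine(rel_cat, rel_odd))   # unrelated pairing: low
```

In this encoding, relational similarity depends only on the difference vectors, so surface similarity of the entities themselves is irrelevant in principle; the behavioral question is whether human matching actually behaves that way.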

SeminarNeuroscienceRecording

Behavioral Timescale Synaptic Plasticity (BTSP) for biologically plausible credit assignment across multiple layers via top-down gating of dendritic plasticity

A. Galloni
Rutgers
Nov 9, 2022

A central problem in biological learning is how information about the outcome of a decision or behavior can be used to reliably guide learning across distributed neural circuits while obeying biological constraints. This “credit assignment” problem is commonly solved in artificial neural networks through supervised gradient descent and the backpropagation algorithm. In contrast, biological learning is typically modelled using unsupervised Hebbian learning rules. While these rules only use local information to update synaptic weights, and are sometimes combined with weight constraints to reflect a diversity of excitatory (only positive weights) and inhibitory (only negative weights) cell types, they do not prescribe a clear mechanism for how to coordinate learning across multiple layers and propagate error information accurately across the network. In recent years, several groups have drawn inspiration from the known dendritic non-linearities of pyramidal neurons to propose new learning rules and network architectures that enable biologically plausible multi-layer learning by processing error information in segregated dendrites. Meanwhile, recent experimental results from the hippocampus have revealed a new form of plasticity—Behavioral Timescale Synaptic Plasticity (BTSP)—in which large dendritic depolarizations rapidly reshape synaptic weights and stimulus selectivity with as little as a single stimulus presentation (“one-shot learning”). Here we explore the implications of this new learning rule through a biologically plausible implementation in a rate neuron network. We demonstrate that regulation of dendritic spiking and BTSP by top-down feedback signals can effectively coordinate plasticity across multiple network layers in a simple pattern recognition task. By analyzing hidden feature representations and weight trajectories during learning, we show the differences between networks trained with standard backpropagation, Hebbian learning rules, and BTSP.
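
The gating logic can be caricatured in a few lines. The rule below is a cartoon I wrote for illustration, not the authors' BTSP implementation: presynaptic activity reshapes weights only when a top-down "plateau" gate is on, so a single gated presentation retunes selectivity.

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(scale=0.1, size=16)         # weights onto one output neuron
pattern = np.zeros(16)
pattern[4:8] = 1.0                         # stimulus to be learned
eta = 1.0                                  # large, one-shot learning rate

def present(x, gate, w):
    """Plasticity only occurs when the top-down 'plateau' gate is on."""
    if gate:
        w = w + eta * (x - w) * x          # pull active-input weights toward x
    return w

w = present(pattern, gate=False, w=w)      # ungated presentation: no change
before = w @ pattern
w = present(pattern, gate=True, w=w)       # one gated presentation retunes the cell
after = w @ pattern
print(before, after)                       # response to the pattern jumps after gating
```

Because the gate, not the presynaptic activity alone, decides when the large update happens, a network of such units can route credit by controlling which neurons receive plateau signals.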

SeminarNeuroscience

Restructuring cortical feedback circuits

Andreas Keller
Institute of Molecular and Clinical Ophthalmology, Basel
Nov 3, 2022

We hardly notice when there is a speck on our glasses, the obstructed visual information seems to be magically filled in. The mechanistic basis for this fundamental perceptual phenomenon has, however, remained obscure. What enables neurons in the visual system to respond to context when the stimulus is not available? While feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input. These signals are likely used to predict input from the feedforward pathway. Preliminary results show that these feedback projections are strongly influenced by experience and learning.

SeminarNeuroscienceRecording

Navigating Increasing Levels of Relational Complexity: Perceptual, Analogical, and System Mappings

Matthew Kmiecik
Evanston Hospital
Oct 20, 2022

Relational thinking involves comparing abstract relationships between mental representations that vary in complexity; however, this complexity is rarely made explicit during everyday comparisons. This study explored how people naturally navigate relational complexity and interference using a novel relational match-to-sample (RMTS) task with both minimal and relationally directed instruction to observe changes in performance across three levels of relational complexity: perceptual, analogy, and system mappings. Individual working memory and relational abilities were examined to understand RMTS performance and susceptibility to interfering relational structures. Trials were presented without practice across four blocks and participants received feedback after each attempt to guide learning. Experiment 1 instructed participants to select the target that best matched the sample, while Experiment 2 additionally directed participants’ attention to same and different relations. Participants in Experiment 2 demonstrated improved performance when solving analogical mappings, suggesting that directing attention to relational characteristics affected behavior. Higher performing participants—those above chance performance on the final block of system mappings—solved more analogical RMTS problems and had greater visuospatial working memory, abstraction, verbal analogy, and scene analogy scores compared to lower performers. Lower performers were less dynamic in their performance across blocks and demonstrated negative relationships between analogy and system mapping accuracy, suggesting increased interference between these relational structures. Participant performance on RMTS problems did not change monotonically with relational complexity, suggesting that increases in relational complexity place nonlinear demands on working memory. We argue that competing relational information causes additional interference, especially in individuals with lower executive function abilities.

SeminarNeuroscience

Identifying central mechanisms of glucocorticoid circadian rhythm dysfunction in breast cancer

Jeremy C. Borniger
Cold Spring Harbor Laboratory
Oct 18, 2022

The circadian release of endogenous glucocorticoids is essential in preparing and synchronizing the body’s daily physiological needs. Disruption in the rhythmic activity of glucocorticoids has been observed in individuals with a variety of cancer types, and blunting of this rhythm has been shown to predict cancer mortality and declines in quality of life. This suggests that a disrupted glucocorticoid rhythm is potentially a shared phenotype across cancers. However, whether this phenomenon is driven by the cancer itself remains unclear, and our understanding of the causal mechanisms linking glucocorticoid rhythm dysfunction to cancer outcomes is preliminary at best. The regulation of daily glucocorticoid activity has been well-characterized and is maintained, in part, by the coordinated response of the hypothalamic-pituitary-adrenal (HPA) axis, consisting of the suprachiasmatic nucleus (SCN) and corticotropin-releasing hormone-expressing neurons of the paraventricular nucleus of the hypothalamus (PVNCRH). Consequently, we set out to examine whether cancer-induced glucocorticoid dysfunction is regulated by disruptions within these hypothalamic nuclei. In comparison to their tumor-free baseline, mammary tumor-bearing mice exhibited a blunting of glucocorticoid rhythms across multiple timepoints throughout the day, as measured by the overall levels and the slope of fecal corticosterone rhythms, during tumor progression. We further examined how peripheral tumors shape hypothalamic activity within the brain. Serial two-photon tomography for whole-brain cFos imaging suggests a disrupted activation of the PVN in mice with tumors. Additionally, we found GFP-labeled CRH+ neurons within the PVN after injection of pseudorabies virus expressing GFP into the tumor, pointing to the PVN as a primary target disrupted by mammary tumors. Preliminary in vivo fiber photometry data show that PVNCRH neurons exhibit enhanced calcium activity during tumor progression, as compared to baseline (no tumor) activity.
Taken together, these findings suggest that there may be an overactive HPA response during tumor progression, which, in turn, may result in subsequent negative feedback on glucocorticoid rhythms. Current studies are examining whether tumor progression modulates SCN calcium activity, how the transcriptional profile of PVNCRH neurons changes, and whether manipulation of the neurocircuitry surrounding glucocorticoid rhythmicity alters tumor characteristics.

SeminarNeuroscienceRecording

Designing the BEARS (Both Ears) Virtual Reality Training Package to Improve Spatial Hearing in Young People with Bilateral Cochlear Implants

Deborah Vickers
Clinical Neurosciences
Oct 11, 2022

Results: the main areas modified based on participatory feedback were the variety of immersive scenarios, to cover a range of ages and interests; the number of levels of complexity, to ensure small improvements were measured; the feedback and reward schemes, to ensure positive reinforcement; and specific provision for participants with balance issues, who had difficulties when using head-mounted displays. We have also added login options for other family members and, based on patient feedback, improved the accompanying reward schemes. The effectiveness of the finalised BEARS suite will be evaluated in a large-scale clinical trial. Conclusions: Through participatory design we have developed a training package (BEARS) for young people with bilateral cochlear implants. The training games are appropriate for use by the study population and should ultimately enable patients to take control of their own management, reducing reliance upon outpatient-based rehabilitation programmes. Virtual reality training provides a more relevant and engaging approach to rehabilitation for young people.

SeminarNeuroscienceRecording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 21, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
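The attractor dynamics at stake can be caricatured in a few lines (a toy sketch with hypothetical parameters, not the authors' fitted model): two mutually inhibiting units race to a decision threshold, and the activity difference at threshold serves as a simple confidence proxy.

```python
import numpy as np

def attractor_trial(coherence, seed, T=2000, dt=1e-3, tau=0.02):
    """Two-unit mutual-inhibition attractor racing to a decision threshold.
    All parameters are hypothetical, chosen only for illustration."""
    rng = np.random.default_rng(seed)
    r = np.zeros(2)
    t = 0
    for t in range(T):
        inp = np.array([0.5 + coherence, 0.5 - coherence])
        noise = 0.05 * rng.standard_normal(2) / np.sqrt(dt)
        # self-excitation plus cross-inhibition on rectified drive
        drive = np.maximum(inp + 1.5 * r - 1.2 * r[::-1] + noise, 0)
        r = np.clip(r + dt / tau * (-r + drive), 0, 10)
        if r.max() > 5:                      # decision threshold reached
            break
    choice = int(np.argmax(r))
    confidence = abs(r[0] - r[1])            # activity difference as confidence proxy
    return choice, confidence, (t + 1) * dt

choices, confidences = [], []
for seed in range(20):
    choice, conf, rt = attractor_trial(coherence=0.3, seed=seed)
    choices.append(choice)
    confidences.append(conf)
```

With a rightward stimulus bias the network mostly settles into the corresponding attractor, while the noise term occasionally produces errors, mirroring the stochastic decision process the abstract describes.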

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 5, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119

SeminarNeuroscience

Invariant neural subspaces maintained by feedback modulation

Laura Naumann
Bernstein Center for Computational Neuroscience, Berlin
Jul 14, 2022

This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.

SeminarNeuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 29, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state, and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet, natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG), across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.

SeminarNeuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 22, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural networks, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks for solving complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.

SeminarNeuroscienceRecording

Trading Off Performance and Energy in Spiking Networks

Sander Keemink
Donders Institute for Brain, Cognition and Behaviour
Jun 1, 2022

Many engineered and biological systems must trade off performance and energy use, and the brain is no exception. While there are theories on how activity levels are controlled in biological networks through feedback control (homeostasis), it is not clear what the effects on population coding are, and therefore how performance and energy can be traded off. In this talk we will consider this tradeoff in auto-encoding networks, in which there is a clear definition of performance (the coding loss). We first show how spiking neural networks (SNNs) follow a characteristic trade-off curve between activity levels and coding loss, but that standard networks need to be retrained to achieve different tradeoff points. We next formalize this tradeoff with a joint loss function incorporating coding loss (performance) and activity loss (energy use). From this loss we derive a class of spiking networks which coordinates its spiking to minimize both the activity and coding losses -- and as a result can dynamically adjust its coding precision and energy use. The network utilizes several known activity control mechanisms for this -- threshold adaptation and feedback inhibition -- and elucidates their potential function within neural circuits. Using geometric intuition, we demonstrate how these mechanisms regulate coding precision, and thereby performance. Lastly, we consider how these insights could be transferred to trained SNNs. Overall, this work addresses a key energy-coding trade-off which is often overlooked in network studies, expands on our understanding of homeostasis in biological SNNs, as well as provides a clear framework for considering performance and energy use in artificial SNNs.
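The joint loss described here has the form L = coding loss + λ · activity loss. A toy numpy sketch of the tradeoff (my own illustration, not the speaker's derivation) emits a spike only when doing so lowers L = ||x - D r||² + λ Σ r, so a larger activity penalty buys fewer spikes at the cost of reconstruction error:

```python
import numpy as np

def greedy_spike_code(x, D, lam, max_spikes=200):
    """Greedily add spikes (unit increments of the count vector r) while the
    joint loss L = ||x - D r||^2 + lam * sum(r) keeps decreasing."""
    r = np.zeros(D.shape[1])
    for _ in range(max_spikes):
        residual = x - D @ r
        # loss change from one extra spike of neuron j:
        # ||residual - D_j||^2 - ||residual||^2 + lam
        deltas = -2 * D.T @ residual + np.sum(D**2, axis=0) + lam
        j = np.argmin(deltas)
        if deltas[j] >= 0:        # no spike lowers the joint loss: stay silent
            break
        r[j] += 1
    err = float(np.linalg.norm(x - D @ r) ** 2)
    return r, err

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40)) * 0.1   # random decoder weights (illustrative)
x = rng.standard_normal(20)               # signal to encode
r_lo, err_lo = greedy_spike_code(x, D, lam=0.001)  # cheap spikes: precise code
r_hi, err_hi = greedy_spike_code(x, D, lam=0.5)    # costly spikes: sparse code
```

Sweeping λ traces out exactly the kind of activity-versus-coding-loss curve the abstract describes, without retraining the decoder.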

SeminarNeuroscience

Feedback controls what we see

Andreas Keller
Institute of Molecular and Clinical Ophthalmology Basel
May 30, 2022

We hardly notice when there is a speck on our glasses, the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs, and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving a direct sensory input.

SeminarNeuroscience

Unchanging and changing: hardwired taste circuits and their top-down control

Hao Jin
Columbia
May 25, 2022

The taste system detects 5 major categories of ethologically relevant stimuli (sweet, bitter, umami, sour and salt) and accordingly elicits acceptance or avoidance responses. While these taste responses are innate, the taste system retains a remarkable flexibility in response to changing external and internal contexts. Taste chemicals are first recognized by dedicated taste receptor cells (TRCs) and then transmitted to the cortex via a multi-station relay. I reasoned that if I could identify taste neural substrates along this pathway, it would provide an entry to decipher how taste signals are encoded to drive innate responses and modulated to facilitate adaptive responses. Given the innate nature of taste responses, these neural substrates should be genetically identifiable. I therefore exploited single-cell RNA sequencing to isolate molecular markers defining taste qualities in the taste ganglion and the nucleus of the solitary tract (NST) in the brainstem, the two stations transmitting taste signals from TRCs to the brain. How taste information propagates from the ganglion to the brain is highly debated (i.e., does taste information travel in labeled lines?). Leveraging these genetic handles, I demonstrated one-to-one correspondence between ganglion and NST neurons coding for the same taste. Importantly, inactivating one ‘line’ did not affect responses to any other taste stimuli. These results clearly showed that taste information is transmitted to the brain via labeled lines. But are these labeled lines aptly adapted to the internal state and external environment? To understand how adaptive taste responses emerge from hardwired taste circuits, I studied how taste signals are modulated when conflicting taste qualities, sweet and bitter, occur together. Using functional imaging, anatomical tracing and circuit mapping, I found that bitter signals suppress sweet signals in the NST via top-down modulation of NST taste signals by the taste cortex and amygdala.
While the bitter cortical field provides direct feedback onto the NST to amplify incoming bitter signals, it exerts negative feedback, via the amygdala, onto the incoming sweet signal in the NST. By manipulating this feedback circuit, I showed that this top-down control is functionally required for bitter-evoked suppression of sweet taste. These results illustrate how the taste system uses dedicated feedback lines to finely regulate innate behavioral responses and may have implications for the context-dependent modulation of hardwired circuits in general.

SeminarNeuroscience

Synthetic and natural images unlock the power of recurrency in primary visual cortex

Andreea Lazar
Ernst Strüngmann Institute (ESI) for Neuroscience
May 20, 2022

During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images, only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images leads to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses, already at the earliest stages of cortical processing.

SeminarNeuroscienceRecording

Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems

Horacio Rotstein
New Jersey Institute of Technology
Apr 13, 2022

Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and we argue that these are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and we show that different types of models (e.g., spiking neural networks and rate models) can show the same type of resonance by qualitatively different mechanisms.
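As a minimal worked example of a band-pass arising from negative feedback (a linear caricature with illustrative parameters, not one of the talk's case studies): a passive membrane acts as a low-pass filter, a slow adaptation-like feedback variable acts as a high-pass filter, and their combination yields an impedance |Z(f)| that peaks at an intermediate frequency.

```python
import numpy as np

def impedance(freqs_hz, tau_v=0.01, tau_w=0.1, g=4.0):
    """|Z(f)| of a linear neuron with a slow negative-feedback variable w:
    tau_v dv/dt = -v - g*w + I,  tau_w dw/dt = v - w.
    Parameter values (10 ms membrane, 100 ms feedback, gain 4) are illustrative."""
    w = 2 * np.pi * np.asarray(freqs_hz)
    return np.abs(1.0 / (1 + 1j * w * tau_v + g / (1 + 1j * w * tau_w)))

freqs = np.logspace(-1, 3, 400)      # 0.1 Hz ... 1 kHz
amp = impedance(freqs)
f_res = freqs[np.argmax(amp)]        # resonance frequency (~10 Hz here)
```

At low frequencies the feedback has time to cancel the input; at high frequencies the membrane attenuates it; amplification survives only in the intermediate band, which is the definition of resonance given above.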

SeminarNeuroscienceRecording

Visualization and manipulation of our perception and imagery by BCI

Takufumi Yanagisawa
Osaka University
Apr 1, 2022

We have been developing brain-computer interfaces (BCIs) for clinical applications using electrocorticography (ECoG) [1], which is recorded by electrodes implanted on the brain surface, and magnetoencephalography (MEG) [2], which records cortical activity non-invasively. The invasive ECoG-based BCI has been applied to severely paralyzed patients to restore communication and motor function. The non-invasive MEG-based BCI has been applied as a neurofeedback tool to modulate pathological neural activity in the treatment of neuropsychiatric disorders. Although these techniques have been developed for clinical application, BCI is also an important tool to investigate neural function. For example, a motor BCI records neural activity in a part of the motor cortex to generate movements of external devices. Although our motor system is a complex system comprising the motor cortex, basal ganglia, cerebellum, spinal cord and muscles, a BCI allows us to simplify the motor system into one with exactly known inputs, outputs and relations between them, so that we can investigate the motor system by manipulating the parameters of the BCI system. Recently, we have been developing BCIs to visualize and manipulate our perception and mental imagery. Although these BCIs have been developed for clinical application, they will also be useful for understanding the neural systems that generate perception and imagery. In this talk, I will introduce our study of phantom limb pain [3], which is treated using an MEG-based BCI, and the development of a communication BCI using ECoG [4], which enables subjects to visualize the contents of their mental imagery. I will also discuss the extent to which we can control the cortical activities that represent our perception and mental imagery. These examples demonstrate that BCI is a promising tool to visualize and manipulate perception and imagery, and to understand our consciousness.
References
1. Yanagisawa, T., Hirata, M., Saitoh, Y., Kishima, H., Matsushita, K., Goto, T., Fukuma, R., Yokoi, H., Kamitani, Y., and Yoshimine, T. (2012). Electrocorticographic control of a prosthetic arm in paralyzed patients. Ann Neurol 71, 353-361.
2. Yanagisawa, T., Fukuma, R., Seymour, B., Hosomi, K., Kishima, H., Shimizu, T., Yokoi, H., Hirata, M., Yoshimine, T., Kamitani, Y., et al. (2016). Induced sensorimotor brain plasticity controls pain in phantom limb patients. Nature Communications 7, 13209.
3. Yanagisawa, T., Fukuma, R., Seymour, B., Tanaka, M., Hosomi, K., Yamashita, O., Kishima, H., Kamitani, Y., and Saitoh, Y. (2020). BCI training to move a virtual hand reduces phantom limb pain: A randomized crossover trial. Neurology 95, e417-e426.
4. Fukuma, R., Yanagisawa, T., Nishimoto, S., Sugano, H., Tamura, K., Yamamoto, S., Iimura, Y., Fujita, Y., Oshino, S., Tani, N., Koide-Majima, N., Kamitani, Y., and Kishima, H. (2022). Voluntary control of semantic neural representations by imagery with conflicting visual stimulation. arXiv:2112.01223.

SeminarNeuroscienceRecording

Brain-body interactions that modulate fear

Alexandra Klein
Kheirbeck lab, UCSF
Mar 30, 2022

In most animals, including humans, emotions occur together with changes in the body, such as variations in breathing or heart rate, sweaty palms, or facial expressions. It has been suggested that this interoceptive information acts as a feedback signal to the brain, enabling adaptive modulation of emotions that is essential for survival. As such, fear, one of our basic emotions, must be kept in a functional balance to minimize risk-taking while allowing for the pursuit of essential needs. However, the neural mechanisms underlying this adaptive modulation of fear remain poorly understood. In this talk, I will present and discuss data from my PhD work, in which we uncover a crucial role for the interoceptive insular cortex in detecting changes in heart rate to maintain an equilibrium between the extinction and maintenance of fear memories in mice.

SeminarNeuroscienceRecording

Invariant neural subspaces maintained by feedback modulation

Henning Sprekeler
TU Berlin
Feb 19, 2022

Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
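The gain-modulation idea can be sketched in a few lines (a scalar-gain toy under assumed parameters, not the paper's network): a slow, diffuse feedback signal nudges a single gain until mean population activity hits a target, which makes the population response, and hence any fixed readout, invariant to a contextual scaling of the input.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((30, 10)) * 0.3   # feedforward weights (illustrative)
x = rng.standard_normal(10)               # stimulus
readout = rng.standard_normal(30)         # fixed downstream readout

def adapt_gain(context, steps=1000, eta=0.05, target=1.0):
    """Slow, spatially diffuse feedback: one scalar gain g is nudged until
    mean population activity reaches `target`, whatever the context scaling."""
    g = 1.0
    for _ in range(steps):
        y = g * (W @ (context * x))               # modulated population response
        g += eta * (target - np.mean(np.abs(y)))  # diffuse feedback signal
    return y

y_small = adapt_gain(context=0.5)   # weak-context condition
y_large = adapt_gain(context=2.0)   # strong-context condition
```

After adaptation the two contexts evoke the same population vector, so the readout sees an invariant signal even though no single feedforward weight encodes the context.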

SeminarNeuroscience

Keeping your Brain in Balance: the Ups and Downs of Homeostatic Plasticity (virtual)

Gina Turrigiano, PhD
Professor, Department of Biology, Brandeis University, USA
Feb 17, 2022

Our brains must generate and maintain stable activity patterns over decades of life, despite the dramatic changes in circuit connectivity and function induced by learning and experience-dependent plasticity. How do our brains achieve this balance between the opposing needs for plasticity and stability? Over the past two decades, we and others have uncovered a family of “homeostatic” negative feedback mechanisms that are theorized to stabilize overall brain activity while allowing specific connections to be reconfigured by experience. Here I discuss recent work in which we demonstrate that individual neocortical neurons in freely behaving animals indeed have a homeostatic activity set-point, to which they return in the face of perturbations. Intriguingly, this firing rate homeostasis is gated by sleep/wake states in a manner that depends on the direction of homeostatic regulation: upward firing-rate homeostasis occurs selectively during periods of active wake, while downward firing-rate homeostasis occurs selectively during periods of sleep, suggesting that an important function of sleep is to temporally segregate bidirectional plasticity. Finally, we show that firing rate homeostasis is compromised in an animal model of autism spectrum disorder. Together our findings suggest that loss of homeostatic plasticity in some neurological disorders may render central circuits unable to compensate for the normal perturbations induced by development and learning.
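A firing-rate set-point of this kind can be illustrated with a two-line feedback rule (a toy sketch, not the lab's model): a synaptic scale factor slowly integrates the set-point error, so after a perturbation to input drive the rate first deviates and then returns to target.

```python
def settle(drive, scale=1.0, target=5.0, tau=50.0, steps=600):
    """Homeostatic scaling toward a firing-rate set-point: rate = scale * drive,
    and the scale factor slowly integrates the set-point error (negative feedback).
    Numbers (target 5 Hz, time constant 50 steps) are illustrative only."""
    for _ in range(steps):
        rate = scale * drive
        scale += (target - rate) / tau      # slow homeostatic feedback on scale
    return scale, scale * drive

scale, baseline_rate = settle(drive=2.0)            # settles at the set-point
acute_rate = scale * 1.0                            # drive halved: rate drops acutely
_, recovered_rate = settle(drive=1.0, scale=scale)  # homeostasis restores the set-point
```

The acute drop followed by a slow return to the same rate is the behavioral signature of a set-point, as opposed to a system that simply tracks its input.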

SeminarNeuroscienceRecording

NMC4 Short Talk: Predictive coding is a consequence of energy efficiency in recurrent neural networks

Abdullahi Ali
Donders Institute for Brain
Dec 2, 2021

Predictive coding represents a promising framework for understanding brain function, postulating that the brain continuously inhibits predictable sensory input, ensuring a preferential processing of surprising elements. A central aspect of this view on cortical computation is its hierarchical connectivity, involving recurrent message passing between excitatory bottom-up signals and inhibitory top-down feedback. Here we use computational modelling to demonstrate that such architectural hard-wiring is not necessary. Rather, predictive coding is shown to emerge as a consequence of energy efficiency, a fundamental requirement of neural processing. When training recurrent neural networks to minimise their energy consumption while operating in predictive environments, the networks self-organise into prediction and error units with appropriate inhibitory and excitatory interconnections and learn to inhibit predictable sensory input. We demonstrate that prediction units can reliably be identified through biases in their median preactivation, pointing towards a fundamental property of prediction units in the predictive coding framework. Moving beyond the view of purely top-down driven predictions, we demonstrate via virtual lesioning experiments that networks perform predictions on two timescales: fast lateral predictions among sensory units and slower prediction cycles that integrate evidence over time. Our results, which replicate across two separate data sets, suggest that predictive coding can be interpreted as a natural consequence of energy efficiency. More generally, they raise the question of which other computational principles of brain function can be understood as a result of physical constraints posed by the brain, opening up a new area of bio-inspired, machine learning-powered neuroscience research.
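The energy-saving logic can be illustrated directly (a hand-written toy, not the trained recurrent networks of the study): if a prediction unit inhibits the predictable part of the input, the error unit needs far less activity to transmit a predictable signal.

```python
import numpy as np

def transmission_energy(signal, lr=0.2, predict=True):
    """Summed |error-unit activity| needed to transmit a signal when a
    prediction unit inhibits the predictable component. The learning rate
    and signal are illustrative choices, not fitted to the paper."""
    pred, energy = 0.0, 0.0
    for s in signal:
        error = (s - pred) if predict else s   # error unit carries the residual
        energy += abs(error)
        if predict:
            pred += lr * error                 # prediction unit tracks the input
    return energy

sig = 2.0 + np.sin(np.linspace(0, 10, 200))    # slowly varying, predictable input
e_pred = transmission_energy(sig, predict=True)
e_raw = transmission_energy(sig, predict=False)
```

Minimising activity on predictable input is exactly why an energy-consumption penalty can push a network toward a prediction/error division of labour without any hard-wired hierarchy.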

ePosterNeuroscience

Cortico-cortical feedback to visual areas can explain reactivation of latent memories during working memory retention

Noa Krause, Rosanne Rademaker

Bernstein Conference 2024

ePosterNeuroscience

Cortical feedback shapes high order structure of population activity to improve sensory coding

Augustine (Xiaoran) Yuan, Laura Busse, Wiktor Młynarski

Bernstein Conference 2024

ePosterNeuroscience

Dual role, single pathway: A pyramidal cell model of feedback integration in function and learning

Daniel Schmid, Christian Jarvers, Timo Oess, Heiko Neumann

Bernstein Conference 2024

ePosterNeuroscience

A feedback control algorithm for online learning in Spiking Neural Networks and Neuromorphic devices

Matteo Saponati, Chiara De Luca, Giacomo Indiveri, Benjamin Grewe

Bernstein Conference 2024

ePosterNeuroscience

Force coupling leads to neural coordination via environmental feedback

Claudius Gros, Elias Fischer, Bulcsu Sandor

Bernstein Conference 2024

ePosterNeuroscience

Inhibitory columnar feedback neurons are required for peripheral visual processing

Miriam Henning, Teresa Lüffe, Daryl Goal, Thomas Clandinin, Marion Silies

Bernstein Conference 2024

ePosterNeuroscience

Object detection with deep learning and attention feedback loops

Rene Larisch, Fred Hamker

Bernstein Conference 2024

ePosterNeuroscience

The role of feedback in dynamic inference for spatial navigation under uncertainty

Albert Chen, Jan Drugowitsch

Bernstein Conference 2024

ePosterNeuroscience

Basal Ganglia feedback loops as possible candidates for generation of beta oscillation

Shiva Azizpourlindi, Arthur Leblois

COSYNE 2022

ePosterNeuroscience

Cerebro-cerebellar networks facilitate learning through feedback decoupling

Ellen Boven, Joseph Pemberton, Paul Chadderton, Richard Apps, Rui Ponte Costa

COSYNE 2022

ePosterNeuroscience

A circuit library for exploring the functional logic of massive feedback loops in Drosophila brain

Mehmet Turkcan, Yiyin Zhou, Aurel A. Lazar

COSYNE 2022

ePosterNeuroscience

Feedforward and feedback computations in V1 and V2 in a hierarchical Variational Autoencoder

Ferenc Csikor, Balázs Meszéna, Gergő Orbán

COSYNE 2022

ePosterNeuroscience

A feedback model for predicting targeted perturbations of proprioceptors during fly walking

Pierre Karashchuk, Sarah Walling-Bell, Chris Dallmann, John Tuthill, Bing Brunton

COSYNE 2022

ePosterNeuroscience

An interpretable spline-LNP model to characterize feedforward and feedback processing in mouse dLGN

Lisa Schmors, Yannik Bauer, Ziwei Huang, Lukas Meyerolbersleben, Simon Renner, Ann H. Kotkat, Davide Crombie, Sacha Sokoloski, Laura Busse, Philipp Berens

COSYNE 2022

ePosterNeuroscience

Invariant neural subspaces maintained by feedback modulation

Laura Bella Naumann, Joram Keijser, Henning Sprekeler

COSYNE 2022

ePosterNeuroscience

Meta-learning biologically plausible feedback learning rules

Klara Kaleb, Claudia Clopath

COSYNE 2022

ePosterNeuroscience

Neural optimal feedback control with local learning rules

Johannes Friedrich, Siavash Golkar, Shiva Farashahi, Alexander Genkin, Anirvan Sengupta, Dmitri Chklovskii

COSYNE 2022

ePosterNeuroscience

Principled credit assignment with strong feedback through Deep Feedback Control

Alexander Meulemans, Matilde Tristany Farinha, María R. Cervera, João Sacramento, Benjamin F. Grewe

COSYNE 2022

ePosterNeuroscience

Sensory feedback can drive adaptation in motor cortex and facilitate generalization

Barbara Feulner, Matthew G. Perich, Lee E. Miller, Claudia Clopath, Juan A. Gallego

COSYNE 2022

ePosterNeuroscience

Unifying model of contextual modulation with feedback from higher visual areas

Serena Di Santo, Mario Dipoppa, Andreas Keller, Morgane Roth, Massimo Scanziani, Kenneth D. Miller

COSYNE 2022

ePosterNeuroscience

Cerebro-cerebellar networks facilitate learning through feedback decoupling

Ellen Boven, Joseph Pemberton, Paul Chadderton, Richard Apps, Rui Ponte Costa

COSYNE 2023

ePosterNeuroscience

Cortical-bulbar feedback supports behavioral flexibility during rule reversal

Diego Eduardo Hernández Trejo, Andrei Ciuparu, Pedro Garcia da Silva, Cristina Velázquez, Raul Mureşan, Dinu Albeanu

COSYNE 2023

ePosterNeuroscience

Ctrl-TNDM: Decoding feedback-driven movement corrections from motor cortex neurons

Nina Kudryashova, Matthew Perich, Lee Miller, Matthias Hennig

COSYNE 2023

ePosterNeuroscience

Excitatory-inhibitory cortical feedback enables efficient hierarchical credit assignment

Will Greedy, Heng Wei Zhu, Joseph Pemberton, Jack Mellor, Rui Ponte Costa

COSYNE 2023

ePosterNeuroscience

A biologically-plausible learning rule using reciprocal feedback connections

Mia Cameron, Yusi Chen, Terrence Sejnowski

COSYNE 2025

ePosterNeuroscience

A Feedback Mechanism in Generative Networks to Remove Visual Degradation

Bahareh Tolooshams, Yuelin Shi, Anima Anandkumar, Doris Tsao

COSYNE 2025

ePosterNeuroscience

Human Intracranial Oscillatory Signatures of Aberrant Counterfactual Feedback Processing in Depression

Alexandra Fink, Salman Qasim, Lizbeth Nunez, Jacqueline Overton, Xiaosi Gu, Ignacio Saez

COSYNE 2025

ePosterNeuroscience

Integration of corollary discharge and sensory feedback signals in somatosensory cortex

Xinyue An, Raeed Chowdhury, Kyle Blum, Lee Miller, Joshua Glaser

COSYNE 2025

ePosterNeuroscience

Prefrontal cortex subregions provide distinct visual and behavioral feedback modulation to the primary visual cortex

Sofie Ahrlund-Richter, Yuma Osako, Kyle Jenks, Emma Odom, Haoyang Huang, Don Arnold, Mriganka Sur

COSYNE 2025

ePosterNeuroscience

A spiking neuromechanical model of the zebrafish to investigate the role of axial proprioceptive sensory feedback during locomotion

Alessandro Pazzaglia, Andrea Ferrario, Jonathan Arreguit, Laurence Picton, David Madrid, Abdel El Manira, Auke Ijspeert

COSYNE 2025

ePosterNeuroscience

Training Large Neural Networks With Low-Dimensional Error Feedback

Maher Hanut, Jonathan Kadmon

COSYNE 2025
