
Coding

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with coding across World Wide.
107 curated items · 60 Seminars · 40 ePosters · 7 Positions
Updated 2 months ago
Seminar · Neuroscience

Decoding stress vulnerability

Stamatina Tzanoulinou
University of Lausanne, Faculty of Biology and Medicine, Department of Biomedical Sciences
Feb 19, 2026

Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, stress that is too intense or uncontrollable can lead to adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in contexts such as war, bullying, and interpersonal violence, and it has been linked to increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to severe stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. During this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 9, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
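The core loop this abstract describes, predicting sensory input and updating an internal model from the prediction error, can be sketched in a few lines. This is a generic single-layer illustration, not the speaker's models or PredNet; all names, sizes, and learning rates here are invented:

```python
import numpy as np

def predictive_coding_step(x, mu, W, lr=0.05):
    """One update of a single-layer predictive coding loop.

    x  : observed input vector
    mu : current latent estimate (the internal model's state)
    W  : generative weights mapping latents to predicted input
    """
    prediction = W @ mu               # top-down prediction of the input
    error = x - prediction            # prediction error (bottom-up signal)
    mu = mu + lr * (W.T @ error)      # latent update driven by the error
    W = W + lr * np.outer(error, mu)  # Hebbian-like weight update
    return mu, W, error

rng = np.random.default_rng(0)
x = rng.normal(size=8)                # a fixed "sensory" input
mu = np.zeros(4)
W = rng.normal(scale=0.1, size=(8, 4))
errs = []
for _ in range(200):
    mu, W, err = predictive_coding_step(x, mu, W)
    errs.append(np.linalg.norm(err))
# the prediction error shrinks as the internal model adapts to the input
assert errs[-1] < errs[0]
```

Both updates are gradient steps on the squared prediction error, which is why the error norm decays as the loop runs; architectures like PredNet stack many such stages and transmit only the residual errors upward.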

Position

Dr Clyde Francks

Max Planck Institute for Psycholinguistics, Language & Genetics dept.
Nijmegen, The Netherlands
Dec 5, 2025

A postdoctoral position (2 years' duration) on brain imaging genomics is available at the Language and Genetics Department of the Max Planck Institute, Nijmegen, the Netherlands. We seek a postdoctoral researcher to investigate links between gene expression in the human cerebral cortex and inter-individual variations in brain and behaviour. The position will be embedded within the Imaging Genomics group of the host department, and will be carried out in collaboration with leading researchers at the Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen. The successful candidate will join an innovative research program that is seeking to characterize the brain's molecular infrastructure for language, and integrate this with data on individual differences in brain and behaviour. This is an initiative of the Language in Interaction (LiI) consortium, sponsored by a major grant from the Netherlands Organisation for Scientific Research. We recently generated a unique gene expression dataset using spatial transcriptomics from regions of the human cerebral cortex that are important for language. The postdoctoral scientist will take the lead on integrative analyses linking gene expression to genetic association, making use of large-scale resources such as the UK Biobank (currently data from over 30,000 individuals with brain imaging and genetic data, including common single nucleotide polymorphisms and rare genetic variants) and the international GenLang Consortium (data from up to 34,000 individuals on reading- and language-related abilities together with genetic data). One major goal is to apply a recently optimized pipeline for measuring white matter tracts in biobank-scale diffusion tensor imaging data, and subsequently to apply genetic techniques such as genome-wide association analysis, partitioned heritability analysis, and polygenic score analysis.
The project therefore offers the possibility to learn state-of-the-art techniques in both brain image analysis and genetic analysis.
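For readers unfamiliar with the last of those techniques: a polygenic score reduces, at its core, to an effect-size-weighted sum over an individual's genotypes. A toy sketch with invented numbers (not UK Biobank or GenLang data, and omitting the quality-control and clumping steps a real pipeline needs):

```python
import numpy as np

# Hypothetical toy data: genotype dosages (0/1/2 copies of the effect allele)
# for 5 individuals at 4 SNPs, plus per-SNP effect sizes (betas) from a GWAS.
genotypes = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 0, 2],
    [0, 2, 1, 1],
    [1, 0, 2, 0],
])
effect_sizes = np.array([0.12, -0.05, 0.30, 0.08])  # illustrative GWAS betas

# A polygenic score is the effect-size-weighted sum of allele dosages:
# one score per individual, summarizing their genome-wide genetic loading.
polygenic_scores = genotypes @ effect_sizes
print(polygenic_scores)  # e.g. individual 0: 0*0.12 + 1*(-0.05) + 2*0.30 + 1*0.08 = 0.63
```

Real analyses use millions of SNPs and shrink or threshold the betas, but the scoring step itself is exactly this matrix-vector product.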

Position

Prof. Dr. S.E. Fisher

Max Planck Institute for Psycholinguistics
Nijmegen, Netherlands
Dec 5, 2025

The successful candidate will join an innovative research program that is characterizing individual variation in language skills at behavioural, neurobiological, and genetic levels, an initiative of the Language in Interaction (LiI) consortium, sponsored by a major grant from the Netherlands Organisation for Scientific Research. The LiI consortium has developed a computer-based test battery to assess core skills underlying speaking and listening across a broad spectrum of abilities, and has already applied this to hundreds of young adults from the general population, a subset of whom are also tested with functional MRI. In parallel, saliva sampling has been used to collect DNA. The postdoctoral scientist will take the lead on the genetic aspects of the project. Specifically, they will process and analyse genome-wide genotype data from this unique resource, and use methods for analyzing polygenic contributions to human traits in order to trace genetic links to cognitive skills, childhood learning disorders, and MRI-based measures of brain structure/function, integrating with independent datasets available at the Language and Genetics department. The project will be further scaled up by applying online versions of the LiI battery to large pre-existing population-based cohorts with available genome-wide genotypes. The postdoctoral scientist will also foster connections to ongoing work by GenLang, an international network of researchers carrying out genetic association meta-analyses of multiple speech/language/reading-related cohorts across the world.

Position

Rava Azeredo da Silveira

ENS, Paris and IOB, University of Basel
Paris (France) and/or Basel (Switzerland)
Dec 5, 2025

Several postdoctoral openings in the lab of Rava Azeredo da Silveira (Paris & Basel)

The lab of Rava Azeredo da Silveira invites applications for Postdoctoral Researcher positions at ENS, Paris, and IOB, an associated institute of the University of Basel. Research questions will be chosen from a broad range of topics in theoretical/computational neuroscience and cognitive science (see the description of the lab's activity below). One of the postdoc positions to be filled in Basel will be part of a collaborative framework with Michael Woodford (Columbia University) and will involve projects relating the study of decision making to models of perception and memory. Candidates with backgrounds in mathematics, statistics, artificial intelligence, physics, computer science, engineering, biology, and psychology are welcome. Experience with data analysis and proficiency with numerical methods, in addition to familiarity with neuroscience topics and mathematical and statistical methods, are desirable. Equally desirable are a spirit of intellectual adventure, eagerness, and drive. The positions come with highly competitive work conditions and salaries.

Application deadline: Applications will be reviewed starting on 1 November 2020.

How to apply: Please send the following in one single PDF to silveira@iob.ch:
1. a letter of motivation;
2. a statement of research interests, limited to two pages;
3. a curriculum vitæ including a list of publications;
4. any relevant publications that you wish to showcase.
In addition, please arrange for three letters of recommendation to be sent to the same email address. In all email correspondence, please include the mention "APPLICATION-POSTDOC" in the subject header; otherwise the application will not be considered.
* ENS, together with a number of neighboring institutions (Collège de France, Institut Curie, ESPCI, Sorbonne Université, and Institut Pasteur), offers a rich scientific and intellectual environment, with a strong representation in computational neuroscience and related fields.

* IOB is a research institute combining basic and clinical research. Its mission is to drive innovations in understanding vision and its diseases and to develop new therapies for vision loss. IOB is an equal-opportunity employer with family-friendly work policies.

* The Silveira Lab focuses on a range of topics tied together by a central question: how does the brain represent and manipulate information? Among the more concrete approaches to this question, the lab analyses and models neural activity in circuits that can be identified, recorded from, and perturbed experimentally, such as visual neural circuits in the retina and the cortex. Establishing links between physiological specificity and the structure of neural activity yields an understanding of circuits as building blocks of cerebral information processing. On a more abstract level, the lab investigates the representation of information in populations of neurons from a statistical and algorithmic, rather than mechanistic, point of view, through theories of coding and data analyses. These studies aim at understanding the statistical nature of high-dimensional neural activity in different conditions, and how this serves to encode and process information from the sensory world. In the context of cognitive studies, the lab investigates mental processes such as inference, learning, and decision-making, through both theoretical developments and behavioral experiments. A particular focus is the study of neural constraints and limitations and, further, their impact on mental processes. Neural limitations impinge on the structure and variability of mental representations, which in turn inform the cognitive algorithms that produce behavior. The lab explores the nature of neural limitations, mental representations, and cognitive algorithms, and their interrelations.

Position

Pranav Nerurkar

Utkarsh Minds Institute
Online
Dec 5, 2025

Join our comprehensive online internship program focusing on Two Sample Hypothesis Testing, designed for students eager to delve into the world of statistical analysis and coding. This internship offers a unique blend of theoretical learning and practical application, providing participants with a robust understanding of hypothesis testing using real-world data. Key features include Interactive Video Lectures, Hands-On Coding Assignments, Practical Applications, Mentorship and Support, and Certification.
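The statistical centerpiece of this program, a two-sample hypothesis test, asks whether two independent samples plausibly share the same mean. A minimal standard-library sketch of Welch's t statistic, with invented data (this is an illustration, not the course's own material):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Tests whether two independent samples have equal means without
    assuming equal variances (the usual default in practice).
    """
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na   # squared standard error of mean A
    se2_b = variance(sample_b) / nb   # squared standard error of mean B
    t = (mean(sample_a) - mean(sample_b)) / sqrt(se2_a + se2_b)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# Hypothetical reaction-time samples (seconds) under two conditions
a = [1.83, 1.91, 2.07, 1.76, 1.95, 2.02]
b = [2.12, 2.31, 2.18, 2.25, 2.09, 2.33]
t, df = welch_t(a, b)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large |t| is evidence the means differ
```

The t statistic is then compared against the t distribution with `df` degrees of freedom to obtain a p-value; in practice one would call a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` rather than hand-rolling this.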

Position

I-Chun Lin, PhD

Gatsby Computational Neuroscience Unit, UCL
Gatsby Computational Neuroscience Unit, UCL
Dec 5, 2025

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning in brains and machines; inference, coding and neural dynamics; Bayesian and kernel methods, and deep learning; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data and nonparametric hypothesis testing. The Unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments such as Computer Science; Statistical Science; Artificial Intelligence; the ELLIS Unit at UCL; Neuroscience; and the nearby Alan Turing and Francis Crick Institutes. Our PhD programme provides a rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical/computational neuroscience, with minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Position · Computational Neuroscience

Dr. Gunnar Blohm

Queen's University
Queen's University, Kingston, ON
Dec 5, 2025

I'm looking for postdocs who'd like to apply for the Connected Minds PDFs with me and collaborators, to work on the following potential projects:
1. explainable neuroAI for ANN/SNN models of motor control
2. neuromorphic robotic control
3. neurorobotic artistic performance
4. whole-brain motor control networks identified through MEG and inverse optimal control
More information about the 2-yr Connected Minds PDF application, including eligibility criteria, can be found here: https://www.yorku.ca/research/connected-minds/postdoctoral-fellowships/. I will of course help assemble the advisory team, write the research project description, and provide general guidance for the application. Feel free to check out my lab's website <http://compneurosci.com/> and wiki <http://compneurosci.com/wiki/index.php/Main_Page> to get a better sense of who we are and how we work.

Seminar · Neuroscience

Prefrontal-thalamic goal-state coding segregates navigation episodes into spatially consistent parallel hippocampal maps

Hiroshi Ito
University of Lausanne
Nov 30, 2025

Seminar · Neuroscience

Top-down control of neocortical threat memory

Prof. Dr. Johannes Letzkus
Universität Freiburg, Germany
Nov 11, 2025

Accurate perception of the environment is a constructive process that requires integration of external bottom-up sensory signals with internally generated top-down information reflecting past experiences and current aims. Decades of work have elucidated how sensory neocortex processes physical stimulus features. In contrast, examining how memory-related top-down information is encoded and integrated with bottom-up signals has long been challenging. Here, I will discuss our recent work pinpointing the outermost layer 1 of neocortex as a central hotspot for the processing of experience-dependent top-down threat information during perception, one of the most fundamentally important forms of sensation.

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Functional connectomics reveals general wiring rule in mouse visual cortex

Ariel Zeleznikow-Johnston
Monash University
Oct 20, 2025

Functional connectomics reveals general wiring rule in mouse visual cortex

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Connectomic traces of Hebbian plasticity in the entorhinal-hippocampal system

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Oct 6, 2025

Connectomic traces of Hebbian plasticity in the entorhinal-hippocampal system

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Distinct synaptic plasticity rules operate across dendritic compartments in vivo during learning

Ken Hayworth
Co-Founder and Chief Science Officer, Carboncopies
Sep 22, 2025

Distinct synaptic plasticity rules operate across dendritic compartments in vivo during learning

Seminar · Neuroscience

Unpacking the role of the medial septum in spatial coding in the medial entorhinal cortex

Jennifer Robinson
McGill University
Sep 10, 2025

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: A combinatorial neural code for long-term motor memory

Ariel Zeleznikow-Johnston
Monash University
Sep 8, 2025

A combinatorial neural code for long-term motor memory

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Behavioral time scale synaptic plasticity underlies CA1 place fields

Kenneth Hayworth
Co-Founder and Chief Science Officer, Carboncopies
Aug 25, 2025

Behavioral time scale synaptic plasticity underlies CA1 place fields

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Connectomic reconstruction of a cortical column

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Aug 11, 2025

Connectomic reconstruction of a cortical column

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
Jul 8, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
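The reward prediction error invoked in this abstract is, in its textbook form, the mismatch delta = r + gamma*V(s') - V(s) that drives temporal-difference learning. A minimal sketch of that update on an invented three-state toy task (a generic TD(0) illustration, not the songbird analyses or CogFunSearch):

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference update. The returned delta is the reward
    prediction error that dopamine activity is hypothesized to encode."""
    delta = r + gamma * V[s_next] - V[s]  # prediction error
    V[s] += alpha * delta                 # nudge the value toward the target
    return delta

# Toy episode: a 3-state chain 0 -> 1 -> 2 where only the final
# transition delivers reward.
V = {0: 0.0, 1: 0.0, 2: 0.0}
for _ in range(100):
    td_update(V, 0, 0.0, 1)       # no reward on the first transition
    td_update(V, 1, 1.0, 2)       # reward delivered on entering state 2
# With learning, V[1] approaches the reward (1.0) and V[0] approaches
# the discounted value gamma * V[1], so errors at predicted rewards shrink.
print(V)
```

This is the signature the abstract refers to: early in learning the error (and the dopamine signal it models) fires at the reward itself, and as values converge it migrates to the earliest reliable predictor.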

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Neocortical synaptic engrams for remote contextual memories

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Jun 16, 2025

Neocortical synaptic engrams for remote contextual memories

Seminar · Neuroscience

Neural circuits underlying sleep structure and functions

Antoine Adamantidis
University of Bern
Jun 12, 2025

Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between (REM) sleep and emotion encoding, in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states, using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells, occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and the neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.

Seminar · Neuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Structure and function of the hippocampal CA3 module

Kenneth Hayworth
Co-Founder and Chief Science Officer, Carboncopies
Jun 2, 2025

Structure and function of the hippocampal CA3 module

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 22, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), and performance is optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Synaptic architecture of a memory engram in the mouse hippocampus

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
May 19, 2025

Synaptic architecture of a memory engram in the mouse hippocampus

Seminar · Neuroscience

Single-neuron correlates of perception and memory in the human medial temporal lobe

Prof. Dr. Dr. Florian Mormann
University of Bonn, Germany
May 13, 2025

The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind, Columbia U
May 13, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

Seminar · Neuroscience · Recording

Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons

Ariel Zeleznikow-Johnston
Monash University
May 5, 2025

Join us for the Memory Decoding Journal Club! A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience. This time, we're diving into a groundbreaking paper: "Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons."

Seminar · Neuroscience · Recording

Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala

Kenneth Hayworth
Carboncopies Foundation & BPF Aspirational Neuroscience
Apr 21, 2025

Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala. This study by Marios Abatis et al. demonstrates how fear conditioning strengthens synaptic connections between engram cells in the lateral amygdala, revealed through optogenetic identification of neuronal ensembles and electrophysiological measurements. The work provides crucial insights into memory formation mechanisms at the synaptic level, with implications for understanding anxiety disorders and developing targeted interventions. Presented by Dr. Kenneth Hayworth, this journal club will explore the paper's methodology linking engram cell reactivation with synaptic plasticity measurements, and discuss implications for memory decoding research.

Seminar · Neuroscience · Recording

Memory Decoding Journal Club: Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Apr 7, 2025

Join us for the Memory Decoding Journal Club, a collaboration between the Carboncopies Foundation and BPF Aspirational Neuroscience. This month, we're diving into a groundbreaking paper: 'Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating' by Bo Lei, Bilin Kang, Yuejun Hao, Haoyu Yang, Zihan Zhong, Zihan Zhai, and Yi Zhong from Tsinghua University, Beijing Academy of Artificial Intelligence, IDG/McGovern Institute of Brain Research, and Peking Union Medical College. Dr. Randal Koene will guide us through an engaging discussion on these exciting findings and their implications for neuroscience and memory research.

Seminar · Neuroscience

Active Predictive Coding and the Primacy of Actions in Natural and Artificial Intelligence

Rajesh Rao
University of Washington
Apr 6, 2025

Seminar · Neuroscience

Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy

Zanos Panos
Translational Neuropharmacology Lab, University of Cyprus, Center for Applied Neuroscience & Department of Psychology, Nicosia, Cyprus
Apr 3, 2025

Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.

Seminar · Neuroscience · Recording

Altered grid-like coding in early blind people and the role of vision in conceptual navigation

Roberto Bottini
CIMeC, University of Trento
Mar 5, 2025

Seminar · Neuroscience

Circuit Mechanisms of Remote Memory

Lauren DeNardo, PhD
Department of Physiology, David Geffen School of Medicine, UCLA
Feb 10, 2025

Memories of emotionally-salient events are long-lasting, guiding behavior from minutes to years after learning. The prelimbic cortex (PL) is required for fear memory retrieval across time and is densely interconnected with many subcortical and cortical areas involved in recent and remote memory recall, including the temporal association area (TeA). While the behavioral expression of a memory may remain constant over time, the neural activity mediating memory-guided behavior is dynamic. In PL, different neurons underlie recent and remote memory retrieval and remote memory-encoding neurons have preferential functional connectivity with cortical association areas, including TeA. TeA plays a preferential role in remote compared to recent memory retrieval, yet how TeA circuits drive remote memory retrieval remains poorly understood. Here we used a combination of activity-dependent neuronal tagging, viral circuit mapping and miniscope imaging to investigate the role of the PL-TeA circuit in fear memory retrieval across time in mice. We show that PL memory ensembles recruit PL-TeA neurons across time, and that PL-TeA neurons have enhanced encoding of salient cues and behaviors at remote timepoints. This recruitment depends upon ongoing synaptic activity in the learning-activated PL ensemble. Our results reveal a novel circuit encoding remote memory and provide insight into the principles of memory circuit reorganization across time.

SeminarNeuroscience

Visual objects refine the encoding of head direction

Emilie Macé
University Medical Center Göttingen
Jan 22, 2025
SeminarNeuroscience

Decomposing motivation into value and salience

Philippe Tobler
University of Zurich
Oct 31, 2024

Humans and other animals approach reward and avoid punishment and pay attention to cues predicting these events. Such motivated behavior thus appears to be guided by value, which directs behavior towards or away from positively or negatively valenced outcomes. Moreover, it is facilitated by (top-down) salience, which enhances attention to behaviorally relevant learned cues predicting the occurrence of valenced outcomes. Using human neuroimaging, we recently separated value (ventral striatum, posterior ventromedial prefrontal cortex) from salience (anterior ventromedial cortex, occipital cortex) in the domain of liquid reward and punishment. Moreover, we investigated potential drivers of learned salience: the probability and uncertainty with which valenced and non-valenced outcomes occur. We find that the brain dissociates valenced from non-valenced probability and uncertainty, which indicates that reinforcement matters for the brain, in addition to information provided by probability and uncertainty alone, regardless of valence. Finally, we assessed learning signals (unsigned prediction errors) that may underpin the acquisition of salience. Particularly the insula appears to be central for this function, encoding a subjective salience prediction error, similarly at the time of positively and negatively valenced outcomes. However, it appears to employ domain-specific time constants, leading to stronger salience signals in the aversive than the appetitive domain at the time of cues. These findings explain why previous research associated the insula with both valence-independent salience processing and with preferential encoding of the aversive domain. More generally, the distinction of value and salience appears to provide a useful framework for capturing the neural basis of motivated behavior.

SeminarArtificial IntelligenceRecording

Llama 3.1 Paper: The Llama Family of Models

Vibhu Sapra
Jul 28, 2024

Modern artificial intelligence (AI) systems are powered by foundation models. This paper presents a new set of foundation models, called Llama 3. It is a herd of language models that natively support multilinguality, coding, reasoning, and tool usage. Our largest model is a dense Transformer with 405B parameters and a context window of up to 128K tokens. This paper presents an extensive empirical evaluation of Llama 3. We find that Llama 3 delivers comparable quality to leading language models such as GPT-4 on a plethora of tasks. We publicly release Llama 3, including pre-trained and post-trained versions of the 405B parameter language model and our Llama Guard 3 model for input and output safety. The paper also presents the results of experiments in which we integrate image, video, and speech capabilities into Llama 3 via a compositional approach. We observe this approach performs competitively with the state-of-the-art on image, video, and speech recognition tasks. The resulting models are not yet being broadly released as they are still under development.

SeminarNeuroscience

Probing neural population dynamics with recurrent neural networks

Chethan Pandarinath
Emory University and Georgia Tech
Jun 11, 2024

Large-scale recordings of neural activity are providing new opportunities to study network-level dynamics with unprecedented detail. However, the sheer volume of data and its dynamical complexity are major barriers to uncovering and interpreting these dynamics. I will present latent factor analysis via dynamical systems (LFADS), a sequential autoencoding approach that enables inference of dynamics from neuronal population spiking activity on single trials and millisecond timescales. I will also discuss recent adaptations of the method to uncover dynamics from neural activity recorded via two-photon calcium imaging. Finally, time permitting, I will mention recent efforts to improve the interpretability of deep-learning-based dynamical systems models.
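As a rough intuition for why inferring shared latent factors denoises single-trial spiking, here is a minimal NumPy sketch. It is not LFADS (which uses a sequential autoencoder with learned dynamics); truncated SVD stands in for factor inference, and all sizes, rates, and loadings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two latent factors drive the firing rates of a whole population.
T, N = 300, 30
t = np.linspace(0, 4 * np.pi, T)
factors = np.stack([np.sin(t), np.cos(t)])    # 2 x T latent factors
W = rng.standard_normal((N, 2))               # per-neuron factor loadings
rates = np.maximum(5.0 + W @ factors, 0.1)    # N x T underlying firing rates
spikes = rng.poisson(rates).astype(float)     # noisy single-trial spike counts

# Truncated SVD (baseline + 2 factors => rank 3) as a stand-in for
# inferring the shared low-dimensional structure behind the spikes.
U, s, Vt = np.linalg.svd(spikes, full_matrices=False)
denoised = U[:, :3] @ np.diag(s[:3]) @ Vt[:3]

# Exploiting the low-dimensional structure recovers the rates far better
# than the raw spike counts do.
mse_raw = np.mean((spikes - rates) ** 2)
mse_denoised = np.mean((denoised - rates) ** 2)
```

Because Poisson noise is independent across neurons and time while the signal is shared, projecting onto a few factors discards most of the noise but little of the signal.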

SeminarNeuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 6, 2024

Dr. Nicholas Blauch will present on his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

SeminarNeuroscienceRecording

This decision matters: Sorting out the variables that lead to a single choice

Mathew Diamond
International School for Advanced Studies (SISSA)
Apr 17, 2024
SeminarNeuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 11, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.

SeminarNeuroscience

How are the epileptogenesis clocks ticking?

Cristina Reschke
RCSI
Apr 9, 2024

The epileptogenesis process is associated with large-scale changes in gene expression, which contribute to the remodelling of brain networks, permanently altering excitability. About 80% of protein-coding genes are under the influence of circadian rhythms: 24-hour endogenous rhythms that determine a large number of daily changes in our physiology and behavior. In the brain, the master clock regulates many pathways that are important during epileptogenesis and established epilepsy, such as neurotransmission, synaptic homeostasis, inflammation, and the blood-brain barrier, among others. In-depth mapping of the molecular basis of circadian timing in the brain is key for a complete understanding of the cellular and molecular events connecting genes to phenotypes.

SeminarNeuroscience

Stress changes risk-taking by altering Bayesian magnitude coding in parietal cortex

Christian Ruff
University of Zurich, Switzerland
Feb 27, 2024
SeminarNeuroscience

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Mehdi Azabou
Feb 21, 2024

Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

SeminarNeuroscience

Dyslexia, Rhythm, Language and the Developing Brain

Usha Goswami CBE
University of Cambridge
Feb 21, 2024

Recent insights from auditory neuroscience provide a new perspective on how the brain encodes speech. Using these recent insights, I will provide an overview of key factors underpinning individual differences in children’s development of language and phonology, providing a context for exploring atypical reading development (dyslexia). Children with dyslexia are relatively insensitive to acoustic cues related to speech rhythm patterns. This lack of rhythmic sensitivity is related to the atypical neural encoding of rhythm patterns in speech by the brain. I will describe our recent data from infants as well as children, demonstrating developmental continuity in the key neural variables.

SeminarNeuroscienceRecording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 4, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.

SeminarNeuroscience

Trends in NeuroAI - Brain-optimized inference for fMRI reconstructions

Reese Kneeland
Jan 4, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
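The iterative refinement loop described in the abstract can be caricatured in a few lines. This is a toy sketch under stated assumptions, not the authors' pipeline: Gaussian perturbations around the current seed stand in for sampling from a conditioned diffusion model, a random linear map stands in for the brain-optimized encoding model, and all names and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(img, W):
    """Toy encoding model: a linear map from image features to voxel activity."""
    return W @ img

def fit(img, W, measured):
    """Alignment between predicted and measured brain activity."""
    return np.corrcoef(encode(img, W), measured)[0, 1]

def refine(measured, seed, W, n_iter=20, n_samples=32, sigma0=1.0):
    """Iteratively refine a seed reconstruction toward the measured activity.

    Each iteration samples candidates around the current seed (standing in
    for conditioning an image generator on it), keeps the candidate whose
    predicted activity best matches the measurement, and shrinks the noise
    scale, narrowing the image distribution over iterations.
    """
    sigma = sigma0
    for _ in range(n_iter):
        noise = sigma * rng.standard_normal((n_samples, seed.size))
        candidates = np.vstack([seed[None, :], seed + noise])  # keep current seed
        scores = [fit(c, W, measured) for c in candidates]
        seed = candidates[int(np.argmax(scores))]
        sigma *= 0.8  # reduce stochasticity each iteration
    return seed

# Toy problem: recover an "image" vector from simulated voxel activity.
d_img, d_vox = 16, 64
W = rng.standard_normal((d_vox, d_img))
measured = encode(rng.standard_normal(d_img), W)
start = rng.standard_normal(d_img)
result = refine(measured, start, W)
start_fit, result_fit = fit(start, W, measured), fit(result, W, measured)
```

Keeping the current seed among the candidates makes the alignment score non-decreasing across iterations, mirroring the paper's stopping criterion on the width of the image distribution.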

SeminarNeuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 6, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
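The image-retrieval metric used to evaluate such decoders is simple to state: pick, from a gallery of candidates, the image whose embedding is most similar to the decoded brain embedding. A minimal sketch, with a noisy copy of one embedding standing in for the MEG decoder's output (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def cosine_retrieval(decoded, image_embs):
    """Return the index of the gallery image whose embedding is most
    cosine-similar to the decoded brain embedding."""
    d = decoded / np.linalg.norm(decoded)
    E = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    return int(np.argmax(E @ d))

# Toy setup: the "decoded" embedding is a noisy copy of one image embedding.
image_embs = rng.standard_normal((50, 32))   # gallery of 50 candidate images
true_idx = 7
decoded = image_embs[true_idx] + 0.1 * rng.standard_normal(32)
pred = cosine_retrieval(decoded, image_embs)
```

Retrieval accuracy over many trials of this kind is what the reported "7X improvement over classic linear decoders" refers to.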

SeminarNeuroscienceRecording

Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex

Samuel Post
University of California, Riverside
Nov 28, 2023

Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear if it is generated locally and intrinsically or if it is a readout of a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well-suited to address these questions as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern sensory discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We have found significant changes in network dynamics as mice learn the task, from naive to middle to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, we did not find that they accounted for increased performance; they were in fact correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.

SeminarNeuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916

SeminarPsychology

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Kim Uittenhove & Olivier Mucchiut
AFC Lab / University of Lausanne
Oct 15, 2023

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the Covid-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMa). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
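A workflow of this kind typically reduces to two steps: prompt the model with a fixed codebook, then parse and validate its reply. The sketch below illustrates that shape; the theme list is hypothetical (the SWISS100 codebook is not given here), and the API call itself is shown only as a comment since it requires a key:

```python
import json

# Hypothetical theme codebook, for illustration only.
THEMES = ["social contact", "health worries", "daily routine", "resilience"]

def build_messages(excerpt, themes=THEMES):
    """Build a chat prompt asking the model to tag an interview excerpt
    with zero or more allowed themes, replying in strict JSON."""
    system = ("You are assisting with qualitative coding. "
              f"Allowed themes: {themes}. "
              'Reply only with JSON of the form {"themes": [...]}.')
    return [{"role": "system", "content": system},
            {"role": "user", "content": excerpt}]

def parse_labels(reply_text, themes=THEMES):
    """Parse the model's JSON reply, discarding any theme outside the codebook."""
    labels = json.loads(reply_text).get("themes", [])
    return [t for t in labels if t in themes]

# The API call itself (requires an OpenAI key; illustration only):
# from openai import OpenAI
# reply = OpenAI().chat.completions.create(
#     model="gpt-3.5-turbo", temperature=0,
#     messages=build_messages("I missed seeing my grandchildren, but we coped."))
# labels = parse_labels(reply.choices[0].message.content)

# Offline demonstration with a canned model reply:
canned = '{"themes": ["social contact", "resilience", "not-in-codebook"]}'
labels = parse_labels(canned)
```

Validating replies against the fixed codebook is what makes AI-assisted codes comparable with human coders' labels when measuring convergence and divergence.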

SeminarNeuroscience

BrainLM Journal Club

Connor Lane
Sep 28, 2023

Connor Lane will lead a journal club on the recent BrainLM preprint, a foundation model for fMRI trained using self-supervised masked autoencoder training. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19

SeminarNeuroscience

Algonauts 2023 winning paper journal club (fMRI encoding models)

Huzheng Yang, Paul Scotti
Aug 17, 2023

Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their process. This journal club meeting will involve open discussion of the paper, with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper also from Huze that we can discuss: https://arxiv.org/pdf/2307.14021.pdf

SeminarNeuroscience

1.8 billion regressions to predict fMRI (journal club)

Mihir Tripathy
Jul 27, 2023

Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.

SeminarNeuroscience

Decoding mental conflict between reward and curiosity in decision-making

Naoki Honda
Hiroshima University
Jul 9, 2023

Humans and animals are not always rational. They not only rationally exploit rewards but also explore their environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection sticking to more certain options, and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis of reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
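The core intuition, that a signed curiosity parameter trades off reward against information, can be sketched with a softmax choice rule. This is a simplified stand-in for the free-energy-based model, not the authors' formulation; all parameter values are illustrative:

```python
import numpy as np

def choice_probs(expected_reward, info_bonus, curiosity, beta=3.0):
    """Softmax choice between options whose utility trades off expected
    reward against an information bonus scaled by a curiosity parameter.
    Negative curiosity penalizes uncertain options (conservative choices)."""
    u = np.asarray(expected_reward) + curiosity * np.asarray(info_bonus)
    z = np.exp(beta * (u - u.max()))  # shift by max for numerical stability
    return z / z.sum()

rewards = [0.6, 0.5]   # option 0 is slightly better on average
info = [0.0, 1.0]      # option 1 is the uncertain, informative one

p_curious = choice_probs(rewards, info, curiosity=+0.5)
p_conservative = choice_probs(rewards, info, curiosity=-0.5)
```

Fitting the curiosity parameter trial-by-trial to observed choices is, in spirit, what the decoding method does: a negative fitted value captures the rats' conservative preference for certain options.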

SeminarNeuroscience

Quantifying perturbed SynGAP1 function caused by coding mutations

Michael Courtney, PhD
Turku Bioscience
Jun 14, 2023
SeminarNeuroscience

Distinct contributions of different anterior frontal regions to rule-guided decision-making in primates: complementary evidence from lesions, electrophysiology, and neurostimulation

Mark Buckley
Oxford University
May 4, 2023

Different prefrontal areas contribute in distinctly different ways to rule-guided behaviour in the context of a Wisconsin Card Sorting Test (WCST) analog for macaques. For example, causal evidence from circumscribed lesions in NHPs reveals that dorsolateral prefrontal cortex (dlPFC) is necessary to maintain a reinforced abstract rule in working memory, orbitofrontal cortex (OFC) is needed to rapidly update representations of rule value, and the anterior cingulate cortex (ACC) plays a key role in cognitive control and integrating information for correct and incorrect trials over recent outcomes. Moreover, recent lesion studies of frontopolar cortex (FPC) suggest it contributes to representing the relative value of unchosen alternatives, including rules. Yet we do not understand how these functional specializations relate to intrinsic neuronal activities nor the extent to which these neuronal activities differ between different prefrontal regions. After reviewing the aforementioned causal evidence I will present our new data from studies using multi-area multi-electrode recording techniques in NHPs to simultaneously record from four different prefrontal regions implicated in rule-guided behaviour. Multi-electrode micro-arrays (‘Utah arrays’) were chronically implanted in dlPFC, vlPFC, OFC, and FPC of two macaques, allowing us to simultaneously record single and multiunit activity, and local field potential (LFP), from all regions while the monkey performs the WCST analog. Rule-related neuronal activity was widespread in all areas recorded but it differed in degree and in timing between different areas. I will also present preliminary results from decoding analyses applied to rule-related neuronal activities both from individual clusters and also from population measures. These results confirm and help quantify dynamic task-related activities that differ between prefrontal regions. We also found task-related modulation of LFPs within beta and gamma bands in FPC. 
By combining these correlational recording methods with trial-specific causal interventions (electrical microstimulation) to FPC, we could significantly enhance and impair the animals' performance in distinct task epochs in functionally relevant ways, further consistent with an emerging picture of regional functional specialization within a distributed framework of interacting and interconnected cortical regions.

SeminarNeuroscience

Decoding the hippocampal oscillatory complexity to predict behavior

Romain Goutagny
FunSy Team, Laboratoire de Neurosciences cognitives et Adaptatives, CNRS - Université de Strasbourg
May 3, 2023
SeminarNeuroscienceRecording

A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time

Domenica Bueti
SISSA, Trieste (Italy)
Apr 18, 2023

Any sensory experience of the world, from the touch of a caress to the smile on our friend’s face, is embedded in time and is often associated with the perception of its flow. The perception of time is therefore a peculiar sensory experience built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. Some empirical evidence shows the existence of this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles represents a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies to show how concurrent changes of visual stimulus duration, speed, visual contrast and numerosity shape and modulate brain and pupil responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.

SeminarNeuroscience

From spikes to factors: understanding large-scale neural computations

Mark M. Churchland
Columbia University, New York, USA
Apr 5, 2023

It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanisms, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.

SeminarNeuroscienceRecording

The smart image compression algorithm in the retina: a theoretical study of recoding inputs in neural circuits

Gabrielle Gutierrez
Columbia University, New York
Apr 4, 2023

Computation in neural circuits relies on a common set of motifs, including divergence of common inputs to parallel pathways, convergence of multiple inputs to a single neuron, and nonlinearities that select some signals over others. Convergence and circuit nonlinearities, considered individually, can lead to a loss of information about the inputs. Past work has detailed how to optimize nonlinearities and circuit weights to maximize information, but we show that selective nonlinearities, acting together with divergent and convergent circuit structure, can improve information transmission over a purely linear circuit despite the suboptimality of these components individually. These nonlinearities recode the inputs in a manner that preserves the variance among converged inputs. Our results suggest that neural circuits may be doing better than expected without finely tuned weights.
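The key claim, that rectifying nonlinearities plus divergence and convergence can transmit more than a purely linear circuit, can be seen in a minimal example. This toy sketch (not the paper's model) diverges each input into rectified ON and OFF pathways and converges within each pathway:

```python
import numpy as np

def relu(x):
    """Rectifying nonlinearity, the selective element of the circuit."""
    return np.maximum(x, 0.0)

def linear_circuit(x1, x2):
    """Purely linear convergence: one output, the summed input."""
    return x1 + x2

def on_off_circuit(x1, x2):
    """Divergence into rectified ON and OFF pathways, then convergence
    within each pathway (a toy version of the retinal motif)."""
    on = relu(x1) + relu(x2)
    off = relu(-x1) + relu(-x2)
    return on, off

# Two input pairs the linear circuit cannot tell apart (both sum to 0) ...
a = on_off_circuit(1.0, -1.0)   # -> (1.0, 1.0)
b = on_off_circuit(0.0, 0.0)    # -> (0.0, 0.0)
```

The ON/OFF pair still encodes the linear sum (ON minus OFF) but additionally distinguishes inputs the linear circuit confuses, illustrating how nonlinear recoding can preserve variance among converged inputs that a linear readout would destroy.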

SeminarNeuroscience

Self-perception: mechanosensation and beyond

Wei Zhang
National Natural Science Foundation of China
Apr 3, 2023

Brain-organ communications play a crucial role in maintaining the body's physiological and psychological homeostasis, and are controlled by complex neural and hormonal systems, including the internal mechanosensory organs. However, progress has been slow due to technical hurdles: the sensory neurons are deeply buried inside the body and are not readily accessible for direct observation; the projection patterns from different organs or body parts are complex rather than converging onto dedicated brain regions; and the coding principles cannot be directly adapted from those learned from conventional sensory pathways. Our lab applies the pipeline of "biophysics of receptors → cell biology of neurons → functionality of neural circuits → animal behaviors" to explore the molecular and neural mechanisms of self-perception. In the lab, we mainly focus on three questions: (1) the molecular and cellular basis of proprioception and interoception; (2) the circuit mechanisms of sensory coding and the integration of internal and external information; and (3) the function of interoception in regulating behavioral homeostasis.

SeminarNeuroscienceRecording

Behavioural Basis of Subjective Time Distortions

Franklenin Sierra
Max Planck Institute for Empirical Aesthetics, Germany
Mar 28, 2023

Precisely estimating event timing is essential for survival, yet temporal distortions are ubiquitous in our daily sensory experience. Here, we tested whether the relative position, duration, and distance in time of two sequentially-organized events—standard S, with constant duration, and comparison C, with duration varying trial-by-trial—are causal factors in generating temporal distortions. We found that temporal distortions emerge when the first event is shorter than the second event. Importantly, a significant interaction suggests that a longer inter-stimulus interval (ISI) helps to counteract such serial distortion effect only when the constant S is in the first position, but not if the unpredictable C is in the first position. These results imply the existence of a perceptual bias in perceiving ordered event durations, mechanistically contributing to distortion in time perception. Our results clarify the mechanisms generating time distortions by identifying a hitherto unknown duration-dependent encoding inefficiency in human serial temporal perception, something akin to a strong prior that can be overridden for highly predictable sensory events but unfolds for unpredictable ones.

SeminarNeuroscienceRecording

Are place cells just memory cells? Probably yes

Stefano Fusi
Columbia University, New York
Mar 21, 2023

Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
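The core intuition of this abstract, that compressing spatially correlated sensory input yields a code whose units vary smoothly with position, can be illustrated with a toy sketch. This is not the authors' model; it is a minimal assumption-laden demonstration using PCA as the compression step and synthetic inputs whose features are smooth functions of position.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "experiences": sensory feature vectors sampled at 100
# positions along a 1D track, with features correlated across nearby
# positions (Gaussian kernel with a chosen length scale).
n_pos, n_feat = 100, 50
positions = np.linspace(0, 1, n_pos)
length_scale = 0.1
cov = np.exp(-((positions[:, None] - positions[None, :]) ** 2)
             / (2 * length_scale ** 2))
L = np.linalg.cholesky(cov + 1e-6 * np.eye(n_pos))

# Each feature is an independent smooth function of position.
X = L @ rng.standard_normal((n_pos, n_feat))   # shape: (positions, features)

# Compress to 5 dimensions via PCA (SVD of the centered data matrix).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
code = U[:, :5] * S[:5]                        # compressed 5-dim code

# The compressed code inherits the input correlations: each unit's
# response is a smooth function of position, as measured here by a
# high lag-1 spatial autocorrelation of the first code unit.
unit = code[:, 0]
r = np.corrcoef(unit[:-1], unit[1:])[0, 1]
print(f"lag-1 spatial autocorrelation of code unit 0: {r:.2f}")
```

The sketch shows only the first half of the argument (compression of correlated input produces spatially structured units); localized, place-field-like tuning in the abstract arises from the networks' additional constraints, which are not modeled here.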

SeminarNeuroscience

Encoding of dynamic facial expressions in the macaque superior temporal sulcus

Ramona Siebert
Mar 10, 2023
ePoster

Cooperative coding of continuous variables in networks with sparsity constraint

Paul Züge, Raoul-Martin Memmesheimer

Bernstein Conference 2024

ePoster

Cortical feedback shapes high order structure of population activity to improve sensory coding

Augustine(Xiaoran) Yuan, Laura Busse, Wiktor Młynarski

Bernstein Conference 2024

ePoster

Decoding Upper Limb Movements

Marie D. Schmidt, Ioannis Iossifidis

Bernstein Conference 2024

ePoster

Efficient cortical spike train decoding for brain-machine interface implants with recurrent spiking neural networks

Tengjun Liu, Julia Gygax, Julian Rossbroich, Yansong Chua, Shaomin Zhang, Friedemann Zenke

Bernstein Conference 2024

ePoster

Equal contribution of place cells and non-place cells to the position decoding from one-photon imaging calcium transients

Vladislav Ivantaev, Alessio Attardo, Christian Leibold

Bernstein Conference 2024

ePoster

Homeostatic information transmission as a principle for sensory coding during movement

Jonathan Gant, Wiktor Mlynarski

Bernstein Conference 2024

ePoster

Joint coding of stimulus and behavior by flexible adjustments of sensory tuning in primary visual cortex

Julia Mayer, Wiktor Młynarski

Bernstein Conference 2024

ePoster

Modeling competitive memory encoding using a Hopfield network

Julia Pronoza, Sen Cheng

Bernstein Conference 2024

ePoster

Neural Decoding of Temporal Features of Zebra Finch Song

Amirmasoud Ahmadi, Hermina Robotka, Frederic Theunissen, Manfred Gahr

Bernstein Conference 2024

ePoster

Rhythm-structured predictive coding for contextualized speech processing

Olesia Dogonasheva, Denis Zakharov, Anne-Lise Giraud, Boris Gutkin

Bernstein Conference 2024

ePoster

The role of gamma oscillations in stimulus encoding during a sequential memory task in the human Medial Temporal Lobe

Muthu Jeyanthi Prakash, Johannes Niediek, Thomas Reber, Valerie Borger, Rainer Surges, Florian Mormann, Stefanie Liebe

Bernstein Conference 2024

ePoster

The role of multi-neuron temporal spiking patterns on stable encoding of natural movie presentations

Boris Sotomayor, Francesco Battaglia, Martin Vinck

Bernstein Conference 2024

ePoster

Semantic Embodiment: Decoding Action Words through Topographic Neuronal Representation with Brain-Constrained Network

Maxime Carrière, Rosario Tomasello, Friedemann Pulvermüller

Bernstein Conference 2024

ePoster

Stable cortical coding for a dexterous reach-to-grasp task across motor cortical laminae

Elizabeth de Laittre, Jason MacLean

Bernstein Conference 2024

ePoster

Theta-modulated memory encoding and retrieval in recurrent hippocampal circuits

Samuel Eckmann, Yashar Ahmadian, Máté Lengyel

Bernstein Conference 2024

ePoster

Abstract cognitive encoding in the primate superior colliculus

COSYNE 2022

ePoster

Comparable theta phase coding dynamics along the CA1 transverse axis

COSYNE 2022

ePoster

Coarse-to-fine processing drives the efficient coding of natural scenes in mouse visual cortex

COSYNE 2022

ePoster

How coding constraints affect the shape of neural manifolds

COSYNE 2022

ePoster

Differential encoding of innate and learned behaviors in the sensorimotor striatum

COSYNE 2022

ePoster

Differential encoding of temporal context and expectation across the visual hierarchy

COSYNE 2022

ePoster

Efficient Coding of Natural Movies Predicts the Optimal Number of Receptive Field Mosaics

COSYNE 2022

ePoster

Dynamical systems analysis reveals a novel hypothalamic encoding of state in nodes controlling social behavior

COSYNE 2022

ePoster

Encoding of natural movies based on multi-neuron temporal spiking patterns

COSYNE 2022

ePoster

Faithful encoding of interlimb coordination by individual Purkinje cells during locomotion

COSYNE 2022

ePoster

Flexible cue anchoring strategies enable stable head direction coding in blind animals

COSYNE 2022

ePoster

Goal-directed remapping of entorhinal cortex neural coding

COSYNE 2022

ePoster

Joint coding of visual input and eye/head position in V1 of freely moving mice

COSYNE 2022

ePoster

Metastable circuit dynamics explains optimal coding of auditory stimuli at moderate arousals

COSYNE 2022

ePoster

Multiscale encodings of memories in hippocampal and artificial networks

COSYNE 2022

ePoster

Novelty modulates neural coding and reveals functional diversity within excitatory and inhibitory populations in the visual cortex

COSYNE 2022

ePoster

Object × position coding in the entorhinal cortex of flying bats

COSYNE 2022

ePoster

Brain-wide manifold-organized hierarchical encoding of behaviors in C. elegans

Charles Fieseler, Itamar Lev, Ulises Rey, Lukas Hille, Hannah Brenner, Manuel Zimmer

Bernstein Conference 2024