Decoding
Decoding stress vulnerability
Although stress can be considered an ongoing process that helps an organism cope with present and future challenges, when it is too intense or uncontrollable it can have adverse consequences for physical and mental health. Social stress in particular is a highly prevalent traumatic experience, present in multiple contexts such as war, bullying, and interpersonal violence, and it has been linked with increased risk for major depression and anxiety disorders. Nevertheless, not all individuals exposed to severe stressful events develop psychopathology, and the mechanisms of resilience and vulnerability are still under investigation. In this talk, I will identify key gaps in our knowledge about stress vulnerability and present recent data from our contextual fear learning protocol based on social defeat stress in mice.
Jorge Almeida
The Proaction Laboratory (proactionlab.fpce.uc.pt) at the University of Coimbra, Portugal, is looking for Researchers at the initial stages (post-PhD) of their career to be part of the lab in a joint competitive application to a Portuguese Science Foundation (FCT) independent researcher call. We particularly encourage applications from women and from underrepresented groups in academia. Applicants should have obtained a PhD and have an interest in cognitive neuroscience, vision science, and preferably (but not limited to) object recognition, shape processing, and texture and surface processing. We are particularly interested in motivated and independent Researchers addressing these topics with strong expertise in fMRI (in particular decoding and multivariate approaches). Good programming skills, great communication and mentoring skills, and a great command of English are a plus. The applicant and the lab will work on a competitive project to be submitted. Results from the application are expected to be out in mid-2025. The application will open September 30 and close November 29, 2024. The positions are as independent researchers in the Proaction Lab, are for 3 years, and the salary follows the Portuguese payroll for University Professors (net values for junior or assistant positions, for instance, are approximately 1700 or 2100 euros per month in a 14-month salary per year; these are competitive salaries for the cost of living in Portugal, and especially in Coimbra). The Proaction Lab is currently very well funded, with a set of ongoing funded projects including an ERC Starting Grant to Jorge Almeida, a major European ERA Chair project to Jorge Almeida and Alfonso Caramazza, and other projects. We have access to a 3T MRI scanner with a 32-channel coil, to tDCS, and to a fully equipped psychophysics lab. We have a 256-channel EEG, motion tracking, and eye-tracking on site. We also have a science communication office dedicated to the lab.
Finally, the University of Coimbra is a 700-year-old university and has been selected as a UNESCO World Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to the beach and the mountains. You should apply as soon as you can - the sooner the better, so that we can prepare the application. If interested, send an email to jorgecbalmeida@gmail.com with a CV and a motivation/scientific proposal letter. If there is a fit, we will jointly apply to these positions – we have had a high success rate as a lab in past applications (in four previous editions, several of our applicants were offered a position).
Jorge Almeida
I am looking for a Post-Doctoral Researcher at the initial stages of their career (post-PhD, and no more than two and a half years after obtaining their PhD). Applicants should have obtained a PhD and have an overall interest in object recognition, potentially focusing on object-related features such as shape, texture, material and surface properties, and/or object-related action. I am particularly interested in researchers with strong expertise in fMRI, in particular decoding and multivariate approaches. Good programming skills, great communication and mentoring skills, and a great command of English are a plus. The selected applicant will work with me (Jorge Almeida) but will also benefit from the lively academic environment and research groups we are currently building in the Psychology Department of the University of Coimbra, Portugal. The projects will relate to my work on object and mid-level processing. The position is for 2 to 3 years, and the salary follows the standard Post-Doctoral pay scale in Portugal (net value 1800 euros per month; this is a competitive salary for the cost of living in Portugal, and especially in Coimbra). The start date should be as soon as possible. The Proaction Lab is currently well funded, with a set of ongoing funded projects including an ERC Starting Grant to Jorge Almeida, a major European ERA Chair project to Jorge Almeida and Alfonso Caramazza, and other projects. We have access to a 3T MRI scanner with a 32-channel coil, to a 7T scanner (in collaboration with a site outside of Portugal), to tDCS, and to a fully equipped psychophysics lab. We have a 256-channel EEG, motion tracking, and eye-tracking on site. We also have a science communication office dedicated to the lab. Finally, the University of Coimbra is a 700-year-old university and has been selected as a UNESCO World Heritage site. Coimbra is one of the liveliest university cities in the world, and it is a beautiful city with easy access to the beach and the mountains.
Dr. Fleur Zeldenrust
For the Vidi project ‘Top-down neuromodulation and bottom-up network computation,’ we seek a postdoc to study neuromodulators in efficient spike-coding networks. Using our lab’s data on dopamine, acetylcholine, and serotonin from the mouse barrel cortex, you will derive models connecting single cells, networks, and behavior. The aim of this project is to explain the effects of neuromodulation on task performance in biologically realistic spiking recurrent neural networks (SRNNs). You will use the efficient spike-coding framework, in which a network is not trained by a learning paradigm but deduced using mathematically rigorous rules that enforce efficient coding (i.e., maximally informative spikes). You will study how the network’s structural properties, such as neural heterogeneity, influence decoding performance and efficiency. You will incorporate realistic network properties of the (barrel) cortex based on our lab’s measurements, and incorporate the cellular effects of dopamine, acetylcholine, and serotonin that we have measured over the past years into the network, to investigate their effects on representations, network activity measures such as dimensionality, and decoding performance. You will build on the single-cell data, network models, and analysis methods available in our group, and your results will be incorporated into our group’s further research to develop and validate efficient coding models of (somatosensory) perception. We are therefore looking for a team player who is willing to learn from the other group members and to share their knowledge with them.
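The efficient spike-coding idea described above, in which spikes follow from an optimality rule rather than from training, can be sketched in a toy model (a hypothetical minimal example in the spirit of spike-coding networks, not the project's actual code): a neuron fires only when its spike moves the readout closer to the signal.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy efficient spike-coding network: N neurons track a 1-D signal through
# a leaky readout. A neuron fires only when spiking reduces the readout
# error, the rule that makes every spike informative by construction.
N, T, dt = 20, 2000, 1e-3
w = np.concatenate([rng.uniform(0.05, 0.15, N // 2),     # positive kicks
                    -rng.uniform(0.05, 0.15, N // 2)])   # negative kicks
thresh = w ** 2 / 2          # threshold derived from the coding-error cost
lam = 10.0                   # readout leak rate (1/s)

t_axis = np.arange(T) * dt
x = np.sin(2 * np.pi * t_axis)   # target signal (1 Hz sine)
xhat = np.zeros(T)               # network readout

for t in range(1, T):
    xhat[t] = xhat[t - 1] * (1 - lam * dt)   # leaky decay of the readout
    err = x[t] - xhat[t]
    V = w * err                               # "membrane potentials"
    i = int(np.argmax(V - thresh))            # best candidate spiker
    if V[i] > thresh[i]:                      # spiking reduces the error
        xhat[t] += w[i]                       # spike kicks the readout

mse = float(np.mean((x - xhat) ** 2))
assert mse < 0.05    # readout tracks the signal with spike-sized error
```

The greedy one-spike-per-step update stands in for the full recurrent dynamics; heterogeneity enters through the spread of decoding weights, which is exactly the kind of structural property the project proposes to manipulate.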
Memory Decoding Journal Club: Functional connectomics reveals general wiring rule in mouse visual cortex
Functional connectomics reveals general wiring rule in mouse visual cortex
Memory Decoding Journal Club: Connectomic traces of Hebbian plasticity in the entorhinal-hippocampal system
Connectomic traces of Hebbian plasticity in the entorhinal-hippocampal system
Memory Decoding Journal Club: Distinct synaptic plasticity rules operate across dendritic compartments in vivo during learning
Distinct synaptic plasticity rules operate across dendritic compartments in vivo during learning
Memory Decoding Journal Club: A combinatorial neural code for long-term motor memory
A combinatorial neural code for long-term motor memory
Memory Decoding Journal Club: Behavioral time scale synaptic plasticity underlies CA1 place fields
Behavioral time scale synaptic plasticity underlies CA1 place fields
Memory Decoding Journal Club: Connectomic reconstruction of a cortical column
Connectomic reconstruction of a cortical column
Memory Decoding Journal Club: Neocortical synaptic engrams for remote contextual memories
Neocortical synaptic engrams for remote contextual memories
Memory Decoding Journal Club: Structure and function of the hippocampal CA3 module
Structure and function of the hippocampal CA3 module
Memory Decoding Journal Club: Synaptic architecture of a memory engram in the mouse hippocampus
Synaptic architecture of a memory engram in the mouse hippocampus
Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons
Join Us for the Memory Decoding Journal Club! A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience. This time, we’re diving into a groundbreaking paper: "Motor learning selectively strengthens cortical and striatal synapses of motor engram neurons."
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala
Fear learning induces synaptic potentiation between engram neurons in the rat lateral amygdala. This study by Marios Abatis et al. demonstrates how fear conditioning strengthens synaptic connections between engram cells in the lateral amygdala, revealed through optogenetic identification of neuronal ensembles and electrophysiological measurements. The work provides crucial insights into memory formation mechanisms at the synaptic level, with implications for understanding anxiety disorders and developing targeted interventions. Presented by Dr. Kenneth Hayworth, this journal club will explore the paper's methodology linking engram cell reactivation with synaptic plasticity measurements, and discuss implications for memory decoding research.
Memory Decoding Journal Club: Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating
Join us for the Memory Decoding Journal Club, a collaboration between the Carboncopies Foundation and BPF Aspirational Neuroscience. This month, we're diving into a groundbreaking paper: 'Reconstructing a new hippocampal engram for systems reconsolidation and remote memory updating' by Bo Lei, Bilin Kang, Yuejun Hao, Haoyu Yang, Zihan Zhong, Zihan Zhai, and Yi Zhong from Tsinghua University, Beijing Academy of Artificial Intelligence, IDG/McGovern Institute of Brain Research, and Peking Union Medical College. Dr. Randal Koene will guide us through an engaging discussion on these exciting findings and their implications for neuroscience and memory research.
Decoding ketamine: Neurobiological mechanisms underlying its rapid antidepressant efficacy
Unlike traditional monoamine-based antidepressants that require weeks to exert effects, ketamine alleviates depression within hours, though its clinical use is limited by side effects. While ketamine was initially thought to work primarily through NMDA receptor (NMDAR) inhibition, our research reveals a more complex mechanism. We demonstrate that NMDAR inhibition alone cannot explain ketamine's sustained antidepressant effects, as other NMDAR antagonists like MK-801 lack similar efficacy. Instead, the (2R,6R)-hydroxynorketamine (HNK) metabolite appears critical, exhibiting antidepressant effects without ketamine's side effects. Paradoxically, our findings suggest an inverted U-shaped dose-response relationship where excessive NMDAR inhibition may actually impede antidepressant efficacy, while some level of NMDAR activation is necessary. The antidepressant actions of ketamine and (2R,6R)-HNK require AMPA receptor activation, leading to synaptic potentiation and upregulation of AMPA receptor subunits GluA1 and GluA2. Furthermore, NMDAR subunit GluN2A appears necessary and possibly sufficient for these effects. This research establishes NMDAR-GluN2A activation as a common downstream effector for rapid-acting antidepressants, regardless of their initial targets, offering promising directions for developing next-generation antidepressants with improved efficacy and reduced side effects.
This decision matters: Sorting out the variables that lead to a single choice
Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)
Lead author Mehdi Azabou will present on his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046 Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Trends in NeuroAI - Brain-optimized inference improves reconstructions of fMRI brain activity
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
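The iterative procedure described in the abstract can be caricatured in a few lines of numpy (a sketch under strong assumptions: a linear map stands in for the brain-optimized encoding model, Gaussian perturbations around the seed stand in for the conditioned diffusion model, and a fixed iteration count replaces the distribution-width stopping criterion):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoding model": a fixed linear map from image space to voxel space
# (an assumed stand-in for the brain-optimized encoding model).
W = rng.normal(size=(50, 16))           # 16-dim "image" -> 50 "voxels"
encode = lambda img: W @ img

target_image = rng.normal(size=16)      # ground-truth stimulus
measured = encode(target_image)         # measured "brain activity"

def refine(seed, n_iters=20, library=64, sigma=1.0, decay=0.85):
    """Iteratively refine a seed reconstruction against measured activity."""
    current = seed
    for _ in range(n_iters):
        # Sample a small library of candidate images around the current seed
        candidates = current + sigma * rng.normal(size=(library, 16))
        # Keep the candidate whose predicted activity best matches the data
        preds = candidates @ W.T
        errors = ((preds - measured) ** 2).sum(axis=1)
        current = candidates[np.argmin(errors)]
        sigma *= decay                  # reduce stochasticity each iteration
    return current

seed = rng.normal(size=16)              # crude "base decoder" output
refined = refine(seed)

err_seed = np.linalg.norm(encode(seed) - measured)
err_refined = np.linalg.norm(encode(refined) - measured)
assert err_refined < err_seed   # refinement improves alignment to the data
```

The select-and-resample loop is the essential mechanism: reconstructions are pulled toward the measured activity through the encoding model rather than through any retraining of the decoder.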
Trends in NeuroAI - Meta's MEG-to-image reconstruction
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation, we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812
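The image-retrieval evaluation behind the reported 7X improvement can be sketched as follows (synthetic stand-ins throughout: the "MEG embeddings" below are noisy projections of image embeddings, not outputs of the actual three-module model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy retrieval setup: pretrained-style image embeddings, and "MEG
# embeddings" that are noisy linear projections of them (assumed to play
# the role of the trained MEG module's output).
n_stimuli, d = 100, 32
img_emb = rng.normal(size=(n_stimuli, d))
A = np.eye(d) + 0.1 * rng.normal(size=(d, d))   # assumed MEG->image map
meg_emb = img_emb @ A.T + 0.5 * rng.normal(size=(n_stimuli, d))

# Retrieval: rank all candidate images by cosine similarity to each trial
norm = lambda M: M / np.linalg.norm(M, axis=1, keepdims=True)
sims = norm(meg_emb) @ norm(img_emb).T          # trials x candidates
top1 = (sims.argmax(axis=1) == np.arange(n_stimuli)).mean()

assert top1 > 10 / n_stimuli   # far above the 1-in-100 chance level
```

Top-1 retrieval accuracy against the chance level of one over the candidate-set size is the standard metric here; the contrastive objective in the paper is precisely what pushes matched MEG/image pairs together in this shared embedding space.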
Decoding mental conflict between reward and curiosity in decision-making
Humans and animals are not always rational. They not only rationally exploit rewards but also explore an environment owing to their curiosity. However, the mechanism of such curiosity-driven irrational behavior is largely unknown. Here, we developed a decision-making model for a two-choice task based on the free energy principle, which is a theory integrating recognition and action selection. The model describes irrational behaviors depending on the curiosity level. We also proposed a machine learning method to decode temporal curiosity from behavioral data. By applying it to rat behavioral data, we found that the rat had negative curiosity, reflecting conservative selection sticking to more certain options and that the level of curiosity was upregulated by the expected future information obtained from an uncertain environment. Our decoding approach can be a fundamental tool for identifying the neural basis for reward–curiosity conflicts. Furthermore, it could be effective in diagnosing mental disorders.
Distinct contributions of different anterior frontal regions to rule-guided decision-making in primates: complementary evidence from lesions, electrophysiology, and neurostimulation
Different prefrontal areas contribute in distinctly different ways to rule-guided behaviour in the context of a Wisconsin Card Sorting Test (WCST) analog for macaques. For example, causal evidence from circumscribed lesions in NHPs reveals that dorsolateral prefrontal cortex (dlPFC) is necessary to maintain a reinforced abstract rule in working memory, orbitofrontal cortex (OFC) is needed to rapidly update representations of rule value, and the anterior cingulate cortex (ACC) plays a key role in cognitive control and in integrating information from correct and incorrect trials over recent outcomes. Moreover, recent lesion studies of frontopolar cortex (FPC) suggest it contributes to representing the relative value of unchosen alternatives, including rules. Yet we do not understand how these functional specializations relate to intrinsic neuronal activities, nor the extent to which these neuronal activities differ between prefrontal regions. After reviewing the aforementioned causal evidence, I will present our new data from studies using multi-area, multi-electrode recording techniques in NHPs to simultaneously record from four different prefrontal regions implicated in rule-guided behaviour. Multi-electrode micro-arrays (‘Utah arrays’) were chronically implanted in dlPFC, vlPFC, OFC, and FPC of two macaques, allowing us to simultaneously record single- and multi-unit activity, and local field potentials (LFPs), from all regions while the monkey performs the WCST analog. Rule-related neuronal activity was widespread in all areas recorded, but it differed in degree and in timing between areas. I will also present preliminary results from decoding analyses applied to rule-related neuronal activities, both from individual clusters and from population measures. These results confirm and help quantify dynamic task-related activities that differ between prefrontal regions. We also found task-related modulation of LFPs within beta and gamma bands in FPC.
By combining these correlational recording methods with trial-specific causal interventions (electrical microstimulation of FPC), we could significantly enhance and impair the animals' performance in distinct task epochs in functionally relevant ways, further consistent with an emerging picture of regional functional specialization within a distributed framework of interacting and interconnected cortical regions.
Decoding the hippocampal oscillatory complexity to predict behavior
From spikes to factors: understanding large-scale neural computations
It is widely accepted that human cognition is the product of spiking neurons. Yet even for basic cognitive functions, such as the ability to make decisions or prepare and execute a voluntary movement, the gap between spikes and computation is vast. Only for very simple circuits and reflexes can one explain computations neuron-by-neuron and spike-by-spike. This approach becomes infeasible when neurons are numerous and the flow of information is recurrent. To understand computation, one thus requires appropriate abstractions. An increasingly common abstraction is the neural ‘factor’. Factors are central to many explanations in systems neuroscience. Factors provide a framework for describing computational mechanism, and offer a bridge between data and concrete models. Yet there remains some discomfort with this abstraction, and with any attempt to provide mechanistic explanations above the level of spikes, neurons, cell types, and other comfortingly concrete entities. I will explain why, for many networks of spiking neurons, factors are not only a well-defined abstraction, but are critical to understanding computation mechanistically. Indeed, factors are as real as other abstractions we now accept: pressure, temperature, conductance, and even the action potential itself. I will use recent empirical results to illustrate how factor-based hypotheses have become essential to the forming and testing of scientific hypotheses. I will also show how embracing factor-level descriptions affords remarkable power when decoding neural activity for neural engineering purposes.
Decoding rapidly presented visual stimuli from prefrontal ensembles without report nor post-perceptual processing
Decoding Natural Social Interactions from Neuronal Population Activity in Primates
Exploring emotion in the expression of ape gesture
Language appears to be the most complex system of animal communication described to date. However, its precursors were present in the communication of our evolutionary ancestors and are likely shared by our modern ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures. Great ape gestural repertoires are particularly elaborate, with ape species employing over 80 different gesture types intentionally: that is, towards a recipient and with a specific goal in mind. Intentional usage allows us to ask not only what information is encoded in ape gestures, but also what apes mean when they use them. I will discuss recent research on ape gesture, how we approach the question of decoding meaning, and how, with new methods, we are starting to integrate long-overlooked aspects of ape gesture, such as group and individual variation, and expression and emotion, into our study of these signals.
Efficient Random Codes in a Shallow Neural Network
Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality obtains from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in behaving monkey. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.
Adaptive neural network classifier for decoding finger movements
While non-invasive brain-computer interfaces can accurately classify the lateralization of hand movements, distinguishing the activation of individual fingers of the same hand is limited by their local and overlapping representation in the motor cortex. In particular, the low signal-to-noise ratio limits the ability to identify meaningful patterns in a supervised fashion. Here we combined magnetoencephalography (MEG) recordings with advanced decoding strategies to classify finger movements at the single-trial level. We recorded eight subjects performing a serial reaction time task, in which they pressed four buttons with the left and right index and middle fingers. We evaluated the classification performance for hand and finger movements with increasingly complex approaches: supervised common spatial patterns with logistic regression (CSP+LR) and an unsupervised linear finite convolutional neural network (LF-CNN). Right- vs. left-hand classification was above 90% accurate for all methods. Classification of individual fingers, however, yielded the following accuracies: CSP+LR: 68 ± 7%; LF-CNN: 71 ± 10%. The CNN approach allowed inspection of spatial and spectral patterns, which reflected activity in the motor cortex in the theta and alpha ranges. Thus, we have shown that using CNNs to decode MEG single trials with low signal-to-noise ratio is a promising approach that could, in turn, be extended to a manifold of problems in clinical and cognitive neuroscience.
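A minimal version of the supervised CSP-plus-logistic-regression baseline can be sketched on synthetic trials (an illustrative toy, not the study's pipeline; for brevity the CSP filters are fit on all trials, which leaks information and would itself be cross-validated in practice):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic two-class "MEG" data: trials x channels x time samples.
# Class 1 carries extra variance on the first three channels.
n_trials, n_ch, n_t = 200, 12, 100
X = rng.normal(size=(n_trials, n_ch, n_t))
y = rng.integers(0, 2, n_trials)
X[y == 1, :3, :] *= 2.0

def csp_filters(X, y, n_comp=4):
    """Common spatial patterns via a generalized eigendecomposition."""
    cov = lambda trials: np.mean([t @ t.T / n_t for t in trials], axis=0)
    c0, c1 = cov(X[y == 0]), cov(X[y == 1])
    vals, vecs = eigh(c0, c0 + c1)            # solve c0 w = v (c0 + c1) w
    order = np.argsort(vals)                  # extreme eigenvalues discriminate
    picks = np.r_[order[:n_comp // 2], order[-(n_comp // 2):]]
    return vecs[:, picks].T                   # (n_comp, n_ch) spatial filters

W = csp_filters(X, y)
feats = np.log((W @ X).var(axis=2))           # classic log-variance features
acc = cross_val_score(LogisticRegression(), feats, y, cv=5).mean()
assert acc > 0.7    # well above the 50% chance level
```

The log-variance of each CSP component is the standard feature for this kind of oscillatory-power classification; the CNN alternative in the abstract learns comparable spatial and spectral filters end to end.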
From single cell to population coding during defensive behaviors in prefrontal circuits
Coping with threatening situations requires both identifying stimuli predicting danger and selecting adaptive behavioral responses in order to survive. The dorsomedial prefrontal cortex (dmPFC) is a critical structure involved in the regulation of threat-related behaviour, yet it is still largely unclear how threat-predicting stimuli and defensive behaviours are associated within prefrontal networks in order to successfully drive adaptive responses. Over the past years, we used a combination of extracellular recordings, neuronal decoding approaches, and state-of-the-art optogenetic manipulations to identify key neuronal elements and mechanisms controlling defensive fear responses. I will present an overview of our recent work, ranging from analyses of dedicated neuronal types and of oscillatory and synchronization mechanisms to artificial intelligence approaches used to decode the activity of large populations of neurons. Ultimately, these analyses allowed the identification of high-dimensional representations of defensive behavior unfolding within prefrontal networks.
Decoding sounds in early visual cortex of sighted and blind individuals
NMC4 Keynote:
The brain represents the external world through the bottleneck of sensory organs. The network of hierarchically organized neurons is thought to recover the causes of sensory inputs to reconstruct the reality in the brain in idiosyncratic ways depending on individuals and their internal states. How can we understand the world model represented in an individual’s brain, or the neuroverse? My lab has been working on brain decoding of visual perception and subjective experiences such as imagery and dreaming using machine learning and deep neural network representations. In this talk, I will outline the progress of brain decoding methods and present how subjective experiences are externalized as images and how they could be shared across individuals via neural code conversion. The prospects of these approaches in basic science and neurotechnology will be discussed.
NMC4 Short Talk: Decoding finger movements from human posterior parietal cortex
Restoring hand function is a top priority for individuals with tetraplegia. This challenge motivates considerable research on brain-computer interfaces (BCIs), which bypass damaged neural pathways to control paralyzed or prosthetic limbs. Here, we demonstrate BCI control of a prosthetic hand using intracortical recordings from the posterior parietal cortex (PPC). As part of an ongoing clinical trial, two participants with cervical spinal cord injury were each implanted with a 96-channel array in the left PPC. Across four sessions each, we recorded neural activity while they attempted to press individual fingers of the contralateral (right) hand. Single neurons modulated selectively for different finger movements. Offline, we accurately classified finger movements from neural firing rates using linear discriminant analysis (LDA) with cross-validation (accuracy = 90%; chance = 17%). Finally, the participants used the neural classifier online to control all five fingers of a BCI hand. Online control accuracy (86%; chance = 17%) exceeded that of previous state-of-the-art finger BCIs. Furthermore, offline, we could classify both flexion and extension of the right fingers, as well as flexion of all ten fingers. Our results indicate that neural recordings from PPC can be used to control prosthetic fingers and may contribute to a hand-restoration strategy for people with tetraplegia.
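As a toy illustration of the offline analysis described above (not the authors' code), the sketch below classifies synthetic "finger" trials with leave-one-out cross-validation. It uses LDA under a shared identity covariance, which reduces to nearest-class-mean classification; the channel count, firing-rate statistics and class separations are invented for the example.

```python
import random
import statistics

random.seed(0)

N_FINGERS = 6        # five fingers plus "no movement" -> chance = 1/6, about 17%
N_CHANNELS = 20      # toy channel count (the real array had 96 channels)
TRIALS = 20          # trials per class

# Synthetic mean firing-rate pattern for each "finger" across channels.
means = [[random.gauss(10, 3) for _ in range(N_CHANNELS)] for _ in range(N_FINGERS)]

def record_trial(finger):
    """One noisy trial of firing rates for an attempted finger press."""
    return [random.gauss(m, 1.5) for m in means[finger]]

data = [(record_trial(f), f) for f in range(N_FINGERS) for _ in range(TRIALS)]
random.shuffle(data)

# Leave-one-out cross-validation. LDA with a shared identity covariance
# reduces to nearest-class-mean classification, which we use here.
correct = 0
for i, (x, label) in enumerate(data):
    train = data[:i] + data[i + 1:]
    centroids = {}
    for f in range(N_FINGERS):
        cls = [t for t, lab in train if lab == f]
        centroids[f] = [statistics.fmean(ch) for ch in zip(*cls)]
    pred = min(centroids,
               key=lambda f: sum((a - b) ** 2 for a, b in zip(x, centroids[f])))
    correct += (pred == label)

accuracy = correct / len(data)
print(f"cross-validated accuracy: {accuracy:.2f} (chance = {1 / N_FINGERS:.2f})")
```

With well-separated synthetic classes the decoder is near-ceiling; real neural data would use a full shared covariance estimate rather than the identity simplification.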
“Mind reading” with brain scanners: Facts versus science fiction
Every thought is associated with a unique pattern of brain activity. Thus, in principle, it should be possible to use these activity patterns as "brain fingerprints" for different thoughts and to read out what a person is thinking based on their brain activity alone. Indeed, considerable progress has been made in such "brain reading" in recent years using machine learning. It is now possible to decode which image a person is viewing, which film sequence they are watching, which emotional state they are in or which intentions they hold in mind. This talk will provide an overview of the current state of the art in brain reading. It will also highlight the main challenges and limitations of this research field. For example, mathematical models are needed to cope with the high dimensionality of potential mental states. Furthermore, the ethical concerns raised by (often premature) commercial applications of brain reading will also be discussed.
Adaptive bottleneck to pallium for sequence memory, path integration and mixed selectivity representation
Spike-driven adaptation involves intracellular mechanisms that are initiated by neural firing and lead to a subsequent reduction of spiking rate followed by a recovery back to baseline. We report on long (>0.5 second) recovery times from adaptation in a thalamic-like structure in weakly electric fish. This adaptation process is shown via modeling and experiment to encode, in a spatially invariant manner, the time intervals between event encounters, e.g. with landmarks as the animal learns the location of food. These cells also come in two varieties: ones that care only about the time since the last encounter, and others that care about the history of encounters. We discuss how the two populations can share the task of representing sequences of events, supporting path integration and converting from egocentric to allocentric representations. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus. Finally, we discuss how all the cells of this gateway to the pallium exhibit mixed selectivity for social features of their environment. The data and computational modeling further reveal that, in contrast to a long-held belief, these gymnotiform fish are endowed with a corollary discharge, albeit only for social signalling.
Adaptation-driven sensory detection and sequence memory
Spike-driven adaptation involves intracellular mechanisms that are initiated by spiking and lead to a subsequent reduction of spiking rate. One of its consequences is the temporal patterning of spike trains, as it imparts serial correlations between interspike intervals in baseline activity. Surprisingly, the hidden adaptation states that lead to these correlations themselves exhibit quasi-independence. This talk will first discuss recent findings about the role of such adaptation in suppressing noise and extending sensory detection to weak stimuli that leave the firing rate unchanged. Further, a matching of the post-synaptic responses to the pre-synaptic adaptation time scale enables a recovery of the quasi-independence property and can explain observed correlations between post-synaptic EPSPs and behavioural detection thresholds. We then consider the involvement of spike-driven adaptation in the representation of intervals between sensory events. We discuss the possible link of this time-stamping mechanism to the conversion of egocentric to allocentric coordinates. The heterogeneity of the population parameters enables the representation and Bayesian decoding of time sequences of events, which may be put to good use in path integration and hilus neuron function in the hippocampus.
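A minimal sketch of the time-stamping idea: assume each cell's adaptation variable is set to 1 by an event and recovers exponentially with a known, cell-specific time constant; inverting the recovery and averaging across a heterogeneous population then yields an estimate of the elapsed interval. All parameters here are illustrative, not fitted to the data discussed in the talk.

```python
import math
import random

random.seed(1)

# Heterogeneous recovery time constants across the population (illustrative).
taus = [0.5, 1.0, 2.0, 4.0]   # seconds

def adaptation_state(t, tau, noise=0.01):
    """Adaptation variable pushed to 1 by an event, recovering exponentially to 0."""
    a = math.exp(-t / tau) + random.gauss(0, noise)
    return min(1.0, max(1e-6, a))   # clip to a valid range

def decode_elapsed_time(states):
    """Invert each cell's known exponential recovery and average the estimates."""
    estimates = [-tau * math.log(a) for a, tau in zip(states, taus)]
    return sum(estimates) / len(estimates)

true_t = 0.8   # seconds since the last event (e.g. a landmark encounter)
decoded = [decode_elapsed_time([adaptation_state(true_t, tau) for tau in taus])
           for _ in range(200)]
mean_est = sum(decoded) / len(decoded)
print(f"true interval: {true_t:.2f} s, decoded: {mean_est:.2f} s")
```

Because each time constant is most informative over a different range of intervals, the heterogeneous population covers short and long delays at once, which is the intuition behind the Bayesian sequence decoding mentioned above.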
Encoding and perceiving the texture of sounds: auditory midbrain codes for recognizing and categorizing auditory texture and for listening in noise
Natural soundscapes, such as those of a forest, a busy restaurant, or a busy intersection, are generally composed of a cacophony of sounds that the brain needs to interpret either independently or collectively. In certain instances sounds - such as from moving cars, sirens, and people talking - are perceived in unison and are recognized collectively as a single sound (e.g., city noise). In other instances, such as for the cocktail party problem, multiple sounds compete for attention so that the surrounding background noise (e.g., speech babble) interferes with the perception of a single sound source (e.g., a single talker). I will describe results from my lab on the perception and neural representation of auditory textures. Textures, such as a babbling brook, restaurant noise, or speech babble, are stationary sounds consisting of multiple independent sound sources that can be quantitatively defined by summary statistics of an auditory model (McDermott & Simoncelli 2011). How and where in the auditory system summary statistics are represented, however, and which neural codes potentially contribute to their perception, remain largely unknown. Using high-density multi-channel recordings from the auditory midbrain of unanesthetized rabbits and complementary perceptual studies on human listeners, I will first describe neural and perceptual strategies for encoding and perceiving auditory textures. I will demonstrate how distinct statistics of sounds, including the sound spectrum and high-order statistics related to the temporal and spectral correlation structure of sounds, contribute to texture perception and are reflected in neural activity. Using decoding methods, I will then demonstrate how various low- and high-order neural response statistics can differentially contribute to a variety of auditory tasks including texture recognition, discrimination, and categorization.
Finally, I will show examples from our recent studies on how high-order sound statistics and accompanying neural activity underlie difficulties for recognizing speech in background noise.
Brain-Machine Interfaces: Beyond Decoding
A brain-machine interface (BMI) is a system that enables users to interact with computers and robots through the voluntary modulation of their brain activity. Such a BMI is particularly relevant as an aid for patients with severe neuromuscular disabilities, although it also opens up new possibilities in human-machine interaction for able-bodied people. Real-time signal processing and decoding of brain signals are certainly at the heart of a BMI. Yet, this does not suffice for subjects to operate a brain-controlled device. In the first part of my talk I will review some of our recent studies, most involving participants with severe motor disabilities, that illustrate additional principles of a reliable BMI that enable users to operate different devices. In particular, I will show how an exclusive focus on machine learning is not necessarily the solution, as it may not promote subject learning. This highlights the need for a comprehensive mutual learning methodology that fosters learning at the three critical levels of the machine, the subject and the application. To further illustrate that BMI is more than just decoding, I will discuss how to enhance subject learning and BMI performance through appropriate feedback modalities. Finally, I will show how these principles translate to motor rehabilitation, where in a controlled trial chronic stroke patients achieved significant functional recovery after the intervention, which was retained 6-12 months after the end of therapy.
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches that aimed to characterise the brain representations behind the real-world proficiency of “super-recognizers”—individuals at the top of the face recognition ability spectrum. Using decoding analyses of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for variations in face recognition ability is disseminated along early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: (i) mid-level visual and (ii) semantic computations. The two components were dissociable in brain processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual Convolutional Neural Networks, the latter from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses from a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
Brain Decoding: Pathways to progress and potential pitfalls for understanding the neural basis of consciousness
Dynamical population coding during defensive behaviours in prefrontal circuits
Coping with threatening situations requires both identifying stimuli predicting danger and selecting adaptive behavioral responses in order to survive. The dorsomedial prefrontal cortex (dmPFC) is a critical structure involved in the regulation of threat-related behaviour, yet it is still largely unclear how threat-predicting stimuli and defensive behaviours are associated within prefrontal networks in order to successfully drive adaptive responses. To address these questions, we used a combination of extracellular recordings, neuronal decoding approaches, and optogenetic manipulations to show that threat representations and the initiation of avoidance behaviour are dynamically encoded in the overall population activity of dmPFC neurons. These data indicate that although dmPFC population activity at stimulus onset encodes sustained threat representations and discriminates threat- from non-threat cues, it does not predict action outcome. In contrast, transient dmPFC population activity prior to action initiation reliably predicts avoided from non-avoided trials. Accordingly, optogenetic inhibition of prefrontal activity critically constrained the selection of adaptive defensive responses in a time-dependent manner. These results reveal that the adaptive selection of active fear responses relies on a dynamic process of information linking threats with defensive actions unfolding within prefrontal networks.
Imaging memory consolidation in wakefulness and sleep
New memories are initially labile and have to be consolidated into stable long-term representations. Current theories assume that this is supported by a shift in the neural substrate that supports the memory, away from rapidly plastic hippocampal networks towards more stable representations in the neocortex. Rehearsal, i.e. repeated activation of the neural circuits that store a memory, is thought to contribute crucially to the formation of neocortical long-term memory representations. This may either be achieved by repeated study during wakefulness or by covert reactivation of memory traces during offline periods, such as quiet rest or sleep. My research investigates memory consolidation in the human brain with multivariate decoding of neural processing and non-invasive in-vivo imaging of microstructural plasticity. Using pattern classification on recordings of electrical brain activity, I show that we spontaneously reprocess memories during offline periods in both sleep and wakefulness, and that this reactivation benefits memory retention. In related work, we demonstrate that active rehearsal of learning material during wakefulness can facilitate rapid systems consolidation, leading to an immediate formation of lasting memory engrams in the neocortex. These representations satisfy general mnemonic criteria and can not only be imaged with fMRI while memories are actively processed but can also be observed with diffusion-weighted imaging when the traces lie dormant. Importantly, sleep seems to hold a crucial role in stabilizing the changes in the contribution of memory systems initiated by rehearsal during wakefulness, indicating that online and offline reactivation might jointly contribute to forming long-term memories. Characterizing the covert processes that decide whether, and in which ways, our brains store new information is crucial to our understanding of memory formation. Directly imaging consolidation thus opens great opportunities for memory research.
Neural mechanisms of active vision in the marmoset monkey
Human vision relies on rapid eye movements (saccades), made 2-3 times every second, to bring peripheral targets to central foveal vision for high-resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have visual processing and eye movements similar to those of humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target-foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high-resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade-to-fixation cycle, with most information localized to brief post-saccadic transients. In a second study, we examined whether target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion of a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival.
These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY
Decoding the neural processing of speech
Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained by applying machine learning to neuroimaging data from humans listening to speech in different types of background noise. In particular, we develop statistical models that relate characteristic features of speech, such as pitch, amplitude fluctuations and linguistic surprisal, to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to pitch, and at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures allow us to diagnose disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnosis of brain disorders.
Robust Encoding of Abstract Rules by Distinct Neuronal Populations in Primate Visual Cortex
I will discuss our recent evidence showing that information about abstract rules can be decoded from neuronal activity in primate visual cortex even in the absence of sensory stimulation, and that rule information is greatest among neurons with the least visual activity and the weakest coupling to local neuronal networks. In addition, I will talk about recent developments in large-scale neurophysiological techniques in nonhuman primates.
Do deep learning latent spaces resemble human brain representations?
In recent years, artificial neural networks have demonstrated human-like or super-human performance in many tasks, including image and speech recognition, natural language processing (NLP), and playing Go, chess, poker and video games. One remarkable feature of the resulting models is that they can develop very intuitive latent representations of their inputs. In these latent spaces, simple linear operations tend to give meaningful results, as in the well-known analogy QUEEN-WOMAN+MAN=KING. We postulate that human brain representations share essential properties with these deep learning latent spaces. To verify this, we test whether artificial latent spaces can serve as a good model for decoding brain activity. We report improvements over state-of-the-art performance for reconstructing seen and imagined face images from fMRI brain activation patterns, using the latent space of a GAN (Generative Adversarial Network) model coupled with a Variational AutoEncoder (VAE). With another GAN model (BigBiGAN), we can decode and reconstruct natural scenes of any category from the corresponding brain activity. Our results suggest that deep learning can produce high-level representations approaching those found in the human brain. Finally, I will discuss whether these deep learning latent spaces could be relevant to the study of consciousness.
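The linear-arithmetic property of latent spaces can be illustrated with hand-built toy vectors; real models learn such structure from data, and the numbers below are purely illustrative.

```python
import math

# Toy 3-d "word vectors" constructed so that the gender offset is shared
# between the king/queen and man/woman pairs (illustrative, not learned).
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
    "apple": [0.0, 0.2, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# QUEEN - WOMAN + MAN should land near KING.
probe = [q - w + m for q, w, m in
         zip(vectors["queen"], vectors["woman"], vectors["man"])]
nearest = max((word for word in vectors if word != "queen"),
              key=lambda word: cosine(probe, vectors[word]))
print(nearest)   # -> king
```

The same logic, applied to fMRI patterns projected into a GAN or VAE latent space, is what makes linear decoders a reasonable first model of the brain-to-latent mapping.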
Decoding autism: from genetics to mechanisms
Experience-dependent remapping of temporal encoding by striatal ensembles
Medium-spiny neurons (MSNs) in the striatum are required for interval timing, the estimation of intervals of several seconds reported via a motor response. We and others have shown that striatal MSNs can encode the duration of temporal intervals via time-dependent ramping activity: progressive, monotonic changes in firing rate preceding behaviorally salient points in time. Here, we investigated how timing-related activity within striatal ensembles changes with experience. We leveraged a rodent-optimized interval timing task in which mice ‘switch’ response ports after an amount of time has passed without reward. We report three main results. First, we found that the proportion of MSNs exhibiting time-dependent modulations of firing rate increased after 10 days of task overtraining. Second, temporal decoding by MSN ensembles improved with experience and was largely driven by time-related ramping activity. Finally, we found that time-related ramping activity generalized across both correct and error trials. These results enhance our understanding of striatal temporal processing by demonstrating that time-dependent activity within MSN ensembles evolves with experience and is dissociable from motor- and reward-related processes.
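The ramping-based readout can be caricatured as follows: if each neuron's firing rate rises or falls linearly through the interval, elapsed time has a closed-form least-squares estimate from the population rates. This is an illustrative simulation with invented parameters, not the study's decoder.

```python
import random

random.seed(2)

N = 50   # number of simulated ramping "MSNs"
baselines = [random.uniform(2, 8) for _ in range(N)]          # Hz at t = 0
slopes = [random.uniform(-1.5, 1.5) for _ in range(N)]        # up- and down-ramps

def population_rates(t, noise=1.0):
    """Firing rates with monotonic ramps plus trial-to-trial noise."""
    return [b + s * t + random.gauss(0, noise)
            for b, s in zip(baselines, slopes)]

def decode_time(rates):
    """Least-squares readout of elapsed time, assuming the ramp model is known:
    t_hat = sum_i s_i (r_i - b_i) / sum_i s_i^2."""
    num = sum(s * (r - b) for r, b, s in zip(rates, baselines, slopes))
    den = sum(s * s for s in slopes)
    return num / den

errors = []
for _ in range(300):
    t = random.uniform(0, 6)   # seconds into the interval
    errors.append(abs(decode_time(population_rates(t)) - t))
print(f"mean decoding error: {sum(errors) / len(errors):.2f} s")
```

In this linear caricature, decoding precision grows with the summed squared slopes, which is one way to see why recruiting more ramping neurons with training would improve ensemble temporal decoding.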
Decoding Mosquito Attraction to Human Scent
High precision coding in visual cortex
Individual neurons in visual cortex provide the brain with unreliable estimates of visual features. It is not known whether this single-neuron variability is correlated across large neural populations, thus impairing the global encoding of stimuli. We recorded simultaneously from up to 50,000 neurons in mouse primary visual cortex (V1) and in higher-order visual areas, and measured stimulus discrimination thresholds of 0.35 degrees and 0.37 degrees, respectively, in an orientation decoding task. These neural thresholds were almost 100 times smaller than the behavioral discrimination thresholds reported in mice. This discrepancy could not be explained by stimulus properties or arousal states. Furthermore, the behavioral variability during a sensory discrimination task could not be explained by neural variability in primary visual cortex. Instead, behavior-related neural activity arose dynamically across a network of non-sensory brain areas. These results imply that sensory perception in mice is limited by downstream decoders, not by neural noise in sensory representations.
Towards a speech neuroprosthesis
I will review advances in understanding the cortical encoding of speech-related oral movements. These discoveries are being translated to develop algorithms to decode speech from population neural activity.
A new computational framework for understanding vision in our brain
Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.
High precision coding in visual cortex
Single neurons in visual cortex provide unreliable measurements of visual features due to their high trial-to-trial variability. It is not known if this “noise” extends its effects over large neural populations to impair the global encoding of stimuli. We recorded simultaneously from ∼20,000 neurons in mouse primary visual cortex (V1) and found that the neural populations had discrimination thresholds of ∼0.34° in an orientation decoding task. These thresholds were nearly 100 times smaller than those reported behaviourally in mice. The discrepancy between neural and behavioural discrimination could not be explained by the types of stimuli we used, by behavioural states or by the sequential nature of perceptual learning tasks. Furthermore, higher-order visual areas lateral to V1 could be decoded equally well. These results imply that the limits of sensory perception in mice are not set by neural noise in sensory cortex, but by the limitations of downstream decoders.
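One way to see how population size yields such small neural thresholds is a population-vector decoder over cosine-tuned units, whose orientation error shrinks roughly as 1/sqrt(N) when noise is independent across neurons. This toy simulation (not the study's analysis, and with invented tuning and noise parameters) makes the point.

```python
import math
import random

random.seed(3)

def decode_orientation(stim_deg, n_neurons, noise=1.0):
    """Population-vector decode of orientation (180-degree periodic)."""
    prefs = [180.0 * i / n_neurons for i in range(n_neurons)]
    x = y = 0.0
    for p in prefs:
        # Broad cosine tuning on the doubled angle, plus independent noise.
        r = math.cos(math.radians(2 * (stim_deg - p))) + random.gauss(0, noise)
        x += r * math.cos(math.radians(2 * p))
        y += r * math.sin(math.radians(2 * p))
    return (math.degrees(math.atan2(y, x)) / 2) % 180

def mean_error(n_neurons, trials=200, stim=45.0):
    """Mean absolute circular decoding error in degrees."""
    errs = []
    for _ in range(trials):
        d = abs(decode_orientation(stim, n_neurons) - stim)
        errs.append(min(d, 180 - d))   # circular distance on orientations
    return sum(errs) / trials

for n in (10, 100, 1000):
    print(f"{n:5d} neurons -> mean error {mean_error(n):.2f} deg")
```

With independent noise the error keeps shrinking as neurons are added; correlated noise, the question raised in the abstract, is exactly what could cap this improvement in real populations.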
Decoding of Chemical Information from Populations of Olfactory Neurons
Information is represented in the brain by the coordinated activity of populations of neurons. Recent large-scale neural recording methods, in combination with machine learning algorithms, are helping us understand how sensory processing and cognition emerge from neural population activity. This talk will explore the most popular machine learning methods used to extract meaningful low-dimensional representations from high-dimensional neural recordings. To illustrate the potential of these approaches, Pedro will present his research in which chemical information is decoded from the olfactory system of the mouse for technological applications. Pedro and co-researchers have successfully extracted odor identity and concentration from low-dimensional activity trajectories of olfactory receptor neurons. They have further developed a novel method to identify a shared latent space that allowed decoding of odor information across animals.
Memory Decoding Journal Club: "Binary and analog variation of synapses between cortical pyramidal neurons"
Binary and analog variation of synapses between cortical pyramidal neurons
Memory Decoding Journal Club: Systems consolidation reorganizes hippocampal engram circuitry
Systems consolidation reorganizes hippocampal engram circuitry
Decoding Upper Limb Movements
Bernstein Conference 2024
Efficient cortical spike train decoding for brain-machine interface implants with recurrent spiking neural networks
Bernstein Conference 2024
Equal contribution of place cells and non-place cells to the position decoding from one-photon imaging calcium transients
Bernstein Conference 2024
Neural Decoding of Temporal Features of Zebra Finch Song
Bernstein Conference 2024
Semantic Embodiment: Decoding Action Words through Topographic Neuronal Representation with Brain-Constrained Network
Bernstein Conference 2024
Synaptic diversity naturally arises from neural decoding of heterogeneous populations
COSYNE 2022
Ctrl-TNDM: Decoding feedback-driven movement corrections from motor cortex neurons
COSYNE 2023
Decoding stress susceptibility from activity in amygdala-ventral hippocampal network
COSYNE 2023
Decoding momentary gain variability from neuronal populations
COSYNE 2023
Density-based Neural Decoding using Spike Localization for Neuropixels Recordings
COSYNE 2023
Population encoding and decoding of frontal cortex during natural communication in marmosets
COSYNE 2023
Reliable neural manifold decoding using low-distortion alignment of tangent spaces
COSYNE 2023
Sequence decoding with millisecond precision in the early olfactory system
COSYNE 2023
Switching state-space models enable decoding of replay across multiple spatial environments
COSYNE 2023
Auditory cortical manifold for natural soundscapes enables neurally aligned category decoding
COSYNE 2025
A computational framework for decoding active sensing
COSYNE 2025
Decoding Temporal Features of Birdsong Through Neural Activity Analysis
COSYNE 2025
Decoding Object Depth from the Macaque IT Cortex: Temporal Dynamics and Insights for ANN Models
COSYNE 2025
Decoding activity patterns across pyramidal cell dendritic trees during spontaneous behaviors using 3D arboreal scanning
FENS Forum 2024
Decoding cocaine-induced proteomic adaptations in the mouse nucleus accumbens
FENS Forum 2024
Decoding the developmental vulnerability to psychiatric disorders: Investigating the sexual dimorphism and role of perineuronal nets in habenulo-interpeduncular-system-mediated susceptibility to anxiety
FENS Forum 2024
Decoding envelope and frequency-following responses to speech using deep neural networks
FENS Forum 2024
Decoding of fMRI resting-state using task-based MVPA supports the Incentive-Sensitization Theory in smokers
FENS Forum 2024
Decoding neuronal identity maintenance and progenitor plasticity in extended brain organoid cultures
FENS Forum 2024
Decoding retinitis pigmentosa: Unveiling PRPF31 mutation effects on human iPSC-derived retinal organoids in vitro models
FENS Forum 2024
Decoding of selective attention to speech in CI patients using linear and non-linear methods
FENS Forum 2024
Decoding sleep patterns: Unraveling temazepam impact through BENDR encoder and its latent space analysis
FENS Forum 2024
Decoding spatiotemporal processing of speech and melody in the brain
FENS Forum 2024
Decoding transcriptional regulation in response to sunlight in vertebrates: Circadian clocks and beyond
FENS Forum 2024
Decoding visual processing in pigeon pallium
FENS Forum 2024
AI exploration of fibromyalgia: Decoding molecular complexity for targeted therapies
FENS Forum 2024
The hypothermia puzzle: Decoding its molecular effects
FENS Forum 2024
Real-time imaging of dopamine release and neuronal population dynamics in the motor cortex of awake mice – decoding of reward-related signals and movement parameters
FENS Forum 2024
Decoding behaviour from neural data using LSTM networks
Neuromatch 5
Intention decoding from PPC
Neuromatch 5