
Neural Data

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with neural data across World Wide.
41 curated items · 24 Seminars · 13 ePosters · 4 Positions
Updated 2 days ago
41 results
Position

I-Chun Lin, PhD

Gatsby Computational Neuroscience Unit, UCL
Dec 5, 2025

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning in brains and machines; inference, coding and neural dynamics; Bayesian and kernel methods, and deep learning; with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data and nonparametric hypothesis testing. The Unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments such as Computer Science; Statistical Science; Artificial Intelligence; the ELLIS Unit at UCL; Neuroscience; and the nearby Alan Turing and Francis Crick Institutes. Our PhD programme provides a rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical/computational neuroscience, with minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Position

Ahmed El Hady

University of Konstanz and Max Planck Institute of Animal Behavior
Konstanz, Germany
Dec 5, 2025

We are seeking a postdoc with a quantitative background who has finished (or is about to finish) a doctoral degree in a quantitative field, preferably but not limited to physics or engineering. The candidate should show enthusiasm for analysing large-scale data sets including, but not limited to, behavioural, neural and physiological data. Experience with machine learning techniques and animal tracking software is preferred but not required. The researcher will be based in the Integrative Biophysics group at the University of Konstanz and the Max Planck Institute of Animal Behavior, located in Konstanz, Germany. The postdoc will work as part of a recently funded Human Frontier Science Program (HFSP) research grant, “Neurometabolic mechanisms underlying social foraging”, in collaboration with the experimental groups of Robert Froemke (New York University) and Jee Hyun Choi (Korea Institute of Science and Technology). The project aims to understand the neurometabolic mechanisms underlying social foraging, and the postdoc will have the opportunity to travel to the experimental collaborators in New York and Seoul. The Integrative Biophysics group, led by Dr. Ahmed El Hady, focuses on theoretical and computational understanding of the mechanisms underlying foraging. The position will be embedded within the highly collaborative environment of the Centre for the Advanced Study of Collective Behaviour (CASCB) at the University of Konstanz.

Seminar · Open Source · Recording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 8, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
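
For a concrete flavor of the information-extraction step described above, here is a minimal, illustrative Python sketch that pulls candidate sample sizes out of article text with a regular expression. This is not the pubget/labelbuddy/pubextract API; the pattern and function name are invented for illustration.

```python
import re

# Illustrative only: a toy version of the "information extraction" step of a
# meta-research pipeline (the talk's actual tools are pubget, labelbuddy and
# pubextract; this regex-based sketch is not their API).
SAMPLE_SIZE_PATTERN = re.compile(
    r"\b[Nn]\s*=\s*(\d+)|(\d+)\s+(?:participants|subjects|patients)\b"
)

def extract_sample_sizes(article_text: str) -> list[int]:
    """Pull candidate sample sizes (e.g. 'N = 24', '30 participants') from text."""
    sizes = []
    for match in SAMPLE_SIZE_PATTERN.finditer(article_text):
        value = match.group(1) or match.group(2)
        sizes.append(int(value))
    return sizes

text = "We recruited 30 participants; after exclusions, N = 24 entered analysis."
print(extract_sample_sizes(text))  # [30, 24]
```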

Seminar · Neuroscience

Extracting computational mechanisms from neural data using low-rank RNNs

Adrian Valente
Ecole Normale Supérieure
Jan 10, 2023

An influential theory in systems neuroscience suggests that brain function can be understood through low-dimensional dynamics [Vyas et al 2020]. However, a challenge in this framework is that a single computational task may involve a range of dynamic processes. To understand which processes are at play in the brain, it is important to use data on neural activity to constrain models. In this study, we present a method for extracting low-dimensional dynamics from data using low-rank recurrent neural networks (lrRNNs), a highly expressive and understandable type of model [Mastrogiuseppe & Ostojic 2018, Dubreuil, Valente et al. 2022]. We first test our approach using synthetic data created from full-rank RNNs that have been trained on various brain tasks. We find that lrRNNs fitted to neural activity allow us to identify the collective computational processes and make new predictions for inactivations in the original RNNs. We then apply our method to data recorded from the prefrontal cortex of primates during a context-dependent decision-making task. Our approach enables us to assign computational roles to the different latent variables and provides a mechanistic model of the recorded dynamics, which can be used to perform in silico experiments like inactivations and provide testable predictions.
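
For readers unfamiliar with the model class, below is a minimal numpy sketch of a rank-R recurrent network of the kind the talk builds on (Mastrogiuseppe & Ostojic 2018). The connectivity vectors here are random so the sketch runs standalone; in the talk's method they are instead fitted to the recorded activity.

```python
import numpy as np

# Minimal sketch of a rank-R recurrent network: J = (1/N) * sum_r m_r n_r^T,
# so the recurrent drive is confined to the R-dimensional span of the m_r.
rng = np.random.default_rng(0)
N, R, T, dt, tau = 500, 2, 400, 0.05, 1.0

m = rng.standard_normal((N, R))        # output connectivity vectors m_r
n = rng.standard_normal((N, R))        # input-selection vectors n_r
J = m @ n.T / N                        # rank-R connectivity matrix

x = 0.1 * rng.standard_normal(N)
latents = np.empty((T, R))
for t in range(T):
    phi = np.tanh(x)                   # firing-rate nonlinearity
    x = x + dt / tau * (-x + J @ phi)  # rate dynamics: tau dx/dt = -x + J phi(x)
    latents[t] = n.T @ phi / N         # the R recurrent inputs driving the dynamics

# In the talk's approach m and n are *fitted* to neural recordings (not random
# as here); the resulting low-dimensional latent dynamics give the mechanism.
print(latents.shape)  # (400, 2)
```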

Seminar · Neuroscience

Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties

SueYeon Chung
NYU/Flatiron
Sep 15, 2022

A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to the parietal cortex to the hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
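
A toy numerical counterpart to the capacity idea: generate P point-cloud “manifolds” in N dimensions, assign random ±1 labels, and measure how often a linear classifier can realize the labeling. This is an illustrative experiment with made-up parameters, not the speaker's analytic theory.

```python
import numpy as np
from sklearn.svm import LinearSVC  # pip install scikit-learn

rng = np.random.default_rng(1)

def fraction_separable(P, N, points_per_manifold=10, radius=0.2, trials=20):
    """Fraction of random +/-1 labelings of P point-cloud 'manifolds' in R^N
    that a linear classifier realizes (all points of a manifold share its label)."""
    successes = 0
    for _ in range(trials):
        centers = rng.standard_normal((P, N))
        clouds = centers[:, None, :] + radius * rng.standard_normal(
            (P, points_per_manifold, N))
        X = clouds.reshape(P * points_per_manifold, N)
        labels = np.repeat(rng.choice([-1, 1], size=P), points_per_manifold)
        clf = LinearSVC(C=1e6, max_iter=20000).fit(X, labels)
        successes += clf.score(X, labels) == 1.0
    return successes / trials

# Capacity is roughly the load P/N at which separability collapses; for single
# points it is the classic value P/N = 2, and manifold extent lowers it.
for P in (50, 100, 150, 200):
    print(f"P/N = {P / 100:.1f}: separable fraction = {fraction_separable(P, N=100):.2f}")
```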

Seminar · Neuroscience · Recording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 8, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has been proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
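
A stripped-down sketch of the proposed mechanism: a tonic contextual input that only rescales the speed of a one-dimensional ramp to threshold, leaving the trajectory's geometry untouched. Parameters are arbitrary; this is not the trained networks from the study.

```python
import numpy as np

# Toy version of "parametric control": a scalar latent kappa ramps toward a
# threshold, and a tonic contextual input u rescales only the *speed* of the
# ramp, leaving the path through state space (its geometry) unchanged.
def time_to_threshold(u, dt=0.001, tau=0.1, threshold=1.0):
    kappa, t = 0.0, 0.0
    while kappa < threshold:
        kappa += dt / tau * u      # d kappa/dt = u / tau: speed set by tonic input
        t += dt
    return t

for u in (0.5, 1.0, 2.0, 4.0):     # stronger tonic input -> faster dynamics
    print(f"u = {u:.1f}: produced interval {time_to_threshold(u):.3f} s")
```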

Seminar · Neuroscience

Neural circuits for novel choices and for choice speed and accuracy changes in macaques

Alessandro Bongioanni
University of Oxford
Feb 3, 2022

While most experimental tasks aim at isolating simple cognitive processes to study their neural bases, naturalistic behaviour is often complex and multidimensional. I will present two studies revealing previously uncharacterised neural circuits for decision-making in macaques. This was possible thanks to innovative experimental tasks eliciting sophisticated behaviour, bridging the human and non-human primate research traditions. Firstly, I will describe a specialised medial frontal circuit for novel choice in macaques. Traditionally, monkeys receive extensive training before neural data can be acquired, while a hallmark of human cognition is the ability to act in novel situations. I will show how this medial frontal circuit can combine the values of multiple attributes for each available novel item on-the-fly to enable efficient novel choices. This integration process is associated with a hexagonal symmetry pattern in the BOLD response, consistent with a grid-like representation of the space of all available options. We prove the causal role played by this circuit by showing that focussed transcranial ultrasound neuromodulation impairs optimal choice based on attribute integration and forces the subjects to default to a simpler heuristic decision strategy. Secondly, I will present an ongoing project addressing the neural mechanisms driving behaviour shifts during an evidence accumulation task that requires subjects to trade speed for accuracy. While perceptual decision-making in general has been thoroughly studied, both cognitively and neurally, the reasons why speed and/or accuracy are adjusted, and the associated neural mechanisms, have received little attention. We describe two orthogonal dimensions in which behaviour can vary (traditional speed-accuracy trade-off and efficiency) and we uncover independent neural circuits concerned with changes in strategy and fluctuations in the engagement level. The former involves the frontopolar cortex, while the latter is associated with the insula and a network of subcortical structures including the habenula.
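
The hexagonal-symmetry analysis mentioned above is, at its core, a regression of the signal on cos(6θ) and sin(6θ) of trajectory direction. A hedged sketch on synthetic data (not the study's fMRI pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "BOLD-like" signal with six-fold directional modulation plus noise.
theta = rng.uniform(0, 2 * np.pi, size=500)          # trajectory direction per trial
signal = 1.0 + 0.5 * np.cos(6 * (theta - 0.3)) + 0.2 * rng.standard_normal(500)

# Standard hexadirectional analysis: GLM with cos(6 theta), sin(6 theta) regressors.
X = np.column_stack([np.ones_like(theta), np.cos(6 * theta), np.sin(6 * theta)])
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)

amplitude = np.hypot(beta[1], beta[2])               # strength of 6-fold symmetry
orientation = np.arctan2(beta[2], beta[1]) / 6       # preferred grid orientation
print(f"6-fold amplitude {amplitude:.2f}, grid orientation {orientation:.2f} rad")
```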

Seminar · Neuroscience

The processing of price during purchase decision making: Are there neural differences among prosocial and non-prosocial consumers?

Anna Shepelenko
HSE University
Dec 8, 2021

International organizations, governments and companies are increasingly committed to developing measures that encourage adoption of sustainable consumption patterns among the population. However, their success requires a deep understanding of the everyday purchasing decision process and the elements that shape it. Price is an element that stands out. Prior research concluded that the influence of price on purchase decisions varies across consumer profiles. Yet no consumer behavior study to date has assessed the differences in price processing between consumers who have adopted sustainable habits (prosocial) and those who have not (non-prosocial). This is the first study to use neuroimaging tools to explore the underlying neural mechanisms that reveal the effect of price on prosocial and non-prosocial consumers. Self-reported findings indicate that prosocial consumers place greater value on collective costs and benefits while non-prosocial consumers place a greater weight on price. The neural data gleaned from this analysis offer certain explanations as to the origin of the differences. Non-prosocial (vs. prosocial) consumers, in fact, exhibit a greater activation in brain areas involved with reward, valuation and choice when evaluating price information. These findings could steer managers to improve market segmentation and assist institutions in their design of campaigns fostering environmentally sustainable behaviors.

Seminar · Neuroscience

When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?

Eric Shea-Brown
Department of Applied Mathematics, University of Washington
Nov 17, 2021

There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general computing or information. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
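
One common way to quantify the dimension in question is the participation ratio of the activity covariance, PR = (Σλ)² / Σλ². The sketch below, with made-up parameters, illustrates the bottom-up link: as recurrent coupling in a random linear network grows, activity concentrates in fewer collective modes.

```python
import numpy as np

rng = np.random.default_rng(3)

def participation_ratio(activity):
    """PR = (sum of covariance eigenvalues)^2 / sum of squared eigenvalues."""
    eig = np.linalg.eigvalsh(np.cov(activity.T))
    return eig.sum() ** 2 / (eig ** 2).sum()

def simulate_linear_network(g, N=200, T=3000, dt=0.1):
    """Noise-driven linear rate network dx = (-x + g J x) dt + noise,
    with J random Gaussian (spectral radius of gJ is roughly g)."""
    J = rng.standard_normal((N, N)) / np.sqrt(N)
    x = np.zeros(N)
    xs = np.empty((T, N))
    for t in range(T):
        x = x + dt * (-x + g * J @ x) + 0.5 * np.sqrt(dt) * rng.standard_normal(N)
        xs[t] = x
    return xs

# As the coupling g approaches 1, a few slow modes dominate and the dimension
# of the collective activity drops far below the number of neurons.
for g in (0.2, 0.6, 0.9):
    print(f"g = {g}: participation ratio ~ "
          f"{participation_ratio(simulate_linear_network(g)):.1f}")
```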

Seminar · Neuroscience · Recording

Rastermap: Extracting structure from high dimensional neural data

Carsen Stringer
HHMI, Janelia Research Campus
Oct 26, 2021

Large-scale neural recordings contain high-dimensional structure that cannot be easily captured by existing data visualization methods. We therefore developed an embedding algorithm called Rastermap, which captures highly nonlinear relationships between neurons and provides useful visualizations by assigning each neuron to a location in the embedding space. Compared to standard algorithms such as t-SNE and UMAP, Rastermap finds finer and higher-dimensional patterns of neural variability, as measured by quantitative benchmarks. We applied Rastermap to a variety of datasets, including spontaneous neural activity, neural activity during a virtual reality task, widefield neural imaging data during a 2AFC task, artificial neural activity from an agent playing Atari games, and neural responses to visual textures. Within these datasets, we found unique subpopulations of neurons encoding abstract properties of the environment.
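
A hedged usage sketch of the released package (parameter names follow the rastermap README at the time of writing; check your installed version, and note the input file name here is hypothetical):

```python
import numpy as np
from rastermap import Rastermap  # pip install rastermap

# Input is a neurons x timepoints array; "spikes.npy" is a hypothetical file.
spks = np.load("spikes.npy")

model = Rastermap(n_PCs=200,          # PCs kept before clustering
                  n_clusters=100,     # coarse clusters used to build the embedding
                  locality=0.75,      # how strongly nearby neurons must correlate
                  time_lag_window=5   # allow correlations at small time lags
                  ).fit(spks)

isort = model.isort                   # 1-D ordering of neurons along the embedding
sorted_raster = spks[isort]           # plot with e.g. plt.imshow to see structure
```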

Seminar · Neuroscience

The processing of price during purchase decision making: Are there neural differences among prosocial and non-prosocial consumers?

Anna Shepelenko
HSE University
Oct 13, 2021

International organizations, governments and companies are increasingly committed to developing measures that encourage adoption of sustainable consumption patterns among the population. However, their success requires a deep understanding of the everyday purchasing decision process and the elements that shape it. Price is an element that stands out. Prior research concluded that the influence of price on purchase decisions varies across consumer profiles. Yet no consumer behavior study to date has assessed the differences in price processing between consumers who have adopted sustainable habits (prosocial) and those who have not (non-prosocial). This is the first study to use neuroimaging tools to explore the underlying neural mechanisms that reveal the effect of price on prosocial and non-prosocial consumers. Self-reported findings indicate that prosocial consumers place greater value on collective costs and benefits while non-prosocial consumers place a greater weight on price. The neural data gleaned from this analysis offer certain explanations as to the origin of the differences. Non-prosocial (vs. prosocial) consumers, in fact, exhibit a greater activation in brain areas involved with reward, valuation and choice when evaluating price information. These findings could steer managers to improve market segmentation and assist institutions in their design of campaigns fostering environmentally sustainable behaviors.

Seminar · Neuroscience · Recording

Strong and weak principles of neural dimension reduction

Mark Humphries
School of Psychology, University of Nottingham
Sep 9, 2021

Large-scale, single-neuron-resolution recordings are inherently high-dimensional, with as many dimensions as neurons. To make sense of them, for many the answer is: reduce the number of dimensions. In this talk I argue we can distinguish weak and strong principles of neural dimension reduction. The weak principle is that dimension reduction is a convenient tool for making sense of complex neural data. The strong principle is that dimension reduction moves us closer to how neural circuits actually operate and compute. Elucidating these principles is crucial, for the one we subscribe to provides radically different interpretations of the same dimension reduction techniques applied to the same data. I outline experimental evidence for each principle, but illustrate how we could make either the weak or the strong principle appear to be true based on innocuous-looking analysis decisions. These insights suggest arguments over low- versus high-dimensional neural activity need better constraints from both experiment and theory.
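
In the spirit of the talk's caution, here is a toy demonstration of how an innocuous-looking analysis decision changes the answer: the same synthetic "recording" yields very different dimensionality estimates depending on how much it is smoothed before PCA. All numbers are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(4)

def n_dims_90(X):
    """Number of principal components needed to capture 90% of the variance."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]
    return int(np.searchsorted(np.cumsum(eig) / eig.sum(), 0.9) + 1)

# Synthetic "recording": 100 neurons driven by 10 slow latents plus noise.
latents = gaussian_filter1d(rng.standard_normal((10, 5000)), sigma=20, axis=1)
data = rng.standard_normal((100, 10)) @ latents + rng.standard_normal((100, 5000))

# The same data look high- or low-dimensional depending on an innocuous
# preprocessing choice: how much the analyst smooths before PCA.
for sigma in (0, 5, 20, 80):
    smoothed = gaussian_filter1d(data, sigma=sigma, axis=1) if sigma else data
    print(f"smoothing sigma = {sigma:3d}: dims for 90% variance = {n_dims_90(smoothed)}")
```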

Seminar · Neuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 16, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
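
A hedged sketch of the first ingredient, tensor component analysis, using the tensorly library on a synthetic neurons × time × trials array (random data and rank chosen only for illustration):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac  # pip install tensorly

rng = np.random.default_rng(5)

# Tensor component analysis (TCA) on trial-structured data: a neurons x time
# x trials tensor decomposed into rank-R components, each a (neuron factor,
# within-trial time factor, across-trial factor) triple. The across-trial
# factors are what reveal slow changes over learning.
n_neurons, n_time, n_trials, R = 60, 80, 120, 3
data = rng.random((n_neurons, n_time, n_trials))  # stand-in for real recordings

weights, (neuron_f, time_f, trial_f) = parafac(tl.tensor(data), rank=R,
                                               n_iter_max=200)

print(neuron_f.shape, time_f.shape, trial_f.shape)  # (60, 3) (80, 3) (120, 3)
# trial_f[:, r] traces how strongly component r is expressed on each trial.
```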

Seminar · Neuroscience

Choosing, fast and slow: Implications of prioritized-sampling models for understanding automaticity and control

Cendri Hutcherson
University of Toronto
Apr 14, 2021

The idea that behavior results from a dynamic interplay between automatic and controlled processing underlies much of decision science, but has also generated considerable controversy. In this talk, I will highlight behavioral and neural data showing how recently-developed computational models of decision making can be used to shed new light on whether, when, and how decisions result from distinct processes operating at different timescales. Across diverse domains ranging from altruism to risky choice biases and self-regulation, our work suggests that a model of prioritized attentional sampling and evidence accumulation may provide an alternative explanation for many phenomena previously interpreted as supporting dual process models of choice. However, I also show how some features of the model might be taken as support for specific aspects of dual-process models, providing a way to reconcile conflicting accounts and generating new predictions and insights along the way.
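
A toy simulation of the prioritized-sampling idea: evidence for the currently attended attribute is weighted more, so early responses are dominated by whichever attribute is sampled first, with no second "automatic" system needed. Values and weights are invented, not the fitted model from the talk.

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_choice(v_self=0.6, v_other=0.3, attn_weight=0.3,
                    switch_time=100, threshold=0.5, gain=0.01, noise_sd=0.02):
    """Accumulate (self-interest minus generosity) evidence to a bound.
    Attention starts on the self attribute and switches at switch_time;
    the unattended attribute's evidence is scaled down by attn_weight."""
    x, t = 0.0, 0
    while abs(x) < threshold and t < 3000:
        attending_self = t < switch_time
        w_self = 1.0 if attending_self else attn_weight
        w_other = attn_weight if attending_self else 1.0
        x += gain * (w_self * v_self - w_other * v_other) \
             + noise_sd * rng.standard_normal()
        t += 1
    return ("selfish" if x > 0 else "generous"), t

results = [simulate_choice() for _ in range(2000)]
fast = [c == "selfish" for c, t in results if t < 100]   # before attention switch
slow = [c == "selfish" for c, t in results if t >= 100]  # after attention switch
print(f"P(selfish | fast response) = {np.mean(fast):.2f}")
print(f"P(selfish | slow response) = {np.mean(slow):.2f}")
```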

Seminar · Neuroscience · Recording

Untangling brain wide current flow using neural network models

Kanaka Rajan
Mount Sinai
Mar 11, 2021

The Rajan lab designs neural network models constrained by experimental data, and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this powerful framework based on data-constrained multi-region RNNs and CURrent-Based Decomposition (CURBD), we ask whether there are conserved multi-region mechanisms across species and identify key divergences.
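
The released CURBD code is the reference implementation; the numpy sketch below only illustrates the decomposition step itself, assuming an interaction matrix J has already been trained to reproduce the data (here J and the rates are random stand-ins, and the region names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Sketch of the CURBD decomposition step, *after* an RNN has been trained so
# that its rates r(t) match the recorded multi-region activity. Training is
# omitted; J here is random only to make the sketch runnable.
N, T = 90, 200
regions = {"amygdala": slice(0, 30), "PFC": slice(30, 60), "ACC": slice(60, 90)}
J = rng.standard_normal((N, N)) / np.sqrt(N)     # stand-in for the trained matrix
rates = np.tanh(rng.standard_normal((N, T)))     # stand-in for model rates over time

# Current into target region A contributed by source region B at every time:
# the J[A, B] block applied to B's rates. Summing over sources recovers J @ r.
currents = {
    (tgt, src): J[regions[tgt], regions[src]] @ rates[regions[src]]
    for tgt in regions for src in regions
}
print(currents[("PFC", "amygdala")].shape)       # (30, 200): de-mixed input current
```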

Seminar · Neuroscience · Recording

Inferring brain-wide current flow using data-constrained neural network models

Kanaka Rajan
Icahn School of Medicine at Mount Sinai
Nov 17, 2020

The Rajan lab designs neural network models constrained by experimental data, and reverse engineers them to figure out how brain circuits function in health and disease. Recently, we have been developing a powerful new theory-based framework for “in-vivo tract tracing” from multi-regional neural activity collected experimentally. We call this framework CURrent-Based Decomposition (CURBD). CURBD employs recurrent neural networks (RNNs) directly constrained, from the outset, by time series measurements acquired experimentally, such as Ca2+ imaging or electrophysiological data. Once trained, these data-constrained RNNs let us infer matrices quantifying the interactions between all pairs of modeled units. Such model-derived “directed interaction matrices” can then be used to separately compute excitatory and inhibitory input currents that drive a given neuron from all other neurons. Therefore different current sources can be de-mixed – either within the same region or from other regions, potentially brain-wide – which collectively give rise to the population dynamics observed experimentally. Source de-mixed currents obtained through CURBD allow an unprecedented view into multi-region mechanisms inaccessible from measurements alone. We have applied this method successfully to several types of neural data from our experimental collaborators, e.g., zebrafish (Deisseroth lab, Stanford), mice (Harvey lab, Harvard), monkeys (Rudebeck lab, Sinai), and humans (Rutishauser lab, Cedars Sinai), where we have discovered both directed interactions brain-wide and inter-area currents during different types of behaviors. With this framework based on data-constrained multi-region RNNs and CURrent-Based Decomposition (CURBD), we can ask whether there are conserved multi-region mechanisms across species and identify key divergences.
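
Complementing the region-wise sketch above, the excitatory/inhibitory separation the abstract mentions amounts to splitting the trained interaction matrix by sign, as in this hedged stand-in (random matrices, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(10)

# Splitting currents by sign of the trained interaction matrix gives the
# excitatory vs inhibitory contributions to each neuron's total input.
N, T = 60, 100
J = rng.standard_normal((N, N)) / np.sqrt(N)   # stand-in for a trained matrix
rates = np.tanh(rng.standard_normal((N, T)))

J_exc, J_inh = np.clip(J, 0, None), np.clip(J, None, 0)
exc_current = J_exc @ rates                    # summed excitatory drive per neuron
inh_current = J_inh @ rates                    # summed inhibitory drive per neuron
assert np.allclose(exc_current + inh_current, J @ rates)
```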

Seminar · Neuroscience · Recording

Dimensions of variability in circuit models of cortex

Brent Doiron
The University of Chicago
Nov 15, 2020

Cortical circuits receive multiple inputs from upstream populations with non-overlapping stimulus tuning preferences. Both the feedforward and recurrent architectures of the receiving cortical layer will reflect this diverse input tuning. We study how population-wide neuronal variability propagates through a hierarchical cortical network receiving multiple, independent, tuned inputs. We present new analysis of in vivo neural data from the primate visual system showing that the number of latent variables (dimension) needed to describe population shared variability is smaller in V4 populations compared to those of its downstream visual area PFC. We successfully reproduce this dimensionality expansion from our V4 to PFC neural data using a multi-layer spiking network with structured, feedforward projections and recurrent assemblies of multiple, tuned neuron populations. We show that tuning-structured connectivity generates attractor dynamics within the recurrent PFC circuit, where attractor competition is reflected in the high-dimensional shared variability across the population. Indeed, restricting the dimensionality analysis to activity from one attractor state recovers the low-dimensional structure inherited from each of our tuned inputs. Our model thus introduces a framework where high-dimensional cortical variability is understood as “time-sharing” between distinct low-dimensional, tuning-specific circuit dynamics.
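
A toy illustration of the “time-sharing” interpretation: activity that alternates between two low-dimensional subspaces looks roughly twice as high-dimensional when pooled as it does within either state. Synthetic data, arbitrary sizes.

```python
import numpy as np

rng = np.random.default_rng(8)

def participation_ratio(X):
    """Dimension estimate from the covariance spectrum: (sum lam)^2 / sum lam^2."""
    eig = np.linalg.eigvalsh(np.cov(X))
    return eig.sum() ** 2 / (eig ** 2).sum()

# Two "attractor states", each confined to its own 2-D subspace of a
# 50-neuron population, visited in alternation ("time-sharing").
N, T, d = 50, 2000, 2
basis_a, basis_b = rng.standard_normal((N, d)), rng.standard_normal((N, d))
state_a = basis_a @ rng.standard_normal((d, T)) + 0.1 * rng.standard_normal((N, T))
state_b = basis_b @ rng.standard_normal((d, T)) + 0.1 * rng.standard_normal((N, T))
pooled = np.concatenate([state_a, state_b], axis=1)

print(f"within state A : PR = {participation_ratio(state_a):.1f}")
print(f"within state B : PR = {participation_ratio(state_b):.1f}")
print(f"pooled         : PR = {participation_ratio(pooled):.1f}")
```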

Seminar · Neuroscience · Recording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 15, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition.  However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling.  We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process.  In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
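
As a companion to the tensor-analysis sketch earlier in this list, here is a toy, shift-only stand-in for the trial-alignment ("time warping") ingredient; real warping models fit affine or piecewise-linear warps per trial, so this is only the simplest special case.

```python
import numpy as np

rng = np.random.default_rng(11)

# Each trial is a temporally jittered copy of a template response; we align
# every trial to the trial average at its cross-correlation peak.
T, n_trials = 200, 40
template = np.exp(-0.5 * ((np.arange(T) - 100) / 8.0) ** 2)
shifts = rng.integers(-15, 16, size=n_trials)
data = np.stack([np.roll(template, s) + 0.1 * rng.standard_normal(T)
                 for s in shifts])

mean = data.mean(axis=0)
lags = np.arange(-20, 21)
aligned = np.empty_like(data)
for i, trial in enumerate(data):
    xcorr = [np.dot(np.roll(trial, -lag), mean) for lag in lags]
    aligned[i] = np.roll(trial, -lags[np.argmax(xcorr)])

# Jitter smears the raw average; alignment restores the sharp template peak.
print(f"peak of raw average:     {data.mean(axis=0).max():.2f}")
print(f"peak of aligned average: {aligned.mean(axis=0).max():.2f}")
```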

Seminar · Neuroscience · Recording

Using noise to probe recurrent neural network structure and prune synapses

Rishidev Chaudhuri
University of California, Davis
Sep 24, 2020

Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population data sets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear methods of dimensionality reduction, such as Principal Components Analysis, to fail dramatically in identifying low-dimensional structure.
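
A heuristic toy version of the idea (loudly not the talk's exact rule, whose provable spectrum-preservation property is specific to its own construction): score each synapse by its weight times the noise-driven standard deviation of its presynaptic neuron, prune the weakest half, and check how the dynamics change.

```python
import numpy as np

rng = np.random.default_rng(9)

# Heuristic stand-in for noise-guided pruning: a synapse's score is roughly
# the size of the current it actually delivers under the network's own
# noise-driven fluctuations, i.e. |W_ij| times the std of presynaptic j.
N = 100
W = (rng.random((N, N)) < 0.5) * rng.standard_normal((N, N)) * 0.8 / np.sqrt(N)

# Estimate the stationary covariance of dx = (-x + Wx) dt + noise by simulation.
dt, T = 0.05, 20000
x = np.zeros(N)
samples = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + W @ x) + np.sqrt(dt) * rng.standard_normal(N)
    samples[t] = x
C = np.cov(samples.T)

score = np.abs(W) * np.sqrt(np.diag(C))[None, :]   # |W_ij| * sd of presynaptic j
cut = np.quantile(score[W != 0], 0.5)              # prune the weakest half
W_pruned = np.where(score >= cut, W, 0.0)

def top_eigs(M, k=5):
    return np.sort(np.linalg.eigvals(M).real)[::-1][:k]

# Crude check that the pruned network's dynamics resemble the original's.
print("top Re(eigenvalues), original:", np.round(top_eigs(W), 2))
print("top Re(eigenvalues), pruned  :", np.round(top_eigs(W_pruned), 2))
```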

ePoster

Inferring stochastic low-rank recurrent neural networks from neural data

Matthijs Pals, A Sağtekin, Felix Pei, Manuel Gloeckler, Jakob Macke

Bernstein Conference 2024

ePoster

Gaussian Partial Information Decomposition: Quantifying Inter-areal Interactions in High-Dimensional Neural Data

COSYNE 2022

ePoster

A Method for Testing Bayesian Models Using Neural Data

Gabor Lengyel, Sabyasachi Shivkumar, Ralf Haefner

COSYNE 2023

ePoster

Neuroformer: A Transformer Framework for Multimodal Neural Data Analysis

Antonis Antoniades, Yiyi Yu, Spencer LaVere Smith

COSYNE 2023

ePoster

A simple method to improve regression and correlation coefficient estimates with noisy neural data

Jason E. Pina & Joel Zylberberg

COSYNE 2023

ePoster

Inferring stochastic low-rank recurrent neural networks from neural data

Matthijs Pals, A Erdem Sagtekin, Felix Pei, Manuel Gloeckler, Florian Mormann, Stefanie Liebe, Jakob Macke

COSYNE 2025

ePoster

Leveraging the dual nature of rows and columns in neural data

Erik Hermansen, Sigurd Gaukstad, Valdemar Olsen, Melvin Vaupel, Benjamin Dunn

COSYNE 2025

ePoster

Meta-Dynamical State Space Models for Integrative Neural Data Analysis

Ayesha Vermani, Josue Nassar, Hyungju Jeon, Matthew Dowling, Il Memming Park

COSYNE 2025

ePoster

Metamers and Mixtures: Testing Bayesian models using neural data

Ralf Haefner, Sabyasachi Shivkumar, Gabor Lengyel

COSYNE 2025

ePoster

Nonlinear Dynamical Modeling of Behavior and Multimodal Neural Data

Parastoo Azizeddin, Eray Erturk, Maryam Shanechi

COSYNE 2025

ePoster

BearMind: A pipeline for batch examination & analysis of raw miniscopic neural data

Vladimir Sotskov, Nikita Pospelov, Viktor Plusnin, Artem Kirsanov, Konstantin Anokhin

FENS Forum 2024

ePoster

Decoding behaviour from neural data using LSTM networks

Aswathi Thrivikraman

Neuromatch 5