Neural Data Analysis

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with neural data analysis across World Wide.
10 curated items: 4 Positions, 4 Seminars, 2 ePosters
Position

Gatsby Computational Neuroscience Unit

Gatsby Computational Neuroscience Unit, UCL
London, UK
Dec 5, 2025

4-Year PhD Programme in Theoretical Neuroscience and Machine Learning. Call for Applications! Deadline: 13 November 2022.

The Gatsby Computational Neuroscience Unit is a leading research centre focused on theoretical neuroscience and machine learning. We study (un)supervised and reinforcement learning; inference, coding and neural dynamics; Bayesian and kernel methods; and deep learning, with applications to the analysis of perceptual processing and cognition, neural data, signal and image processing, machine vision, network data and nonparametric hypothesis testing.

The unit provides a unique opportunity for a critical mass of theoreticians to interact closely with one another and with researchers at the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC), the Centre for Computational Statistics and Machine Learning (CSML) and related UCL departments such as Computer Science, Statistical Science, Artificial Intelligence, the ELLIS Unit at UCL and Neuroscience, as well as the nearby Alan Turing and Francis Crick Institutes.

Our PhD programme provides rigorous preparation for a research career. Students complete a 4-year PhD in either machine learning or theoretical and computational neuroscience, with a minor emphasis in the complementary field. Courses in the first year provide a comprehensive introduction to both fields and to systems neuroscience. Students are encouraged to work and interact closely with SWC/CSML researchers to take advantage of this uniquely multidisciplinary research environment.

Full funding is available regardless of nationality. The unit also welcomes applicants who have secured or are seeking funding from other sources. To apply, please visit www.ucl.ac.uk/gatsby/study-and-work/phd-programme

Position · Computational Neuroscience

Sam Neymotin

Nathan Kline Institute (NKI) for Psychiatric Research
N/A
Dec 5, 2025

Postdoctoral scientist positions are available at the Nathan Kline Institute (NKI) for Psychiatric Research to work on computational neuroscience research funded by NIH and DoD grants. Applicants should have a PhD in computational neuroscience (or a related field) and a strong background in multiscale modeling using NEURON/NetPyNE, Python software development, neural/electrophysiology data analysis, machine learning, and writing and presenting research.
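For a flavor of the NEURON/NetPyNE multiscale modeling stack mentioned above, here is a minimal sketch in the style of NetPyNE's introductory tutorials: a single excitatory population of Hodgkin-Huxley cells driven by background Poisson input. All parameter values are illustrative, not taken from the lab's models.

```python
# Requires NEURON and NetPyNE (pip install neuron netpyne)
from netpyne import specs, sim

netParams = specs.NetParams()

# One excitatory population of 20 Hodgkin-Huxley point neurons
netParams.popParams['E'] = {'cellType': 'PYR', 'numCells': 20}
netParams.cellParams['PYRrule'] = {
    'conds': {'cellType': 'PYR'},
    'secs': {'soma': {'geom': {'diam': 18.8, 'L': 18.8, 'Ra': 123.0},
                      'mechs': {'hh': {'gnabar': 0.12, 'gkbar': 0.036,
                                       'gl': 0.003, 'el': -70}}}}}

# Background Poisson drive onto the population
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.1, 'tau2': 5.0, 'e': 0}
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 10, 'noise': 0.5}
netParams.stimTargetParams['bkg->E'] = {'source': 'bkg', 'conds': {'cellType': 'PYR'},
                                        'weight': 0.01, 'delay': 5, 'synMech': 'exc'}

simConfig = specs.SimConfig()
simConfig.duration = 1000            # ms
simConfig.recordTraces = {'V_soma': {'sec': 'soma', 'loc': 0.5, 'var': 'v'}}
simConfig.analysis['plotRaster'] = True

# Build the network, run the simulation, and produce the requested plots
sim.createSimulateAnalyze(netParams=netParams, simConfig=simConfig)
```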

Position · Neuroscience

Joaquin

Gatsby Unit, the Sainsbury Wellcome Centre for Neural Circuits and Behaviour (SWC) and NeuroGEARS Ltd
London, UK
Dec 5, 2025

We invite applications for a Research Software Engineer (RSE) position with expertise in software development, machine learning, neural data analysis, and experimental control (ideally with the Bonsai ecosystem) to contribute to the recently funded project 'Machine Learning for Neuroscience Experimental Control'. You will provide the neuroscience community with advanced machine learning software for experimental control, embedded in the unparalleled research environment of the Gatsby Unit, the SWC and NeuroGEARS, with opportunities to connect with top researchers and engineers.

Position

Cian O’Donnell

Ulster University, Intelligent Systems Research Centre, CNET team
Derry campus of Ulster University, Northern Ireland, UK
Dec 5, 2025

We are looking for a computational neuroscience PhD student for a project on “NeuroAI approaches to understanding inter-individual differences in cognition and psychiatric disorders.” The goal is to use populations of deep neural networks as a simple model for populations of human brains, combined with models from evolutionary genetics, to understand the principles underlying the mapping from genotypes to cognitive phenotypes.
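To make the project idea concrete, below is a hypothetical sketch, not the project's actual methodology: hyperparameter vectors play the role of "genotypes" for a population of small networks, and held-out task performance is read out as a "cognitive phenotype". The task, library choices, and all names are illustrative assumptions.

```python
# All names, the task, and the genotype encoding below are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

population = []
for i in range(30):
    # "Genotype": a small vector of heritable architectural/learning traits
    genotype = {'hidden': int(rng.integers(4, 64)),
                'lr': float(10 ** rng.uniform(-4, -1))}
    net = MLPClassifier(hidden_layer_sizes=(genotype['hidden'],),
                        learning_rate_init=genotype['lr'],
                        max_iter=300, random_state=i)
    net.fit(X_tr, y_tr)
    # "Cognitive phenotype": behavioral performance on held-out stimuli
    population.append((genotype, net.score(X_te, y_te)))

# Inter-individual differences: spread of phenotypes across the population
scores = np.array([p for _, p in population])
print(f'phenotype mean={scores.mean():.3f}, sd={scores.std():.3f}')
```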

Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 15, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss, and solicit feedback on, a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach, the form the results could take, and how this would benefit neuroscience.
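As an illustration of the kind of study proposed in the second part, the sketch below applies one standard neural-data analysis tool, PCA-based dimensionality estimation, to the hidden activations of a toy (untrained) recurrent network, where, unlike in the brain, every unit and connection is accessible. The model and all numbers are assumptions for illustration, not the speaker's materials.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Toy recurrent network standing in for a task-trained model (untrained here)
rnn = nn.RNN(input_size=10, hidden_size=100, batch_first=True)
stimuli = torch.randn(200, 50, 10)          # (trials, time, input features)
with torch.no_grad():
    activity, _ = rnn(stimuli)              # (trials, time, hidden units)

# Treat hidden units as "recorded neurons": samples x units
samples = activity.reshape(-1, 100).numpy()
pca = PCA().fit(samples)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_components = int(np.searchsorted(cum_var, 0.9) + 1)
print(f'components needed for 90% of variance: {n_components}')
```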

Seminar · Neuroscience · Recording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 8, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting.

Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared their performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings.

Our key finding is that low-dimensional dynamics strongly increase the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics with a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism in which tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
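For readers unfamiliar with the low-rank constraint mentioned in the abstract, here is a minimal PyTorch sketch of a rank-R recurrent network in the spirit of the general low-rank RNN literature (not the authors' code): the recurrent connectivity is parameterized as J = m n^T / N, confining the recurrent dynamics to a low-dimensional subspace. Sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class LowRankRNN(nn.Module):
    """Rank-R RNN: recurrent weights J = m @ n.T / N (illustrative sketch)."""
    def __init__(self, n_units=512, rank=2, n_inputs=3):
        super().__init__()
        self.N = n_units
        # Low-rank factors: J has at most `rank` nonzero singular values,
        # so recurrent dynamics live in a low-dimensional subspace.
        self.m = nn.Parameter(torch.randn(n_units, rank))
        self.n = nn.Parameter(torch.randn(n_units, rank))
        self.w_in = nn.Parameter(torch.randn(n_units, n_inputs) / n_inputs**0.5)
        self.w_out = nn.Parameter(torch.zeros(n_units, 1))

    def forward(self, u, dt_over_tau=0.1):
        # u: (time, batch, n_inputs); x: (batch, n_units) network state
        T, B, _ = u.shape
        x = torch.zeros(B, self.N)
        J = self.m @ self.n.T / self.N        # rank-R connectivity matrix
        out = []
        for t in range(T):
            r = torch.tanh(x)                 # firing rates
            x = x + dt_over_tau * (-x + r @ J.T + u[t] @ self.w_in.T)
            out.append(torch.tanh(x) @ self.w_out)
        return torch.stack(out)               # (time, batch, 1)

# A tonic contextual input, as in the abstract, can occupy one of the
# n_inputs channels and be held constant over the trial.
net = LowRankRNN()
readout = net(torch.randn(100, 8, 3))
print(readout.shape)  # torch.Size([100, 8, 1])
```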

ePoster

Neuroformer: A Transformer Framework for Multimodal Neural Data Analysis

Antonis Antoniades, Yiyi Yu, Spencer LaVere Smith

COSYNE 2023

ePoster

Meta-Dynamical State Space Models for Integrative Neural Data Analysis

Ayesha Vermani, Josue Nassar, Hyungju Jeon, Matthew Dowling, Il Memming Park

COSYNE 2025