
Data Analysis

Topic spotlight
Topic · Neuro

data analysis

Discover seminars, jobs, and research tagged with data analysis across Neuro.
33 curated items · 26 Seminars · 7 Positions
Updated 2 days ago
33 items · data analysis

Latest

33 results
Position · Neuroscience

Dr. Demian Battaglia/Dr. Romain Goutagny

University of Strasbourg, Functional System's Dynamics team – FunSy
University of Strasbourg, France
Dec 21, 2025

The postdoc position is jointly mentored by Dr. Demian Battaglia and Dr. Romain Goutagny at the University of Strasbourg, France, in the Functional System's Dynamics team – FunSy. The position starts as soon as possible and can last up to two years. It is funded by the French ANR 'HippoComp' project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. The team performs electrophysiological recordings in the hippocampus and cortex during spatial navigation and memory tasks in mice (wild type and mutants developing various neuropathologies) and has access to extensive data through local and international collaborations. It uses a broad spectrum of computational tools, ranging from time-series and network analyses, information theory, and machine learning to multi-scale computational modeling.

Position · Neuroscience

Rune W. Berg

University of Copenhagen
Department of Neuroscience, University of Copenhagen, Denmark
Dec 21, 2025

The lab of Rune W. Berg is looking for a highly motivated and dynamic researcher for a 3-year position starting January 1st, 2024. The topic is the neuroscience of motor control, with a focus on locomotion, spinal circuitry, and its connections with the brain. The successful candidate will: 1) perform experimental recordings of neurons in the brain and spinal cord of awake, behaving rats using Neuropixels and Neuronexus electrodes combined with optogenetics; 2) analyze the large amount of data generated from these experiments, including tissue processing; and 3) participate in developing a new theory of motor control.

Position · Neuroscience

Maximilian Riesenhuber, PhD

Georgetown University
Georgetown University Medical Center, Research Building Room WP-12, 3970 Reservoir Rd., NW, Washington, DC 20007
Dec 21, 2025

We have an opening for a postdoc position investigating the neural bases of deep multimodal learning in the brain. The project involves EEG and laminar 7T imaging (in collaboration with Dr. Peter Bandettini’s lab at NIMH) to test computational hypotheses for how the brain learns multimodal concept representations. Responsibilities of the postdoc include running EEG and fMRI experiments, data analysis and manuscript preparation. Georgetown University has a vibrant neuroscience community with over fifty labs participating in the Interdisciplinary Program in Neuroscience and a number of relevant research centers, including the new Center for Neuroengineering (cne.georgetown.edu). Interested candidates should submit a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references to Interfolio via https://apply.interfolio.com/148520. Faxed, emailed, or mailed applications will not be accepted. Questions about the position can be directed to Maximilian Riesenhuber (mr287@georgetown.edu).

Position · Neuroscience

N/A

University of Chicago
Chicago
Dec 21, 2025

The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience. We especially welcome applicants who develop mathematical approaches, computational models, and machine learning methods to study the brain at the circuits, systems, or cognitive levels. The current Grossman Center faculty members with whom appointees can work are: Brent Doiron, whose lab investigates how the cellular and synaptic circuitry of neuronal circuits supports the complex dynamics and computations routinely observed in the brain; Jorge Jaramillo, whose lab investigates how subcortical structures interact with cortical circuits to subserve cognitive processes such as memory, attention, and decision making; Ramon Nogueira, whose lab investigates the geometry of representations as the computational support of cognitive processes like abstraction in noisy artificial and biological neural networks; Marcella Noorman, whose lab investigates how properties of synapses, neurons, and circuits shape the neural dynamics that enable flexible and efficient computation; and Samuel Muscinelli, whose lab studies how the anatomy of brain circuits both governs learning and adapts to it. These labs combine analytical theory, machine learning, and data analysis in close collaboration with experimentalists. Appointees will have access to state-of-the-art facilities and multiple opportunities for collaboration with exceptional experimental labs within the Neuroscience Institute, as well as other labs in the departments of Physics, Computer Science, and Statistics. The Grossman Center offers competitive postdoctoral salaries in the vibrant and international city of Chicago, and a rich intellectual environment that includes the Argonne National Laboratory and UChicago's Data Science Institute. The Neuroscience Institute is currently engaged in a major expansion that includes the incorporation of several new faculty members in the next few years.

Seminar · Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O’Donnell
Ulster University
Jun 11, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project, testing whether wiring motifs from fly brain connectomes can improve the performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
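
For readers unfamiliar with reservoir computing, here is a minimal echo state network sketch (not the speaker's code). The reservoir matrix W below is random; the manipulation described in the talk would amount to replacing it with a connectome-derived wiring matrix. All sizes and the delayed-copy task are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 300

# Random reservoir, rescaled so the spectral radius is below 1 (echo-state property).
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

def run_reservoir(u):
    """Drive the reservoir with inputs u of shape (T, n_in); return states (T, n_res)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in @ ut)
        states[t] = x
    return states

# Toy task: reproduce the input from five steps ago (a short-memory task).
T = 2000
u = rng.uniform(-1, 1, (T, n_in))
target = np.roll(u, 5, axis=0)
X = run_reservoir(u)

# Only the linear readout is trained, here by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
print("training MSE:", np.mean((X @ W_out - target) ** 2))
```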

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
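
As a toy numerical counterpart to the manifold-capacity idea (not the speaker's mean-field theory), the sketch below builds synthetic point-cloud "object manifolds" and estimates how often random dichotomies of them are linearly separable as the number of neurons grows; the manifold count, radius, and sample sizes are arbitrary choices.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
P, points_per_manifold, radius = 20, 30, 0.5   # manifolds, samples each, scatter

def separable_fraction(N, n_dichotomies=50):
    """Fraction of random manifold dichotomies a linear classifier separates perfectly."""
    centers = rng.normal(size=(P, N))
    # Each "object manifold" is a center plus isotropic scatter of the given radius.
    X = (centers[:, None, :]
         + radius * rng.normal(size=(P, points_per_manifold, N))).reshape(-1, N)
    count = 0
    for _ in range(n_dichotomies):
        manifold_labels = rng.integers(0, 2, P)
        if manifold_labels.min() == manifold_labels.max():
            manifold_labels[0] ^= 1                    # make sure both classes appear
        labels = np.repeat(manifold_labels, points_per_manifold)
        clf = LinearSVC(C=100.0, max_iter=20000).fit(X, labels)
        count += clf.score(X, labels) == 1.0
    return count / n_dichotomies

for N in (10, 20, 40, 80):
    print(f"N = {N:3d} neurons: separable fraction = {separable_fraction(N):.2f}")
```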

Seminar · Neuroscience · Recording

State-of-the-Art Spike Sorting with SpikeInterface

Samuel Garcia and Alessio Buccino
CNRS, Lyon, France, and Allen Institute for Neural Dynamics, Seattle, USA
Nov 7, 2023

This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
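
A rough sketch of the kind of pipeline the tutorial covers, assuming a recent SpikeInterface release; function names and arguments change between versions, and the data path and sorter name below are placeholders, so treat this as illustrative and follow the current documentation.

```python
import spikeinterface.full as si

# 1) Load a recording (readers exist for most acquisition formats; the path is a placeholder).
recording = si.read_openephys("path/to/openephys_session")

# 2) Preprocess: bandpass filter and common median reference.
recording = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording = si.common_reference(recording, operator="median")

# 3) Run any installed sorter through a single wrapper (here Kilosort 2.5, if installed).
sorting = si.run_sorter("kilosort2_5", recording)
print("Detected units:", sorting.get_unit_ids())

# From here, SpikeInterface also provides waveform extraction, quality metrics,
# drift correction, and exporters for manual curation (see the documentation).
```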

Seminar · Neuroscience

1.8 billion regressions to predict fMRI (journal club)

Mihir Tripathy
Jul 28, 2023

Public journal club. This week Mihir will present the '1.8 billion regressions' paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), in which the authors use embeddings from hundreds of pretrained models to predict fMRI activity and determine which predicts it best.
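
The core analysis unit of the paper is an encoding model: a cross-validated ridge regression from one model's stimulus embeddings to a voxel's responses, repeated across models, layers, and voxels. A minimal sketch with synthetic arrays (not the authors' code or data):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli, n_features = 500, 256
embeddings = rng.normal(size=(n_stimuli, n_features))   # stand-in for pretrained-model features

# Synthetic voxel response: a linear readout of the embeddings plus noise.
voxel = (embeddings @ rng.normal(size=n_features)) * 0.1 + rng.normal(size=n_stimuli)

model = RidgeCV(alphas=np.logspace(-2, 4, 13))
scores = cross_val_score(model, embeddings, voxel, cv=5, scoring="r2")
print("mean cross-validated R^2:", scores.mean())
```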

Seminar · Neuroscience

Analyzing artificial neural networks to understand the brain

Grace Lindsay
NYU
Dec 16, 2022

In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.

Seminar · Neuroscience

Maths, AI and Neuroscience Meeting Stockholm

Roshan Cools, Alain Destexhe, Upi Bhalla, Vijay Balasubramanian, Dinos Meletis, Richard Naud
Dec 15, 2022

To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be a close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.

Seminar · Neuroscience

Experimental Neuroscience Bootcamp

Adam Kampff
Voight Kampff, London, UK
Dec 5, 2022

This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the “hands on” construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.

Seminar · Neuroscience

Modern Approaches to Behavioural Analysis

Alexander Mathis
EPFL, Switzerland
Nov 21, 2022

The goal of neuroscience is to understand how the nervous system controls behaviour, not only in the simplified environments of the lab but also in the natural environments for which nervous systems evolved. In pursuing this goal, neuroscience research is supported by an ever-larger toolbox, ranging from optogenetics to connectomics. However, these tools are often coupled with reductionist approaches to linking nervous systems and behaviour. This course will introduce advanced techniques for measuring and analysing behaviour, as well as three fundamental principles necessary for understanding biological behaviour: (1) morphology and environment; (2) action-perception closed loops and purpose; and (3) individuality and historical contingencies [1]. [1] Gomez-Marin, A., & Ghazanfar, A. A. (2019). The life of behavior. Neuron, 104(1), 25-36.

Seminar · Neuroscience · Recording

Parametric control of flexible timing through low-dimensional neural manifolds

Manuel Beiran
Center for Theoretical Neuroscience, Columbia University & Rajan lab, Icahn School of Medicine at Mount Sinai
Mar 9, 2022

Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has been proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
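
A bare-bones illustration of the ingredients described here (not the speaker's trained networks): a rank-one recurrent network whose activity is confined to a low-dimensional subspace, with a tonic contextual input that shifts the dynamics along that subspace. The connectivity vectors and parameter values are random or hand-picked.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, T = 500, 0.1, 300

m = rng.normal(size=N)          # output direction of the rank-one connectivity
n = rng.normal(size=N)          # input-selection direction of the connectivity
I_ctx = rng.normal(size=N)      # direction of the tonic contextual input

def simulate(context_strength):
    """Integrate x' = -x + m (n·tanh(x))/N + c*I_ctx and track the latent variable m·x/N."""
    x = np.zeros(N)
    kappa = np.empty(T)
    for t in range(T):
        rec = m * (n @ np.tanh(x)) / N            # rank-one recurrence
        x = x + dt * (-x + rec + context_strength * I_ctx)
        kappa[t] = m @ x / N
    return kappa

for c in (0.0, 0.5, 1.0):
    print(f"context strength {c:.1f}: final latent state {simulate(c)[-1]: .3f}")
```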

Seminar · Neuroscience

Maths, AI and Neuroscience meeting

Tim Vogels, Mickey London, Anita Disney, Yonina Eldar, Partha Mitra, Yi Ma
Dec 13, 2021

To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be a close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data, on explainable AI, on how AI can help neuroscience, and on the extent to which the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.

Seminar · Neuroscience · Recording

Neural Population Dynamics for Skilled Motor Control

Britton Sauerbrei
Case Western Reserve University School of Medicine
Nov 5, 2021

The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.

Seminar · Neuroscience · Recording

Space wrapped onto a grid cell torus

Erik Hermansen
Dunn lab, NTNU
Nov 3, 2021

Entorhinal grid cells, so-called because of their hexagonally tiled spatial receptive fields, are organized in modules which, collectively, are believed to form a population code for the animal’s position. Here, we apply topological data analysis to simultaneous recordings of hundreds of grid cells and show that joint activity of grid cells within a module lies on a toroidal manifold. Each position of the animal in its physical environment corresponds to a single location on the torus, and each grid cell is preferentially active within a single “field” on the torus. Toroidal firing positions persist between environments, and between wakefulness and sleep, in agreement with continuous attractor models of grid cells.
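
The flavor of this analysis can be illustrated with the ripser package (a stand-alone sketch, not the authors' pipeline): persistent homology of a point cloud sampled from a torus should show two prominent 1-dimensional holes (H1) and one 2-dimensional cavity (H2). The synthetic point cloud below stands in for grid-cell population activity.

```python
import numpy as np
from ripser import ripser   # pip install ripser

rng = np.random.default_rng(0)
# Sample noisy points from a flat torus embedded in R^4, a stand-in for population activity.
theta, phi = rng.uniform(0, 2 * np.pi, (2, 800))
X = np.column_stack([np.cos(theta), np.sin(theta), np.cos(phi), np.sin(phi)])
X += 0.05 * rng.normal(size=X.shape)

# Persistence diagrams up to dimension 2 (n_perm subsamples the points to keep this fast).
diagrams = ripser(X, maxdim=2, n_perm=200)["dgms"]
for dim, dgm in enumerate(diagrams):
    lifetimes = np.sort(dgm[:, 1] - dgm[:, 0])[::-1]
    print(f"H{dim}: longest bar lifetimes {np.round(lifetimes[:3], 2)}")
```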

Seminar · Neuroscience · Recording

Learning the structure and investigating the geometry of complex networks

Robert Peach and Alexis Arnaudon
Imperial College
Sep 25, 2021

Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks: decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality: dimension is a fundamental property of objects and the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present here a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
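
A toy version of the feature-based recipe behind hcga (hcga itself computes thousands of features; this sketch uses a handful of networkx descriptors and synthetic graph classes, not hcga's API):

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(G):
    """A few interpretable whole-graph descriptors (hcga computes thousands of these)."""
    return [
        nx.density(G),
        nx.average_clustering(G),
        nx.degree_assortativity_coefficient(G),
        nx.number_connected_components(G),
        np.mean([d for _, d in G.degree()]),
    ]

rng = np.random.default_rng(0)
graphs, labels = [], []
for i in range(60):
    seed = int(rng.integers(1_000_000))
    if i % 2 == 0:
        graphs.append(nx.erdos_renyi_graph(50, 0.12, seed=seed))   # class 0: random graphs
        labels.append(0)
    else:
        graphs.append(nx.watts_strogatz_graph(50, 6, 0.2, seed=seed))  # class 1: small-world
        labels.append(1)

X = np.array([graph_features(G) for G in graphs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```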

Seminar · Neuroscience

Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Neural Dynamics & Computation Lab, Stanford University
Jun 17, 2021

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
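
As a worked example of point (1), here is a minimal alternating-least-squares CP decomposition, the computation at the heart of tensor component analysis, applied to a synthetic neurons × time × trials tensor. A real analysis would use a maintained library such as tensorly; this is only a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K, R = 40, 50, 30, 3                   # neurons, timepoints, trials, rank

# Synthetic low-rank (rank-R) data tensor plus noise.
true = [rng.random((dim, R)) for dim in (N, T, K)]
data = np.einsum("nr,tr,kr->ntk", *true) + 0.01 * rng.normal(size=(N, T, K))

def unfold(X, mode):
    """Matricize the tensor along one mode."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

# Alternating least squares: update one factor matrix at a time.
factors = [rng.random((dim, R)) for dim in (N, T, K)]
for _ in range(100):
    for mode in range(3):
        others = [factors[m] for m in range(3) if m != mode]
        kr = khatri_rao(others[0], others[1])
        gram = (others[0].T @ others[0]) * (others[1].T @ others[1])
        factors[mode] = unfold(data, mode) @ kr @ np.linalg.pinv(gram)

recon = np.einsum("nr,tr,kr->ntk", *factors)
print("relative reconstruction error:",
      np.linalg.norm(data - recon) / np.linalg.norm(data))
```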

Seminar · Neuroscience

From genetics to neurobiology through transcriptomic data analysis

Ahmed Mahfouz
Leiden University Medical Center (LUMC)
May 6, 2021

Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a big step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to help with the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes; one example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g. Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments have presented significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
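
A generic illustration of the clustering step mentioned here (not the lab's own methods, which this abstract only summarizes): reduce a synthetic cell-by-gene matrix with PCA and cluster the cells. Real pipelines typically use toolkits such as scanpy with graph-based clustering.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells, n_genes, n_types = 3000, 2000, 5

# Synthetic counts: each cell type has its own mean expression profile.
type_means = rng.gamma(2.0, 1.0, size=(n_types, n_genes))
cell_type = rng.integers(0, n_types, n_cells)
counts = rng.poisson(type_means[cell_type])

# Standard preprocessing sketch: log-transform, reduce with PCA, then cluster.
X = np.log1p(counts.astype(float))
pcs = PCA(n_components=20, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=n_types, n_init=10, random_state=0).fit_predict(pcs)
print("cells per cluster:", np.bincount(labels))
```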

Seminar · Neuroscience · Recording

Mice alternate between discrete strategies during perceptual decision-making

Zoe Ashwood
Pillow lab, Princeton University
Feb 10, 2021

Classical models of perceptual decision-making assume that animals use a single, consistent strategy to integrate sensory evidence and form decisions during an experiment. In this talk, I aim to convince you that this common view is incorrect. I will show results from applying a latent variable framework, the “GLM-HMM”, to hundreds of thousands of trials of mouse choice data. Our analysis reveals that mice don’t lapse. Instead, mice switch back and forth between engaged and disengaged behavior within a single session, and each mode of behavior lasts tens to hundreds of trials.
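
For orientation, the sketch below wires together the ingredients of a GLM-HMM with illustrative (not fitted) parameters: per-state Bernoulli GLMs for choice, a sticky transition matrix, and a forward pass that evaluates the likelihood of a simulated choice sequence. It is not the speaker's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
stimulus = rng.normal(size=T)                     # signed stimulus strength per trial

# Two states: "engaged" (steep stimulus weight) and "disengaged" (flat, biased).
weights = np.array([[4.0, 0.0],                   # [stimulus weight, bias] per state
                    [0.2, 0.5]])
trans = np.array([[0.98, 0.02],                   # sticky state-transition matrix
                  [0.02, 0.98]])
init = np.array([0.5, 0.5])

def p_right(state, s):
    """Per-state Bernoulli GLM: probability of a rightward choice."""
    w, b = weights[state]
    return 1.0 / (1.0 + np.exp(-(w * s + b)))

# Simulate latent states and choices from the generative model.
states = np.empty(T, dtype=int)
choices = np.empty(T, dtype=int)
z = rng.choice(2, p=init)
for t in range(T):
    states[t] = z
    choices[t] = rng.random() < p_right(z, stimulus[t])
    z = rng.choice(2, p=trans[z])

# Forward algorithm: log-likelihood of the observed choices under the model.
def log_obs(t):
    probs = [p_right(k, stimulus[t]) if choices[t] else 1 - p_right(k, stimulus[t])
             for k in range(2)]
    return np.log(probs)

log_alpha = np.log(init) + log_obs(0)
for t in range(1, T):
    log_alpha = log_obs(t) + np.logaddexp.reduce(log_alpha[:, None] + np.log(trans), axis=0)

print("fraction of trials in the engaged state:", np.mean(states == 0))
print("log-likelihood per trial:", np.logaddexp.reduce(log_alpha) / T)
```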

Seminar · Neuroscience

Panel discussion: Practical advice for reproducibility in neuroscience

Dorothy Bishop, Verena Heise, Russ Poldrack, and Guillaume Rousselet
University of Oxford, Stanford University, University of Glasgow
Nov 10, 2020

This virtual, interactive panel on reproducibility in neuroscience will focus on practical advice that researchers at all career stages could implement to improve the reproducibility of their work, from power analyses and pre-registering reports to selecting statistical tests and data sharing. The event will comprise introductions of our speakers and how they came to be advocates for reproducibility in science, followed by a 25-minute discussion on reproducibility, including practical advice for researchers on how to improve their data collection, analysis, and reporting, and then 25 minutes of audience Q&A. In total, the event will last one hour and 15 minutes. Afterwards, some of the speakers will join us for an informal chat and Q&A reserved only for students/postdocs.
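
As one concrete example of the practical steps mentioned above, an a priori power analysis can be a single call in statsmodels; the effect size d = 0.5 below is a hypothetical planning value, not a recommendation from the panel.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-sample t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"d = 0.5, power = 0.8, alpha = 0.05 -> about {n_per_group:.0f} subjects per group")
```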

Seminar · Neuroscience

Biomedical Image and Genetic Data Analysis with machine learning; applications in neurology and oncology

Wiro Niessen
Erasmus MC
Nov 9, 2020

In this presentation I will show the opportunities and challenges of big data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques for studying brain ageing and prognosis in dementia, will be addressed. Also the concept of deep imaging, a full integration of medical imaging and machine learning, will be discussed. Finally, I will address the challenges of how to successfully integrate these technologies in daily clinical workflow.

Seminar · Neuroscience · Recording

Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning

Surya Ganguli
Stanford University
Oct 16, 2020

Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition.  However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling.  We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process.  In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.

Seminar · Neuroscience · Recording

Machine learning methods applied to dMRI tractography for the study of brain connectivity

Pamela Guevara
Department of Electrical Engineering, Faculty of Engineering, Universidad de Concepción, Chile
Aug 19, 2020

Tractography datasets, calculated from dMRI, represent the main white-matter (WM) structural connections in the brain. Thanks to advances in image acquisition and processing, the complexity and size of these datasets have constantly increased, and they also contain a large number of artifacts. We present some examples of algorithms, most of them based on classical machine learning approaches, to analyze these data and identify common connectivity patterns among subjects.
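
One classical ingredient in this area is the mean-direct-flip (MDF) streamline distance used by QuickBundles-style clustering. The sketch below (not the speaker's algorithms) clusters synthetic streamlines with MDF and average-linkage hierarchical clustering; real pipelines would use dedicated tools such as dipy and real tractograms.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n_points = 20
s = np.linspace(0, 1, n_points)

def bundle(offset, n=30):
    """Noisy copies of a curved template streamline, resampled to n_points."""
    template = np.stack([s * 50, offset + 10 * np.sin(np.pi * s), np.zeros_like(s)], axis=1)
    return [template + rng.normal(0, 1.0, template.shape) for _ in range(n)]

streamlines = bundle(0.0) + bundle(30.0)       # two synthetic bundles

def mdf(s1, s2):
    """Mean direct-flip distance: orientation-invariant distance between streamlines."""
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

n = len(streamlines)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = mdf(streamlines[i], streamlines[j])

Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=10.0, criterion="distance")
print("cluster sizes:", np.bincount(labels)[1:])
```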

Seminar · Neuroscience · Recording

African Neuroscience: Current Status and Prospects

Mahmoud Bukar Maina
University of Sussex
Jul 17, 2020

Understanding the function and dysfunction of the brain remains one of the key challenges of our time. However, an overwhelming majority of brain research is carried out in the Global North, by a minority of well-funded and intimately interconnected labs. In contrast, with an estimated one neuroscientist per million people in Africa, news about neuroscience research from the Global South remains sparse. Clearly, devising new policies to boost Africa’s neuroscience landscape is imperative. However, the policy must be based on accurate data, which is largely lacking. Such data must reflect the extreme heterogeneity of research outputs across the continent’s 54 countries. We have analysed all of Africa’s Neuroscience output over the past 21 years and uniquely verified the work performed in African laboratories. Our unique dataset allows us to gain accurate and in-depth information on the current state of African Neuroscience research, and to put it into a global context. The key findings from this work and recommendations on how African research might best be supported in the future will be discussed.

data analysis coverage

33 items

Seminars: 26
Positions: 7
Domain spotlight

Explore how data analysis research is advancing inside Neuro.
