Data Analysis
University of Chicago - Grossman Center for Quantitative Biology and Human Behavior
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.
Prof. Edmund Wascher
The core aspect of the position is the analysis of complex neurocognitive data (mainly EEG). The data come from different settings and will also be analyzed using machine learning/AI. The position involves scientific collaboration in a broad-based longitudinal study investigating the prerequisites for healthy lifelong working. The study is in its second wave of data collection and represents a worldwide-unique dataset. In addition to detailed standard psychological and physiological data, neurocognitive EEG data from more than 10 experimental settings and the entire genome of the subjects are available. For the analysis, machine learning will be used in addition to state-of-the-art EEG analysis methods. The position will be embedded in a group on computational neuroscience. Please find the full ad here: https://www.ifado.de/ifadoen/careers/current-job-offers/?noredirect=en_US#job1
Tansu Celikel
The School of Psychology (psychology.gatech.edu/) at the GEORGIA INSTITUTE OF TECHNOLOGY (www.gatech.edu/) invites nominations and applications for 5 open-rank tenure-track faculty positions with an anticipated start date of August 2023 or later. The successful applicant will be expected to demonstrate and develop an exceptional research program. The research area is open, but we are particularly interested in candidates whose scholarship complements existing School strengths in Adult Development and Aging, Cognition and Brain Science, Engineering Psychology, Work and Organizational Psychology, and Quantitative Psychology, and takes advantage of quantitative, mathematical, and/or computational methods. The School of Psychology is well-positioned in the College of Sciences at Georgia Tech, a University that promotes translational research from the laboratory and field to real-world applications in a variety of areas. The School offers multidisciplinary educational programs, graduate training, and research opportunities in the study of mind, brain, and behavior and the associated development of technologies that can improve human experience. Excellent research facilities support the School’s research and interdisciplinary graduate programs across the Institute. Georgia Tech’s commitment to interdisciplinary collaboration has fostered fruitful interactions between psychology faculty and faculty in the sciences, computing, business, engineering, design, and liberal arts. Located in the heart of Atlanta, one of the nation's most academic, entrepreneurial, creative, and diverse cities with excellent quality of life, the School actively develops and maintains a rich network of academic and applied behavioral science/industrial partnerships in and beyond Atlanta.
Candidates whose research programs foster collaborative interactions with other members of the School and further contribute to bridge-building with other academic and research units at Tech and industries are particularly encouraged to apply. Applications can be submitted online (bit.ly/Join-us-at-GT-Psych) and should include a Cover Letter, Curriculum Vitae (including a list of publications), Research Statement, Teaching Statement, DEI (diversity, equity, and inclusion) statement, and contact information of at least three individuals who have agreed to provide a reference in support of the application if asked. Evaluation of applications will begin October 10th, 2022 and continue until all positions are filled. Questions about this search can be addressed to faculty_search@psych.gatech.edu. Portal questions will be answered by Tikica Platt, the School’s HR director, and questions about positions by the co-chairs of the search committee, Ruth Kanfer and Tansu Celikel.
Arvind Kumar
We are interested in understanding how the basal ganglia and the cerebellum interact during a sensorimotor task. To this end, we use both experimental data (multiunit activity and behavior) and computational models. On the one hand, we will record multiunit neuronal activity in the cerebellum and basal ganglia while animals perform a motor task. On the other hand, we will use computational models to understand how activity in one brain region affects the representation of task-related activity in the other area. More info: https://www.kth.se/profile/arvindku/page/postdoctoral-researcher-position
Prof. Itzhak Fried, MD, PhD
The research involves investigation of the neural mechanisms of memory and cognition in humans. We collect and analyze electrophysiological data, including single-neuron activity and local field potentials, from human epilepsy patients during a variety of memory and cognitive tasks across the wake/sleep cycle, examine the relationships between neural signals and behavior, and study the effects of electrical stimulation (applied in a closed- or open-loop fashion) on neural signals and cognition.
Bei Xiao
The RA is to pursue research projects of their own as well as provide support for research carried out in the Xiao lab. Possible duties include: building VR/AR experimental interfaces with Unity3D, Python coding for behavioral data analysis, collecting data for psychophysical experiments, and training machine learning models.
Yashar Ahmadian
The postdoc will work on a collaborative project between the labs of Yashar Ahmadian at the Computational and Biological Learning Lab (CBL), and Zoe Kourtzi at the Psychology Department, both at the University of Cambridge. The project investigates the computational principles and circuit mechanisms underlying human visual perceptual learning, particularly the role of adaptive changes in the balance of cortical excitation and inhibition resulting from perceptual learning. The postdoc will be based in CBL, with free access to the Kourtzi lab in the Psychology department.
Dr. Ziad Nahas
Dr. Ziad Nahas (Interventional Psychiatry Lab) in the University of Minnesota Department of Psychiatry and Behavioral Sciences is seeking an outstanding candidate for a postdoctoral position to study the effects of neuromodulation on brain activity in mood disorders. Candidates should be passionate about advancing knowledge in the area of translational research on depressive disorders and other mental health conditions, with a focus on invasive and non-invasive brain stimulation treatments. The position is available June 1, 2023, and funding is available for at least two years.
Christian Leibold
The lab of Christian Leibold invites applications of postdoc candidates on topics related to the geometry of neural manifolds. We will use spiking neural network simulations, analysis of massively parallel recordings, as well as techniques from differential geometry to understand the dynamics of the neural population code in the hippocampal formation in relation to complex cognitive behaviors. Our research group combines modelling of neural circuits with the development of machine learning techniques for data analysis. We strive for a diverse, interdisciplinary, and collaborative work environment.
Dr. Demian Battaglia/Dr. Romain Goutagny
The postdoc position is under the joint co-mentoring of Dr. Demian Battaglia and Dr. Romain Goutagny at the University of Strasbourg, France, in the Functional System's Dynamics team – FunSy. The position starts as soon as possible and can last up to two years. The job offer is funded by the French ANR 'HippoComp' project, which focuses on the complexity of hippocampal oscillations and the hypothesis that such complexity can serve as a computational resource. The team performs electrophysiological recordings in the hippocampus and cortex during spatial navigation and memory tasks in mice (wild-type and mutants developing various neuropathologies) and has access to vast data through local and international cooperation. They use a large spectrum of computational tools, ranging from time-series and network analyses, information theory, and machine learning to multi-scale computational modeling.
Dr Andrej Bicanski
This project involves modelling the staggered development and the decline with age of spatial coding in the mammalian brain, as well as data analysis of single neuron recordings. The position is based at Newcastle University, UK, with a rotation in the lab of Prof. Colin Lever in Durham, UK. The project is fully funded for 4 years by the BBSRC. Both international and UK students can apply, and fees are covered.
Prof. Maxime Baud/Dr. Timothée Proix
A postdoc position is available under the shared supervision of Prof. Maxime Baud and Dr. Timothée Proix, who both specialize in quantitative neuroscience research. Together, they are running a three-year clinical trial involving patients with epilepsy who received a minimally invasive EEG device beneath the scalp for chronic (months-long) recording of brain signals during wake and sleep. The postdoc will analyze these massive amounts of EEG data and develop machine learning algorithms to forecast seizures, aiming to estimate the risk of a seizure 24 hours in advance. The project lies at the interface between machine learning and EEG data analysis.
Francesco Piccialli
The main objective of the research activity will be the design and application of advanced Machine Learning and Deep Learning methodologies for data analysis in the context of a Smart City and Digital Society. The aim is to develop predictive models and intelligent systems capable of extracting meaningful information from the data collected within a Smart City, enabling optimized resource management, improving the quality of life for citizens, and promoting a more effective and sustainable digital society. The tasks include: performing cutting-edge research on Machine and Deep Learning methodologies for the Smart City; planning, implementing, and executing research on the project; contributing to research projects, including lab and equipment setup, data collection, and data analysis; participating in routine laboratory operations, such as planning and preparing experiments, lab maintenance, and lab procedures; and coordinating with the PI and other team members on strategy and project planning.
Rune W. Berg
The lab of Rune W. Berg is looking for a highly motivated and dynamic researcher for a 3-year position to start January 1st, 2024. The topic is the neuroscience of motor control, with a focus on locomotion, spinal circuitry, and connections with the brain. The person will: 1) perform experimental recordings of neurons in the brain and spinal cord of awake behaving rats using Neuropixels and Neuronexus electrodes combined with optogenetics; 2) analyze the large amount of data generated from these experiments, including tissue processing; and 3) participate in the development of a new theory of motor control.
Mathieu Desroches
The aim of the project is to develop a multiscale model of Dravet syndrome, from ionic channels of interacting neurons to large neural populations. We will use various modeling frameworks, adapted to the scale, from piecewise-deterministic Markov processes to mean-field formalism. The postdoc will perform a mathematical analysis of the models, extensive numerical simulations as well as data analysis using neural recordings from our experimental partners.
Helena Dalmau Felderhoff
The Max Planck Institutes for Biological Cybernetics and Intelligent Systems as well as the AI Center in Tübingen & Stuttgart (Germany) offer up to 10 students at the Bachelor or Master level paid three-month internships during the summer of 2024. Successful applicants will work with top-level scientists on research projects spanning machine learning, electrical engineering, theoretical neuroscience, behavioral experiments, robotics, and data analysis. The CaCTüS Internship is aimed at young scientists who are held back by personal, financial, regional, or societal constraints, to help them develop their research careers and gain access to first-class education. The program is designed to foster inclusion, diversity, equity, and access to excellent scientific facilities. We specifically encourage applications from students living in low- and middle-income countries that are currently underrepresented in the Max Planck Society research community.
Prof. Tatjana Tchumatchenko
Postdoc position: The postdoc candidate will be involved in a computational project addressing how neurons efficiently synthesize and distribute proteins to ensure that these are readily available across all synapses, and will analyze data and model synaptic plasticity changes in order to understand health and disease states computationally. This work is centered on computational tools, includes pen-and-paper calculations, data analysis, and numerical simulations, and requires an interdisciplinary mindset. PhD position: The PhD candidate will conduct circuit-level data analysis and modeling of neural activity states. They will contribute to the development of machine learning algorithms to analyse imaging data or to distinguish different behavioral activity states. This work is centered on dynamical systems methods, data analysis, and numerical simulations and requires an interdisciplinary mindset. Master students interested in conducting Master thesis research (6-12 months) related to the two projects above are welcome to apply.
Dr. Stefan Fürtinger
Research Software Developer (f/m/d): closely collaborate with resident research groups developing custom-tailored software applications for experimental data acquisition and analysis. Data processing is performed on premises using a local high-performance computing (HPC) cluster comprising multiple hardware architectures (x86, IBM Power, GPU). Main responsibilities include development of scientific software applications in Python, administration of on-premises software development platforms (GitLab, SVN, Perforce), and platform-specific code modifications and patch development for existing open-source analysis software. Linux System Administrator (f/m/d): maintain and tune the Linux infrastructure of our on-premises HPC cluster comprising multiple hardware architectures (x86, IBM Power, GPU). Main responsibilities include maintenance and day-to-day operations of HA cluster filesystems, support and troubleshooting of HPC-related user questions, and optimizing cluster efficiency and performance.
Silvia Lopez-Guzman
The Unit on Computational Decision Neuroscience (CDN) at the National Institute of Mental Health is seeking a full-time Data Scientist/Data Analyst. The lab is focused on understanding the neural and computational bases of adaptive and maladaptive decision-making and their relationship to mental health. Current studies investigate how internal states lead to biases in decision-making and how this is exacerbated in mental health disorders. Our approach involves a combination of computational model-based tasks, questionnaires, biosensor data, fMRI, and intracranial recordings. The main models of interest come from neuroeconomics, reinforcement learning, Bayesian inference, signal detection, and information theory. The main tasks for this position include computational modeling of behavioral data from decision-making and other cognitive tasks; statistical analysis of task-based, clinical, physiological, and neuroimaging data; and data visualization for scientific presentations, public communication, and academic manuscripts. The candidate is expected to demonstrate experience with best practices for the development of well-documented, reproducible programming pipelines for data analysis that facilitate sharing and collaboration and live up to our open-science philosophy, as well as to our data management and sharing commitments at NIH.
Numa Dancause, Paul Cisek
The postdoctoral trainees will be responsible for 1) developing and deploying automated approaches to process signals recorded in labs into analysis-ready datasets, and 2) creating a unified data storage and management framework to facilitate data sharing and collaborative neuro-AI analyses. They will advance cutting-edge platforms for large-scale behavioral and neurophysiology experiments, participate in the advancement of open source in neuroscience, and work with unique electrophysiological datasets to develop novel, high-dimensional analytical tools.
Dr. Udo Ernst
In this project we want to study the organization and optimization of flexible information processing in neural networks, with a specific focus on the visual system. You will use network modelling, numerical simulation, and mathematical analysis to investigate fundamental aspects of flexible computation, such as task-dependent coordination of multiple brain areas for efficient information processing, as well as the emergence of flexible circuits originating from learning schemes which simultaneously optimize for function and flexibility. These studies will be complemented by biophysically realistic modelling and data analysis in collaboration with experimental work done in the lab of Prof. Dr. Andreas Kreiter, also at the University of Bremen. Here we will investigate selective attention as a central aspect of flexibility in the visual system, involving task-dependent coordination of multiple visual areas.
Maximilian Riesenhuber, PhD
We have an opening for a postdoc position investigating the neural bases of deep multimodal learning in the brain. The project involves EEG and laminar 7T imaging (in collaboration with Dr. Peter Bandettini’s lab at NIMH) to test computational hypotheses for how the brain learns multimodal concept representations. Responsibilities of the postdoc include running EEG and fMRI experiments, data analysis and manuscript preparation. Georgetown University has a vibrant neuroscience community with over fifty labs participating in the Interdisciplinary Program in Neuroscience and a number of relevant research centers, including the new Center for Neuroengineering (cne.georgetown.edu). Interested candidates should submit a CV, a brief (1 page) statement of research interests, representative reprints, and the names and contact information of three references to Interfolio via https://apply.interfolio.com/148520. Faxed, emailed, or mailed applications will not be accepted. Questions about the position can be directed to Maximilian Riesenhuber (mr287@georgetown.edu).
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
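The reservoir-computing idea mentioned above can be sketched in a few lines. The toy below is a generic echo state network with random sparse recurrent wiring, not the fly-connectome motifs from the talk; the reservoir size, sparsity, and leak rate are illustrative assumptions, and the trainable linear readout is omitted.

```python
import math
import random

random.seed(0)

N = 50           # reservoir size (illustrative)
p_connect = 0.1  # connection sparsity (illustrative)

# Random sparse recurrent weight matrix; a connectome-derived motif mask
# would replace this random wiring in the experiments described above.
W = [[random.gauss(0, 1) if random.random() < p_connect else 0.0
      for _ in range(N)] for _ in range(N)]
w_in = [random.gauss(0, 1) for _ in range(N)]

def step(state, u, leak=0.3):
    """One leaky-tanh reservoir update driven by scalar input u."""
    new = []
    for i in range(N):
        pre = w_in[i] * u + sum(W[i][j] * state[j] for j in range(N))
        new.append((1 - leak) * state[i] + leak * math.tanh(pre))
    return new

# Drive the reservoir with a sine input; a readout (not shown) would be
# fit by linear regression on the collected states.
state = [0.0] * N
for t in range(100):
    state = step(state, math.sin(0.1 * t))
```

Only the fixed random weights differ between a "random" and a "connectome-constrained" reservoir, which is what makes wiring motifs easy to swap in and compare.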
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
State-of-the-Art Spike Sorting with SpikeInterface
This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
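As a taste of what such a pipeline involves, here is a deliberately minimal, stdlib-only sketch of the first steps (spike detection and a crude amplitude-based split). This is not SpikeInterface code — the library wraps full sorters plus preprocessing, quality metrics, and drift correction — and all amplitudes, spike times, and thresholds below are made up for illustration.

```python
import random

random.seed(1)

# Synthetic extracellular trace: Gaussian noise plus two units with
# distinct (made-up) spike amplitudes standing in for a real recording.
n_samples = 30000
trace = [random.gauss(0, 1) for _ in range(n_samples)]
true_spikes = {-8.0: [3000, 9000, 15000], -20.0: [6000, 21000, 27000]}
for amp, times in true_spikes.items():
    for t in times:
        trace[t] += amp

# Step 1: detection -- negative threshold crossings that are local minima.
threshold = -5.0
peaks = [t for t in range(1, n_samples - 1)
         if trace[t] < threshold
         and trace[t] <= trace[t - 1] and trace[t] <= trace[t + 1]]

# Step 2: a crude "sorting" by peak amplitude. Real sorters cluster full
# waveform features, typically after filtering, whitening, and drift correction.
units = {"small": [t for t in peaks if trace[t] > -14.0],
         "large": [t for t in peaks if trace[t] <= -14.0]}
```

The webinar's value is precisely that SpikeInterface standardizes these steps (and many more) behind one API, so the same analysis script can run against different recording formats and sorters.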
1.8 billion regressions to predict fMRI (journal club)
Public journal club where this week Mihir will present on the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), where the authors use hundreds of pretrained model embeddings to best predict fMRI activity.
Exploring the Potential of High-Density Data for Neuropsychological Testing with Coregraph
Coregraph is a tool under development that allows us to collect high-density data patterns during the administration of classic neuropsychological tests such as the Trail Making Test and Clock Drawing Test. These tests are widely used to evaluate cognitive function and screen for neurodegenerative disorders, but traditional methods of data collection yield only sparse information, such as test completion time or error types. By contrast, the high-density data collected with Coregraph may contribute to a better understanding of the cognitive processes involved in executing these tests. In addition, Coregraph may revolutionize the field of cognitive evaluation by aiding in the prediction of cognitive deficits and in the identification of early signs of neurodegenerative disorders such as Alzheimer's dementia. By analyzing high-density graphomotor data through techniques like manual feature engineering and machine learning, we can uncover patterns and relationships that would otherwise be hidden with traditional methods of data analysis. We are currently in the process of determining the most effective methods of feature extraction and feature analysis to develop Coregraph to its full potential.
Analyzing artificial neural networks to understand the brain
In the first part of this talk I will present work showing that recurrent neural networks can replicate broad behavioral patterns associated with dynamic visual object recognition in humans. An analysis of these networks shows that different types of recurrence use different strategies to solve the object recognition problem. The similarities between artificial neural networks and the brain present another opportunity, beyond using them just as models of biological processing. In the second part of this talk, I will discuss—and solicit feedback on—a proposed research plan for testing a wide range of analysis tools frequently applied to neural data on artificial neural networks. I will present the motivation for this approach as well as the form the results could take and how this would benefit neuroscience.
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there must be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing the much-needed tools not only to analyse brain activity data but also to design better experiments that expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
Toward an open science ecosystem for neuroimaging
It is now widely accepted that openness and transparency are keys to improving the reproducibility of scientific research, but many challenges remain to adoption of these practices. I will discuss the growth of an ecosystem for open science within the field of neuroimaging, focusing on platforms for open data sharing and open source tools for reproducible data analysis. I will also discuss the role of the Brain Imaging Data Structure (BIDS), a community standard for data organization, in enabling this open science ecosystem, and will outline the scientific impacts of these resources.
Experimental Neuroscience Bootcamp
This course provides a fundamental foundation in the modern techniques of experimental neuroscience. It introduces the essentials of sensors, motor control, microcontrollers, programming, data analysis, and machine learning by guiding students through the hands-on construction of an increasingly capable robot. In parallel, related concepts in neuroscience are introduced as nature’s solution to the challenges students encounter while designing and building their own intelligent system.
Modern Approaches to Behavioural Analysis
The goal of neuroscience is to understand how the nervous system controls behaviour, not only in the simplified environments of the lab, but also in the natural environments for which nervous systems evolved. In pursuing this goal, neuroscience research is supported by an ever-larger toolbox, ranging from optogenetics to connectomics. However, these tools are often coupled with reductionist approaches for linking nervous systems and behaviour. This course will introduce advanced techniques for measuring and analysing behaviour, as well as three fundamental principles necessary for understanding biological behaviour: (1) morphology and environment; (2) action-perception closed loops and purpose; and (3) individuality and historical contingencies [1]. [1] Gomez-Marin, A., & Ghazanfar, A. A. (2019). The life of behavior. Neuron, 104(1), 25-36
Pynapple: a light-weight python package for neural data analysis - webinar + tutorial
In systems neuroscience, datasets are multimodal and include data streams of various origins: multichannel electrophysiology, 1- or 2-photon calcium imaging, behavior, etc. Often, the exact nature of the data streams is unique to each lab, if not each project. Analyzing these datasets in an efficient and open way is crucial for collaboration and reproducibility. In this combined webinar and tutorial, Adrien Peyrache and Guillaume Viejo will present Pynapple, a Python-based data analysis pipeline for systems neuroscience. Designed for flexibility and versatility, Pynapple allows users to perform cross-modal neural data analysis via a common programming approach which facilitates easy sharing of both analysis code and data.
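The core abstraction behind this common programming approach — time series that can be restricted to epochs of interest — can be illustrated with a toy sketch. The class and method names below (Tsd, IntervalSet, restrict) echo Pynapple's objects, but this is a simplified stand-in, not the library's actual implementation.

```python
class IntervalSet:
    """A set of (start, end) epochs, e.g. periods of running."""
    def __init__(self, starts, ends):
        self.epochs = list(zip(starts, ends))

    def contains(self, t):
        return any(s <= t <= e for s, e in self.epochs)

class Tsd:
    """A time series: timestamps paired with values (e.g. a signal)."""
    def __init__(self, times, values):
        self.times = list(times)
        self.values = list(values)

    def restrict(self, iset):
        """Keep only the samples falling inside the epochs of `iset`."""
        kept = [(t, v) for t, v in zip(self.times, self.values)
                if iset.contains(t)]
        ts, vs = zip(*kept) if kept else ((), ())
        return Tsd(ts, vs)

# Example: restrict a signal to two behavioural epochs (made-up numbers).
signal = Tsd(times=[0.5, 1.5, 2.5, 3.5, 4.5], values=[10, 20, 30, 40, 50])
run_epochs = IntervalSet(starts=[1.0, 4.0], ends=[3.0, 5.0])
running_only = signal.restrict(run_epochs)  # keeps t = 1.5, 2.5, 4.5
```

Because spikes, calcium traces, and behavioural variables all reduce to "timestamps plus values", one restrict-style operation works across modalities — which is the cross-modal convenience the webinar highlights.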
How neural circuits organize and learn during development
Generating brain circuits that are both flexible and stable requires the coordination of powerful developmental mechanisms acting at different scales, including activity-dependent synaptic plasticity and changes in single-neuron properties. During early development, before any sensory experience, the brain prepares to compute efficiently and generate behavior reliably through patterned spontaneous activity. After the onset of sensory experience, ongoing activity continues to modify sensory circuits and plays an important functional role in the mature brain. Using quantitative data analysis, experiment-driven theory, and computational modeling, I will present a framework for how neural circuits are built and organized during early postnatal development into functional units, and how they are modified by intact and perturbed sensory-evoked activity. Inspired by experimental data from sensory cortex, I will then show how neural circuits use the resulting non-random connectivity to flexibly gate a network’s response, providing a mechanism for routing information.
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost (~$1000) behavioral platform was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked with a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. The platform allows the incorporation of optogenetic stimulation and fluorescence imaging, and its design lets a single host computer control multiple platforms for training several animals simultaneously.
Mesmerize: A blueprint for shareable and reproducible analysis of calcium imaging data
Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. Mesmerize encompasses the entire process of calcium imaging analysis from raw data to interactive visualizations. Mesmerize allows you to create FAIR-functionally linked datasets that are easy to share. The analysis tools are applicable for a broad range of biological experiments and come with GUI interfaces that can be used without requiring a programming background.
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory, and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increase the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics via a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism in which tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism.
Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
Maths, AI and Neuroscience meeting
To understand brain function and to develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning, and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning now provide much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning deal, explicitly or implicitly, with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence, and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; on explainable AI; on how AI can help neuroscience; and on the extent to which the brain may use algorithms similar to those of modern machine learning. Finally, we will wrap up with a discussion of aspects of neural hardware that may not have been considered in machine learning.
CaImAn: large-scale batch and online analysis of calcium imaging data
Advances in fluorescence microscopy enable monitoring of larger brain areas in vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods for problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, and it scales well on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis of streaming data. To benchmark the performance of CaImAn, we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting the locations of active neurons.
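The benchmarking step described above amounts to matching detected neuron locations against human annotations and scoring the agreement. A hedged sketch of one simple way to do this, using a greedy nearest-centroid match (not CaImAn's actual evaluation code; `max_dist` is an arbitrary illustrative threshold):

```python
import math

# Greedy matching of detected neuron centroids to ground-truth annotations
# within a pixel-distance threshold, scored as precision/recall/F1.
# Illustrative sketch only, not CaImAn's benchmarking implementation.

def match_f1(detected, annotated, max_dist=5.0):
    unmatched = list(annotated)   # annotations not yet claimed by a detection
    tp = 0
    for d in detected:
        best = min(unmatched, key=lambda a: math.dist(d, a), default=None)
        if best is not None and math.dist(d, best) <= max_dist:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, two correct detections plus one spurious one against two annotations yields precision 2/3, recall 1.0, F1 0.8.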
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a Jupyter notebook, a well-commented interactive development environment (IDE) designed to operate across platforms. GuPPy presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
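One of the standard preprocessing steps a tool like GuPPy automates is converting a raw fluorescence trace to ΔF/F. A minimal sketch of the idea, assuming a simple median baseline (real pipelines typically fit an isosbestic control channel instead; this is not GuPPy's implementation):

```python
# Minimal sketch of fiber-photometry normalization: ΔF/F = (F - F0) / F0,
# here taking the median of the trace as the baseline F0. Illustrative
# only; not GuPPy's actual preprocessing.

def delta_f_over_f(signal):
    """Return the ΔF/F trace using the median of the signal as F0."""
    s = sorted(signal)
    n = len(s)
    f0 = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [(f - f0) / f0 for f in signal]

trace = [10.0, 10.0, 10.0, 12.0, 10.0]   # a transient on a flat baseline
print(delta_f_over_f(trace))             # [0.0, 0.0, 0.0, 0.2, 0.0]
```

Expressing the transient as a fraction of baseline (here, a 20% increase) makes recordings comparable across animals and sessions despite differences in absolute fluorescence.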
From Sensors to Health Data Analysis
Talks and panel discussions around the LifeQ process of moving from the embedded engineering of sensors on edge devices to big health data analysis in the cloud.
Neural Population Dynamics for Skilled Motor Control
The ability to reach, grasp, and manipulate objects is a remarkable expression of motor skill, and the loss of this ability in injury, stroke, or disease can be devastating. These behaviors are controlled by the coordinated activity of tens of millions of neurons distributed across many CNS regions, including the primary motor cortex. While many studies have characterized the activity of single cortical neurons during reaching, the principles governing the dynamics of large, distributed neural populations remain largely unknown. Recent work in primates has suggested that during the execution of reaching, motor cortex may autonomously generate the neural pattern controlling the movement, much like the spinal central pattern generator for locomotion. In this seminar, I will describe recent work that tests this hypothesis using large-scale neural recording, high-resolution behavioral measurements, dynamical systems approaches to data analysis, and optogenetic perturbations in mice. We find, by contrast, that motor cortex requires strong, continuous, and time-varying thalamic input to generate the neural pattern driving reaching. In a second line of work, we demonstrate that the cortico-cerebellar loop is not critical for driving the arm towards the target, but instead fine-tunes movement parameters to enable precise and accurate behavior. Finally, I will describe my future plans to apply these experimental and analytical approaches to the adaptive control of locomotion in complex environments.
Space wrapped onto a grid cell torus
Entorhinal grid cells, so-called because of their hexagonally tiled spatial receptive fields, are organized in modules which, collectively, are believed to form a population code for the animal’s position. Here, we apply topological data analysis to simultaneous recordings of hundreds of grid cells and show that joint activity of grid cells within a module lies on a toroidal manifold. Each position of the animal in its physical environment corresponds to a single location on the torus, and each grid cell is preferentially active within a single “field” on the torus. Toroidal firing positions persist between environments, and between wakefulness and sleep, in agreement with continuous attractor models of grid cells.
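The toroidal parameterization can be illustrated with a toy model: express the animal's 2D position in the grid module's lattice basis and keep only the fractional parts, so positions one lattice period apart map to the same point on the torus. A sketch under assumed hexagonal lattice vectors and an arbitrary spacing `lam` (not the authors' analysis, which infers the torus directly from population activity using topological data analysis):

```python
import math

# Toy model of a grid module's toroidal code: a 2D position is reduced to
# its phase relative to a hexagonal lattice. Lattice vectors
# e1 = lam*(1, 0) and e2 = lam*(1/2, sqrt(3)/2) are illustrative choices.

def toroidal_phase(x, y, lam=0.5):
    """Map position (x, y) to phases (a, b) in [0, 1) x [0, 1) on the torus."""
    b = y / (lam * math.sqrt(3) / 2)   # coordinate along lattice axis e2
    a = x / lam - b / 2                # coordinate along lattice axis e1
    return a % 1.0, b % 1.0

# Positions one full lattice period apart land on the same torus point:
p1 = toroidal_phase(0.10, 0.20)
p2 = toroidal_phase(0.10 + 0.5, 0.20)  # shifted by one e1 period
assert all(abs(u - v) < 1e-9 for u, v in zip(p1, p2))
```

This many-to-one mapping is why a single torus location corresponds to a lattice of physical positions, consistent with each grid cell firing in one "field" on the torus.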
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks: decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical, and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets while retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality: dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present a new framework to define intrinsic notions of dimension on networks: relative, local, and global dimension. We showcase our method on various physical systems.
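The hcga idea, turning each graph into a fixed-length vector of interpretable features, can be caricatured in a few lines. A toy sketch computing a handful of features (a real hcga run computes thousands; the function and key names here are hypothetical):

```python
# Toy version of "highly comparative graph analysis": summarize a graph as
# a vector of named, interpretable features suitable for downstream
# statistical learning. Hypothetical sketch, not hcga's implementation.

def graph_features(edges):
    """edges: list of undirected (u, v) pairs; returns a feature dict."""
    nodes = sorted({n for e in edges for n in e})
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    degrees = [len(adj[n]) for n in nodes]
    n, m = len(nodes), len(edges)
    return {
        "n_nodes": n,
        "n_edges": m,
        "density": 2 * m / (n * (n - 1)) if n > 1 else 0.0,
        "mean_degree": sum(degrees) / n,
        "max_degree": max(degrees),
    }

triangle = [(0, 1), (1, 2), (0, 2)]
print(graph_features(triangle))
```

Stacking such vectors across a dataset of graphs yields an ordinary feature matrix, so standard classifiers apply while each column keeps a plain-language interpretation.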
Understanding neural dynamics in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: (1) how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; (2) how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; (3) deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; (4) algorithmic approaches for simplifying deep network models of perception; (5) optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
From genetics to neurobiology through transcriptomic data analysis
Over the past years, genetic studies have uncovered hundreds of genetic variants associated with complex brain disorders. While this represents a major step forward in understanding the genetic etiology of brain disorders, the functional interpretation of these variants remains challenging. We aim to support the functional characterization of variants through transcriptomic data analysis. For instance, we rely on brain transcriptome atlases, such as the Allen Brain Atlases, to infer functional relations between genes; one example is the identification of signaling mechanisms of steroid receptors. Further, by integrating brain transcriptome atlases with neuropathology and neuroimaging data, we identify key genes and pathways associated with brain disorders (e.g., Parkinson's disease). With technological advances, we can now profile gene expression in single cells at large scale. These developments pose significant computational challenges. Our lab focuses on developing scalable methods to identify cells in single-cell data through interactive visualization, scalable clustering, classification, and interpretable trajectory modelling. We also work on methods to integrate single-cell data across studies and technologies.
An open-source experimental framework for automation of cell biology experiments
Modern biological methods often require a large number of experiments to be conducted. For example, dissecting the molecular pathways involved in a variety of biological processes in neurons and non-excitable cells requires high-throughput compound library or RNAi screens. Modern data analysis methods such as deep learning, which have been successfully applied to a number of biological and medical questions, are another example requiring large datasets. In this talk we will describe an open-source platform that allows such experiments to be automated. The platform consists of an XY stage, a perfusion system, and an epifluorescent microscope with autofocusing. It is straightforward to build and can be used for different experimental paradigms, ranging from immunolabeling and routine characterisation of large numbers of cell lines to high-throughput imaging of fluorescent reporters.
Mice alternate between discrete strategies during perceptual decision-making
Classical models of perceptual decision-making assume that animals use a single, consistent strategy to integrate sensory evidence and form decisions during an experiment. In this talk, I aim to convince you that this common view is incorrect. I will show results from applying a latent variable framework, the “GLM-HMM”, to hundreds of thousands of trials of mouse choice data. Our analysis reveals that mice don’t lapse. Instead, mice switch back and forth between engaged and disengaged behavior within a single session, and each mode of behavior lasts tens to hundreds of trials.
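The dwell times described above (tens to hundreds of trials per mode) fall naturally out of a Markov chain with "sticky" states: with per-trial stay probability p, the expected dwell is 1/(1−p) trials. A toy simulation of just the state sequence (a real GLM-HMM additionally fits per-state GLM choice weights; the parameter values here are illustrative):

```python
import random

# Two-state "sticky" Markov chain over trials, mimicking switches between
# engaged (0) and disengaged (1) modes. With p_stay = 0.98 the expected
# dwell time is 1 / (1 - 0.98) = 50 trials. Toy sketch, not a fitted
# GLM-HMM; parameter values are illustrative.

def simulate_states(n_trials, p_stay=0.98, seed=0):
    rng = random.Random(seed)
    state, states = 0, []
    for _ in range(n_trials):
        states.append(state)
        if rng.random() > p_stay:   # rare switch between the two modes
            state = 1 - state
    return states

def mean_dwell(states):
    """Average length of a run of consecutive identical states."""
    runs, length = [], 1
    for prev, cur in zip(states, states[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return sum(runs) / len(runs)
```

Running `mean_dwell(simulate_states(100_000))` gives a value near the theoretical 50 trials, matching the session-internal switching timescale the GLM-HMM uncovers.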
Reproducible EEG from raw data to publication figures
In this talk I will present recent developments in data sharing, organization, and analysis that allow fully reproducible workflows to be built. First, I will present the Brain Imaging Data Structure (BIDS) and discuss how it supports such workflows, showing some new tools to read, import, and create studies from EEG data organized that way. Second, I will present several newly developed tools for reproducible pre-processing and statistical analyses. Although it does take some extra effort, I will argue that it is largely feasible to make most EEG data analysis fully reproducible.
Panel discussion: Practical advice for reproducibility in neuroscience
This virtual, interactive panel on reproducibility in neuroscience will focus on practical advice that researchers at all career stages could implement to improve the reproducibility of their work, from power analyses and pre-registering reports to selecting statistical tests and data sharing. The event will comprise introductions of our speakers and how they came to be advocates for reproducibility in science, followed by a 25-minute discussion on reproducibility, including practical advice for researchers on how to improve their data collection, analysis, and reporting, and then 25 minutes of audience Q&A. In total, the event will last one hour and 15 minutes. Afterwards, some of the speakers will join us for an informal chat and Q&A reserved only for students/postdocs.
Biomedical Image and Genetic Data Analysis with machine learning; applications in neurology and oncology
In this presentation I will show the opportunities and challenges of big data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques, for studying brain ageing and prognosis in dementia, will be addressed. I will also discuss the concept of deep imaging, a full integration of medical imaging and machine learning. Finally, I will address the challenge of successfully integrating these technologies into daily clinical workflow.
Theoretical and computational approaches to neuroscience with complex models in high dimensions across multiple timescales: from perception to motor control and learning
Remarkable advances in experimental neuroscience now enable us to simultaneously observe the activity of many neurons, thereby providing an opportunity to understand how the moment by moment collective dynamics of the brain instantiates learning and cognition. However, efficiently extracting such a conceptual understanding from large, high dimensional neural datasets requires concomitant advances in theoretically driven experimental design, data analysis, and neural circuit modeling. We will discuss how the modern frameworks of high dimensional statistics and deep learning can aid us in this process. In particular we will discuss: how unsupervised tensor component analysis and time warping can extract unbiased and interpretable descriptions of how rapid single trial circuit dynamics change slowly over many trials to mediate learning; how to tradeoff very different experimental resources, like numbers of recorded neurons and trials to accurately discover the structure of collective dynamics and information in the brain, even without spike sorting; deep learning models that accurately capture the retina’s response to natural scenes as well as its internal structure and function; algorithmic approaches for simplifying deep network models of perception; optimality approaches to explain cell-type diversity in the first steps of vision in the retina.
Machine learning methods applied to dMRI tractography for the study of brain connectivity
Tractography datasets, computed from diffusion MRI (dMRI), represent the main white-matter structural connections in the brain. Thanks to advances in image acquisition and processing, the complexity and size of these datasets have steadily increased, and they also contain a large number of artifacts. We present examples of algorithms, most of them based on classical machine learning approaches, for analyzing these data and identifying common connectivity patterns across subjects.
African Neuroscience: Current Status and Prospects
Understanding the function and dysfunction of the brain remains one of the key challenges of our time. However, an overwhelming majority of brain research is carried out in the Global North, by a minority of well-funded and intimately interconnected labs. In contrast, with an estimated one neuroscientist per million people in Africa, news about neuroscience research from the Global South remains sparse. Clearly, devising new policies to boost Africa’s neuroscience landscape is imperative. However, policy must be based on accurate data, which is largely lacking, and such data must reflect the extreme heterogeneity of research outputs across the continent’s 54 countries. We have analysed all of Africa’s neuroscience output over the past 21 years and uniquely verified the work performed in African laboratories. This unique dataset gives us accurate and in-depth information on the current state of African neuroscience research and allows us to put it into a global context. The key findings from this work, and recommendations on how African research might best be supported in the future, will be discussed.
Physics of Behavior: Now that we can track (most) everything, what can we do with the data?
We will organize the workshop around one question: “Now that we can track (most) everything, what can we do with the data?” Given the recent dramatic advances in technology, we now have behavioral data sets with orders of magnitude more accuracy, dimensionality, diversity, and size than we had even a few years ago. That being said, there is still little agreement as to what theoretical frameworks can inform our understanding of these data sets and suggest new experiments we can perform. We hope that after this workshop we’ll see a variety of new ideas and perhaps gain some inspiration. We have invited eight speakers, each studying different systems, scales, and topics, to provide 10 minute presentations focused on the above question, with another 10 minutes set aside for questions/discussions (moderated by the two of us). Although we naturally expect speakers to include aspects of their own work, we have encouraged all of them to think broadly and provocatively. We are also hoping to organize some breakout sessions after the talks so that we can have some more expanded discussions about topics arising during the meeting.
A Bayesian hierarchical latent variable model for spike train data analysis
COSYNE 2023
Neuroformer: A Transformer Framework for Multimodal Neural Data Analysis
COSYNE 2023
Meta-Dynamical State Space Models for Integrative Neural Data Analysis
COSYNE 2025
An in-depth investigation of motor and non-motor symptoms using diffusion tensor imaging (DTI) measures in Parkinson's disease (PD) patients: A PPMI data analysis
FENS Forum 2024
Streamlining electrophysiology data analysis: A Python-based workflow for efficient integration and processing
FENS Forum 2024
Topological data analysis of cortical word representations in health and schizophrenia
FENS Forum 2024
Topological data analysis reveals brain connectivity differences between schizophrenia subjects and healthy controls
FENS Forum 2024