Python
Dr Adam Tyson
We are recruiting a Research Software Engineer to work between the Sainsbury Wellcome Centre (SWC) Neuroinformatics Unit (NIU) and Advanced Microscopy facilities (AMF) to provide support for the analysis of light microscopy datasets. The AMF provides support for all light microscopy at SWC including both in-vivo functional imaging within research laboratories and histological imaging performed within the facility itself. The facility provides histology and tissue clearing as a service, along with custom-built microscopes (lightsheet and serial-section two-photon) for whole-brain imaging. The NIU (https://neuroinformatics.dev) is a Research Software Engineering team dedicated to the development of high quality, accurate, robust, easy to use and maintainable open-source software for neuroscience and machine learning. We collaborate with researchers and other software engineers to advance research in the two research centres and make new algorithms and tools available to the global community. The NIU leads the development of the BrainGlobe (https://brainglobe.info/) computational neuroanatomy initiative. The aim of this position is to work in both the NIU and AMF to develop data processing and analysis software and advise researchers how best to analyse their data. The postholder will be embedded within both teams to help optimise all steps in the software development process.
Tanya Brown
As part of an externally funded project with members of the Cogitate Consortium, we are seeking to hire a Data Scientist or Scientific Software Engineer, ideally one with a background in research data management (RDM), FAIR data, and database administration, who will contribute to establishing the data architecture infrastructure for open and reusable data, generate experimental code, and advance the development of reproducible neuroscience tools and processing pipelines in interdisciplinary research projects. The position will involve creating tools for the efficient organization and exploration of openly shared raw and processed datasets. It is also ideal for networking in the open science community, as it includes interaction with the open (neuro)science community, and it is a unique opportunity for someone keen to contribute to the development of open science and large-scale collaborations, as well as to community efforts and dissemination.
Your tasks:
-- Preparing and reviewing data for open sharing with the community;
-- Developing, testing and implementing scientific software, i.e., reproducible analysis pipelines and data storage for open science building on the BIDS standard;
-- Reviewing code for reproducible pipelines;
-- Writing supporting materials and documentation for researcher end users;
-- Assisting staff with parallelizing scientific software and with the use of cluster and cloud computation;
-- Providing support and training for data management;
-- Liaising between the lab and the Institute’s core IT team;
-- Exchanging and networking within national (NFDI, MPDL, etc.) and international (RDA, EOSC, etc.) initiatives.
Our offer: We offer an exciting interdisciplinary field of engagement in an international scientific environment. The Institute is located in an attractive location with excellent infrastructure in Frankfurt’s Westend neighborhood.
You can expect a modern, well-equipped workplace with flexible working hours (some remote working is possible) and the opportunity to participate in (international) conferences and project meetings. Further development of your personal strengths is possible, e.g., through direct interactions with researchers in the Cogitate Consortium (e.g., Christof Koch, Giulio Tononi, Stanislas Dehaene, Gabriel Kreiman, Ole Jensen, Sylvain Baillet, among many others). The position will begin on May 1, 2022 at the earliest and is initially limited to 18 months, with the possibility of an extension pending funding approval. Salary is paid in accordance with the collective agreement for the public sector (TVöD Bund), according to your qualifications and experience. The Max Planck Society strives for gender equality and diversity. We are also committed to increasing the number of individuals with disabilities in our workforce. Therefore, applicants of all backgrounds are welcome. Your application should include: your detailed CV (including details of your educational background and skills); a cover letter that explains why this position interests you and how your skills and abilities are suitable; and copies of relevant degrees and/or certificates. Please send these materials together in a single PDF file, before April 1, 2022, by e-mail to job@ae.mpg.de using the code “TWCF Research Data” in the subject line. Please feel free to contact Tanya Brown (tanya.brown@ae.mpg.de) if you have any questions about the position.
Tanya Brown
As part of an externally funded project in collaboration with Dan Marcus, Wash. U; Sean Hill, CAMH; and members of the Cogitate Consortium, we are searching for a Scientific Data Engineer to contribute to the development of a generic, cross-domain metadata management framework that fosters the reuse of open datasets and the reproducibility of cognitive neuroscience datasets, i.e., metadata describing the experimental context of studies employing fMRI, MEEG, and ECoG. The successful candidate will also contribute heavily to the development of data infrastructure, including storage, streaming, and analysis tools for reproducible science based on the BIDS standard. Efforts will be devoted to developing tools for the efficient organization and exploration of raw and processed datasets. The position is ideal for networking in the open science community, as it includes interaction with the open (neuro)science community, and it is a unique opportunity for someone keen to contribute to the development of open science and large-scale collaborations, as well as to community efforts and dissemination. Your tasks in interdisciplinary research projects focus on:
• Review of existing approaches and tools; requirement specification; conceptual design; blueprint of implementation; proof-of-concept application to cognitive neuroscience
• Presentation and publication of the results
• Exchange and networking within national (NFDI, MPDL) and international (RDA, EOSC) initiatives
• Developing, testing and implementing scientific software, i.e., standardized neuroscience data acquisition, reproducible analysis pipelines and data storage for open science building on the BIDS standard
• Providing training and support for students and postdocs at varied levels of competence in modern, high-quality, open science/source coding practices
• Providing support and training for data management
• Being the lab’s interface with the Institute’s core IT team
Dr. Jessica Ausborn
Dr. Jessica Ausborn’s group at Drexel University College of Medicine, in the Department of Neurobiology & Anatomy, has a postdoctoral position available for an exciting new research project involving computational models of sensorimotor integration based on neural and behavioral data in Drosophila. The interdisciplinary collaboration with the experimental group of Dr. Katie von Reyn (School of Biomedical Engineering) will involve a variety of computational techniques, including the development of biophysically detailed and more abstract mathematical models, together with machine learning and data science techniques, to identify and describe the algorithms computed in neuronal pathways that perform sensorimotor transformations. The Ausborn laboratory is part of an interdisciplinary group of Drexel’s Neuroengineering program that includes computational and experimental investigators. This collaborative, interdisciplinary environment enables us to probe biological systems in a way that would not be possible with either an exclusively experimental or computational approach. Applicants should forward a cover letter, curriculum vitae, statement of research interests, and contact information of three references to Jessica Ausborn (ja696@drexel.edu). Salary will be commensurate with experience based on NIH guidelines.
Prof. Dr. Jakob Macke
We have several openings for Postdoctoral Researchers to work at the intersection of Machine Learning and Computational Neuroscience funded by the ERC Consolidator Grant “DeepCoMechTome: Using deep learning to understand computations in neural circuits with Connectome-constrained Mechanistic Models“. The goal of DeepCoMechTome is to develop simulation-based machine learning tools that will make it possible to build neural network models that are both biologically realistic and computationally powerful.
Bei Xiao
The RA will pursue research projects of their own as well as provide support for research carried out in the Xiao lab. Possible duties include: building VR/AR experimental interfaces with Unity3D, Python coding for behavioral data analysis, collecting data for psychophysical experiments, and training machine learning models.
Silvio P. Sabatini
The position is a full-time PhD studentship for a period of 3 years, starting on Nov 1st, 2023. The research project is titled 'Early vision function in silico networks of LIF neurons'. The project aims to develop an 'artificial observer' composed of an active event-based camera feeding a neuromorphic multi-layer network of leaky integrate-and-fire (LIF) neurons. The system should provide the inference engines for relating visual representations to performance on perceptual judgement tasks. Multiple and varying parameters captured under complex, real-life conditions should be comparatively assessed in silico and in human observers. The research will be conducted at the Bioengineering/PSPC labs of DIBRIS.
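For readers unfamiliar with the model class, here is a minimal sketch of a leaky integrate-and-fire neuron under forward-Euler integration. This is an illustration only, not part of the project description; the parameter names and values are assumptions.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire (LIF) neuron.

    i_input: array of input drive per time step (arbitrary units).
    Returns the membrane-potential trace and the spike step indices.
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(i_input):
        # Leaky integration: dv/dt = (-(v - v_rest) + i_ext) / tau
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_th:           # threshold crossing -> emit spike
            spikes.append(step)
            v = v_reset         # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes
```

With a constant suprathreshold drive the neuron fires regularly, while a subthreshold drive settles below threshold and produces no spikes, which is the basic behaviour any such network building block must exhibit.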
Chris Reinke
The internship aims to develop a controller for a social mobile robot to have a conversation with people using large language models (LLMs) such as ChatGPT. The internship is part of the European SPRING project, which aims to develop mobile robots for healthcare environments. The intern will develop a controller (Python, ROS) for ARI, the social robot. The controller will navigate towards a human (or group), have a conversation with them, and leave the conversation. The intern will use existing components from the SPRING project such as mapping and localization of the robot and humans, human-aware navigation, speech recognition, and a simple dialogue system based on ChatGPT. The intern will also investigate how to optimally use LLMs such as ChatGPT for natural and comfortable conversation with the robot, for example, by using prompt engineering. The intern will have the chance to develop and implement their own ideas to improve the conversation with the robot, for example, by investigating gaze, gestures, or emotions.
Xavier Alameda-Pineda
The internship aims to explore the usefulness of the Fisher–Rao metric combined with deep probabilistic models. The main question is whether or not this metric has some relationship with the training of deep generative models. In plain terms, we would like to understand whether the training and/or fine-tuning of such probabilistic models follows optimal paths on the manifold of probability distributions. Your task will be to design and implement an experimental framework for measuring what kind of paths are followed on the manifold of probability distributions when such deep probabilistic models are trained. To that aim, one must first be able to measure distances on this manifold, and this is where the Fisher–Rao metric comes into play. The candidate does not need to be familiar with the specific concepts of the Fisher–Rao metric, but needs to be open to learning new mathematical concepts. The implementation of these experiments will require knowledge of Python and PyTorch.
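For orientation (an illustration, not part of the internship description): for categorical distributions the Fisher–Rao geodesic distance has a closed form — twice the Bhattacharyya angle, via the square-root embedding onto the unit sphere — which already suffices to measure path lengths between training checkpoints of a model with categorical outputs.

```python
import numpy as np

def fisher_rao_categorical(p, q):
    """Fisher-Rao geodesic distance between two categorical distributions.

    Under the square-root embedding, categoricals live on the unit sphere,
    and the Fisher-Rao distance is twice the Bhattacharyya angle.
    """
    p, q = np.asarray(p, float), np.asarray(q, float)
    bc = np.sum(np.sqrt(p * q))              # Bhattacharyya coefficient
    return 2.0 * np.arccos(np.clip(bc, -1.0, 1.0))
```

Summing such distances along a sequence of checkpoints gives a manifold path length, which can be compared against the geodesic distance between the first and last checkpoint to quantify how far training deviates from an optimal path.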
Dr Elliot J. Crowley
We have a position for a postdoctoral research associate to work on NAS and AutoML with Dr Elliot J. Crowley and the Bayesian and Neural Systems Group. The successful applicant will be based in the School of Engineering at the University of Edinburgh, and will have opportunities for collaboration within and outside the school, e.g., with colleagues in the Institute for Digital Communications and the Bayesian and Neural Systems Group. This position is funded for 24 months (provisional start date: November 2023) and the salary is UE07, £36,333–£43,155 per annum.
Justus Piater, Antonio Rodríguez-Sánchez, Samuele Tosatto
This is a university doctoral position that involves minor teaching duties. The precise research topics are negotiable within the scope of active research at IIS, including machine learning and growing levels of AI for computer vision and robotics. Of particular interest are topics in representation learning and causality for out-of-distribution situations.
Jun.-Prof. Dr.-Ing. Rania Rayyes
The main focus of this position is to develop novel AI systems and methods for robot applications: Dexterous robot grasping, Human-robot learning, Transfer learning – efficient online learning. The role offers close cooperation with other institutes, universities, and numerous industrial partners, a self-determined development environment for own research topics with active support for the doctorate research project, flexible working hours, and work in a young, interdisciplinary research team.
Sam Neymotin
Postdoctoral scientist positions are available at the Nathan Kline Institute (NKI) for Psychiatric Research to work on computational neuroscience research funded by recently awarded NIH and DoD grants. Our NIH-funded projects investigate the brain's dynamic circuit motifs underlying internal vs. external-oriented processes in the auditory and interconnected areas, using circuit modeling of the thalamocortical system. In this project, the postdoc will build data-driven biophysical models constrained by data collected from electrophysiology labs at NKI and Columbia & The Feinstein Institutes for Medical Research, and then use the models to predict optimal neuromodulation strategies for inducing/suppressing circuit patterns, testable in vivo. Our DoD project involves developing computational models of the hippocampal and entorhinal cortex circuitry used in spatial navigation, higher level decision making circuits, and integrating the models with agents learning to solve navigation tasks using neurobiologically-inspired learning rules. This project includes mathematicians and robotics researchers at UTK and CMU.
Sam Neymotin
Postdoctoral scientist positions are available at the Nathan Kline Institute (NKI) for Psychiatric Research to work on computational neuroscience research funded by NIH and DoD grants. Applicants should have a PhD in computational neuroscience (or a related field), a strong background in multiscale modeling using NEURON/NetPyNE, and experience with Python software development, neural/electrophysiology data analysis, machine learning, and writing and presenting research.
Dr. Ziad Nahas
Dr. Ziad Nahas (Interventional Psychiatry Lab) in the University of Minnesota Department of Psychiatry and Behavioral Sciences is seeking an outstanding candidate for a postdoctoral position to conduct and analyze the effects of neuromodulation on brain activity in mood disorders. Candidates should be passionate about advancing knowledge in the area of translational research of depressive disorders and other mental health conditions with a focus on invasive and non-invasive brain stimulation treatments. The position is available June 1, 2023, and funding is available for at least two years.
Christian Leibold
The lab of Christian Leibold invites applications of postdoc candidates on topics related to the geometry of neural manifolds. We will use spiking neural network simulations, analysis of massively parallel recordings, as well as techniques from differential geometry to understand the dynamics of the neural population code in the hippocampal formation in relation to complex cognitive behaviors. Our research group combines modelling of neural circuits with the development of machine learning techniques for data analysis. We strive for a diverse, interdisciplinary, and collaborative work environment.
Sahar Moghimi
The post-doc/PhD will be fully dedicated to extracting the EEG correlates of rhythm processing over the course of development, aiming to characterize the neural response to different rhythmic characteristics and to evaluate the impact of musical interventions on neurodevelopment. The project aims to evaluate the development of rhythm perception starting from the third trimester of gestation into infancy, and the impact of early musical interventions in the NICU on preterm infants’ development. In this cross-sectional and longitudinal study, we will evaluate the development of auditory rhythm processing capacities with EEG and behavioral protocols.
Vivek
We are looking for a candidate for the Python-based development of the software BrainX3. BrainX3 (https://www.brainx3.com/) is an interactive platform for the 3D visualisation, analysis and simulation of human neuroimaging data. In particular, we focus on volumetric MRI data, DTI/DSI tractography data, EEG/MEG, and semantic corpora from available text databases. BrainX3 provides a means to organise and visualise how the above data types can be combined to extract meaningful insights about brain structures and pathways that may be useful for the scientific and clinical communities. We are looking for a highly motivated individual for a Research Assistant / PhD / PostDoc position to work on software development and further improve the platform's functionality in line with neuroscience needs. Candidates should have a background in computer science or experience with Python libraries for visualisation, e.g., VTK and Qt5.
Fabrice Wallois
The main objective of this project is to characterize the endogenous generators underlying the emergence of sensory capacities and to characterize their associated functional connectivity. This will be done retrospectively on our high-resolution (HR) EEG database in premature neonates from 24 weeks of gestational age, which is the largest such database worldwide. We will also use the OPM pediatric MEG, which is being set up in Amiens. This study will allow us to characterize the establishment of sensory networks before the modulation of cortical activity by external sensory information. The PhD candidate will concentrate on developing advanced signal processing approaches, using the already available HR EEG and MEG datasets, to characterize spontaneous neural oscillations and analyze functional connectivity.
Sahar Moghimi
The project consortium aims to evaluate the development of rhythm perception starting from the third trimester of gestation into infancy, and the impact of early musical interventions in the NICU on preterm infants’ development. In these cross-sectional and longitudinal studies, we will evaluate the development of auditory rhythm processing capacities with EEG and behavioral protocols. The consortium involves four academic partners with complementary expertise in early neurodevelopment, cognitive neuroscience of music, neural data processing (in particular EEG), and music analysis. The aim is to put together a cross-disciplinary team that together covers the following methods: protocol design and implementation, EEG signal processing, behavioral studies, video analysis, statistics, and machine learning.
Max Garagnani
The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning. Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour.
Constantine Dovrolis
The Cyprus Institute invites applications for a Post-Doctoral Fellow to pursue research in Machine Learning. The successful candidate will be actively engaged in cutting-edge research on core problems in ML and AI, such as developing efficient and interpretable deep nets, continual learning, neuro-inspired ML, and self-supervised learning. The candidate should have a deep understanding of machine learning fundamentals (e.g., linear algebra, probability theory, optimization) as well as broad knowledge of the state of the art in AI and machine learning. Additionally, the candidate should have extensive experience with ML programming frameworks (e.g., PyTorch). The candidate will work primarily with two PIs: Prof. Constantine Dovrolis and Prof. Mihalis Nicolaou. The appointment is for a period of 2 years, with the option of renewal subject to performance and the availability of funds.
Xavier Hinaut
The main objectives of the internship will be: 1. to develop a graphical interface to train vocalization annotation models, to visualize their performance, and to re-annotate parts of the dataset accordingly (in a similar fashion to semi-supervised learning); 2. to develop the corresponding software backend: data management (audio and annotations), serving and local persistence of the models (MLOps); 3. to collaborate with the project members to define the needs, establish the specifications, and integrate pre-existing tools. This objective also implies collaborating with international researchers and making an open-source tool available to the public. The development will be incremental: a first prototype will allow users to train models and view their evaluation in the interface. A second prototype will offer advanced editing of the dataset (re-annotation of parts of the audio according to the model's results), and the final version will integrate advanced analysis tools (dataset error detection, spectrogram dimensionality reduction for visualization and/or clustering, syntactic analysis of song sequences, ...).
Understanding reward-guided learning using large-scale datasets
Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has been long thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will talk about recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process in order to "discover" novel models in the form of Python programs that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
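To give a flavor of the kind of interpretable "model as a Python program" such a search operates over, here is a minimal sketch of a standard Q-learning model with softmax choice, scored by the log-likelihood of observed choices. This is an illustration of the model class only, not CogFunSearch's actual discovered programs, which are richer.

```python
import numpy as np

def q_learning_choice_model(choices, rewards, alpha=0.3, beta=3.0):
    """A candidate 'cognitive program': Q-learning with a softmax policy.

    choices: sequence of arm selections (0 or 1) on a two-armed bandit.
    rewards: sequence of outcomes (0 or 1) for each trial.
    Returns the log-likelihood of the observed choices under the model.
    """
    q = np.zeros(2)                  # action values for the two arms
    log_lik = 0.0
    for c, r in zip(choices, rewards):
        logits = beta * q
        probs = np.exp(logits) / np.exp(logits).sum()   # softmax policy
        log_lik += np.log(probs[c])
        q[c] += alpha * (r - q[c])   # reward-prediction-error update
    return log_lik
```

A model-discovery loop can then propose variants of such a program (different update rules, extra latent variables) and keep those whose log-likelihood on held-out animal choice data improves.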
Open SPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost of Raman systems and their lack of modularity often limit exploratory research, hindering broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and the complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, and region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
Automated generation of face stimuli: Alignment, features and face spaces
I describe a well-tested Python module that performs automated alignment and warping of face images, and discuss some advantages over existing solutions. An additional tool I’ve developed performs automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while typically being slower for small networks and faster for large ones. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort, and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
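To illustrate the propagation step that a GPU backend parallelizes — spiking presynaptic neurons scattering their synaptic weights onto postsynaptic targets — here is a NumPy sketch of the idea (not Brian2CUDA's actual code; `np.add.at` plays the role of the atomic additions a CUDA kernel would perform in parallel).

```python
import numpy as np

def propagate_spikes(spiking, targets, weights, n_post):
    """Accumulate synaptic input from spiking presynaptic neurons.

    spiking: indices of presynaptic neurons that fired this step.
    targets[pre], weights[pre]: postsynaptic indices and weights of
    each presynaptic neuron's synapses.
    np.add.at handles repeated postsynaptic indices correctly,
    mirroring the atomic adds a parallel GPU kernel would need.
    """
    g = np.zeros(n_post)
    for pre in spiking:
        np.add.at(g, targets[pre], weights[pre])
    return g
```

On a GPU, the per-synapse additions inside this loop run concurrently, which is why event propagation dominates the speedup for large, densely connected networks.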
Introducing dendritic computations to SNNs with Dendrify
Current SNN studies frequently ignore dendrites, the thin membranous extensions of biological neurons that receive and preprocess nearly all synaptic inputs in the brain. However, decades of experimental and theoretical research suggest that dendrites possess compelling computational capabilities that greatly influence neuronal and circuit functions. Notably, standard point-neuron networks cannot adequately capture most hallmark dendritic properties. Meanwhile, biophysically detailed neuron models are impractical for large-network simulations due to their complexity and high computational cost. For this reason, we introduce Dendrify, a new theoretical framework combined with an open-source Python package (compatible with Brian2) that facilitates the development of bioinspired SNNs. Through simple commands, Dendrify can generate reduced compartmental neuron models with simplified yet biologically relevant dendritic and synaptic integrative properties. Such models strike a good balance between flexibility, performance, and biological accuracy, allowing us to explore dendritic contributions to network-level functions while paving the way for developing more realistic neuromorphic systems.
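The kind of reduced compartmental model described above can be sketched as two coupled leaky compartments, a soma and a dendrite, integrated with the Euler method. This is an illustrative sketch in dimensionless units with arbitrary parameters, not Dendrify's API:

```python
import numpy as np

def simulate_two_compartment(T=200, dt=0.1, g_c=0.05, I_dend=0.3):
    """Euler simulation of a passive soma + dendrite coupled by conductance g_c.

    A dendritic current I_dend charges the dendrite, which in turn drives the
    soma through the coupling term -- the minimal structure of a reduced
    compartmental model. All units are dimensionless and illustrative.
    """
    n = int(T / dt)
    v_s = np.zeros(n)   # somatic voltage
    v_d = np.zeros(n)   # dendritic voltage
    for t in range(n - 1):
        # leak toward rest (0) plus the inter-compartment coupling current
        v_s[t + 1] = v_s[t] + dt * (-v_s[t] + g_c * (v_d[t] - v_s[t]))
        v_d[t + 1] = v_d[t] + dt * (-v_d[t] + g_c * (v_s[t] - v_d[t]) + I_dend)
    return v_s, v_d
```

Dendritic input depolarizes the soma only through the coupling conductance, which is what lets such models capture dendro-somatic attenuation that point neurons cannot.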
Pynapple: a light-weight python package for neural data analysis - webinar + tutorial
In systems neuroscience, datasets are multimodal and include data-streams of various origins: multichannel electrophysiology, 1- or 2-p calcium imaging, behavior, etc. Often, the exact nature of data streams are unique to each lab, if not each project. Analyzing these datasets in an efficient and open way is crucial for collaboration and reproducibility. In this combined webinar and tutorial, Adrien Peyrache and Guillaume Viejo will present Pynapple, a Python-based data analysis pipeline for systems neuroscience. Designed for flexibility and versatility, Pynapple allows users to perform cross-modal neural data analysis via a common programming approach which facilitates easy sharing of both analysis code and data.
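The core abstraction behind this cross-modal approach can be sketched as a timestamped series that can be restricted to epochs, so spikes, calcium traces and behavior all share one interface. This is a conceptual sketch, not the real pynapple API:

```python
import numpy as np

class Tsd:
    """Minimal timestamped-data object: times plus values of any modality."""

    def __init__(self, t, d):
        self.t = np.asarray(t)   # timestamps (s)
        self.d = np.asarray(d)   # values (spikes, fluorescence, position, ...)

    def restrict(self, start, end):
        """Return the sub-series falling inside the epoch [start, end]."""
        keep = (self.t >= start) & (self.t <= end)
        return Tsd(self.t[keep], self.d[keep])

# The same restrict() call works whatever the data stream represents.
speed = Tsd(t=np.arange(10.0), d=np.arange(10.0) ** 2)
run = speed.restrict(2.0, 5.0)
```

Because every stream exposes the same operations, analysis code written once applies to any modality, which is what makes sharing analyses across labs practical.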
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
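The central operation such simulators accelerate is the delivery of synaptic events from spiking neurons to their targets. A serial sketch of that step (illustrative only; GeNN generates parallel GPU code for it rather than exposing a function like this):

```python
import numpy as np

def propagate_spikes(spiked, weights, input_current):
    """Accumulate synaptic input from every neuron that spiked this time step.

    spiked: boolean array (n_pre,); weights: (n_pre, n_post) matrix;
    input_current: (n_post,) accumulator, updated in place and returned.
    """
    for pre in np.flatnonzero(spiked):
        # On a GPU, the deliveries for each spike run in parallel threads.
        input_current += weights[pre]
    return input_current
```

For large sparse networks the weight matrix would be stored in a sparse format and each spike's deliveries parallelized, which is where the GPU speedups come from.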
NMC4 Short Talk: Rank similarity filters for computationally-efficient machine learning on high dimensional data
Real world datasets commonly contain nonlinearly separable classes, requiring nonlinear classifiers. However, these classifiers are less computationally efficient than their linear counterparts. This inefficiency wastes energy, resources and time. We were inspired by the efficiency of the brain to create a novel type of computationally efficient Artificial Neural Network (ANN) called Rank Similarity Filters. They can be used to both transform and classify nonlinearly separable datasets with many datapoints and dimensions. The weights of the filters are set using the rank orders of features in a datapoint, or optionally the 'confusion' adjusted ranks between features (determined from their distributions in the dataset). The activation strength of a filter determines its similarity to other points in the dataset, a measure based on cosine similarity. The activation of many Rank Similarity Filters transforms samples into a new nonlinear space suitable for linear classification (the Rank Similarity Transform, RST). We additionally used this method to create the nonlinear Rank Similarity Classifier (RSC), which is a fast and accurate multiclass classifier, and the nonlinear Rank Similarity Probabilistic Classifier (RSPC), an extension to the multilabel case. We evaluated the classifiers on multiple datasets and found that RSC is competitive with existing classifiers but with superior computational efficiency. Code for RST, RSC and RSPC is open source and was written in Python using the popular scikit-learn framework to make it easily accessible (https://github.com/KatharineShapcott/rank-similarity). In future extensions the algorithm can be applied to hardware suitable for the parallelization of an ANN (GPU) and a Spiking Neural Network (neuromorphic computing) with corresponding performance gains. This makes Rank Similarity Filters a promising biologically inspired solution to the problem of efficient analysis of nonlinearly separable data.
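The two ingredients named above, rank-order weights and cosine-similarity activation, can be sketched in a few lines. This is a simplified illustration of the RST idea, not the released package's implementation:

```python
import numpy as np

def rank_transform(X):
    """Replace each feature value by its rank within that datapoint."""
    return np.argsort(np.argsort(X, axis=1), axis=1).astype(float)

def rank_similarity_transform(X, filters):
    """Project rank-transformed samples onto filters by cosine similarity.

    X: (n_samples, n_features); filters: (n_filters, n_features) rank vectors.
    Returns the (n_samples, n_filters) activations -- the new nonlinear space.
    """
    R = rank_transform(X)
    R /= np.linalg.norm(R, axis=1, keepdims=True)
    F = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    return R @ F.T
```

A linear classifier fit on these activations then handles classes that were not linearly separable in the original feature space.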
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a well-commented Jupyter notebook designed to operate across platforms. GuPPy presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
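A central step in photometry analysis is motion- and bleaching-corrected dF/F: fit the isosbestic control channel to the signal channel and normalise by the fit. The sketch below shows this standard computation in plain NumPy; it is not GuPPy's own implementation:

```python
import numpy as np

def delta_f_over_f(signal, control):
    """Corrected dF/F from a signal channel and an isosbestic control channel.

    A linear least-squares fit scales the control onto the signal, so shared
    artifacts (motion, bleaching) cancel when we normalise by the fit.
    """
    a, b = np.polyfit(control, signal, 1)   # scale + offset of control onto signal
    fitted = a * control + b
    return (signal - fitted) / fitted
```

With a pure-artifact signal (signal is just a rescaled control), the result is zero everywhere; genuine neural transients survive the subtraction.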
Fundamentals of PyTorch: Building a Model Step-by-Step
In this workshop you'll learn the fundamentals of PyTorch using an incremental, from-first-principles approach. We'll start with tensors, autograd, and the dynamic computation graph, and then move on to developing and training a simple model using PyTorch's model classes, datasets, data loaders, optimizers, and more. You should be comfortable using Python, Jupyter notebooks, Google Colab, NumPy and, preferably, object-oriented programming.
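The training loop the workshop builds up to has four steps: forward pass, loss, backward pass, parameter update. Here it is sketched with NumPy and a hand-derived gradient so every step is explicit; in PyTorch, autograd computes the gradients for you:

```python
import numpy as np

# Synthetic data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(size=(100, 1))
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=(100, 1))

w, b, lr = 0.0, 0.0, 0.1
for epoch in range(1000):
    y_hat = w * x + b                  # forward pass
    err = y_hat - y
    loss = (err ** 2).mean()           # MSE loss
    grad_w = 2.0 * (err * x).mean()    # backward pass (manual gradients)
    grad_b = 2.0 * err.mean()
    w -= lr * grad_w                   # update (gradient descent)
    b -= lr * grad_b
```

Replacing the manual gradients with `loss.backward()` and the update with an optimizer step is exactly the transition the workshop walks through.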
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed not only to give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition to lower barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to build a communal knowledge system that provides a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections, but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it.
I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
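A plugin system of the kind described for v0.4.0 typically rests on a registry pattern: contributed hardware interfaces register themselves so experiments can request them by name. The names and decorator below are hypothetical illustrations of that pattern, not Autopilot's actual API:

```python
# Hypothetical registry sketch: community-contributed hardware classes
# register under a name, and experiments instantiate them by that name.
HARDWARE_REGISTRY = {}

def register_hardware(name):
    """Class decorator adding a hardware interface to the shared registry."""
    def wrap(cls):
        HARDWARE_REGISTRY[name] = cls
        return cls
    return wrap

@register_hardware("solenoid_valve")
class SolenoidValve:
    def __init__(self, pin):
        self.pin = pin   # GPIO pin on the Raspberry Pi (illustrative)

# An experiment description can now refer to hardware purely by name.
valve = HARDWARE_REGISTRY["solenoid_valve"](pin=17)
```

Tying each registered name to a wiki page with its build documentation is what would let a dataset's provenance reach all the way back to the hardware design.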
Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning
Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of these structures requires complex decision making which is often impossible with current image analysis software, and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also not much experience in the community of how much training data has to be collected to obtain a reasonable prediction result with deep-learning-based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim to integrate well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC for a variety of tasks. From our experience, a surprisingly low amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
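The workflow, sparse pixel labels in, dense prediction out, can be illustrated with a deliberately simple stand-in classifier. YAPiC trains a DNN for this step; the nearest-class-mean model below only demonstrates the data flow, not the method:

```python
import numpy as np

def train_pixel_classifier(features, labels):
    """Per-class mean feature vectors from sparsely labelled pixels.

    features: (n_pixels, n_features); labels: (n_pixels,), -1 = unlabelled.
    Unlabelled pixels are simply ignored during training.
    """
    classes = np.unique(labels[labels >= 0])
    return {c: features[labels == c].mean(axis=0) for c in classes}

def predict_pixels(model, features):
    """Assign every pixel (labelled or not) to the nearest class mean."""
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(features - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```

The key property shared with the DNN approach is that a few annotated pixels suffice to produce a label for every pixel in the image.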
SpikeInterface
Much development has been directed toward improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, we developed SpikeInterface, a Python framework designed to unify preexisting spike sorting technologies into a single codebase and to facilitate straightforward comparison and adoption of different approaches. With a few lines of code, researchers can reproducibly run, compare, and benchmark most modern spike sorting algorithms; pre-process, post-process, and visualize extracellular datasets; validate, curate, and export sorting outputs; and more. In this presentation, I will provide an overview of SpikeInterface and, with applications to real and simulated datasets, demonstrate how it can be utilized to reduce the burden of manual curation and to more comprehensively benchmark automated spike sorters.
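The comparison step mentioned above reduces to scoring agreement between two sorters' outputs. The sketch below shows one simple agreement measure for a single unit (an illustration of the idea, not SpikeInterface's API):

```python
import numpy as np

def agreement_score(times_a, times_b, delta=0.4):
    """Agreement between two spike trains for one putative unit.

    Counts spikes in times_a matched by a spike in times_b within +/- delta
    (same time units for all arguments), then returns |matches| / |union|.
    """
    matched = sum(np.any(np.abs(times_b - t) <= delta) for t in times_a)
    return matched / (len(times_a) + len(times_b) - matched)

a = np.array([10.0, 20.0, 30.0, 40.0])   # sorter A spike times (ms)
b = np.array([10.1, 20.2, 35.0, 40.3])   # sorter B spike times (ms)
score = agreement_score(a, b)
```

Computing such scores for all unit pairs across sorters yields the agreement matrices used to benchmark sorters against each other and to flag units needing manual curation.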
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emergent evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity. We evaluate which elements of the dendritic computational repertoire are captured by these models. We show that many canonical elements of the dendritic computational repertoire can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 Pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite. 
Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a-priori unpredictable fluctuations in NMDA activation, a constant weight rescale factor cannot be found, and the dendrite cannot be simplified. We have created an open source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automates the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular and network level neuroscience.
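The least-squares fitting mentioned above can be illustrated in its simplest form: choose a conductance matrix so that the reduced model reproduces recorded voltage-current pairs. This is a heavily simplified sketch (steady state, passive, invented data shapes), not the NEAT toolbox's procedure:

```python
import numpy as np

def fit_conductances(V, I):
    """Least-squares fit of a conductance matrix G with G @ v ~= i for every
    recorded pair of compartment voltages v and injected currents i.

    V, I: (n_samples, n_compartments) arrays (one sample per row).
    Returns G of shape (n_compartments, n_compartments).
    """
    Gt, *_ = np.linalg.lstsq(V, I, rcond=None)   # solves V @ Gt ~= I
    return Gt.T                                  # back to the i = G @ v convention
```

With enough independent samples the fit is exact for a linear model; the real method extends this idea to frequency-dependent impedances and active channels.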
Kilosort
Kilosort is a spike sorting pipeline for large-scale electrophysiology. Advances in silicon probe technology mean that in vivo electrophysiological recordings from hundreds of channels will soon become commonplace. To interpret these recordings we need fast, scalable and accurate methods for spike sorting, whose output requires minimal time for manual curation. Kilosort is a spike sorting framework that meets these criteria, and we show that it allows rapid and accurate sorting of large-scale in vivo data. Kilosort models the recorded voltage as a sum of template waveforms triggered on the spike times, allowing overlapping spikes to be identified and resolved. Rapid processing is achieved thanks to a novel low-dimensional approximation for the spatiotemporal distribution of each template, and to batch-based optimization on GPUs. Kilosort is an important step towards fully automated spike sorting of multichannel electrode recordings, and is freely available.
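The "voltage as a sum of templates" model admits a simple greedy illustration: find where a template best explains the trace, subtract it, and repeat, which is also why overlapping spikes can be resolved. This is a one-channel toy version of the idea, not Kilosort's optimized implementation:

```python
import numpy as np

def match_template(trace, template, threshold):
    """Greedy single-template matching on a 1-D voltage trace.

    Repeatedly finds the best-scoring placement of the template, records the
    spike time, and subtracts the scaled template so later iterations can
    explain residual (possibly overlapping) spikes.
    """
    trace = trace.copy()
    norm = template @ template
    spike_times = []
    while True:
        score = np.correlate(trace, template, mode="valid") / norm
        t = int(np.argmax(score))
        if score[t] < threshold:
            break                                  # nothing left to explain
        spike_times.append(t)
        trace[t:t + len(template)] -= score[t] * template
    return sorted(spike_times)
```

The real pipeline does this with many spatiotemporal templates at once, using low-rank template approximations and GPU batching for speed.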
BrainGlobe: a Python ecosystem for computational (neuro)anatomy
Neuroscientists routinely perform experiments aimed at recording or manipulating neural activity, uncovering physiological processes underlying brain function or elucidating aspects of brain anatomy. Understanding how the brain generates behaviour ultimately depends on merging the results of these experiments into a unified picture of brain anatomy and function. We present BrainGlobe, a new initiative aimed at developing common Python tools for computational neuroanatomy. These include cellfinder for fast, accurate cell detection in whole-brain microscopy images, brainreg for aligning images to a reference atlas, and brainrender for visualisation of anatomically registered data. These software packages are developed around the BrainGlobe Atlas API. This API provides a common Python interface to download and interact with reference brain atlases from multiple species (including human, mouse and larval zebrafish). This allows software to be developed agnostic to the atlas and species, increasing adoption and interoperability of software tools in neuroscience.
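What an atlas-agnostic API buys can be sketched with a minimal interface: analysis code written against the common interface runs unchanged whichever species' atlas is loaded. The classes and names below are illustrative, not the real BrainGlobe Atlas API:

```python
class Atlas:
    """Minimal species-agnostic atlas interface (illustrative sketch)."""

    def __init__(self, name, resolution_um, structures):
        self.name = name
        self.resolution_um = resolution_um
        self._structures = structures    # acronym -> full structure name

    def structure_name(self, acronym):
        """Resolve a structure acronym, whatever the underlying atlas."""
        return self._structures[acronym]

# Downstream tools only see the interface, never the species-specific files.
mouse = Atlas("example_mouse", 25, {"VISp": "Primary visual area"})
region = mouse.structure_name("VISp")
```

This decoupling is what lets tools like cellfinder and brainrender support new atlases without code changes.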
DeepLabStream
DeepLabStream is a Python-based multi-purpose tool that enables real-time tracking and manipulation of animals during ongoing experiments. Our toolbox was originally adapted from the previously published DeepLabCut (Mathis et al., 2018) and expands on its core capabilities, but is now able to utilize a variety of different network architectures for online pose estimation (SLEAP, DLC-Live, DeepPoseKit's StackedDenseNet, StackedHourGlass and LEAP). Our aim is to provide an open-source tool that allows researchers to design custom experiments based on real-time behavior-dependent feedback. My personal ideal would be a Swiss-army-knife-like solution into which we could integrate the many brilliant Python interfaces available. We are constantly upgrading DLStream with new features and integrating other open-source solutions.
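The closed-loop logic behind behavior-dependent feedback can be sketched as: per frame, estimate the pose, test a trigger condition, deliver a stimulus. The pose estimator below is stubbed out and the trigger/stimulus names are invented for illustration; this is not DeepLabStream's API:

```python
def in_roi(pose, x_min=0.0, x_max=50.0):
    """Trigger condition: the tracked nose point lies inside a region of interest."""
    x, y = pose["nose"]
    return x_min <= x <= x_max

def run_session(frames, estimate_pose, deliver_stimulus):
    """Minimal closed loop: pose estimation -> trigger test -> feedback."""
    for frame in frames:
        pose = estimate_pose(frame)      # real-time network inference in practice
        if in_roi(pose):
            deliver_stimulus()

# Toy session: the stubbed pose moves the nose rightward, leaving the ROI.
events = []
run_session(
    frames=[1, 2, 3],
    estimate_pose=lambda f: {"nose": (f * 20.0, 0.0)},
    deliver_stimulus=lambda: events.append("stim"),
)
```

In a real setup the loop's latency budget is dominated by inference, which is why support for fast architectures like DLC-Live and SLEAP matters.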
Using evolutionary algorithms to explore single-cell heterogeneity and microcircuit operation in the hippocampus
The hippocampus-entorhinal system is critical for learning and memory. Recent cutting-edge single-cell technologies, from RNAseq to electrophysiology, are disclosing a so far unrecognized heterogeneity within the major cell types (1). Surprisingly, massive high-throughput recordings of these very same cells identify low-dimensional microcircuit dynamics (2,3). Reconciling both views is critical to understand how the brain operates. The CA1 region is considered high in the hierarchy of the entorhinal-hippocampal system. Traditionally viewed as a single-layered structure, recent evidence has disclosed an exquisite laminar organization across deep and superficial pyramidal sublayers at the transcriptional, morphological and functional levels (1,4,5). Such a low-dimensional segregation may be driven by a combination of intrinsic, biophysical and microcircuit factors, but the mechanisms are unknown. Here, we exploit evolutionary algorithms to address the effect of single-cell heterogeneity on CA1 pyramidal cell activity (6). First, we developed a biophysically realistic model of CA1 pyramidal cells using the Hodgkin-Huxley multi-compartment formalism in the NEURON+Python platform and the morphological database NeuroMorpho.org. We adopted genetic algorithms (GA) to identify passive, active and synaptic conductances resulting in realistic electrophysiological behavior. We then used the generated models to explore the functional effect of intrinsic, synaptic and morphological heterogeneity during oscillatory activities. By combining results from all simulations in a logistic regression model, we evaluated the effect of up/down-regulation of different factors. We found that multidimensional excitatory and inhibitory inputs interact with morphological and intrinsic factors to determine a low-dimensional subset of output features (e.g. phase-locking preference) that matches non-fitted experimental data.
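A genetic algorithm of the kind used here has three recurring steps: score the population, select the fittest, and mutate their offspring. The bare-bones sketch below uses a toy fitness function (squared distance to a target parameter vector) as a stand-in for the distance between model output features and experimental targets; it is not the authors' fitting code:

```python
import numpy as np

def genetic_search(fitness, n_params, pop=60, gens=80, sigma=0.1, decay=0.95, seed=0):
    """Minimise `fitness` over [0, 1]^n_params with a simple GA.

    Each generation keeps the best quartile as parents, resamples children
    from them, and mutates with Gaussian noise whose scale decays over time.
    """
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 1.0, size=(pop, n_params))
    for g in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(scores)[: pop // 4]]        # selection
        children = parents[rng.integers(len(parents), size=pop)]    # reproduction
        population = children + rng.normal(0.0, sigma * decay ** g, children.shape)
    scores = np.array([fitness(ind) for ind in population])
    return population[int(np.argmin(scores))]

target = np.array([0.2, 0.7, 0.5])   # toy stand-in for experimental targets
best = genetic_search(lambda g: float(np.sum((g - target) ** 2)), n_params=3)
```

In the actual study, evaluating `fitness` means running a full NEURON simulation per individual, which is why such searches are typically parallelized.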
FENS Forum 2024:
ClearFinder: A Python GUI for annotating cells in cleared mouse brain
A fully flexible, open-source, Python-based rodent behavior platform
Interactive brain atlas curation and enhancement with Houdini and Python
Power Pixels: A Python-based pipeline for processing of Neuropixels recordings
SpyDen: An open-source Python toolbox for automated molecular analysis in dendrites and spines
Streamlining electrophysiology data analysis: A Python-based workflow for efficient integration and processing