Padraig Gleeson
The successful applicant will contribute to the goals of the OpenWorm project: to create a cell-by-cell model of the nematode C. elegans incorporating its full neuronal network and a 3D body/environment simulation. The role will involve contributing to existing software packages, and creating new ones, that facilitate the goals of the project. It will also involve carrying out research into the physiology, anatomy and behaviour of C. elegans to ensure the simulations are biologically realistic. Code will be open source from the start, and active interaction with the community of researchers in this area will be required.
Spike train structure of cortical transcriptomic populations in vivo
The cortex comprises many neuronal types, which can be distinguished by their transcriptomes: the sets of genes they express. Little is known about the in vivo activity of these cell types, particularly the structure of their spike trains, which might provide clues to cortical circuit function. To address this question, we used Neuropixels electrodes to record layer 5 excitatory populations in mouse V1, then transcriptomically identified the recorded cell types. To do so, we performed a subsequent recording of the same cells using 2-photon (2p) calcium imaging, identifying neurons across the two recording modalities by fingerprinting their responses to a “zebra noise” stimulus and estimating the path of the electrode through the 2p stack with a probabilistic method. We then cut brain slices and performed in situ transcriptomics to localize ~300 genes using coppaFISH3d, a new open source method, and aligned the transcriptomic data to the 2p stack. Analysis of the data is ongoing and suggests substantial differences in spike time coordination between ET and IT neurons, as well as between transcriptomic subtypes of both excitatory types.
Manipulating single-unit theta phase-locking with PhaSER: An open-source tool for real-time phase estimation and manipulation
Toward an open science ecosystem for neuroimaging
It is now widely accepted that openness and transparency are keys to improving the reproducibility of scientific research, but many challenges remain to adoption of these practices. I will discuss the growth of an ecosystem for open science within the field of neuroimaging, focusing on platforms for open data sharing and open source tools for reproducible data analysis. I will also discuss the role of the Brain Imaging Data Structure (BIDS), a community standard for data organization, in enabling this open science ecosystem, and will outline the scientific impacts of these resources.
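The BIDS standard mentioned above works by prescribing a fixed directory and filename grammar for neuroimaging data. A minimal sketch of that naming convention, based on the public BIDS specification (the helper function and its arguments are ours for illustration, not part of any BIDS library API):

```python
from pathlib import Path

# Sketch of the BIDS path grammar:
#   sub-<label>/[ses-<label>/]<datatype>/sub-<label>[_ses-<label>]_<suffix>.<ext>
# (illustrative helper; real tools such as the BIDS validator handle many
# more entities, e.g. task-, run-, acq- labels)
def bids_path(root, sub, datatype, suffix, ext="nii.gz", ses=None):
    parts = [f"sub-{sub}"] + ([f"ses-{ses}"] if ses else [])
    filename = "_".join(parts + [suffix]) + f".{ext}"
    return Path(root, *parts, datatype, filename)

p = bids_path("my_dataset", "01", "anat", "T1w")
# my_dataset/sub-01/anat/sub-01_T1w.nii.gz
```

Because every BIDS dataset shares this layout, analysis pipelines can discover subjects, sessions and modalities automatically, which is what enables the reproducible tooling the talk describes.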
Multiscale modeling of brain states, from spiking networks to the whole brain
Modeling brain mechanisms is often confined to a single scale, such as single-cell models, network models or whole-brain models, and it is often difficult to relate models across these scales. Here, we show an approach to building models across scales, starting from the level of circuits and extending to the whole brain. The key is the design of accurate population models derived from biophysical models of networks of excitatory and inhibitory neurons, using mean-field techniques. Such population models can later be integrated as units in large-scale networks defining entire brain areas or the whole brain. We illustrate this approach by simulating asynchronous and slow-wave states, from circuits to the whole brain. At the mesoscale (millimeters), these models account for travelling activity waves in cortex; at the macroscale (centimeters), they reproduce the synchrony of slow waves and their responsiveness to external stimuli. This approach can also be used to evaluate the impact of sub-cellular parameters, such as receptor types or membrane conductances, on emergent behavior at the whole-brain level, which we illustrate with simulations of the effect of anesthetics. The program codes are open source and run on open-access platforms such as EBRAINS.
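The building block of this approach — a population unit describing coupled excitatory and inhibitory rates — can be sketched in miniature. The toy below is a Wilson-Cowan-style rate model with illustrative parameters, not the biophysically derived AdEx mean field the abstract describes; it only shows the kind of unit that can be tiled into larger networks:

```python
import numpy as np

# One excitatory/inhibitory population unit, integrated with forward Euler.
# Parameters (time constants in ms, coupling weights, external drive) are
# illustrative placeholders, not values from the talk.
def simulate_unit(T=1000.0, dt=0.1, tau_e=10.0, tau_i=5.0,
                  w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0, I_ext=1.5):
    f = lambda x: 1.0 / (1.0 + np.exp(-x))  # sigmoidal population transfer function
    r_e, r_i, trace = 0.1, 0.1, []
    for _ in range(int(T / dt)):
        dr_e = (-r_e + f(w_ee * r_e - w_ei * r_i + I_ext)) / tau_e
        dr_i = (-r_i + f(w_ie * r_e - w_ii * r_i)) / tau_i
        r_e += dt * dr_e
        r_i += dt * dr_i
        trace.append((r_e, r_i))
    return np.array(trace)  # (time steps, [r_e, r_i])

rates = simulate_unit()
```

In a multiscale model of the kind described, each brain area (or each node of a whole-brain connectome) would hold one such unit, with long-range coupling entering through the excitatory drive term.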
NMC4 Short Talk: Rank similarity filters for computationally-efficient machine learning on high dimensional data
Real-world datasets commonly contain nonlinearly separable classes, requiring nonlinear classifiers. However, these classifiers are less computationally efficient than their linear counterparts, and this inefficiency wastes energy, resources and time. Inspired by the efficiency of the brain, we created a novel type of computationally efficient Artificial Neural Network (ANN) called Rank Similarity Filters. These filters can both transform and classify nonlinearly separable datasets with many datapoints and dimensions. The weights of a filter are set using the rank orders of features in a datapoint, or optionally the 'confusion'-adjusted ranks between features (determined from their distributions in the dataset). The activation strength of a filter measures its similarity to other points in the dataset, based on cosine similarity. The activations of many Rank Similarity Filters transform samples into a new nonlinear space suitable for linear classification (the Rank Similarity Transform, RST). We additionally used this method to create the nonlinear Rank Similarity Classifier (RSC), a fast and accurate multiclass classifier, and the nonlinear Rank Similarity Probabilistic Classifier (RSPC), which extends it to the multilabel case. We evaluated the classifiers on multiple datasets and found that RSC is competitive with existing classifiers while offering superior computational efficiency. Code for RST, RSC and RSPC is open source and written in Python using the popular scikit-learn framework to make it easily accessible (https://github.com/KatharineShapcott/rank-similarity). In future extensions the algorithm can be applied to hardware suited to the parallelization of an ANN (GPU) or of a Spiking Neural Network (neuromorphic computing), with corresponding performance gains. This makes Rank Similarity Filters a promising biologically inspired solution to the problem of efficiently analyzing nonlinearly separable data.
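The core transform described above — rank-order weights plus cosine-similarity activations — can be sketched as follows. This is a reconstruction from the abstract's description for illustration only; the released implementation at the linked repository may differ in details (e.g. the 'confusion'-adjusted ranks are omitted here):

```python
import numpy as np

# Minimal sketch of a rank-similarity transform: each filter stores the
# rank order of one reference datapoint's features, and a sample's
# activation is the cosine similarity between its own feature-rank vector
# and the filter's rank vector.
def rank_transform(X, filters):
    # argsort of argsort yields each feature's rank within its row
    R = np.argsort(np.argsort(X, axis=1), axis=1).astype(float)
    F = np.argsort(np.argsort(filters, axis=1), axis=1).astype(float)
    Rn = R / np.linalg.norm(R, axis=1, keepdims=True)
    Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
    return Rn @ Fn.T  # (n_samples, n_filters) nonlinear feature space

X = np.array([[0.1, 0.5, 0.3], [0.9, 0.2, 0.4]])
S = rank_transform(X, X)  # each point is maximally similar to its own filter
```

The resulting activation matrix could then be passed to any linear classifier (e.g. scikit-learn's LogisticRegression) to classify data that was not linearly separable in the original space.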
Data-driven reduction of dendritic morphologies with preserved dendro-somatic responses
There is little consensus on the level of spatial complexity at which dendrites operate. On the one hand, emerging evidence indicates that synapses cluster at micrometer spatial scales. On the other hand, most modelling and network studies ignore dendrites altogether. This dichotomy raises an urgent question: what is the smallest relevant spatial scale for understanding dendritic computation? We have developed a method to construct compartmental models at any level of spatial complexity. Through carefully chosen parameter fits, solvable in the least-squares sense, we obtain accurate reduced compartmental models. Thus, we are able to systematically construct passive as well as active dendrite models at varying degrees of spatial complexity, and to evaluate which elements of the dendritic computational repertoire these models capture. We show that many canonical elements of this repertoire can be reproduced with few compartments. For instance, for a model to behave as a two-layer network, it is sufficient to fit a reduced model at the soma and at locations at the dendritic tips. In the basal dendrites of an L2/3 pyramidal model, we reproduce the backpropagation of somatic action potentials (APs) with a single dendritic compartment at the tip. Further, we obtain the well-known Ca-spike coincidence detection mechanism in L5 pyramidal cells with as few as eleven compartments, the requirement being that their spacing along the apical trunk supports AP backpropagation. We also investigate whether afferent spatial connectivity motifs admit simplification by ablating targeted branches and grouping the affected synapses onto the next proximal dendrite. We find that voltage in the remaining branches is reproduced if temporal conductance fluctuations stay below a limit that depends on the average difference in input resistance between the ablated branches and the next proximal dendrite.
Consequently, when the average conductance load on distal synapses is constant, the dendritic tree can be simplified while appropriately decreasing synaptic weights. When the conductance level fluctuates strongly, for instance through a priori unpredictable fluctuations in NMDA activation, a constant weight-rescaling factor cannot be found and the dendrite cannot be simplified. We have created an open source Python toolbox (NEAT - https://neatdend.readthedocs.io/en/latest/) that automates the simplification process. A NEST implementation of the reduced models, currently under construction, will enable the simulation of few-compartment models in large-scale networks, thus bridging the gap between cellular- and network-level neuroscience.
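The least-squares fitting idea at the heart of this reduction can be illustrated in miniature: recover the leak and coupling conductances of a two-compartment passive model (soma plus one dendritic compartment) from simulated voltage responses. The toy circuit and parameter names below are ours for illustration and are unrelated to NEAT's actual API:

```python
import numpy as np

# True parameters of a toy two-compartment passive neuron (uS):
# g1, g2 = leak conductances of soma and dendrite, gc = coupling conductance.
rng = np.random.default_rng(0)
g1_true, g2_true, gc_true = 0.05, 0.02, 0.01

# Synthetic "measurements": steady-state voltages V for random current
# injections I, obtained from the conductance matrix G via G @ v = i.
G = np.array([[g1_true + gc_true, -gc_true],
              [-gc_true, g2_true + gc_true]])
I = rng.normal(size=(50, 2))
V = np.linalg.solve(G, I.T).T

# Rewrite G @ v = i as A @ [g1, g2, gc] = b, linear in the unknown
# conductances, and solve in the least-squares sense.
rows, rhs = [], []
for v, i in zip(V, I):
    rows.append([v[0], 0.0, v[0] - v[1]]); rhs.append(i[0])  # soma equation
    rows.append([0.0, v[1], v[1] - v[0]]); rhs.append(i[1])  # dendrite equation
g_fit, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

With noiseless data the fit recovers the true conductances exactly; the method in the abstract generalizes this idea to many compartments and to active membrane currents.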
A discussion on the necessity for Open Source Hardware in neuroscience research
Research tools are paramount for scientific development: they enable researchers to observe and manipulate natural phenomena, learn their principles, make predictions, develop new technologies and treatments, and improve living standards. Due to their costs and the geographical distribution of manufacturing companies, access to these tools is not widely available, hindering the pace of research and the ability of many communities to contribute to science and education and reap their benefits. One possible solution is to create research tools under the open source ethos, where all documentation about them (including their designs and their building and operating instructions) is made freely available. Dubbed Open Science Hardware (OSH), this production method follows the established and successful principles of open source software and brings many advantages over traditional approaches: economic savings (see Pearce 2020 for the potential savings of developing open source research tools), distributed manufacturing, repairability and higher customizability. The method has been greatly facilitated by recent developments in fast prototyping tools, Internet infrastructure, documentation platforms and lower-cost off-the-shelf electronic components. Taken together, these benefits have the potential to make research more inclusive, equitable and distributed and, most importantly, more reliable and reproducible, as: 1) researchers can know their tools' inner workings in minute detail; 2) they can calibrate their tools before every experiment and keep them running in optimal condition every time; 3) given the lower price point, students can be trained with hands-on classes, and several copies of the same instrument can be built, parallelizing data collection and yielding more robust datasets; and 4) labs across the world can share the exact same type of instrument and create collaborative projects with standardized data collection and sharing.
Open Neuroscience: Challenging scientific barriers with Open Source & Open Science tools
The Open Science movement advocates for more transparent, equitable and reliable science. It focusses on improving existing infrastructures and spans all aspects of the scientific process, from implementing systems that reward pre-registering studies and guarantee their publication to making research data citable and freely available. In this context, open source tools (and the development ethos supporting them) are becoming increasingly common in academic labs, as researchers realize that these tools can improve the quality of their work while cutting costs. This talk will give an overview of open source tools for neuroscience, with a focus on software and hardware, and on how their use can bring scientific independence and accelerate research.