Introduction to protocols.io: Scientific collaboration through open protocols
Research articles and laboratory protocols often lack the detailed instructions needed to replicate experiments. protocols.io is an open-access platform where researchers collaboratively create dynamic, interactive, step-by-step protocols that can be executed on mobile devices or the web. Researchers can easily and efficiently share protocols with colleagues, collaborators, or the wider scientific community, or make them public. Real-time communication and interaction keep protocols up to date. Public protocols receive a DOI, and open communication with authors and other researchers fosters efficient experimentation and reproducibility.
The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments
Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of these instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. They can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions 300 mm × 300 mm × 450 mm. We propose to enable accessible SIM imaging by utilising its compact, lightweight and robust design to transport it where it is needed, and image in “unconventional” environments where factors such as temperature and biosafety considerations currently limit imaging experiments.
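The abstract does not spell out the OS-SIM reconstruction, but the classic approach is three-phase square-law demodulation (Neil, Juškaitis & Wilson, 1997): three widefield frames are acquired with the illumination grid shifted by 0, 2π/3, and 4π/3, and combining their pairwise differences rejects out-of-focus light. A minimal NumPy sketch of that step (function name and scaling convention are my own):

```python
import numpy as np

def os_sim_section(i1, i2, i3):
    """Three-phase square-law demodulation for optical sectioning.
    i1, i2, i3: frames with grid phases 0, 2*pi/3, 4*pi/3.
    The sqrt(2)/3 factor normalizes the output to the fringe
    modulation amplitude; only in-focus structure is modulated,
    so out-of-focus background cancels."""
    return np.sqrt(2.0) / 3.0 * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
```

Applied to synthetic frames `A + B*cos(phi + k*2*pi/3)`, the result recovers the modulation amplitude `B` regardless of the local grid phase `phi`.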
OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
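The abstract mentions automatic control and optimization of PID parameters through the Python API. The actual OpenSPM API is not shown here; as a generic illustration of the kind of feedback loop being scripted, a minimal discrete PID controller (all names and gains are illustrative):

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch,
    not the OpenSPM API)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        """Return the control output for one sample of the feedback loop."""
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

A scriptable controller like this is what makes automated gain tuning possible: an outer optimization loop can adjust `kp`, `ki`, and `kd` between scans and score the resulting error signal.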
Toward globally accessible neuroimaging: Building the OSI2ONE MRI Scanner in Paraguay
The Open Source Imaging Initiative has recently released a fully open source low field MRI scanner called the OSI2ONE. We are currently building this system at the Universidad Paraguayo Alemana in Asuncion, Paraguay for a neuroimaging project at a clinic in Bolivia. I will discuss the process of construction, important considerations before you build, and future work planned with this device.
OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging
Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
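The step from multi-wavelength absorption to chromophore concentrations is a Beer-Lambert linear inversion: at each pixel, absorption at N wavelengths is modeled as a weighted sum of chromophore extinction spectra and solved for the concentrations. A sketch of that final step only (the extinction coefficients below are placeholder numbers for illustration, not tabulated values, and the model-fitting that produces the absorption map in real SFDI is omitted):

```python
import numpy as np

# Rows: wavelengths; columns: [HbO2, Hb].
# Placeholder extinction coefficients -- NOT real tabulated values.
EPS = np.array([[0.30, 0.80],
                [0.60, 0.40]])

def chromophore_concentrations(mu_a):
    """Solve mu_a = EPS @ c for the chromophore concentration vector c,
    given absorption coefficients measured at each wavelength."""
    return np.linalg.solve(EPS, mu_a)
```

With more wavelengths than chromophores, the same idea becomes a least-squares fit (`np.linalg.lstsq`), which is how broadband measurements improve robustness.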
Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging
Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in the Boston University Computational Imaging Systems lab: Intensity Diffraction Tomography and the Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked with a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. This platform allows the incorporation of optogenetic stimulation and fluorescence imaging. The design allows a single host computer to control multiple platforms for training multiple animals simultaneously.
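In delay conditioning paradigms like eyeblink conditioning, each trial presents a conditioned stimulus (CS) followed by the unconditioned stimulus (US) at a fixed interstimulus interval, with a jittered inter-trial interval so the animal cannot predict trial onset. A sketch of that trial-timing logic (the interval values here are illustrative, not the paper's parameters):

```python
import random

def trial_times(n_trials, isi_ms=250, iti_ms=(8000, 12000), seed=0):
    """Generate (CS onset, US onset) times in ms for a delay
    conditioning session: US follows CS by a fixed interstimulus
    interval; inter-trial interval is randomized within a range."""
    rng = random.Random(seed)  # seeded for a reproducible session
    t, trials = 0, []
    for _ in range(n_trials):
        trials.append((t, t + isi_ms))
        t += isi_ms + rng.randint(*iti_ms)
    return trials
```

A schedule like this is what the host computer would hand to each platform, which is also why one computer can drive several rigs: the timing-critical stimulus delivery runs locally while the schedule is precomputed.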
PiSpy: An Affordable, Accessible, and Flexible Imaging Platform for the Automated Observation of Organismal Biology and Behavior
A great deal of understanding can be gleaned from direct observation of organismal growth, development, and behavior. However, direct observation can be time consuming and influence the organism through unintentional stimuli. Additionally, video capturing equipment can often be prohibitively expensive, difficult to modify to one’s specific needs, and may come with unnecessary features. Here, we describe the PiSpy, a low-cost, automated video acquisition platform that uses a Raspberry Pi computer and camera to record video or images at specified time intervals or when externally triggered. All settings and controls, such as programmable light cycling, are accessible to users with no programming experience through an easy-to-use graphical user interface. Importantly, the entire PiSpy system can be assembled for less than $100 using laser-cut and 3D-printed components. We demonstrate the broad applications and flexibility of the PiSpy across a range of model and non-model organisms. Designs, instructions, and code can be accessed through an online repository, where a global community of PiSpy users can also contribute their own unique customizations and help grow the community of open-source research solutions.
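The core of a time-lapse platform like this is a capture loop that fires at fixed intervals without accumulating drift (naively sleeping for the interval after each capture drifts by the capture duration every frame). A generic drift-compensating sketch, with `capture` standing in for a hypothetical camera call (this is not PiSpy's actual code):

```python
import time

def capture_at_intervals(capture, interval_s, n_frames,
                         sleep=time.sleep, clock=time.monotonic):
    """Call capture() every interval_s seconds for n_frames frames.
    Sleeping until an absolute target time, rather than for a fixed
    duration, prevents per-frame drift from accumulating."""
    start = clock()
    for i in range(n_frames):
        capture()
        target = start + (i + 1) * interval_s
        delay = target - clock()
        if delay > 0:
            sleep(delay)
```

On a Raspberry Pi, `capture` would wrap the camera library's still-capture call; injecting `sleep` and `clock` keeps the scheduling logic testable without hardware.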
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
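GeNN itself generates CUDA/OpenCL code from a model description rather than integrating equations in Python, so the snippet below is not the GeNN API; it is a plain NumPy illustration of the kind of dynamics such simulators accelerate: one Euler step of a population of leaky integrate-and-fire neurons (parameter values are conventional defaults, in mV and ms):

```python
import numpy as np

def lif_step(v, i_syn, spiked, dt=1.0, tau=20.0,
             v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """One Euler step of leaky integrate-and-fire dynamics for a
    whole population at once. Neurons that spiked on the previous
    step are reset, then all membrane potentials decay toward
    v_rest plus the synaptic drive; a spike is emitted when the
    potential crosses v_thresh."""
    v = np.where(spiked, v_reset, v)
    v = v + dt / tau * (v_rest - v + i_syn)
    spiked = v >= v_thresh
    return v, spiked
```

Vectorizing over the population, as here, is the CPU analogue of what GeNN's generated kernels do across GPU threads, one neuron per thread.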
Building a Simple and Versatile Illumination System for Optogenetic Experiments
Controlling biological processes with light has increased the accuracy and speed with which researchers can manipulate them. Optical control allows for an unprecedented ability to dissect function and holds the potential for enabling novel genetic therapies. However, optogenetic experiments require adequate light sources with spatial, temporal, or intensity control, often a bottleneck for researchers. Here we detail how to build a low-cost and versatile LED illumination system that is easily customizable for different available optogenetic tools. This system is configurable for manual or computer control with adjustable LED intensity. We provide an illustrated step-by-step guide for building the circuit, making it computer-controlled, and constructing the LEDs. To facilitate the assembly of this device, we also discuss some basic soldering techniques and explain the circuitry used to control the LEDs. Using our open-source user interface, users can automate precise timing and pulsing of light on a personal computer (PC) or an inexpensive tablet. This automation makes the system useful for experiments that use LEDs to control genes, signaling pathways, and other cellular activities that span large time scales. For this protocol, no prior expertise in electronics is required to build all the parts needed or to use the illumination system to perform optogenetic experiments.
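Automated "timing and pulsing of light" boils down to generating a square-wave schedule from a period, a duty cycle, and a total duration, which the controller then replays against the LED driver. A minimal sketch of that schedule generation (illustrative, not the project's actual interface):

```python
def pulse_schedule(period_s, duty, duration_s):
    """Return a list of (time_s, state) events describing a square
    pulse train: LED on (1) for duty * period_s, then off (0) for
    the remainder of each period, repeated over duration_s."""
    events = []
    t = 0.0
    while t < duration_s:
        events.append((round(t, 9), 1))                    # LED on
        events.append((round(t + duty * period_s, 9), 0))  # LED off
        t += period_s
    return events
```

For example, `pulse_schedule(1.0, 0.25, 3.0)` yields three on/off cycles of 250 ms light per second, the sort of slow pulsing used for optogenetic gene-expression experiments that span hours.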
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and compatibility with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a Jupyter notebook, a well-commented interactive development environment (IDE) designed to operate across platforms. GuPPy presents the user with a set of graphic user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
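A central computation in most FP pipelines, GuPPy's included, is converting the raw fluorescence trace into ΔF/F: the isosbestic control channel is fit to the signal channel to estimate motion and bleaching artifacts, and the residual is normalized by that fitted baseline. A generic sketch of the idea (GuPPy's own implementation and parameters may differ):

```python
import numpy as np

def dff(signal, control):
    """Compute delta-F/F from a signal channel and an isosbestic
    control channel: linearly fit the control to the signal, treat
    the fit as the artifact baseline, and normalize the residual."""
    a, b = np.polyfit(control, signal, 1)
    fitted = a * control + b
    return (signal - fitted) / fitted
```

Because the isosbestic wavelength is insensitive to the sensor's activity, anything the fit explains is artifact; what survives in ΔF/F is the activity-dependent fluorescence.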
ReproNim: Towards a culture of more reproducible neuroimaging research
Given the intrinsically large and complex data sets collected in neuroimaging research, coupled with the extensive array of shared data and tools amassed in the research community, ReproNim seeks to lower the barriers to the efficient use of data; the description of data and process; the use of standards and best practices; and the sharing and subsequent reuse of the collective ‘big’ data. Aggregation of data and reuse of analytic methods have become critical in addressing concerns about the replicability and power of many of today’s neuroimaging studies.
Get more from your ISH brain slices with Stalefish
The standard method for staining structures in the brain is to slice the brain into 2D sections. Each slice is treated using a technique such as in-situ hybridization to examine the spatial expression of a particular molecule at a given developmental timepoint. Depending on the brain structures being studied, slices can be made coronally, sagittally, or at any angle that is thought to be optimal for analysis. However, assimilating the information presented in the 2D slice images to gain quantitative and informative 3D expression patterns is challenging. Even if expression levels are presented as voxels, to give 3D expression clouds, it can be difficult to compare expression across individuals, and analysing such data requires significant expertise and imagination. In this talk, I will describe a new approach to examining histology slices, in which the user defines the brain structure of interest by drawing curves around it on each slice in a set, along with the depth of tissue from which to sample expression. The sampled 'curves' are then assembled into a 3D surface, which can be transformed onto a common reference frame for comparative analysis. I will show how other neuroscientists can obtain and use the tool, which is called Stalefish, to analyse their own image data with no (or minimal) changes to their slice preparation workflow.
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed to not only give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition to lower barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to make a communal knowledge system that gives a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections, but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it.
I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
Open-source tools for systems neuroscience
Open-source tools are gaining an increasing foothold in neuroscience. The rising complexity of experiments in systems neuroscience has led to a need for multiple parts of experiments to work together seamlessly. This means that open-source tools that freely interact with each other and can be understood and modified more easily allow scientists to conduct better experiments with less effort than closed tools. Open Ephys is an organization with team members distributed all around the world. Our mission is to advance our understanding of the brain by promoting community ownership of the tools we use to study it. We are making and distributing cutting-edge tools that exploit modern technology to bring down the price and complexity of neuroscience experiments. A large component of this is taking tools developed in academic labs and helping with documentation, support, and distribution. More recently, we have been working on bringing high-quality manufacturing, distribution, warranty, and support to open-source tools by partnering with OEPS in Portugal. We are now also establishing standards that make it possible to seamlessly combine methods such as miniaturized microscopes, electrode drive implants, and silicon probes in one system. In the longer term, our development of new tools and interfaces, together with our standardization efforts, aims to make it possible for scientists to easily run complex experiments that span complex behaviors and tasks, multiple recording modalities, and easy access to data processing pipelines.