Hardware
Open Hardware Microfluidics
What’s the point of having scientific and technological innovations when only a few can benefit from them? How can we make science more inclusive? Those questions are always in the back of my mind when we do research in our laboratory, and we place a strong focus on the accessibility of the methods we develop, from microfabrication to sensor development.
Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms
Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow field of view permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider field of view compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a 2-axis autonomous tracking microscope designed to follow swimming organisms from 10 µm to 2 mm in size across a 325 cm² area for hours to days at high resolution. Using Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well suited for diverse settings, from high school labs to resource-constrained research environments. Its ability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
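The heart of such a tracking microscope is a simple closed loop: segment the organism in each camera frame, measure how far its centroid sits from the image center, and command the XY stage to cancel that error. The sketch below illustrates the idea in Python; it is not the Trackoscope codebase, and the camera calibration, proportional gain, and stage interface are assumptions made for the example.

```python
# Illustrative tracking step: find the organism's centroid and compute the
# stage move (in mm) that re-centers it. Calibration and gain are assumed.
import numpy as np
import cv2

PIXELS_PER_MM = 200.0   # assumed camera calibration
GAIN = 0.5              # proportional gain, < 1 to avoid overshoot

def stage_correction(frame_gray):
    """Return (dx_mm, dy_mm) that would re-center the segmented organism."""
    # Otsu threshold separates the (bright) organism from the background.
    _, mask = cv2.threshold(frame_gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return 0.0, 0.0                       # nothing detected: hold position
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = frame_gray.shape
    err_x, err_y = cx - w / 2, cy - h / 2     # pixel error from image center
    return GAIN * err_x / PIXELS_PER_MM, GAIN * err_y / PIXELS_PER_MM

# Demo on a synthetic frame with a bright blob offset from center.
frame = np.zeros((480, 640), dtype=np.uint8)
cv2.circle(frame, (420, 300), 15, 255, -1)
print(stage_correction(frame))   # a stage driver would consume these offsets
```

In the real instrument a correction like this is applied every frame, so the organism stays near the center of the field while the recorded stage positions reconstruct its long-range trajectory.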
A Breakdown of the Global Open Science Hardware (GOSH) Movement
This seminar, hosted by the LIBRE hub project, will provide an in-depth introduction to the Global Open Science Hardware (GOSH) movement. Since its inception, GOSH has been instrumental in advancing open-source hardware within scientific research, fostering a diverse and active community. The seminar will cover the history of GOSH, its current initiatives, and future opportunities, with a particular focus on the contributions and activities of the Latin American branch. This session aims to inform researchers, educators, and policy-makers about the significance and impact of GOSH in promoting accessibility and collaboration in science instrumentation.
Open source FPGA tools for building research devices
Edmund will present why to use FPGAs when building scientific instruments; when and why to use open-source FPGA tools; the history and current status of their development; the FPGA families and functions currently supported; current developments in design languages and tools; the community; freely available design blocks; and possible future developments.
OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging
Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel-by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
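Numerically, the first step in any SFDI pipeline is demodulation: three images of the projected sinusoidal pattern, phase-shifted by 0, 120, and 240 degrees, are combined pixel by pixel into AC (spatially modulated) and DC (planar) reflectance maps. The sketch below shows that standard step in Python; it is illustrative and not code from the openSFDI repository.

```python
# Standard three-phase SFDI demodulation, shown for illustration only.
import numpy as np

def demodulate(i1, i2, i3):
    """i1, i2, i3: images acquired at 0, 120, and 240 degree pattern phases."""
    i1, i2, i3 = (np.asarray(i, dtype=float) for i in (i1, i2, i3))
    # AC amplitude of the modulated pattern at each pixel
    m_ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    # DC (planar-illumination) component at each pixel
    m_dc = (i1 + i2 + i3) / 3.0
    return m_ac, m_dc
```

The demodulated maps are then calibrated against a phantom of known optical properties and fed to a diffusion or Monte Carlo model to recover absorption and reduced scattering at every pixel.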
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice locomoting freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked with a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. The platform allows the incorporation of optogenetic stimulation and fluorescence imaging, and its design lets a single host computer control multiple platforms for training several animals simultaneously.
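For eyeblink conditioning, the camera video is typically reduced to an eyelid-position trace before conditioned responses are scored. One common approach is to average pixel intensity in a small region over the eye in each frame and normalize it between values measured with the eye fully open and fully closed; the sketch below is a hypothetical illustration of that analysis step, not the published protocol's code.

```python
# Hypothetical eyelid-trace extraction from grayscale video frames.
import numpy as np

def eyelid_trace(frames, roi, open_val, closed_val):
    """frames: (n_frames, height, width) array; roi: (y0, y1, x0, x1) over the eye.
    open_val/closed_val: mean ROI intensities measured with the eye open/closed.
    Returns the fraction of eyelid closure per frame (0 = open, 1 = closed)."""
    y0, y1, x0, x1 = roi
    raw = frames[:, y0:y1, x0:x1].mean(axis=(1, 2))
    return np.clip((raw - open_val) / (closed_val - open_val), 0.0, 1.0)
```

A conditioned response can then be scored as a closure exceeding some threshold in the window between the conditioned and unconditioned stimuli.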
Measuring the Motions of Mice: Open source tracking with the KineMouse Wheel
Who says you can't reinvent the wheel?! This running wheel for head-fixed mice allows 3D reconstruction of body kinematics using a single camera and DeepLabCut (or similar) software. A lightweight, transparent polycarbonate floor and a mirror mounted on the inside allow two views to be captured simultaneously. All parts are commercially available or laser-cut.
Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity across multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (300+ days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular- and subcellular-resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions, and transitions from wakefulness to sleep.
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks and so address the challenge of efficient simulation. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library, but we have since added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other open-source frameworks such as Brian2GeNN and PyNN.
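To give a flavor of the Python interface, the sketch below builds and steps a small population of leaky integrate-and-fire neurons with PyGeNN. It follows the PyGeNN 4.x tutorial style; class, parameter, and method names can differ between GeNN versions, so treat it as an approximate illustration rather than reference usage.

```python
# Minimal PyGeNN sketch (4.x-style API): a population of 100 LIF neurons
# whose resting potential sits above threshold, so they fire tonically.
from pygenn.genn_model import GeNNModel

model = GeNNModel("float", "lif_demo")
model.dT = 1.0  # simulation timestep in ms

lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -49.0, "Vreset": -60.0,
              "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 5.0}
lif_init = {"V": -55.0, "RefracTime": 0.0}

pop = model.add_neuron_population("neurons", 100, "LIF", lif_params, lif_init)

model.build()   # generate and compile the simulation code
model.load()

while model.t < 200.0:          # simulate 200 ms
    model.step_time()

pop.pull_var_from_device("V")   # copy membrane voltages back to host memory
print(pop.vars["V"].view[:5])
```

The same model description is then compiled for whichever backend is available, which is what lets the generated code run efficiently on GPUs without the user writing CUDA or OpenCL by hand.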
Building a Simple and Versatile Illumination System for Optogenetic Experiments
Controlling biological processes with light has increased the accuracy and speed with which researchers can manipulate them. Optical control allows an unprecedented ability to dissect function and holds the potential for enabling novel genetic therapies. However, optogenetic experiments require adequate light sources with spatial, temporal, or intensity control, which is often a bottleneck for researchers. Here we detail how to build a low-cost and versatile LED illumination system that is easily customizable for different available optogenetic tools. This system is configurable for manual or computer control with adjustable LED intensity. We provide an illustrated step-by-step guide for building the circuit, making it computer-controlled, and constructing the LEDs. To facilitate the assembly of this device, we also discuss some basic soldering techniques and explain the circuitry used to control the LEDs. Using our open-source user interface, users can automate precise timing and pulsing of light on a personal computer (PC) or an inexpensive tablet. This automation makes the system useful for experiments that use LEDs to control genes, signaling pathways, and other cellular activities that span long time scales. For this protocol, no prior expertise in electronics is required to build all the parts needed or to use the illumination system to perform optogenetic experiments.
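For readers who prefer to script such a system rather than use a graphical interface, the usual pattern is a host computer sending timed on/off (or intensity) commands to a microcontroller that drives the LEDs. The sketch below shows that pattern with pyserial; the serial port and the "ON"/"OFF" command strings are hypothetical placeholders, not the protocol of the published illumination system.

```python
# Hypothetical host-side pulsing script for a serial-controlled LED driver.
import time
import serial  # pyserial

def pulse_led(port="/dev/ttyACM0", pulse_s=1.0, period_s=60.0, n_pulses=60):
    """Deliver n_pulses light pulses of pulse_s seconds, one every period_s seconds."""
    with serial.Serial(port, 9600, timeout=1) as link:
        time.sleep(2)                    # allow the microcontroller to reset
        for _ in range(n_pulses):
            link.write(b"ON\n")          # hypothetical 'LED on' command
            time.sleep(pulse_s)
            link.write(b"OFF\n")         # hypothetical 'LED off' command
            time.sleep(period_s - pulse_s)

if __name__ == "__main__":
    pulse_led()   # e.g. one 1 s pulse per minute for an hour
```

Long optogenetic protocols, with periodic pulses delivered over hours, are exactly where this kind of scripted control pays off.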
The Open-Source UCLA Miniscope Project
The Miniscope Project, an open-source collaborative effort, was created to accelerate innovation in miniature microscope technology and to increase global access to this technology. Currently, we are working on advancements ranging from optogenetic stimulation and wire-free operation to simultaneous optical and electrophysiological recording. Using these systems, we have uncovered mechanisms underlying temporal memory linking and investigated causes of cognitive deficits in temporal lobe epilepsy. Through innovation and optimization, this work aims to extend the reach of neuroscience research and create new avenues of scientific inquiry.
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed not only to give researchers a tool for the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition, lowering barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. We are currently preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to build a communal knowledge system with a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open-source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it. I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
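As a concrete (and purely hypothetical) picture of what a community-contributed plugin might look like, the skeleton below sketches a hardware-interface class for a custom reward pump. The class and method names are illustrative assumptions, not Autopilot's actual API; the real base classes and plugin mechanics are documented in the Autopilot docs and wiki.

```python
# Hypothetical hardware-plugin skeleton; names are illustrative, not Autopilot API.
class SyringePump:
    """A custom reward pump a lab might contribute as a reusable hardware interface."""

    def __init__(self, port: str, ul_per_step: float = 0.5):
        self.port = port                # serial port the pump is attached to
        self.ul_per_step = ul_per_step  # calibration: microliters per motor step

    def dispense(self, volume_ul: float) -> int:
        """Convert a requested volume into motor steps; a real plugin would
        drive the pump here, while this sketch just returns the step count."""
        return int(round(volume_ul / self.ul_per_step))

    def release(self) -> None:
        """Free the port so another task can reuse the hardware."""
        pass


if __name__ == "__main__":
    pump = SyringePump(port="/dev/ttyUSB0")
    print(pump.dispense(5.0))  # 10 steps for 5 µL at 0.5 µL per step
```

The point of the plugin system is that an interface like this, once written and documented on the wiki, can be reused and cited by any other lab running a similar task.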
Creating and controlling visual environments using BonVision
Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but it is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as easy-to-use, open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and it can support new experimental designs in other animal models of vision. Because the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
OpenFlexure
OpenFlexure is a 3D-printed flexure translation stage developed by a group at the University of Bath. The stage is capable of sub-micron motion with very small drift over time, which makes it well suited, among other things, to time-lapse protocols that run over days or weeks and to space-restricted environments such as fume hoods.
Open-source tools for systems neuroscience
Open-source tools are gaining an increasing foothold in neuroscience. The rising complexity of experiments in systems neuroscience has led to a need for multiple parts of experiments to work together seamlessly. Open-source tools that freely interact with each other, and that can be understood and modified more easily, therefore allow scientists to conduct better experiments with less effort than closed tools. Open Ephys is an organization with team members distributed all around the world. Our mission is to advance our understanding of the brain by promoting community ownership of the tools we use to study it. We make and distribute cutting-edge tools that exploit modern technology to bring down the price and complexity of neuroscience experiments. A large component of this is taking tools that were developed in academic labs and helping with their documentation, support, and distribution. More recently, we have been working to bring high-quality manufacturing, distribution, warranty, and support to open-source tools by partnering with OEPS in Portugal. We are now also establishing standards that make it possible to seamlessly combine methods such as miniaturized microscopes, electrode drive implants, and silicon probes in one system. In the longer term, our development of new tools and interfaces, together with our standardization efforts, aims to make it possible for scientists to easily run complex experiments spanning complex behaviors and tasks, multiple recording modalities, and readily accessible data processing pipelines.
Feeding Experimentation Device version 3 (FED3)
FED3 is a device for behavioral training of mice in vivarium home-cages. Mice interact with FED3 through two nose-pokes, and FED3 responds with visual stimuli, auditory stimuli, and by dispensing pellets. Because it operates in the home-cage, FED3 can provide around-the-clock training of mice over several weeks. FED3 is open-source and can be built by users for roughly 10-20x less than commercial solutions for training mice. The control code is also open-source and was designed to be easily modified by users.
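To illustrate the kind of task logic the control code implements, the sketch below shows a fixed-ratio-1 (FR1) session, in which each poke on the active port earns one pellet. FED3's real firmware is Arduino C++; this Python version with a simulated device is only a schematic of the task structure.

```python
# Schematic FR1 task logic with a simulated device; not the FED3 firmware.
import random

class SimulatedFED3:
    def wait_for_poke(self):
        # Stand-in for the two nose-poke sensors on the real device.
        return random.choice(["left", "right"])
    def dispense_pellet(self):
        print("pellet dispensed")
    def log(self, event):
        print("logged:", event)

def fr1_session(device, n_trials=10):
    for _ in range(n_trials):
        side = device.wait_for_poke()
        if side == "left":            # treat the left poke as the active poke
            device.dispense_pellet()
            device.log("active_poke")
        else:                         # inactive pokes are logged but unrewarded
            device.log("inactive_poke")

fr1_session(SimulatedFED3())
```

Because the logic is this compact, swapping in a different schedule (a higher fixed ratio, a progressive ratio, or time-restricted feeding) is mostly a matter of changing a few lines, which is what makes an easily modifiable, open control codebase attractive.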
An open-source experimental framework for automation of cell biology experiments
Modern biological methods often require a large number of experiments to be conducted. For example, dissecting the molecular pathways involved in a variety of biological processes in neurons and non-excitable cells requires high-throughput compound library or RNAi screens. Another example requiring large datasets is modern data analysis methods such as deep learning, which have been successfully applied to a number of biological and medical questions. In this talk we will describe an open-source platform that allows such experiments to be automated. The platform consists of an XY stage, a perfusion system, and an epifluorescence microscope with autofocusing. It is extremely easy to build and can be used for different experimental paradigms, ranging from immunolabeling and routine characterisation of large numbers of cell lines to high-throughput imaging of fluorescent reporters.
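Software autofocus is the piece that makes unattended imaging of many positions practical. A common approach is to step the objective through a small z-range, score each frame with a sharpness metric such as the variance of the Laplacian, and move to the best-scoring position. The sketch below illustrates that approach; it is not the platform's actual code, and the `stage` and `camera` objects are hypothetical driver placeholders.

```python
# Illustrative software autofocus by z-scanning with a Laplacian sharpness metric.
import numpy as np
import cv2

def focus_score(img_gray):
    # Sharper images have stronger high-frequency content, hence a larger
    # variance of the Laplacian.
    return cv2.Laplacian(img_gray, cv2.CV_64F).var()

def autofocus(stage, camera, z_center, z_range=50.0, n_steps=21):
    """Scan n_steps positions around z_center (in the stage's units, e.g. µm)
    and move to the sharpest one."""
    zs = np.linspace(z_center - z_range / 2, z_center + z_range / 2, n_steps)
    scores = []
    for z in zs:
        stage.move_z(z)                            # hypothetical stage driver call
        scores.append(focus_score(camera.snap()))  # hypothetical camera call
    best_z = float(zs[int(np.argmax(scores))])
    stage.move_z(best_z)
    return best_z
```

In a multi-well or multi-position run, the platform would call a routine like this at each XY position before acquiring the final image.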