Latest

Seminar · Open Source

Computational bio-imaging via inverse scattering

Shwetadwip Chowdhury
Assistant Professor, University of Texas at Austin
Nov 25, 2025

Optical imaging is a major research tool in the basic sciences, and is the only imaging modality that routinely enables non-ionizing imaging at subcellular spatial resolutions and high imaging speeds. In biological imaging applications, however, optical imaging is limited by tissue scattering to short imaging depths. This precludes large-scale bio-imaging, allowing visualization of only the outer superficial layers of an organism, or of specific components isolated from within the organism and prepared in vitro.

Seminar · Open Source

Scaling Up Bioimaging with Microfluidic Chips

Tobias Wenzel
Institute for Biological and Medical Engineering (IIBM), Pontificia Universidad Católica de Chile.
Sep 5, 2025

Explore how microfluidic chips can enhance your imaging experiments by increasing control, throughput, or flexibility. In this remote, personalized workshop, participants will receive expert guidance, support and chips to run tests on their own microscopes.

Seminar · Open Source

The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments

Rebecca McClelland
PhD student at the University of Cambridge, United Kingdom.
Aug 26, 2025

Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of these instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. They can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions 300 mm × 300 mm × 450 mm. We propose to enable accessible SIM imaging by utilising its compact, lightweight and robust design to transport it where it is needed, and image in “unconventional” environments where factors such as temperature and biosafety considerations currently limit imaging experiments.

Seminar · Open Source

A Focus on 3D Printed Lenses: Rapid prototyping, low-cost microscopy and enhanced imaging for the life sciences

Liam Rooney
University of Glasgow
May 22, 2025

High-quality glass lenses are commonplace in the design of optical instrumentation used across the biosciences. However, research-grade glass lenses are often costly, delicate and, depending on the prescription, can involve intricate and lengthy manufacturing, even more so in bioimaging applications. This seminar will outline 3D printing as a viable low-cost alternative for the manufacture of high-performance optical elements; I will also discuss the creation of the world’s first fully 3D printed microscope and other implementations of 3D printed lenses. Our 3D printed lenses were produced using consumer-grade 3D printers and offer a 225× materials cost saving compared to glass optics. Moreover, they can be made in any lab or home environment and offer great potential for education and outreach. Following performance validation, our 3D printed optics were used to build a fully 3D printed microscope and were demonstrated in histological imaging applications. We also applied low-cost fabrication methods to exotic lens geometries to enhance resolution and contrast across spatial scales and reveal new biological structures. Across these applications, our findings showed that 3D printed lenses are a viable substitute for commercial glass lenses, with the advantage of being low-cost, accessible, and suitable for use in optical instruments. Combining 3D printed lenses with open-source 3D printed microscope chassis designs opens the door to low-cost rapid prototyping, low-resource field diagnostics, and cheap educational tools.

Seminar · Open Source · Recording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 9, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/

Seminar · Open Source

Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller

Kevin Uning
London Centre for Nanotechnology (LCN), University College London (UCL)
Nov 29, 2024

Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost of Raman systems and their lack of modularity often limit exploratory research, hindering broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.

Seminar · Open Source

Sometimes more is not better: The case of medical imaging

Marcelo Andia
Associate Professor
Nov 20, 2024

In diagnostic medical imaging, technical development has often concentrated on improving image quality in terms of spatial and/or temporal resolution, which has frequently driven up the cost of these services considerably. However, better spatial and/or temporal resolution in medical images does not necessarily translate into better or earlier diagnoses, and in some cases new diagnostic capabilities have not demonstrated an impact on reducing the mortality associated with the corresponding pathologies. In this presentation, we will discuss how the impact of new health technologies should be measured in terms of the clinical outcome of the patient or the affected population, rather than in parameters associated with image "quality".

Seminar · Open Source · Recording

Trackoscope: A low-cost, open, autonomous tracking microscope for long-term observations of microscale organisms

Priya Soneji
Georgia Institute of Technology
Oct 8, 2024

Cells and microorganisms are motile, yet the stationary nature of conventional microscopes impedes comprehensive, long-term behavioral and biomechanical analysis. The limitations are twofold: a narrow focus permits high-resolution imaging but sacrifices the broader context of organism behavior, while a wider focus compromises microscopic detail. This trade-off is especially problematic when investigating rapidly motile ciliates, which often have to be confined to small volumes between coverslips, affecting their natural behavior. To address this challenge, we introduce Trackoscope, a two-axis autonomous tracking microscope designed to follow swimming organisms ranging from 10 µm to 2 mm across a 325 square centimeter area for extended durations, from hours to days, at high resolution. Utilizing Trackoscope, we captured a diverse array of behaviors, from the air-water swimming locomotion of Amoeba to bacterial hunting dynamics in Actinosphaerium, walking gait in Tardigrada, and binary fission in motile Blepharisma. Trackoscope is a cost-effective solution well-suited for diverse settings, from high school labs to resource-constrained research environments. Its capability to capture diverse behaviors in larger, more realistic ecosystems extends our understanding of the physics of living systems. The low-cost, open architecture democratizes scientific discovery, offering a dynamic window into the lives of previously inaccessible small aquatic organisms.
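The closed-loop tracking described above reduces, at each frame, to estimating the organism's offset from the image centre and commanding the stage to cancel it. The following is a minimal sketch of that estimation step, not Trackoscope's actual code; the function name and the simple brightness threshold are assumptions for illustration.

```python
import numpy as np

def centroid_offset(frame, threshold):
    """Return the (dy, dx) offset of the bright object's centroid from
    the frame centre; a tracking loop would then command the stage to
    move so as to cancel this offset and keep the organism centred."""
    mask = frame > threshold
    if not mask.any():
        return None  # organism lost; caller should widen the search
    ys, xs = np.nonzero(mask)
    h, w = frame.shape
    return ys.mean() - (h - 1) / 2.0, xs.mean() - (w - 1) / 2.0

# A blob above and to the right of centre: dy negative, dx positive.
frame = np.zeros((101, 101))
frame[30:35, 70:75] = 1.0
dy, dx = centroid_offset(frame, 0.5)
```

In a real system this runs once per camera frame, with the returned offset converted to stage steps through a calibration factor.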

Seminar · Open Source

Toward globally accessible neuroimaging: Building the OSI2ONE MRI Scanner in Paraguay

Joshua Harper
Professor of Engineering
Jun 18, 2024

The Open Source Imaging Initiative has recently released a fully open source low field MRI scanner called the OSI2ONE. We are currently building this system at the Universidad Paraguayo Alemana in Asuncion, Paraguay for a neuroimaging project at a clinic in Bolivia. I will discuss the process of construction, important considerations before you build, and future work planned with this device.

Seminar · Open Source · Recording

OpenSFDI: an open hardware project for label-free measurements of tissue optical properties with spatial frequency domain imaging

Darren Roblyer
Boston University
Jun 28, 2023

Spatial frequency domain imaging (SFDI) is a diffuse optical measurement technique that can quantify tissue optical absorption and reduced scattering on a pixel by-pixel basis. Measurements of absorption at different wavelengths enable the extraction of molar concentrations of tissue chromophores over a wide field, providing a noncontact and label-free means to assess tissue viability, oxygenation, microarchitecture, and molecular content. In this talk, I will describe openSFDI, an open-source guide for building a low-cost, small-footprint, multi-wavelength SFDI system capable of quantifying absorption and reduced scattering as well as oxyhemoglobin and deoxyhemoglobin concentrations in biological tissue. The openSFDI project has a companion website which provides a complete parts list along with detailed instructions for assembling the openSFDI system. I will also review several technological advances our lab has recently made, including the extension of SFDI to the shortwave infrared wavelength band (900-1300 nm), where water and lipids provide strong contrast. Finally, I will discuss several preclinical and clinical applications for SFDI, including applications related to cancer, dermatology, rheumatology, cardiovascular disease, and others.
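SFDI systems of this kind typically project sinusoidal patterns at three phases spaced 120° apart and recover the per-pixel AC modulation amplitude with the standard three-phase demodulation formula. The sketch below illustrates that formula with numpy; the function name is ours, not taken from the openSFDI codebase.

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """Recover the per-pixel AC modulation amplitude from three images
    acquired under sinusoidal illumination phase-shifted by 120 degrees:
    M_AC = (sqrt(2)/3) * sqrt((I1-I2)^2 + (I2-I3)^2 + (I3-I1)^2)."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )

# Synthetic check: a pixel with DC offset 0.5 and AC amplitude 0.2
# yields a demodulated amplitude of 0.2 regardless of the DC term.
phases = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
frames = [0.5 + 0.2 * np.cos(p) for p in phases]
amplitude = demodulate_ac(*frames)
```

Repeating this at several spatial frequencies and wavelengths, and calibrating against a reference phantom, is what lets SFDI separate absorption from reduced scattering.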

Seminar · Open Source · Recording

An open-source miniature two-photon microscope for large-scale calcium imaging in freely moving mice

Weijian Zong
Kavli Institute for Systems Neuroscience, Norwegian University of Science and Technology
Sep 12, 2022

Because benchtop imaging is unsuitable for tasks that require unrestrained movement, investigators have tried, for almost two decades, to develop miniature 2P microscopes, or 2P miniscopes, that can be carried on the head of freely moving animals. In this talk, I will first briefly review the development history of this technique, and then report our latest progress on the new generation of 2P miniscopes, MINI2P, which overcomes the limits of previous versions by both meeting requirements for fatigue-free exploratory behavior during extended recording periods and increasing the cell yield by an order of magnitude, to thousands of neurons. The performance and reliability of MINI2P are validated by recordings of spatially tuned neurons in three brain regions and in three behavioral assays. All information about MINI2P is open access, with instruction videos, code, and manuals on public repositories, and workshops will be organized to help new users get started. MINI2P permits large-scale and high-resolution calcium imaging in freely moving mice, and opens the door to investigating brain functions during unconstrained natural behaviors.

Seminar · Open Source · Recording

Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging

Lei Tian
Department of Electrical and Computer Engineering, Boston University
Aug 22, 2022

Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in Boston University Computational Imaging Systems lab, including Intensity Diffraction Tomography and Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.

Seminar · Open Source · Recording

A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning

Gerard Joey Broussard
Princeton Neuroscience Institute
Jun 1, 2022

Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked using a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. This platform allows the incorporation of optogenetic stimulation and fluorescence imaging. The design allows a single host computer to control multiple platforms for training multiple animals simultaneously.

Seminar · Open Source · Recording

Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals

Suhasa Kodandaramaiah
University of Minnesota
May 4, 2022

Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior. However, there is a lack of tools that allow simultaneous monitoring and perturbation of neural activity in multiple cortical regions. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (300+ days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular- and subcellular-resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex with resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions and transitions from wakefulness to sleep.

Seminar · Open Source · Recording

PiSpy: An Affordable, Accessible, and Flexible Imaging Platform for the Automated Observation of Organismal Biology and Behavior

Gregory Pask and Benjamin Morris
Middlebury College
Apr 20, 2022

A great deal of understanding can be gleaned from direct observation of organismal growth, development, and behavior. However, direct observation can be time consuming and influence the organism through unintentional stimuli. Additionally, video capturing equipment can often be prohibitively expensive, difficult to modify to one’s specific needs, and may come with unnecessary features. Here, we describe the PiSpy, a low-cost, automated video acquisition platform that uses a Raspberry Pi computer and camera to record video or images at specified time intervals or when externally triggered. All settings and controls, such as programmable light cycling, are accessible to users with no programming experience through an easy-to-use graphical user interface. Importantly, the entire PiSpy system can be assembled for less than $100 using laser-cut and 3D-printed components. We demonstrate the broad applications and flexibility of the PiSpy across a range of model and non-model organisms. Designs, instructions, and code can be accessed through an online repository, where a global community of PiSpy users can also contribute their own unique customizations and help grow the community of open-source research solutions.
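At its core, timed acquisition of the kind PiSpy performs is an interval-capture loop; for multi-day recordings it matters that the loop schedules against a fixed start time so the duration of each capture does not accumulate as drift. The sketch below is our own illustration of that idea, not PiSpy's code; the `capture` callable stands in for the Raspberry Pi camera interface.

```python
import time

def run_timelapse(capture, interval_s, n_frames):
    """Call capture() every interval_s seconds, n_frames times.
    Each wait is computed from the recording's start time, so the
    time spent inside capture() never accumulates as drift."""
    start = time.monotonic()
    frames = []
    for i in range(n_frames):
        delay = start + i * interval_s - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        frames.append(capture())
    return frames

# With a stand-in "camera", three frames 10 ms apart:
frames = run_timelapse(lambda: "img", 0.01, 3)
```

External triggering (e.g. from a GPIO pin) would replace the fixed schedule with a blocking wait on the trigger event, but the capture path stays the same.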

Seminar · Open Source · Recording

Mesmerize: A blueprint for shareable and reproducible analysis of calcium imaging data

Kushal Kolar
University of North Carolina at Chapel Hill
Apr 6, 2022

Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. Mesmerize encompasses the entire process of calcium imaging analysis, from raw data to interactive visualizations, and allows you to create FAIR, functionally linked datasets that are easy to share. The analysis tools are applicable to a broad range of biological experiments and come with GUI interfaces that can be used without a programming background.

Seminar · Open Source · Recording

CaImAn: large-scale batch and online analysis of calcium imaging data

Andrea Giovannucci
University of North Carolina at Chapel Hill
Dec 8, 2021

Advances in fluorescence microscopy enable monitoring larger brain areas in-vivo with finer time resolution. The resulting data rates require reproducible analysis pipelines that are reliable, fully automated, and scalable to datasets generated over the course of months. We present CaImAn, an open-source library for calcium imaging data analysis. CaImAn provides automatic and scalable methods to address problems common to pre-processing, including motion correction, neural activity identification, and registration across different sessions of data collection. It does this while requiring minimal user intervention, with good scalability on computers ranging from laptops to high-performance computing clusters. CaImAn is suitable for two-photon and one-photon imaging, and also enables real-time analysis on streaming data. To benchmark the performance of CaImAn we collected and combined a corpus of manual annotations from multiple labelers on nine mouse two-photon datasets. We demonstrate that CaImAn achieves near-human performance in detecting locations of active neurons.
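CaImAn's motion correction is considerably more sophisticated (piecewise-rigid, subpixel), but the core of the rigid step can be illustrated as FFT-based cross-correlation against a template. The function below is a toy sketch under that assumption, with a name of our choosing; it is not CaImAn's API.

```python
import numpy as np

def rigid_shift_estimate(frame, template):
    """Estimate the integer (dy, dx) translation aligning `frame` to
    `template` via circular FFT cross-correlation. Rolling `frame`
    by the returned shift restores the template."""
    corr = np.fft.ifft2(np.fft.fft2(template) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    if dy > h // 2:  # map wrap-around peaks to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

# A single bright pixel shifted by (3, 5) is recovered as (-3, -5):
# the shift needed to move the frame back onto the template.
template = np.zeros((32, 32))
template[10, 12] = 1.0
frame = np.roll(template, (3, 5), axis=(0, 1))
dy, dx = rigid_shift_estimate(frame, template)
```

Production pipelines refine this with subpixel interpolation of the correlation peak and apply it patchwise to handle non-rigid tissue deformation.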

Seminar · Open Source · Recording

ReproNim: Towards a culture of more reproducible neuroimaging research

David N. Kennedy, PhD
University of Massachusetts Medical School
Nov 10, 2021

Given the intrinsically large and complex data sets collected in neuroimaging research, coupled with the extensive array of shared data and tools amassed in the research community, ReproNim seeks to lower the barriers for efficient: use of data; description of data and process; use of standards and best practices; sharing; and subsequent reuse of the collective ‘big’ data. Aggregation of data and reuse of analytic methods have become critical in addressing concerns about the replicability and power of many of today’s neuroimaging studies.

Seminar · Open Source · Recording

The Open-Source UCLA Miniscope Project

Daniel Aharoni
University of California, Los Angeles
Oct 27, 2021

The Miniscope Project, an open-source collaborative effort, was created to accelerate innovation of miniature microscope technology and to increase global access to this technology. Currently, we are working on advancements ranging from optogenetic stimulation and wire-free operation to simultaneous optical and electrophysiological recording. Using these systems, we have uncovered mechanisms underlying temporal memory linking and investigated causes of cognitive deficits in temporal lobe epilepsy. Through innovation and optimization, this work aims to extend the reach of neuroscience research and create new avenues of scientific inquiry.

Seminar · Open Source · Recording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center of Neurodegenerative Diseases (DZNE) Bonn.
Aug 27, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting those structures requires complex decision making which is often impossible with current image analysis software, and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests (e.g. Ilastik) is nowadays part of the standard toolbox of bio-image analysts, the emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data has to be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command-line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open-source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. In our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications. Not least because of this, YAPiC has become the "standard weapon" for our core facility to detect objects in hard-to-segment images. We will present some use cases, such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase-contrast time series, and actin filament detection in transmission electron microscopy.
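The key trick behind training from sparse labels is that unlabelled pixels are simply masked out of the loss. The sketch below illustrates that idea with a masked binary cross-entropy in numpy; the function name and the 1/0/-1 label encoding are assumptions for illustration, not YAPiC's implementation.

```python
import numpy as np

def sparse_label_bce(pred, labels):
    """Binary cross-entropy evaluated only at annotated pixels.
    `labels` holds 1 (foreground), 0 (background) or -1 (unlabelled);
    unlabelled pixels contribute nothing to the loss, which is what
    lets a classifier learn from sparse scribble annotations."""
    mask = labels >= 0
    p = np.clip(pred[mask], 1e-7, 1 - 1e-7)
    y = labels[mask].astype(float)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

# Only the two labelled pixels in the top row enter the loss.
pred = np.array([[0.9, 0.1], [0.5, 0.5]])
labels = np.array([[1, 0], [-1, -1]])
loss = sparse_label_bce(pred, labels)
```

In a DNN training loop the same masking is applied per batch, so annotating a few scribbles per image is enough to drive the gradient.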

Seminar · Open Source · Recording

Suite2p: a multipurpose functional segmentation pipeline for cellular imaging

Carsen Stringer
HHMI Janelia Research Campus
May 21, 2021

The combination of two-photon microscopy recordings and powerful calcium-dependent fluorescent sensors enables simultaneous recording of unprecedentedly large populations of neurons. While these sensors have matured over several generations of development, computational methods to process their fluorescence are often inefficient and the results hard to interpret. Here we introduce Suite2p: a fast, accurate, parameter-free and complete pipeline that registers raw movies, detects active and/or inactive cells (using Cellpose), extracts their calcium traces and infers their spike times. Suite2p runs faster than real time on standard workstations and outperforms state-of-the-art methods on newly developed ground-truth benchmarks for motion correction and cell detection.
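Once cells are detected, the extraction step of such a pipeline amounts to averaging the movie over each ROI mask and normalizing against a baseline. The following is a toy sketch of that step, not Suite2p's implementation; the function name and the percentile baseline are our assumptions.

```python
import numpy as np

def extract_dff(movie, masks, f0_percentile=10):
    """Extract per-ROI mean-fluorescence traces from a (T, H, W) movie
    and convert them to dF/F against a per-ROI percentile baseline."""
    # movie[:, m] gathers the pixels of boolean mask m for every frame
    traces = np.stack([movie[:, m].mean(axis=1) for m in masks])  # (n_roi, T)
    f0 = np.percentile(traces, f0_percentile, axis=1, keepdims=True)
    return (traces - f0) / f0

# A movie that doubles in brightness on frame 2 gives dF/F = 1 there.
movie = np.ones((5, 4, 4))
movie[2] *= 2.0
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True
dff = extract_dff(movie, [roi])
```

Real pipelines additionally subtract a surrounding-neuropil signal before normalizing, and pass the resulting traces to spike deconvolution.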

Seminar · Open Source · Recording

A macaque connectome for simulating large-scale network dynamics in The VirtualBrain

Kelly Shen
University of Toronto
Apr 30, 2021

TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
