Latest

Seminar · Open Source

OpenSPM: A Modular Framework for Scanning Probe Microscopy

Marcos Penedo Garcia
Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland
Jun 24, 2025

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
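To make the scripting idea concrete, here is a minimal sketch of how a scriptable controller could be driven from Python to auto-tune a feedback gain and log the result to HDF5. The SimulatedController class and its methods are stand-ins written for this summary, not the actual OpenSPM API; only the use of HDF5 as a container format comes from the abstract.

```python
"""Hypothetical sketch: sweep a proportional feedback gain on a simulated
surface-tracking loop, pick the gain with the lowest residual error, and
store the sweep in HDF5. The controller here is a toy model, not OpenSPM."""
import numpy as np
import h5py

class SimulatedController:
    """Stand-in for a scriptable SPM controller connection."""
    def __init__(self):
        self.p_gain = 0.1

    def set_p_gain(self, p):
        self.p_gain = p

    def scan_line(self, n=256):
        # Track a sinusoidal "surface" with a proportional feedback step;
        # in this toy model, larger gains track better until p > 2.
        surface = np.sin(np.linspace(0, 4 * np.pi, n))
        z, trace = 0.0, []
        for target in surface:
            z += self.p_gain * (target - z)
            trace.append(z)
        return surface - np.array(trace)  # residual tracking error

ctrl = SimulatedController()
results = {}
for p in (0.05, 0.1, 0.2, 0.4, 0.8):
    ctrl.set_p_gain(p)
    results[p] = float(np.sqrt(np.mean(ctrl.scan_line() ** 2)))

best_p = min(results, key=results.get)

# Write the sweep to HDF5, the container format the platform targets.
with h5py.File("pid_sweep.h5", "w") as f:
    f.create_dataset("p_gains", data=list(results))
    f.create_dataset("rms_error", data=list(results.values()))
    f.attrs["best_p_gain"] = best_p
```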

Seminar · Open Source · Recording

Development of an open-source femtosecond fiber laser system for multiphoton microscopy

Bryan Spring
Northeastern University
Apr 19, 2023

This talk will present a low-cost protocol for fabricating an easily constructed femtosecond (fs) fiber laser system suitable for routine multiphoton microscopy (1060–1080 nm, 1 W average power, 70 fs pulse duration, 30–70 MHz repetition rate). Concepts that are well known in the laser physics community and essential to proper laser operation, but generally obscure to biophysicists and biomedical engineers, will be clarified. The parts list (~US$13K), the equipment list (~US$40K+), and the intellectual investment needed to build the laser will be described. A goal of the presentation will be to engage with the audience to discuss trade-offs associated with a custom-built fs fiber laser versus purchasing a commercial system. I will also touch on my research group’s plans to further develop this custom laser system for multiplexed cancer imaging as well as recent developments in the field that promise even higher-performance fs fiber lasers for approximately the same cost and ease of construction.
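As a rough illustration of why these specifications matter for multiphoton work, the per-pulse energy and peak power can be estimated from the quoted average power, repetition rate, and pulse duration. The snippet below uses a simple rectangular-pulse approximation and ignores pulse-shape deconvolution factors; the numbers are back-of-the-envelope only.

```python
# Estimate pulse energy and peak power from the specs quoted above
# (1 W average power, 70 fs pulses, 30-70 MHz repetition rate),
# assuming rectangular pulses for simplicity.
avg_power = 1.0          # W
pulse_width = 70e-15     # s

for rep_rate in (30e6, 50e6, 70e6):            # Hz
    pulse_energy = avg_power / rep_rate        # J per pulse
    peak_power = pulse_energy / pulse_width    # W
    print(f"{rep_rate/1e6:.0f} MHz: {pulse_energy*1e9:.1f} nJ per pulse, "
          f"~{peak_power/1e3:.0f} kW peak")
```

At 50 MHz this gives roughly 20 nJ per pulse and a few hundred kilowatts of peak power, which is the regime where two-photon excitation becomes efficient at modest average power.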

Seminar · Open Source · Recording

ReproNim: Towards a culture of more reproducible neuroimaging research

David N. Kennedy, PhD
University of Massachusetts Medical School
Nov 10, 2021

Given the intrinsically large and complex data sets collected in neuroimaging research, coupled with the extensive array of shared data and tools amassed in the research community, ReproNim seeks to lower the barriers to the efficient use of data; description of data and process; use of standards and best practices; sharing; and subsequent reuse of the collective ‘big’ data. Aggregation of data and reuse of analytic methods have become critical in addressing concerns about the replicability and power of many of today’s neuroimaging studies.

Seminar · Open Source · Recording

Autopilot v0.4.0 - Distributing development of a distributed experimental framework

Jonny Saunders
University of Oregon
Sep 29, 2021

Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed to not only give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition to lower barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to build a communal knowledge system that provides a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections, but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to build a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open-source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it. I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
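The coordination pattern described above (one machine broadcasting trial parameters to a swarm of Raspberry Pis) can be sketched with plain ZeroMQ publish/subscribe. This is only an illustration of the messaging pattern, not Autopilot's own networking layer or Task/Hardware classes, which are richer and differ in detail; the hostname and port used here are hypothetical, and the usual pub/sub startup handshaking is omitted.

```python
# Minimal sketch of a coordinator-and-agents pattern over ZeroMQ pub/sub.
import json
import zmq

def coordinator(trial_params, bind="tcp://*:5556"):
    """Broadcast trial parameters to all subscribed Raspberry Pi agents."""
    ctx = zmq.Context.instance()
    pub = ctx.socket(zmq.PUB)
    pub.bind(bind)
    pub.send_multipart([b"trial", json.dumps(trial_params).encode()])

def pi_agent(connect="tcp://coordinator.local:5556"):
    """Run on each Pi: wait for a trial message, then drive local hardware."""
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect(connect)
    sub.setsockopt(zmq.SUBSCRIBE, b"trial")
    topic, payload = sub.recv_multipart()
    params = json.loads(payload)
    # ... actuate GPIO, solenoids, speakers, etc. according to params ...
    return params
```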

Seminar · Open Source · Recording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center for Neurodegenerative Diseases (DZNE), Bonn
Aug 27, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting these structures requires complex decision making that is often impossible with current image analysis software, and is therefore typically carried out by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures from large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data must be collected to obtain reasonable prediction results with deep-learning-based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides easy-to-use Python and command-line interfaces and is designed purely for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open-source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC for a variety of tasks. From our experience, a surprisingly small amount of sparse label data is needed to train a well-performing classifier for typical bioimaging applications. Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase-contrast time series, or actin filament detection in transmission electron microscopy.
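The core idea behind training from sparse annotations can be shown in a few lines: only pixels the annotator actually labeled contribute to the loss, so a handful of scribbles per image is enough to start training. The sketch below uses PyTorch's ignore_index mechanism as a generic illustration of that technique; it is not YAPiC's implementation, and the tensor shapes are made up for the example.

```python
# Generic illustration of sparse-label pixel classification: unlabeled
# pixels carry the sentinel value -1 and are excluded from the loss.
import torch
import torch.nn.functional as F

def sparse_pixel_loss(logits, labels):
    """logits: (N, C, H, W) network output; labels: (N, H, W) with class
    indices at annotated pixels and -1 wherever the image is unlabeled."""
    return F.cross_entropy(logits, labels, ignore_index=-1)

# Toy usage: a 2-class problem where only a tiny fraction of pixels is labeled.
logits = torch.randn(1, 2, 64, 64, requires_grad=True)
labels = torch.full((1, 64, 64), -1, dtype=torch.long)
labels[0, 10:14, 20:24] = 1      # a small foreground scribble
labels[0, 40:44, 40:44] = 0      # a small background scribble
loss = sparse_pixel_loss(logits, labels)
loss.backward()                  # gradients come only from labeled pixels
```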

Seminar · Open Source · Recording

Open-source tools for systems neuroscience

Jakob Voigts
MIT and Open Ephys
Jun 25, 2021

Open-source tools are gaining an increasing foothold in neuroscience. The rising complexity of experiments in systems neuroscience has led to a need for multiple parts of experiments to work together seamlessly. Because open-source tools interact freely with each other and can be understood and modified more easily, they allow scientists to conduct better experiments with less effort than closed tools. Open Ephys is an organization with team members distributed all around the world. Our mission is to advance our understanding of the brain by promoting community ownership of the tools we use to study it. We are making and distributing cutting-edge tools that exploit modern technology to bring down the price and complexity of neuroscience experiments. A large component of this is taking tools that were developed in academic labs and helping with their documentation, support, and distribution. More recently, we have been working on bringing high-quality manufacturing, distribution, warranty, and support to open-source tools by partnering with OEPS in Portugal. We are now also establishing standards that make it possible to seamlessly combine methods such as miniaturized microscopes, electrode drive implants, and silicon probes in one system. In the longer term, our development of new tools and interfaces and our standardization efforts aim to make it possible for scientists to easily run complex experiments that span complex behaviors and tasks, multiple recording modalities, and easy access to data-processing pipelines.
