The SIMple microscope: Development of a fibre-based platform for accessible SIM imaging in unconventional environments
Advancements in imaging speed, depth and resolution have made structured illumination microscopy (SIM) an increasingly powerful optical sectioning (OS) and super-resolution (SR) technique, but these developments remain inaccessible to many life science researchers due to the cost, optical complexity and delicacy of the instruments. We address these limitations by redesigning the optical path using in-line fibre components that are compact, lightweight and easily assembled in a “Plug & Play” modality, without compromising imaging performance. These components can be integrated into an existing widefield microscope with a minimum of optical components and alignment, making OS-SIM more accessible to researchers with less optics experience. We also demonstrate a complete SR-SIM imaging system with dimensions of 300 mm × 300 mm × 450 mm. Finally, we propose to make SIM imaging more accessible by exploiting this compact, lightweight and robust design to transport the system where it is needed and to image in “unconventional” environments, where factors such as temperature and biosafety considerations currently limit imaging experiments.
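As background for the super-resolution claim (standard SIM theory rather than material from this talk, in our own notation), the gain comes from frequency mixing: a sinusoidal excitation pattern shifts high sample frequencies into the detection passband.

\[ I_{\mathrm{exc}}(\mathbf{r}) = I_0\left[1 + m\cos(2\pi\,\mathbf{k}_0\cdot\mathbf{r} + \phi)\right] \]
\[ \tilde{D}(\mathbf{k}) \propto \mathrm{OTF}(\mathbf{k})\left[\tilde{S}(\mathbf{k}) + \tfrac{m}{2}e^{i\phi}\,\tilde{S}(\mathbf{k}-\mathbf{k}_0) + \tfrac{m}{2}e^{-i\phi}\,\tilde{S}(\mathbf{k}+\mathbf{k}_0)\right] \]

The shifted copies of the sample spectrum \(\tilde{S}\) carry frequencies up to the sum of the passband cutoff and \(|\mathbf{k}_0|\); acquiring several pattern phases \(\phi\) and orientations lets reconstruction separate the copies and reassign them, extending resolution (SR-SIM), while the same phase-stepped images can be recombined to reject out-of-focus light (OS-SIM).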
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable: living meta-research. In this talk, we introduce the concept of living meta-research, examine precursors of this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies such as natural language processing that can help automate meta-research, in turn making it easier to reproduce and update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data-collection and information-extraction steps in meta-research and focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools; for example, we will show how we used GPT alongside our tools to extract information about study participants. In short, this talk will introduce the concept of meta-research, some tools for doing it, and some examples. Above all, we want you to take away that there are many interesting open questions in meta-research, and that you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
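As a taste of the ecosystem, collecting a corpus with pubget is a single shell command (the query and output directory here are illustrative assumptions; see the litmining documentation for the full interface):

# Download and extract full text for PMC articles matching a
# PubMed-style query (query and output directory are examples only)
pubget run ./pubget_data -q "fMRI[Abstract]"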
Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope or probe. However, the high cost of Raman systems and their lack of modularity often limit exploratory research, hindering broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and the complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, and region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
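To give a concrete sense of what the acquisition software automates, the sketch below shows a 2D Raman mapping loop in Python. It is a hypothetical illustration: `stage`, `spectrometer`, and their methods stand in for whatever instrument drivers a given configuration uses, and are not the ORM Controller API.

# Hypothetical sketch of 2D Raman mapping: raster a motorized stage over
# a grid and record one spectrum per pixel. Driver names are assumptions.
import numpy as np

def acquire_map(stage, spectrometer, xs, ys, integration_s=0.5):
    """Build a (y, x, wavenumber) hyperspectral cube."""
    cube = np.zeros((len(ys), len(xs), spectrometer.num_bins))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            stage.move_to(x, y)                            # assumed driver call
            cube[i, j] = spectrometer.acquire(integration_s)
    return cube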
Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging
Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in the Boston University Computational Imaging Systems lab: Intensity Diffraction Tomography and the Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations, and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.
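For orientation on the single-scattering model mentioned above (textbook first-Born background; the lab's exact formulation may differ), the scattered field is linear in the sample's scattering potential:

\[ u_s(\mathbf{r}) = \int G(\mathbf{r}-\mathbf{r}')\, V(\mathbf{r}')\, u_{\mathrm{in}}(\mathbf{r}')\, \mathrm{d}^3\mathbf{r}', \qquad V(\mathbf{r}) = k_0^2\left[n^2(\mathbf{r}) - n_m^2\right] \]

Here \(G\) is the Green's function of the homogeneous medium with refractive index \(n_m\) and \(u_{\mathrm{in}}\) is the incident field from one LED. Intensity images recorded under many illumination angles then linearly constrain \(V\), from which the 3D refractive index (quantitative phase) is reconstructed; multiple-scattering models replace this linearization with a recursive forward model through the volume.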
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice locomoting freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked with a camera, and wheel movement with a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. This platform allows the incorporation of optogenetic stimulation and fluorescence imaging. The design allows a single host computer to control multiple platforms for training multiple animals simultaneously.
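As a flavour of the trial logic such a platform runs, the sketch below implements a delay eyeblink-conditioning trial in Python. It is a hypothetical illustration, not the published control code; `Rig` stands in for the real LED/air-puff hardware interface.

# Hypothetical delay eyeblink-conditioning trial: the conditioned
# stimulus (CS, an LED) overlaps and co-terminates with the
# unconditioned stimulus (US, an air puff).
import time

class Rig:
    """Stub hardware interface; a real rig would toggle DAQ/GPIO lines."""
    def led_on(self):   print("CS on")
    def led_off(self):  print("CS off")
    def puff_on(self):  print("US on")
    def puff_off(self): print("US off")

def run_trial(rig, cs_ms=500, us_ms=50):
    rig.led_on()
    time.sleep((cs_ms - us_ms) / 1000)  # CS-US interval
    rig.puff_on()
    time.sleep(us_ms / 1000)
    rig.led_off()                       # CS and US co-terminate
    rig.puff_off()

rig = Rig()
for _ in range(3):
    run_trial(rig)
    time.sleep(1)  # inter-trial interval (shortened for the sketch)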
Measuring the Motions of Mice: Open-source tracking with the KineMouse Wheel
Who says you can't reinvent the wheel?! This running wheel for head-fixed mice allows 3D reconstruction of body kinematics using a single camera and DeepLabCut (or similar) software. A lightweight, transparent polycarbonate floor and a mirror mounted on the inside allow two views to be captured simultaneously. All parts are commercially available or laser cut.
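The reconstruction idea is that the mirror acts as a virtual second camera: the direct side view supplies x and y, the mirrored bottom view supplies x and z, and matched keypoints from the two views are merged into 3D. A hypothetical sketch of that merge (not the project's published code, ignoring calibration for brevity):

# Merge matched DeepLabCut keypoints from the side view and the mirrored
# bottom view into 3D, assuming calibrated views sharing the x axis.
import numpy as np

def merge_views(side_xy, bottom_xz):
    """side_xy, bottom_xz: (n_keypoints, 2) arrays from the two views."""
    x = (side_xy[:, 0] + bottom_xz[:, 0]) / 2  # x is seen in both views
    y = side_xy[:, 1]                          # height, from the side view
    z = bottom_xz[:, 1]                        # depth, from the mirror view
    return np.stack([x, y, z], axis=1)         # (n_keypoints, 3)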
Open-source neurotechnologies for imaging cortex-wide neural activity in behaving animals
Neural computations occurring simultaneously in multiple cerebral cortical regions are critical for mediating behaviors. Progress has been made in understanding how neural activity in specific cortical regions contributes to behavior, but tools that allow simultaneously monitoring and perturbing neural activity across multiple cortical regions are lacking. We have engineered a suite of technologies to enable easy, robust access to much of the dorsal cortex of mice for optical and electrophysiological recordings. First, I will describe microsurgery robots that can be programmed to perform delicate microsurgical procedures, such as large bilateral craniotomies across the cortex and skull thinning, in a semi-automated fashion. Next, I will describe digitally designed, morphologically realistic, transparent polymer skulls that allow long-term (>300 days) optical access. These polymer skulls allow mesoscopic imaging, as well as cellular- and subcellular-resolution two-photon imaging of neural structures up to 600 µm deep. We next engineered a widefield, miniaturized, head-mounted fluorescence microscope that is compatible with transparent polymer skull preparations. With a field of view of 8 × 10 mm² and weighing less than 4 g, the ‘mini-mScope’ can image most of the mouse dorsal cortex at resolutions ranging from 39 to 56 µm. We used the mini-mScope to record mesoscale calcium activity across the dorsal cortex during sensory-evoked stimuli, open field behaviors, social interactions, and transitions from wakefulness to sleep.
GeNN
Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Spiking neural networks are also gaining traction in machine learning, with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which facilitates the use of graphics accelerators for computational models of large-scale spiking neuronal networks, addressing the challenge of efficient simulation. GeNN is an open-source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface that does not require in-depth technical knowledge from its users. GeNN was originally developed as a pure C++ and CUDA library, but we have subsequently added a Python interface and an OpenCL backend. We will briefly cover the history and basic philosophy of GeNN, and show some simple examples of how it is used and how it interacts with other open-source frameworks such as Brian2GeNN and PyNN.
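As a taste of the Brian2GeNN route, an ordinary Brian 2 script is retargeted to GeNN's GPU code generation just by switching the device. The network below is a minimal, arbitrary example with illustrative parameters:

# Run a standard Brian 2 model through GeNN's GPU backend via Brian2GeNN.
from brian2 import *
import brian2genn

set_device('genn')  # route Brian 2 code generation through GeNN

eqs = 'dv/dt = -v / (10*ms) : 1'
neurons = NeuronGroup(1000, eqs, threshold='v > 1', reset='v = 0',
                      method='exact')
inputs = PoissonGroup(100, rates=50*Hz)
synapses = Synapses(inputs, neurons, on_pre='v += 0.1')
synapses.connect(p=0.1)
spikes = SpikeMonitor(neurons)

run(1*second)  # code is generated, compiled for the GPU, and executed here
print(spikes.num_spikes, 'spikes recorded')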
Building a Simple and Versatile Illumination System for Optogenetic Experiments
Controlling biological processes with light has increased the accuracy and speed with which researchers can manipulate them. Optical control allows an unprecedented ability to dissect function and holds the potential for enabling novel genetic therapies. However, optogenetic experiments require adequate light sources with spatial, temporal, or intensity control, which are often a bottleneck for researchers. Here we detail how to build a low-cost and versatile LED illumination system that is easily customizable for the different optogenetic tools available. This system is configurable for manual or computer control with adjustable LED intensity. We provide an illustrated step-by-step guide for building the circuit, making it computer-controlled, and constructing the LEDs. To facilitate the assembly of this device, we also discuss some basic soldering techniques and explain the circuitry used to control the LEDs. Using our open-source user interface, users can automate precise timing and pulsing of light from a personal computer (PC) or an inexpensive tablet. This automation makes the system useful for experiments that use LEDs to control genes, signaling pathways, and other cellular activities spanning large time scales. No prior expertise in electronics is required to build any of the parts needed or to use the illumination system for optogenetic experiments.
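To illustrate the kind of automation the interface provides, the sketch below pulses one LED channel from a PC over a serial link. It is a hypothetical example, not the project's actual user interface or firmware protocol; the "<channel>,<intensity>" command format and port name are assumptions.

# Hypothetical LED pulsing over a serial link (requires pyserial),
# assuming firmware that accepts "<channel>,<intensity>" commands.
import time
import serial

def pulse(channel, intensity, on_s, off_s, repeats, port='/dev/ttyUSB0'):
    """Drive one LED channel with a square-wave intensity pattern."""
    with serial.Serial(port, 9600, timeout=1) as conn:
        for _ in range(repeats):
            conn.write(f'{channel},{intensity}\n'.encode())  # LED on
            time.sleep(on_s)
            conn.write(f'{channel},0\n'.encode())            # LED off
            time.sleep(off_s)

# Example: 5 s of light every minute for an hour-long induction experiment
pulse(channel=1, intensity=128, on_s=5, off_s=55, repeats=60)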
The Open-Source UCLA Miniscope Project
The Miniscope Project, an open-source collaborative effort, was created to accelerate innovation in miniature microscope technology and to increase global access to it. Currently, we are working on advancements ranging from optogenetic stimulation and wire-free operation to simultaneous optical and electrophysiological recording. Using these systems, we have uncovered mechanisms underlying temporal memory linking and investigated causes of cognitive deficits in temporal lobe epilepsy. Through innovation and optimization, this work aims to extend the reach of neuroscience research and create new avenues of scientific inquiry.
Get more from your ISH brain slices with Stalefish
The standard method for staining structures in the brain is to slice the brain into 2D sections. Each slice is treated with a technique such as in-situ hybridization to examine the spatial expression of a particular molecule at a given developmental timepoint. Depending on the brain structures being studied, slices can be made coronally, sagittally, or at any angle that is thought to be optimal for analysis. However, assimilating the information presented in the 2D slice images to gain quantitative and informative 3D expression patterns is challenging. Even if expression levels are presented as voxels, to give 3D expression clouds, it can be difficult to compare expression across individuals, and analysing such data requires significant expertise and imagination. In this talk, I will describe a new approach to examining histology slices, in which the user defines the brain structure of interest by drawing curves around it on each slice in a set, along with the depth of tissue from which to sample expression. The sampled 'curves' are then assembled into a 3D surface, which can be transformed onto a common reference frame for comparative analysis. I will show how other neuroscientists can obtain and use the tool, which is called Stalefish, to analyse their own image data with no (or minimal) changes to their slice preparation workflow.
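The core sampling step can be pictured with a short sketch (hypothetical code illustrating the approach, not Stalefish's implementation): expression is sampled along each slice's user-drawn curve, and the per-slice samples are stacked into a surface.

# Sample mean expression in a small neighbourhood around each point of a
# user-drawn curve on one slice; stacking curves across slices yields a
# (n_slices, n_points) surface for mapping to a common reference frame.
import numpy as np

def sample_curve(image, curve_xy, half_width_px=10):
    samples = []
    for x, y in curve_xy:
        x, y = int(round(x)), int(round(y))
        patch = image[y - half_width_px:y + half_width_px,
                      x - half_width_px:x + half_width_px]
        samples.append(patch.mean())
    return np.array(samples)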
SimBA for Behavioral Neuroscientists
Several excellent computational frameworks exist that enable high-throughput and consistent tracking of freely moving, unmarked animals. With SimBA, we introduce and distribute a plug-and-play pipeline that enables users to combine these pose-estimation approaches with behavioral annotation to generate supervised machine-learning behavioral classifiers. SimBA was developed for the analysis of complex social behaviors, but gives users the flexibility to generate predictive classifiers for other behavioral modalities with minimal effort and no specialized computational background. SimBA also offers a variety of extended functions for large-scale batch video pre-processing and for generating descriptive statistics from movement features, as well as interactive modules for user-defined regions of interest and for visualizing classification probabilities and movement patterns.
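The underlying approach can be shown in a few lines (a generic sketch of the technique with synthetic data, not SimBA's own code): features derived from pose estimation are paired with frame-wise human annotations to train a supervised classifier such as a random forest, which then scores unlabeled frames.

# Generic pose-features-plus-annotations classification with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 20))  # e.g. distances, angles, speeds
labels = rng.integers(0, 2, size=5000)  # frame-wise annotation (behavior present?)

clf = RandomForestClassifier(n_estimators=100)
clf.fit(features[:4000], labels[:4000])                    # train on annotated frames
probabilities = clf.predict_proba(features[4000:])[:, 1]   # P(behavior) per frame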
SpikeInterface
Much development has been directed toward improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, we developed SpikeInterface, a Python framework designed to unify preexisting spike sorting technologies into a single codebase and to facilitate straightforward comparison and adoption of different approaches. With a few lines of code, researchers can reproducibly run, compare, and benchmark most modern spike sorting algorithms; pre-process, post-process, and visualize extracellular datasets; validate, curate, and export sorting outputs; and more. In this presentation, I will provide an overview of SpikeInterface and, with applications to real and simulated datasets, demonstrate how it can be utilized to reduce the burden of manual curation and to more comprehensively benchmark automated spike sorters.
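A few lines really do cover the core workflow; the sketch below loads a recording, pre-processes it, runs two sorters, and compares their outputs (the file name, recording parameters, and sorter choices are illustrative assumptions):

# Minimal SpikeInterface workflow sketch.
import spikeinterface.full as si

recording = si.read_binary('recording.dat', sampling_frequency=30000,
                           num_channels=32, dtype='int16')
recording = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
recording = si.common_reference(recording, operator='median')

sorting_a = si.run_sorter('tridesclous', recording)
sorting_b = si.run_sorter('spykingcircus', recording)

# Agreement between sorters helps flag units that need manual curation
comparison = si.compare_two_sorters(sorting_a, sorting_b)
print(len(sorting_a.get_unit_ids()), 'vs', len(sorting_b.get_unit_ids()), 'units')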
Suite2p: a multipurpose functional segmentation pipeline for cellular imaging
The combination of two-photon microscopy recordings and powerful calcium-dependent fluorescent sensors enables simultaneous recording of unprecedentedly large populations of neurons. While these sensors have matured over several generations of development, computational methods to process their fluorescence are often inefficient and the results hard to interpret. Here we introduce Suite2p: a fast, accurate, parameter-free and complete pipeline that registers raw movies, detects active and/or inactive cells (using Cellpose), extracts their calcium traces and infers their spike times. Suite2p runs faster than real time on standard workstations and outperforms state-of-the-art methods on newly developed ground-truth benchmarks for motion correction and cell detection.
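In practice the whole pipeline runs from a single call (a minimal sketch; the data path and frame rate are placeholder assumptions):

# Run the full Suite2p pipeline: registration, cell detection, trace
# extraction, and spike deconvolution.
import suite2p

ops = suite2p.default_ops()  # default pipeline settings
ops['fs'] = 30.0             # imaging frame rate in Hz (placeholder)

db = {'data_path': ['/path/to/tiff_folder']}  # raw movie location (placeholder)
output_ops = suite2p.run_s2p(ops=ops, db=db)

Results are written to a suite2p output folder, including fluorescence traces (F.npy), deconvolved events (spks.npy), and per-ROI statistics (stat.npy).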
A macaque connectome for simulating large-scale network dynamics in TheVirtualBrain
TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
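For orientation, a whole-brain simulation in TVB follows a standard pattern. The minimal sketch below uses the library's demo-style API with the default bundled connectome; the macaque connectome described in the talk would be substituted for it.

# Minimal TVB region-level simulation in the style of the library demos.
import numpy as np
from tvb.simulator.lab import (connectivity, coupling, integrators,
                               models, monitors, simulator)

sim = simulator.Simulator(
    connectivity=connectivity.Connectivity.from_file(),  # default dataset
    model=models.Generic2dOscillator(),
    coupling=coupling.Linear(a=np.array([0.1])),
    integrator=integrators.HeunDeterministic(dt=0.1),    # ms
    monitors=(monitors.TemporalAverage(period=1.0),),
).configure()

(time, data), = sim.run(simulation_length=1000.0)  # 1 s of activity
print(data.shape)  # (time, state variables, regions, modes)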