Software
Matthias H Hennig
We are looking for a postdoctoral researcher to develop new machine learning approaches for the analysis of large-scale extracellular recordings. The position is part of a wider effort to enable new discoveries with state-of-the-art electrode arrays and recording devices, and is jointly supervised by Matthias Hennig and Matt Nolan. It offers a great opportunity to work with theoretical and experimental neuroscientists innovating open source tools and software for systems neuroscience.
N/A
FAA™ VaultMesh Brand Deployment — Developer Required. We are building a mass-scale HTML deployment system for a sovereign economic network called FAA™. The system includes over 80 brands under the Housing & Infrastructure sector, with 4 subnodes per brand and 12 HTML pages per node, totaling 3,840 FAA-compliant web pages. Your role: you will be responsible for setting up a Cloudflare Workers deployment pipeline; building out 12 FAA-standard pages per brand subnode (index.html, dashboard.html, pricing.html, etc.); maintaining the FAA.zone public structure, especially under /sectors/housing/...; integrating with existing admin panels at /admin/admin-portal.html and /admin/admin-portal-approval.html; and ensuring all files match Shapely visual styling (we will provide templates).
OpenSPM: A Modular Framework for Scanning Probe Microscopy
OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.
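To make the scripting idea concrete, here is a minimal sketch of automated PID optimization of the kind such an API enables; the controller object and its methods (set_pid, measure_error) are hypothetical stand-ins, not the actual OpenSPM API.

    from itertools import product

    def tune_pid(spm, gains=(0.1, 0.5, 1.0)):
        """Grid-search P and I gains; keep the pair with the lowest tracking error."""
        best, best_err = None, float("inf")
        for p, i in product(gains, gains):
            spm.set_pid(p=p, i=i)        # hypothetical setter on the controller
            err = spm.measure_error()    # hypothetical RMS tracking-error readout
            if err < best_err:
                best, best_err = (p, i), err
        return best

In practice an ML-driven optimizer would replace the grid search, but the scriptable surface is the same.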
Towards open meta-research in neuroimaging
When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
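As a toy illustration of the automatic-extraction step (this is not the pubextract API, just a self-contained sketch of the kind of rule-based extraction such tools automate):

    import re

    # Toy stand-in for automated participant-count extraction; real tools
    # handle many more phrasings and keep provenance for each match.
    PATTERN = re.compile(
        r"\bn\s*=\s*(\d+)|(\d+)\s+(?:participants|subjects|patients)\b", re.I)

    def participant_counts(text):
        """Return all candidate sample sizes mentioned in an article's text."""
        return [int(a or b) for a, b in PATTERN.findall(text)]

    print(participant_counts("We recruited 24 participants (N = 24)."))  # [24, 24]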
Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller
Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost of Raman systems and their lack of modularity often limit exploratory research, hindering broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.
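As an illustration of scripted acquisition, a minimal raster-scan sketch follows; the stage and spectrometer objects and their methods are hypothetical stand-ins for whatever hardware the ORM Controller drives, not its actual API.

    import numpy as np

    def raman_map(stage, spectrometer, xs, ys, integration_s=0.5):
        """Acquire one spectrum per (x, y) grid point; returns (ny, nx, n_pixels)."""
        cube = np.empty((len(ys), len(xs), spectrometer.n_pixels))
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                stage.move_to(x, y)                               # hypothetical blocking move
                cube[j, i] = spectrometer.acquire(integration_s)  # hypothetical readout
        return cube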
LLMs and Human Language Processing
This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
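The alignment results described above typically rest on an encoding-model recipe: regress brain responses on LLM-derived features and score held-out predictions. A minimal sketch with scikit-learn and synthetic data (layer choice, temporal lags, and proper train/test splits across stimuli are all simplified away):

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 768))                    # stand-in for LLM embeddings per time point
    y = X[:, :10].sum(axis=1) + rng.standard_normal(500)   # toy "voxel" response

    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(scores.mean())                                   # cross-validated prediction score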
How Generative AI is Revolutionizing the Software Developer Industry
Generative AI is fundamentally transforming the software development industry by improving software testing, bug detection and repair, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being used to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to significantly improve software quality and efficiency. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.
State-of-the-Art Spike Sorting with SpikeInterface
This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
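For orientation before the hands-on session, a pipeline in SpikeInterface looks roughly like this (the data path is a placeholder, the chosen sorter must be installed separately, and post-processing calls vary somewhat across versions):

    import spikeinterface.full as si

    recording = si.read_spikeglx("/data/my_session")        # placeholder path
    recording = si.bandpass_filter(recording, freq_min=300, freq_max=6000)
    sorting = si.run_sorter("kilosort2_5", recording)       # requires the sorter to be installed
    print(sorting.get_unit_ids())                           # units found by the sorter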
The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there
Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). In contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to develop an open source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it neurocausal (https://neurocausal.github.io).
Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks
Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
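Following the documented usage pattern, switching a Brian script to the GPU backend only requires selecting the device (assuming brian2cuda and a working CUDA toolchain are installed); the model definition itself is unchanged:

    from brian2 import *
    import brian2cuda                  # registers the CUDA backend
    set_device("cuda_standalone")      # subsequent code is compiled for the GPU

    G = NeuronGroup(100_000, "dv/dt = -v / (10*ms) : 1",
                    threshold="v > 0.8", reset="v = 0", method="exact")
    G.v = "rand()"
    run(1*second)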
A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning
Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked using a camera, and wheel movement using a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. The platform allows the incorporation of optogenetic stimulation and fluorescence imaging, and its design allows a single host computer to control multiple platforms for training multiple animals simultaneously.
Measuring the Motions of Mice: Open source tracking with the KineMouse Wheel
Who says you can't reinvent the wheel?! This running wheel for head-fixed mice allows 3D reconstruction of body kinematics using a single camera and DeepLabCut (or similar) software. A lightweight, transparent polycarbonate floor and a mirror mounted on the inside allow two views to be captured simultaneously. All parts are commercially available or laser cut.
It’s not over our heads: Why human language needs a body
In the ‘orthodox’ view, cognition has been seen as the manipulation of symbolic mental representations, separate from the body. This dualist, Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has long been treated across scientific domains as a system operating largely independently of perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one to believe that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which will be presented to address the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.
How do protein-RNA condensates form and contribute to disease?
In recent years, it has become clear that intrinsically disordered regions (IDRs) of RNA-binding proteins (RBPs), and the structure of RNAs, often contribute to the condensation of ribonucleoproteins (RNPs). To understand the transcriptomic features of such RNP condensates, we’ve used an improved individual nucleotide resolution CLIP protocol (iiCLIP), which produces highly sensitive and specific data and thus enables quantitative comparisons of interactions across conditions (Lee et al., 2021). This showed how the IDR-dependent condensation properties of TDP-43 specify its RNA binding and regulatory repertoire (Hallegger et al., 2021). Moreover, we developed software for the discovery and visualisation of RNA binding motifs, which uncovered common binding patterns of RBPs on long multivalent RNA regions composed of dispersed motif clusters (Kuret et al., 2021). Finally, we used hybrid iCLIP (hiCLIP) to characterise the RNA structures mediating the assembly of Staufen RNPs across mammalian brain development, which demonstrated the roles of long-range RNA duplexes in the compaction of long 3’UTRs. I will present how the combined analysis of the characteristics of IDRs in RBPs, multivalent RNA regions, and RNA structures is required to understand the formation and functions of RNP condensates, and how they change in disease.
Mesmerize: A blueprint for shareable and reproducible analysis of calcium imaging data
Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. It encompasses the entire process of calcium imaging analysis, from raw data to interactive visualizations, and allows you to create FAIR, functionally linked datasets that are easy to share. The analysis tools are applicable to a broad range of biological experiments and come with GUI interfaces that can be used without a programming background.
Deception, ExoNETs, SmushWare & Organic Data: Tech-facilitated neurorehabilitation & human-machine training
Making use of visual display technology and human-robotic interfaces, many researchers have illustrated various opportunities to distort visual and physical realities. We have had success with interventions such as error augmentation, sensory crossover, and negative viscosity. Judicious application of these techniques leads to training situations that enhance the learning process and can restore movement ability after neural injury. I will trace out clinical studies that have employed such technologies to improve health and function, and share some leading-edge insights that include deceiving the patient, moving the "smarts" of software into the hardware, and examining clinical effectiveness.
NMC4 Short Talk: What can 140,000 Reaches Tell Us About Demographic Contributions to Visuomotor Adaptation?
Motor learning is typically assessed in the lab, affording a high degree of control over the task environment. However, this level of control often comes at the cost of smaller sample sizes and a homogeneous pool of participants (e.g., college students). To address this, we designed a web-based motor learning experiment, making it possible to reach a larger, more diverse set of participants. As a proof of concept, we collected data from 1,581 participants performing a visuomotor rotation task, in which they controlled a visual cursor on the screen with their mouse or trackpad. Motor learning was indexed by how quickly participants compensated for a 45° rotation imposed between the cursor and their actual movement. Using a cross-validated LASSO regression, we found that motor learning varied significantly with the participant's age and sex, and also correlated strongly with the location of the target, visual acuity, and satisfaction with the experiment. In contrast, participants' mouse and browser type were features eliminated by the model, indicating that motor performance was not influenced by variations in computer hardware and software. Together, this proof-of-concept study demonstrates how large datasets can generate important insights into the factors underlying motor learning.
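The feature-selection logic can be sketched with scikit-learn on synthetic data (the real analysis used the collected reaching data and a richer feature set):

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    X = rng.standard_normal((1581, 8))                    # demographic/technical features
    beta = np.array([0.5, -0.3, 0.2, 0.4, 0.1, 0, 0, 0])  # last three features irrelevant
    y = X @ beta + rng.standard_normal(1581)              # stand-in for adaptation rate

    model = LassoCV(cv=10).fit(X, y)
    print(model.coef_)   # near-zero coefficients = features eliminated by the model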
GuPPy, a Python toolbox for the analysis of fiber photometry data
Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, and the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as well-commented Jupyter notebooks designed to operate across platforms, and it presents the user with a set of graphical user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
Efficient GPU training of SNNs using approximate RTRL
Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measure execution times and find that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after less than 685 timesteps, and that much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4× and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference using a recurrent SNN with GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].
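A minimal PyGeNN model, for orientation, looks like the sketch below (import path and parameter names follow PyGeNN 4.x as we understand them, and values are illustrative; eProp training sits on top of models built this way):

    from pygenn.genn_model import GeNNModel

    model = GeNNModel("float", "lif_example")
    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -70.0, "Vreset": -70.0,
                  "Vthresh": -50.0, "Ioffset": 1.1, "TauRefrac": 2.0}
    lif_init = {"V": -70.0, "RefracTime": 0.0}
    model.add_neuron_population("pop", 1000, "LIF", lif_params, lif_init)

    model.build()   # generates and compiles GPU code
    model.load()
    for _ in range(1000):
        model.step_time()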
Autopilot v0.4.0 - Distributing development of a distributed experimental framework
Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed to not only give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition, lowering barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to create a communal knowledge system that provides a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it. I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.
Creating and controlling visual environments using BonVision
Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to a few laboratories worldwide. We developed BonVision as easy-to-use open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. Because the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular and circuit level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source localized data. HNN is being constructed with workflows for studying some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15–29 Hz beta frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented, biophysically principled tool to link mechanism to meaning in human E/MEG signals.
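For a sense of the workflow, a minimal simulation with hnn-core (HNN's Python library) is sketched below; a real beta-event study would first attach proximal and distal drives to the network, which are omitted here:

    from hnn_core import jones_2009_model, simulate_dipole

    net = jones_2009_model()                      # default neocortical column model
    dipoles = simulate_dipole(net, tstop=170.0)   # ms; returns one Dipole per trial
    dipoles[0].plot()                             # source-level signal, comparable to MEG/EEG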
Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning
Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of these structures requires complex decision making that is often impossible with current image analysis software and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on deep convolutional neural networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g., Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also not much experience in the community regarding how much training data has to be collected to obtain a reasonable prediction result with deep learning based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC to a variety of tasks. From our experience, a surprisingly small amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications. Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.
SpikeInterface
Much development has been directed toward improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, we developed SpikeInterface, a Python framework designed to unify preexisting spike sorting technologies into a single codebase and to facilitate straightforward comparison and adoption of different approaches. With a few lines of code, researchers can reproducibly run, compare, and benchmark most modern spike sorting algorithms; pre-process, post-process, and visualize extracellular datasets; validate, curate, and export sorting outputs; and more. In this presentation, I will provide an overview of SpikeInterface and, with applications to real and simulated datasets, demonstrate how it can be utilized to reduce the burden of manual curation and to more comprehensively benchmark automated spike sorters.
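A benchmarking session of the kind described might look like the following sketch (sorter availability is an assumption, and the toy dataset stands in for real recordings):

    import spikeinterface.full as si

    recording, gt_sorting = si.toy_example(duration=60)   # small simulated dataset
    sorting_a = si.run_sorter("tridesclous", recording)
    sorting_b = si.run_sorter("spykingcircus", recording)
    comparison = si.compare_two_sorters(sorting_a, sorting_b)
    print(comparison.get_matching())                      # agreement between the two outputs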
Kilosort
Kilosort is a spike sorting pipeline for large-scale electrophysiology. Advances in silicon probe technology mean that in vivo electrophysiological recordings from hundreds of channels will soon become commonplace. To interpret these recordings we need fast, scalable and accurate methods for spike sorting, whose output requires minimal time for manual curation. Kilosort is a spike sorting framework that meets these criteria, and we show that it allows rapid and accurate sorting of large-scale in vivo data. Kilosort models the recorded voltage as a sum of template waveforms triggered on the spike times, allowing overlapping spikes to be identified and resolved. Rapid processing is achieved thanks to a novel low-dimensional approximation for the spatiotemporal distribution of each template, and to batch-based optimization on GPUs. Kilosort is an important step towards fully automated spike sorting of multichannel electrode recordings, and is freely available.
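In equation form, the generative model is roughly the following (our notation, paraphrasing the description above, not the paper's):

    V(t) = \sum_{n} K_{k_n}(t - t_n) + \varepsilon(t)

where V(t) is the multichannel voltage, spike n is assigned to unit k_n with template waveform K_{k_n} and spike time t_n, and \varepsilon(t) is noise; the low-dimensional approximation mentioned above corresponds to a low-rank factorization of each template across channels and time.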
Bayesian distributional regression models for cognitive science
The assumed data generating models (response distributions) of experimental or observational data in cognitive science have become increasingly complex over the past decades. This trend follows a revolution in model estimation methods and a drastic increase in the computing power available to researchers. Today, higher-level cognitive functions can be well captured by and understood through computational cognitive models, a common example being drift diffusion models for decision processes. Such models are often expressed as the combination of two modeling layers. The first layer is the response distribution, with corresponding distributional parameters tailored to the cognitive process under investigation. The second layer consists of latent models of the distributional parameters that capture how those parameters vary as a function of design, stimulus, or person characteristics, often in an additive manner. Such cognitive models can thus be understood as special cases of distributional regression models, in which multiple distributional parameters, rather than just a single centrality parameter, are predicted by additive models. Because of their complexity, distributional models are complicated to estimate, but recent advances in Bayesian estimation methods and corresponding software make them increasingly feasible. In this talk, I will speak about the specification, estimation, and post-processing of Bayesian distributional regression models and how they can help to better understand cognitive processes.
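The two-layer structure can be sketched in a few lines of PyMC (a generic illustration, not the specific software discussed in the talk): both the mean and the standard deviation of the response receive their own additive predictor.

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(2)
    x = rng.standard_normal(200)
    y = rng.normal(0.5 + 0.8 * x, np.exp(-0.5 + 0.3 * x))   # mean and sd both depend on x

    with pm.Model():
        b_mu = pm.Normal("b_mu", 0, 1, shape=2)        # intercept/slope for the mean
        b_sigma = pm.Normal("b_sigma", 0, 1, shape=2)  # intercept/slope for log-sd
        mu = b_mu[0] + b_mu[1] * x
        sigma = pm.math.exp(b_sigma[0] + b_sigma[1] * x)
        pm.Normal("y", mu=mu, sigma=sigma, observed=y)
        idata = pm.sample()                            # posterior over both predictors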
From 1D to 5D: Data-driven Discovery of Whole-brain Dynamic Connectivity in fMRI Data
The analysis of functional magnetic resonance imaging (fMRI) data can greatly benefit from flexible analytic approaches. In particular, the advent of data-driven approaches to identify whole-brain time-varying connectivity and activity has revealed a number of interesting and relevant variations in the data which, when ignored, can provide misleading information. In this lecture I will provide a comparative introduction to a range of data-driven approaches to estimating time-varying connectivity. I will also present detailed examples where studies of both brain health and disorder have been advanced by approaches designed to capture and estimate time-varying information in resting fMRI data. I will review several exemplar data sets analyzed in different ways to demonstrate the complementarity as well as trade-offs of various modeling approaches to answer questions about brain function. Finally, I will review and provide examples of strategies for validating time-varying connectivity, including simulations, multimodal imaging, and comparative prediction within clinical populations, among others. As part of the interactive aspect I will provide a hands-on guide to the dynamic functional network connectivity toolbox within the GIFT software, including an online didactic analytic decision tree to introduce the various concepts and decisions that need to be made when using such tools.
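The simplest data-driven estimate of time-varying connectivity is a sliding-window correlation, sketched below; the GIFT dFNC toolbox layers windowing choices, regularization, and clustering on top of this basic idea.

    import numpy as np

    def sliding_window_fc(ts, width=30, step=5):
        """ts: (time, regions). Returns one correlation matrix per window."""
        return np.array([np.corrcoef(ts[s:s + width].T)
                         for s in range(0, len(ts) - width + 1, step)])

    ts = np.random.default_rng(3).standard_normal((300, 10))   # toy region time series
    print(sliding_window_fc(ts).shape)                         # (n_windows, 10, 10)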
BrainGlobe: a Python ecosystem for computational (neuro)anatomy
Neuroscientists routinely perform experiments aimed at recording or manipulating neural activity, uncovering physiological processes underlying brain function or elucidating aspects of brain anatomy. Understanding how the brain generates behaviour ultimately depends on merging the results of these experiments into a unified picture of brain anatomy and function. We present BrainGlobe, a new initiative aimed at developing common Python tools for computational neuroanatomy. These include cellfinder for fast, accurate cell detection in whole-brain microscopy images, brainreg for aligning images to a reference atlas, and brainrender for visualisation of anatomically registered data. These software packages are developed around the BrainGlobe Atlas API. This API provides a common Python interface to download and interact with reference brain atlases from multiple species (including human, mouse and larval zebrafish). This allows software to be developed agnostic to the atlas and species, increasing adoption and interoperability of software tools in neuroscience.
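Getting an atlas through the Atlas API takes a couple of lines; a sketch follows (package and atlas names as of writing, with the atlas downloaded on first use):

    from bg_atlasapi import BrainGlobeAtlas

    atlas = BrainGlobeAtlas("allen_mouse_25um")   # any registered atlas name works
    print(atlas.reference.shape)                  # reference image volume
    print(atlas.structures["CTX"]["name"])        # metadata for a brain structure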
A macaque connectome for simulating large-scale network dynamics in TheVirtualBrain
TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
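A minimal TVB simulation has the following shape (a sketch using TVB's bundled default connectivity as a stand-in for the macaque connectome described here; parameters are illustrative):

    from tvb.simulator.lab import *

    sim = simulator.Simulator(
        model=models.Generic2dOscillator(),
        connectivity=connectivity.Connectivity.from_file(),   # bundled default connectome
        coupling=coupling.Linear(),
        integrator=integrators.HeunDeterministic(dt=0.1),
        monitors=(monitors.TemporalAverage(period=1.0),))
    sim.configure()
    (time, data), = sim.run(simulation_length=1000.0)   # one (time, data) pair per monitor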
NeuroFedora: Free software for Free Neuroscience
NeuroFedora is an initiative to provide a ready-to-use, Fedora Linux-based Free/Open Source software platform for neuroscience.
A discussion on the necessity for Open Source Hardware in neuroscience research
Research tools are paramount for scientific development: they enable researchers to observe and manipulate natural phenomena, learn their principles, make predictions, develop new technologies and treatments, and improve living standards. Due to their cost and the geographical distribution of manufacturing companies, access to them is not widely available, hindering the pace of research and the ability of many communities to contribute to science and education and reap their benefits. One possible solution is to create research tools under the open source ethos, where all documentation about them (including their designs and building and operating instructions) is made freely available. Dubbed Open Science Hardware (OSH), this production method follows the established and successful principles of open source software and brings many advantages over traditional creation methods, such as economic savings (see Pearce 2020 for potential economic savings in developing open source research tools), distributed manufacturing, repairability, and higher customizability. This development method has been greatly facilitated by recent technological developments in fast prototyping tools, Internet infrastructure, documentation platforms, and lower costs of electronic off-the-shelf components. Taken together, these benefits have the potential to make research more inclusive, equitable, and distributed, and most importantly, more reliable and reproducible, as: (1) researchers can know their tools' inner workings in minute detail; (2) they can calibrate their tools before every experiment and keep them running in optimal condition every time; (3) given the lower price point, students can be trained in hands-on classes, and several copies of the same instrument can be built, parallelizing data collection and yielding more robust datasets; and (4) labs across the world can share the exact same type of instruments and create collaborative projects with standardized data collection and sharing.
Neural network-like collective dynamics in molecules
Neural networks can learn and recognize subtle correlations in high dimensional inputs. However, neural networks are simply many-body systems with strong non-linearities and disordered interactions. Hence, many-body physical systems with similar interactions should be able to show neural network-like behavior. Here we show neural network-like behavior in the nucleation dynamics of promiscuously interacting molecules with multiple stable crystalline phases. Using a combination of theory and experiments, we show how the physics of the system dictates relationships between the difficulty of the pattern recognition task solved, time taken and accuracy. This work shows that high dimensional pattern recognition and learning are not special to software algorithms but can be achieved by the collective dynamics of sufficiently disordered molecular systems.
Fast and deep neuromorphic learning with time-to-first-spike coding
Engineered pattern-recognition systems strive for short time-to-solution and low energy-to-solution characteristics. This represents one of the main driving forces behind the development of neuromorphic devices. For neuromorphic devices and their biological archetypes alike, this corresponds to using as few spikes as possible, as early as possible. The concept of few and early spikes is used as the founding principle in the time-to-first-spike coding scheme. Within this framework, we have developed a spike-timing-based learning algorithm, which we used to train neuronal networks on the mixed-signal neuromorphic platform BrainScaleS-2. We derive, from first principles, error-backpropagation-based learning in networks of leaky integrate-and-fire (LIF) neurons relying only on spike times, for specific configurations of neuronal and synaptic time constants. We explicitly examine applicability to neuromorphic substrates by studying the effects of reduced weight precision and range, as well as of parameter noise. We demonstrate the feasibility of our approach on continuous and discrete data spaces, both in software simulations and on BrainScaleS-2. This narrows the gap between previous models of first-spike-time learning and biological neuronal dynamics and paves the way for fast and energy-efficient neuromorphic applications.
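For reference, the derivation builds on standard leaky integrate-and-fire dynamics (written here in a generic form; the paper's specific assumptions about the neuronal and synaptic time constants are not reproduced):

    \tau_m \frac{dV}{dt} = -(V - V_\text{rest}) + R\, I_\text{syn}(t),
    \qquad V(t) \ge \vartheta \;\Rightarrow\; \text{spike at } t,\; V \leftarrow V_\text{reset}

and in the time-to-first-spike scheme each neuron's earliest output spike time carries its value.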
Using Nengo and the Neural Engineering Framework to Represent Time and Space
The Neural Engineering Framework (and the associated software tool Nengo) provides a general method for converting algorithms into neural networks with an adjustable level of biological plausibility. I will give an introduction to this approach and then focus on recent developments that have yielded new insights into how brains represent time and space. This will start with the underlying mathematical formulation of ideal methods for representing continuous time and continuous space, then show how implementing these in neural networks can improve machine learning tasks, and finally show how the resulting systems compare to temporal and spatial representations in biological brains.
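A minimal Nengo model shows the building block the NEF composes into such systems: an ensemble of spiking neurons representing, here, a one-dimensional signal.

    import numpy as np
    import nengo

    with nengo.Network() as model:
        stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # input signal
        ens = nengo.Ensemble(n_neurons=100, dimensions=1)    # represents the value
        nengo.Connection(stim, ens)
        probe = nengo.Probe(ens, synapse=0.01)               # decoded estimate

    with nengo.Simulator(model) as sim:
        sim.run(1.0)
    print(sim.data[probe].shape)                             # (timesteps, 1)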
Open Neuroscience: Challenging scientific barriers with Open Source & Open Science tools
The Open Science movement advocates for more transparent, equitable, and reliable science. It focusses on improving existing infrastructures and spans all aspects of the scientific process, from implementing systems that reward pre-registering studies and guarantee their publication, all the way to making research data citable and freely available. In this context, open source tools (and the development ethos supporting them) are becoming more and more present in academic labs, as researchers realize that they can improve the quality of their work while cutting costs. In this talk, an overview of OS tools for neuroscience will be given, with a focus on software and hardware, and on how their use can bring scientific independence and make research evolve faster.
AI-assisted annotation of rodent behaviors: Collaboration of the human observer and SmartAnnotator software through active learning
FENS Forum 2024
A comprehensive software suite for data standardization and archiving
FENS Forum 2024
NeuroDeblur: A novel software for fast deconvolution of large light-sheet, confocal, or bright-field microscopy stacks
FENS Forum 2024
SiliFish 3.0: A software tool to model swimming behavior and its application for modeling swimming speed microcircuits in larval zebrafish
FENS Forum 2024