Software

Discover seminars, jobs, and research tagged with software across World Wide.
39 curated items: 33 Seminars · 4 ePosters · 2 Positions
Updated 1 day ago
Position

N/A

Heyns
Nationwide
Dec 5, 2025

FAA™ VaultMesh Brand Deployment — Developer Required. We are building a mass-scale HTML deployment system for a sovereign economic network called FAA™. The system includes over 80 brands under the Housing & Infrastructure sector, with 4 subnodes per brand and 12 HTML pages per node — totaling 3,840 FAA-compliant web pages. Your role will be to:
- Set up a Cloudflare Workers deployment pipeline
- Build out 12 FAA-standard pages per brand subnode (index.html, dashboard.html, pricing.html, etc.)
- Maintain the FAA.zone public structure, especially under /sectors/housing/...
- Integrate with existing admin panels at /admin/admin-portal.html and /admin/admin-portal-approval.html
- Ensure all files match Shapely visual styling (we will provide templates)

SeminarOpen Source

Open SPM: A Modular Framework for Scanning Probe Microscopy

Marcos Penedo Garcia
Senior scientist, LBNI-IBI, EPFL Lausanne, Switzerland
Jun 23, 2025

OpenSPM aims to democratize innovation in the field of scanning probe microscopy (SPM), which is currently dominated by a few proprietary, closed systems that limit user-driven development. Our platform includes a high-speed OpenAFM head and base optimized for small cantilevers, an OpenAFM controller, a high-voltage amplifier, and interfaces compatible with several commercial AFM systems such as the Bruker Multimode, Nanosurf DriveAFM, Witec Alpha SNOM, Zeiss FIB-SEM XB550, and Nenovision Litescope. We have created a fully documented and community-driven OpenSPM platform, with training resources and sourcing information, which has already enabled the construction of more than 15 systems outside our lab. The controller is integrated with open-source tools like Gwyddion, HDF5, and Pycroscopy. We have also engaged external companies, two of which are integrating our controller into their products or interfaces. We see growing interest in applying parts of the OpenSPM platform to related techniques such as correlated microscopy, nanoindentation, and scanning electron/confocal microscopy. To support this, we are developing more generic and modular software, alongside a structured development workflow. A key feature of the OpenSPM system is its Python-based API, which makes the platform fully scriptable and ideal for AI and machine learning applications. This enables, for instance, automatic control and optimization of PID parameters, setpoints, and experiment workflows. With a growing contributor base and industry involvement, OpenSPM is well positioned to become a global, open platform for next-generation SPM innovation.

SeminarOpen SourceRecording

Towards open meta-research in neuroimaging

Kendra Oudyk
ORIGAMI - Neural data science - https://neurodatascience.github.io/
Dec 8, 2024

When meta-research (research on research) makes an observation or points out a problem (such as a flaw in methodology), the project should be repeated later to determine whether the problem remains. For this we need meta-research that is reproducible and updatable, or living meta-research. In this talk, we introduce the concept of living meta-research, examine prequels to this idea, and point towards standards and technologies that could assist researchers in doing living meta-research. We introduce technologies like natural language processing, which can help with automation of meta-research, which in turn will make the research easier to reproduce/update. Further, we showcase our open-source litmining ecosystem, which includes pubget (for downloading full-text journal articles), labelbuddy (for manually extracting information), and pubextract (for automatically extracting information). With these tools, you can simplify the tedious data collection and information extraction steps in meta-research, and then focus on analyzing the text. We will then describe some living meta-research projects to illustrate the use of these tools. For example, we’ll show how we used GPT along with our tools to extract information about study participants. Essentially, this talk will introduce you to the concept of meta-research, some tools for doing meta-research, and some examples. Particularly, we want you to take away the fact that there are many interesting open questions in meta-research, and you can easily learn the tools to answer them. Check out our tools at https://litmining.github.io/
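
To make the automatic information-extraction step concrete, here is a toy version of pulling participant counts out of article text. This is a deliberate simplification of what a tool like pubextract automates; the function and regexes below are illustrative and are not pubextract's API:

```python
import re

def extract_sample_sizes(text):
    """Find phrases like '40 participants' or 'n = 40' and return the numbers."""
    patterns = [
        r"(\d+)\s+(?:participants|subjects|patients)",  # "40 participants"
        r"[nN]\s*=\s*(\d+)",                            # "n = 40"
    ]
    sizes = []
    for pat in patterns:
        sizes.extend(int(m) for m in re.findall(pat, text))
    return sizes

abstract = "We recruited 40 participants (n = 40); 12 subjects were excluded."
print(extract_sample_sizes(abstract))  # [40, 12, 40]
```

Real meta-research tooling must additionally deduplicate mentions and resolve ambiguity, which is where the talk's combination of rule-based tools with LLMs such as GPT comes in.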

SeminarOpen Source

Open Raman Microscopy (ORM): A modular Raman spectroscopy setup with an open-source controller

Kevin Uning
London Centre for Nanotechnology (LCN), University College London (UCL)
Nov 28, 2024

Raman spectroscopy is a powerful technique for identifying chemical species by probing their vibrational energy levels, offering exceptional specificity with a relatively simple setup involving a laser source, spectrometer, and microscope/probe. However, the high cost and limited modularity of commercial Raman systems often restrict exploratory research and hinder broader adoption. To address the need for an affordable, modular microscopy platform for multimodal imaging, we present a customizable confocal Raman spectroscopy setup alongside open-source acquisition software, the ORM (Open Raman Microscopy) Controller, developed in Python. This solution bridges the gap between expensive commercial systems and complex, custom-built setups used by specialist research groups. In this presentation, we will cover the components of the setup, the design rationale, assembly methods, limitations, and its modular potential for expanding functionality. Additionally, we will demonstrate ORM’s capabilities for instrument control, 2D and 3D Raman mapping, region-of-interest selection, and its adaptability to various instrument configurations. We will conclude by showcasing practical applications of this setup across different research fields.

SeminarNeuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Rémi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 28, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
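
The alignment results summarized above typically come from encoding models: regularized linear regression from LLM-derived features to recorded brain responses, evaluated on held-out data. A minimal single-feature sketch of the regression at the core of this approach (illustrative only; actual studies use high-dimensional embeddings, many voxels, and cross-validated ridge regression):

```python
def ridge_1d(x, y, lam=1.0):
    """Closed-form ridge solution for one feature: w = sum(x*y) / (sum(x*x) + lam)."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

# Toy data: a "voxel" that responds linearly to one LLM feature (true weight = 2).
feature = [0.0, 1.0, 2.0, 3.0]
voxel = [0.0, 2.0, 4.0, 6.0]
w = ridge_1d(feature, voxel, lam=0.0)       # lam=0 reduces to ordinary least squares
w_shrunk = ridge_1d(feature, voxel, lam=14.0)  # larger lam shrinks the estimate
predicted = [w * f for f in feature]
```

The prediction accuracy of such a model on held-out stimuli, voxel by voxel, is what "brain recordings can be predicted by LLM representations" means operationally.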

SeminarPsychology

How Generative AI is Revolutionizing the Software Developer Industry

Luca Di Grazia
Università della Svizzera Italiana
Sep 30, 2024

Generative AI is fundamentally transforming the software development industry by improving processes such as software testing, bug detection, bug fixes, and developer productivity. This talk explores how AI-driven techniques, particularly large language models (LLMs), are being utilized to generate realistic test scenarios, automate bug detection and repair, and streamline development workflows. As these technologies evolve, they promise to improve software quality and efficiency significantly. The discussion will cover key methodologies, challenges, and the future impact of generative AI on the software development lifecycle, offering a comprehensive overview of its revolutionary potential in the industry.

SeminarNeuroscienceRecording

State-of-the-Art Spike Sorting with SpikeInterface

Samuel Garcia and Alessio Buccino
CNRS, Lyon, France and Allen Institute for Neural Dynamics, Seattle, USA
Nov 6, 2023

This webinar will focus on spike sorting analysis with SpikeInterface, an open-source framework for the analysis of extracellular electrophysiology data. After a brief introduction of the project (~30 mins) highlighting the basics of the SpikeInterface software and advanced features (e.g., data compression, quality metrics, drift correction, cloud visualization), we will have an extensive hands-on tutorial (~90 mins) showing how to use SpikeInterface in a real-world scenario. After attending the webinar, you will: (1) have a global overview of the different steps involved in a processing pipeline; (2) know how to write a complete analysis pipeline with SpikeInterface.
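
As background for the hands-on session: the first step in most sorting pipelines wrapped by SpikeInterface is detecting threshold crossings in the filtered extracellular signal. A toy detector (pure Python for illustration; this is not the SpikeInterface API) shows the idea:

```python
def detect_spikes(trace, threshold):
    """Return sample indices where the trace first crosses below a negative
    threshold (extracellular spikes are typically negative deflections)."""
    spikes = []
    below = False
    for i, v in enumerate(trace):
        if v < threshold and not below:
            spikes.append(i)   # record only the first sample of each crossing
            below = True
        elif v >= threshold:
            below = False
    return spikes

trace = [0.1, -0.2, -5.0, -6.1, -0.3, 0.0, -5.5, 0.2]
print(detect_spikes(trace, threshold=-4.0))  # [2, 6]
```

Everything after detection (waveform extraction, clustering, curation, quality metrics) is what the webinar's pipeline walkthrough covers.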

SeminarPsychology

The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there

Valentina Borghesani
University of Geneva
Nov 29, 2022

Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). In contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to develop an open-source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it neurocausal (https://neurocausal.github.io).

SeminarNeuroscience

Brian2CUDA: Generating Efficient CUDA Code for Spiking Neural Networks

Denis Alevi
Berlin Institute of Technology
Nov 2, 2022

Graphics processing units (GPUs) are widely available and have been used with great success to accelerate scientific computing in the last decade. These advances, however, are often not available to researchers interested in simulating spiking neural networks, but lacking the technical knowledge to write the necessary low-level code. Writing low-level code is not necessary when using the popular Brian simulator, which provides a framework to generate efficient CPU code from high-level model definitions in Python. Here, we present Brian2CUDA, an open-source software that extends the Brian simulator with a GPU backend. Our implementation generates efficient code for the numerical integration of neuronal states and for the propagation of synaptic events on GPUs, making use of their massively parallel arithmetic capabilities. We benchmark the performance improvements of our software for several model types and find that it can accelerate simulations by up to three orders of magnitude compared to Brian’s CPU backend. Currently, Brian2CUDA is the only package that supports Brian’s full feature set on GPUs, including arbitrary neuron and synapse models, plasticity rules, and heterogeneous delays. When comparing its performance with Brian2GeNN, another GPU-based backend for the Brian simulator with fewer features, we find that Brian2CUDA gives comparable speedups, while being typically slower for small and faster for large networks. By combining the flexibility of the Brian simulator with the simulation speed of GPUs, Brian2CUDA enables researchers to efficiently simulate spiking neural networks with minimal effort and thereby makes the advancements of GPU computing available to a larger audience of neuroscientists.
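
The "numerical integration of neuronal states" that Brian2CUDA parallelizes can be illustrated with the simplest case: a leaky integrate-and-fire neuron stepped with forward Euler. This is a generic single-neuron sketch of the update rule, not the code Brian generates (which compiles many such updates into parallel CUDA kernels):

```python
def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Forward-Euler LIF: dv/dt = (v_rest - v + i_ext) / tau.
    Returns the spike times (in ms, given dt in ms)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(i_ext):
        v += dt * (v_rest - v + i_in) / tau   # Euler update of the membrane potential
        if v >= v_thresh:
            spikes.append(step * dt)          # threshold crossing -> spike
            v = v_reset                       # reset after the spike
    return spikes

# Constant suprathreshold input (100 ms at dt = 0.1 ms) produces regular spiking.
spike_times = simulate_lif([2.0] * 1000)
```

On a GPU, thousands of independent copies of this update run in parallel each timestep, which is where the reported orders-of-magnitude speedups come from.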

SeminarOpen SourceRecording

A Flexible Platform for Monitoring Cerebellum-Dependent Sensory Associative Learning

Gerard Joey Broussard
Princeton Neuroscience Institute
May 31, 2022

Climbing fiber inputs to Purkinje cells provide instructive signals critical for cerebellum-dependent associative learning. Studying these signals in head-fixed mice facilitates the use of imaging, electrophysiological, and optogenetic methods. Here, a low-cost behavioral platform (~$1000) was developed that allows tracking of associative learning in head-fixed mice that locomote freely on a running wheel. The platform incorporates two common associative learning paradigms: eyeblink conditioning and delayed tactile startle conditioning. Behavior is tracked using a camera and the wheel movement by a detector. We describe the components and setup and provide a detailed protocol for training and data analysis. This platform allows the incorporation of optogenetic stimulation and fluorescence imaging. The design allows a single host computer to control multiple platforms for training multiple animals simultaneously.

SeminarNeuroscience

It’s not over our heads: Why human language needs a body

Michał B. Paradowski
Institute of Applied Linguistics, University of Warsaw
May 8, 2022

In the ‘orthodox’ view, cognition has been seen as manipulation of symbolic, mental representations, separate from the body. This dualist Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has for a long time been treated across scientific domains as a system operating largely independently from perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one into believing that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which will be presented, addressing the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.

SeminarNeuroscience

How do protein-RNA condensates form and contribute to disease?

Jernej Ule
UK Dementia Research Institute
May 5, 2022

In recent years, it has become clear that intrinsically disordered regions (IDRs) of RBPs, and the structure of RNAs, often contribute to the condensation of RNPs. To understand the transcriptomic features of such RNP condensates, we’ve used an improved individual nucleotide resolution CLIP protocol (iiCLIP), which produces highly sensitive and specific data, and thus enables quantitative comparisons of interactions across conditions (Lee et al., 2021). This showed how the IDR-dependent condensation properties of TDP-43 specify its RNA binding and regulatory repertoire (Hallegger et al., 2021). Moreover, we developed software for discovery and visualisation of RNA binding motifs that uncovered common binding patterns of RBPs on long multivalent RNA regions that are composed of dispersed motif clusters (Kuret et al, 2021). Finally, we used hybrid iCLIP (hiCLIP) to characterise the RNA structures mediating the assembly of Staufen RNPs across mammalian brain development, which demonstrated the roles of long-range RNA duplexes in the compaction of long 3’UTRs. I will present how the combined analysis of the characteristics of IDRs in RBPs, multivalent RNA regions and RNA structures is required to understand the formation and functions of RNP condensates, and how they change in diseases.

SeminarOpen SourceRecording

Mesmerize: A blueprint for shareable and reproducible analysis of calcium imaging data

Kushal Kolar
University of North Carolina at Chapel Hill
Apr 5, 2022

Mesmerize is a platform for the annotation and analysis of neuronal calcium imaging data. Mesmerize encompasses the entire process of calcium imaging analysis from raw data to interactive visualizations. Mesmerize allows you to create FAIR-functionally linked datasets that are easy to share. The analysis tools are applicable for a broad range of biological experiments and come with GUI interfaces that can be used without requiring a programming background.

SeminarNeuroscienceRecording

NMC4 Short Talk: What can 140,000 Reaches Tell Us About Demographic Contributions to Visuomotor Adaptation?

Hrach Asmerian
University of California, Berkeley
Dec 1, 2021

Motor learning is typically assessed in the lab, affording a high degree of control over the task environment. However, this level of control often comes at the cost of smaller sample sizes and a homogeneous pool of participants (e.g. college students). To address this, we have designed a web-based motor learning experiment, making it possible to reach a larger, more diverse set of participants. As a proof of concept, we collected data from 1,581 participants who completed a visuomotor rotation task, in which participants controlled a visual cursor on the screen with their mouse or trackpad. Motor learning was indexed by how fast participants were able to compensate for a 45° rotation imposed between the cursor and their actual movement. Using a cross-validated LASSO regression, we found that motor learning varied significantly with the participant’s age and sex, and also strongly correlated with the location of the target, visual acuity, and satisfaction with the experiment. In contrast, participants' mouse and browser type were features eliminated by the model, indicating that motor performance was not influenced by variations in computer hardware and software. Together, this proof-of-concept study demonstrates how large datasets can generate important insights into the factors underlying motor learning.
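
For readers unfamiliar with the paradigm: a 45° visuomotor rotation simply rotates the cursor's on-screen displacement relative to the hand's movement vector, so full compensation means aiming 45° in the opposite direction. The transform is a standard 2D rotation (the function name below is ours):

```python
import math

def rotate_cursor(dx, dy, angle_deg):
    """Rotate a movement vector (dx, dy) by angle_deg to get the displayed cursor motion."""
    a = math.radians(angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

# A straight-right hand movement appears rotated 45 degrees counterclockwise on screen.
cx, cy = rotate_cursor(1.0, 0.0, 45.0)  # approximately (0.707, 0.707)
```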

SeminarOpen SourceRecording

GuPPy, a Python toolbox for the analysis of fiber photometry data

Talia Lerner
Northwestern University
Nov 23, 2021

Fiber photometry (FP) is an adaptable method for recording in vivo neural activity in freely behaving animals. It has become a popular tool in neuroscience due to its ease of use, low cost, the ability to combine FP with freely moving behavior, among other advantages. However, analysis of FP data can be a challenge for new users, especially those with a limited programming background. Here, we present Guided Photometry Analysis in Python (GuPPy), a free and open-source FP analysis tool. GuPPy is provided as a Jupyter notebook, a well-commented interactive development environment (IDE) designed to operate across platforms. GuPPy presents the user with a set of graphic user interfaces (GUIs) to load data and provide input parameters. Graphs produced by GuPPy can be exported into various image formats for integration into scientific figures. As an open-source tool, GuPPy can be modified by users with knowledge of Python to fit their specific needs.
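
The core quantity that photometry analysis tools like GuPPy compute is ΔF/F: the fluorescence trace normalized to a baseline. A minimal version is shown below (illustrative only; GuPPy's actual pipeline additionally handles isosbestic control channels, filtering, and event-aligned averaging, and uses a more careful baseline than the simple mean used here):

```python
def delta_f_over_f(trace):
    """Delta-F-over-F with the trace's mean as the baseline F0."""
    f0 = sum(trace) / len(trace)
    return [(f - f0) / f0 for f in trace]

signal = [100.0, 102.0, 98.0, 110.0, 90.0]
dff = delta_f_over_f(signal)  # baseline is 100, so [0.0, 0.02, -0.02, 0.1, -0.1]
```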

SeminarNeuroscienceRecording

Efficient GPU training of SNNs using approximate RTRL

James Knight
University of Sussex
Nov 2, 2021

Last year’s SNUFA workshop report concluded “Moving toward neuron numbers comparable with biology and applying these networks to real-world data-sets will require the development of novel algorithms, software libraries, and dedicated hardware accelerators that perform well with the specifics of spiking neural networks” [1]. Taking inspiration from machine learning libraries — where techniques such as parallel batch training minimise latency and maximise GPU occupancy — as well as our previous research on efficiently simulating SNNs on GPUs for computational neuroscience [2,3], we are extending our GeNN SNN simulator to pursue this vision. To explore GeNN’s potential, we use the eProp learning rule [4] — which approximates RTRL — to train SNN classifiers on the Spiking Heidelberg Digits and the Spiking Sequential MNIST datasets. We find that the performance of these classifiers is comparable to those trained using BPTT [5] and verify that the theoretical advantages of neuron models with adaptation dynamics [5] translate to improved classification performance. We then measured execution times and found that training an SNN classifier using GeNN and eProp becomes faster than SpyTorch and BPTT after less than 685 timesteps and much larger models can be trained on the same GPU when using GeNN. Furthermore, we demonstrate that our implementation of parallel batch training improves training performance by over 4⨉ and enables near-perfect scaling across multiple GPUs. Finally, we show that performing inference using a recurrent SNN using GeNN uses less energy and has lower latency than a comparable LSTM simulated with TensorFlow [6].

SeminarOpen SourceRecording

Autopilot v0.4.0 - Distributing development of a distributed experimental framework

Jonny Saunders
University of Oregon
Sep 28, 2021

Autopilot is a Python framework for performing complex behavioral neuroscience experiments by coordinating a swarm of Raspberry Pis. It was designed not only to give researchers a tool that allows them to perform the hardware-intensive experiments necessary for the next generation of naturalistic neuroscientific observation, but also to make it easier for scientists to be good stewards of the human knowledge project. Specifically, we designed Autopilot as a framework that lets its users contribute their technical expertise to a cumulative library of hardware interfaces and experimental designs, and produce data that is clean at the time of acquisition to lower barriers to open scientific practices. As Autopilot matures, we have been progressively making these aspirations a reality. Currently we are preparing the release of Autopilot v0.4.0, which will include a new plugin system and a wiki that uses semantic web technology to build a technical and contextual knowledge repository. By combining human-readable text and semantic annotations in a wiki that makes contribution as easy as possible, we intend to build a communal knowledge system that provides a mechanism for sharing the contextual technical knowledge that is always excluded from methods sections but is nonetheless necessary to perform cutting-edge experiments. By integrating it with Autopilot, we hope to make a first-of-its-kind system that allows researchers to fluidly blend technical knowledge and open source hardware designs with the software necessary to use them. Reciprocally, we also hope that this system will support a kind of deep provenance that makes abstract "custom apparatus" statements in methods sections obsolete, allowing the scientific community to losslessly and effortlessly trace a dataset back to the code and hardware designs needed to replicate it.
I will describe the basic architecture of Autopilot, recent work on its community contribution ecosystem, and the vision for the future of its development.

SeminarOpen SourceRecording

Creating and controlling visual environments using BonVision

Aman Saleem
University College London
Sep 14, 2021

Real-time rendering of closed-loop visual environments is important for next-generation understanding of brain function and behaviour, but is often prohibitively difficult for non-experts to implement and is limited to few laboratories worldwide. We developed BonVision as an easy-to-use open-source software for the display of virtual or augmented reality, as well as standard visual stimuli. BonVision has been tested on humans and mice, and is capable of supporting new experimental designs in other animal models of vision. As the architecture is based on the open-source Bonsai graphical programming language, BonVision benefits from native integration with experimental hardware. BonVision therefore enables easy implementation of closed-loop experiments, including real-time interaction with deep neural networks, and communication with behavioural and physiological measurement and manipulation devices.

SeminarNeuroscienceRecording

Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software

Stephanie Jones
Brown University
Sep 7, 2021

Electro- and magneto-encephalography (EEG/MEG) are the leading methods to non-invasively record human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular and circuit level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN’s model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being constructed with workflows of use to study some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to study the origin and meaning of 15–29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power ‘events’. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial (“rate”), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory on the circuit origin of such beta events and suggested beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans.
In total, HNN provides an unprecedented, biophysically principled tool to link mechanism to meaning in human E/MEG signals.

SeminarOpen SourceRecording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center of Neurodegenerative Diseases (DZNE) Bonn.
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detection of those structures requires complex decision making that is often impossible with current image analysis software, and is therefore typically executed by humans in a tedious and time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique to solve such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms like Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also little experience in the community about how much training data has to be collected to obtain a reasonable prediction result with deep-learning-based approaches. Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command-line interface and is purely designed for intuitive pixel classification of multidimensional images with DNNs. With the aim to integrate well into the current open source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already successfully applied YAPiC for a variety of tasks. From our experience, a surprisingly low amount of sparse label data is needed to train a sufficiently working classifier for typical bioimaging applications.
Not least because of this, YAPiC has become the “standard weapon” for our core facility to detect objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase contrast time series, and actin filament detection in transmission electron microscopy.

SeminarOpen SourceRecording

SpikeInterface

Alessio Buccino
ETH Zurich
Jun 10, 2021

Much development has been directed toward improving the performance and automation of spike sorting. This continuous development, while essential, has contributed to an over-saturation of new, incompatible tools that hinders rigorous benchmarking and complicates reproducible analysis. To address these limitations, we developed SpikeInterface, a Python framework designed to unify preexisting spike sorting technologies into a single codebase and to facilitate straightforward comparison and adoption of different approaches. With a few lines of code, researchers can reproducibly run, compare, and benchmark most modern spike sorting algorithms; pre-process, post-process, and visualize extracellular datasets; validate, curate, and export sorting outputs; and more. In this presentation, I will provide an overview of SpikeInterface and, with applications to real and simulated datasets, demonstrate how it can be utilized to reduce the burden of manual curation and to more comprehensively benchmark automated spike sorters.

SeminarOpen SourceRecording

Kilosort

Marius Pachitariu
HHMI Janelia Research Campus
May 27, 2021

Kilosort is a spike sorting pipeline for large-scale electrophysiology. Advances in silicon probe technology mean that in vivo electrophysiological recordings from hundreds of channels will soon become commonplace. To interpret these recordings we need fast, scalable and accurate methods for spike sorting, whose output requires minimal time for manual curation. Kilosort is a spike sorting framework that meets these criteria and allows rapid and accurate sorting of large-scale in vivo data. Kilosort models the recorded voltage as a sum of template waveforms triggered on the spike times, allowing overlapping spikes to be identified and resolved. Rapid processing is achieved thanks to a novel low-dimensional approximation for the spatiotemporal distribution of each template, and to batch-based optimization on GPUs. Kilosort is an important step towards fully automated spike sorting of multichannel electrode recordings, and is freely available.
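
The generative model described above (recorded voltage as a sum of template waveforms triggered at spike times) can be written down directly; viewing the recording this way is what lets overlapping spikes be resolved by subtracting fitted templates. A one-channel sketch of the forward model (our illustration, not Kilosort code, which works across channels with low-dimensional template approximations):

```python
def reconstruct_voltage(n_samples, templates, spike_trains):
    """Sum each unit's template waveform placed at each of its spike times."""
    v = [0.0] * n_samples
    for unit, times in spike_trains.items():
        w = templates[unit]
        for t in times:
            for k, amp in enumerate(w):
                if t + k < n_samples:
                    v[t + k] += amp   # overlapping spikes add linearly
    return v

templates = {"unit_a": [0.0, -1.0, 0.5], "unit_b": [0.2, -0.4]}
v = reconstruct_voltage(8, templates, {"unit_a": [1, 4], "unit_b": [2]})
# The two units overlap at sample 2, where their waveforms sum to -0.8.
```

Spike sorting inverts this model: given v, find the spike times (and templates) whose sum best explains the recording.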

Seminar · Neuroscience

Bayesian distributional regression models for cognitive science

Paul Bürkner
University of Stuttgart
May 25, 2021

The assumed data-generating models (response distributions) of experimental or observational data in cognitive science have become increasingly complex over the past decades. This trend follows a revolution in model estimation methods and a drastic increase in the computing power available to researchers. Today, higher-level cognitive functions can be well captured by, and understood through, computational cognitive models, a common example being drift diffusion models for decision processes. Such models are often expressed as the combination of two modeling layers. The first layer is the response distribution, with distributional parameters tailored to the cognitive process under investigation. The second layer consists of latent models of the distributional parameters that capture how those parameters vary as a function of design, stimulus, or person characteristics, often in an additive manner. Such cognitive models can thus be understood as special cases of distributional regression models, in which multiple distributional parameters, rather than just a single centrality parameter, are predicted by additive models. Because of their complexity, distributional models are difficult to estimate, but recent advances in Bayesian estimation methods and corresponding software make them increasingly feasible. In this talk, I will speak about the specification, estimation, and post-processing of Bayesian distributional regression models and how they can help to better understand cognitive processes.
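The two-layer structure can be made concrete with a minimal sketch: a normal response whose mean and (log) standard deviation each depend on a predictor, evaluated through the joint log-likelihood. All coefficients and data below are invented for illustration; in practice such models are fitted with Bayesian tools rather than by hand.

```python
# Hedged sketch of a distributional (location-scale) regression:
# both mu and sigma of a normal response vary with a predictor x.
import math

def normal_logpdf(y, mu, sigma):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - mu) ** 2 / (2 * sigma ** 2)

def loglik(data, b0, b1, g0, g1):
    """Log-likelihood with mu = b0 + b1*x and log(sigma) = g0 + g1*x."""
    total = 0.0
    for x, y in data:
        mu = b0 + b1 * x                 # layer 2: latent model of the mean
        sigma = math.exp(g0 + g1 * x)    # log link keeps sigma positive
        total += normal_logpdf(y, mu, sigma)  # layer 1: response distribution
    return total

data = [(0.0, 0.1), (1.0, 1.2), (2.0, 1.8)]   # (predictor, response) pairs
ll = loglik(data, b0=0.0, b1=1.0, g0=-1.0, g1=0.2)
```

The key point is that the scale parameter gets its own additive predictor, rather than being a single nuisance constant as in ordinary regression.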

Seminar · Neuroscience

From 1D to 5D: Data-driven Discovery of Whole-brain Dynamic Connectivity in fMRI Data

Vince Calhoun
Founding Director, Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State, Georgia Tech, Emory, Atlanta, GA
May 19, 2021

The analysis of functional magnetic resonance imaging (fMRI) data can greatly benefit from flexible analytic approaches. In particular, the advent of data-driven approaches to identify whole-brain time-varying connectivity and activity has revealed a number of interesting and relevant variations in the data which, when ignored, can yield misleading information. In this lecture I will provide a comparative introduction to a range of data-driven approaches for estimating time-varying connectivity. I will also present detailed examples where studies of both brain health and disorder have been advanced by approaches designed to capture and estimate time-varying information in resting fMRI data. I will review several exemplar data sets analyzed in different ways to demonstrate the complementarity, as well as the trade-offs, of various modeling approaches to answer questions about brain function. Finally, I will review and provide examples of strategies for validating time-varying connectivity, including simulations, multimodal imaging, and comparative prediction within clinical populations, among others. As part of the interactive aspect I will provide a hands-on guide to the dynamic functional network connectivity toolbox within the GIFT software, including an online didactic analytic decision tree to introduce the various concepts and decisions that need to be made when using such tools.
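One of the simplest estimators of time-varying connectivity is a sliding-window correlation between two regions' time series, which the toy sketch below illustrates. It is not GIFT code; the window width, step, and time series are invented, and real analyses involve many more regions, windowing choices, and statistical safeguards.

```python
# Toy sliding-window functional connectivity: correlate two regions
# within short windows to expose coupling that changes over the scan.
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def sliding_window_fc(ts1, ts2, width=4, step=2):
    return [pearson(ts1[i:i + width], ts2[i:i + width])
            for i in range(0, len(ts1) - width + 1, step)]

# Two regions that are correlated early in the scan, anti-correlated late
region_a = [0, 1, 0, -1, 0, 1, 0, -1]
region_b = [0, 1, 0, -1, 0, -1, 0, 1]
fc = sliding_window_fc(region_a, region_b)   # connectivity time course
```

A single whole-scan correlation of these two series would average the early +1 and late -1 windows toward zero — exactly the kind of misleading summary that time-varying methods avoid.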

Seminar · Open Source · Recording

A macaque connectome for simulating large-scale network dynamics in The VirtualBrain

Kelly Shen
University of Toronto
Apr 29, 2021

TheVirtualBrain (TVB; thevirtualbrain.org) is a software platform for simulating whole-brain network dynamics. TVB models link biophysical parameters at the cellular level with systems-level functional neuroimaging signals. Data available from animal models can provide vital constraints for the linkage across spatial and temporal scales. I will describe the construction of a macaque cortical connectome as an initial step towards a comprehensive multi-scale macaque TVB model. I will also describe our process of validating the connectome and show an example simulation of macaque resting-state dynamics using TVB. This connectome opens the opportunity for the addition of other available data from the macaque, such as electrophysiological recordings and receptor distributions, to inform multi-scale models of brain dynamics. Future work will include extensions to neurological conditions and other nonhuman primate species.
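The kind of model TVB integrates can be caricatured as local node dynamics coupled through a weighted connectome matrix. The sketch below is a hypothetical two-region linear system with Euler integration — not TVB's actual neural mass models or API; the weights, coupling gain, and step size are made up.

```python
# Hypothetical minimal sketch of connectome-coupled dynamics:
# dx_i/dt = -x_i + G * sum_j W[i][j] * x_j  (toy values, not TVB code).

W = [[0.0, 0.5],         # toy 2-region "connectome" weight matrix
     [0.5, 0.0]]
G = 0.8                  # global coupling gain
dt = 0.01                # Euler integration step

def step(x):
    """One Euler step of the coupled linear system."""
    return [xi + dt * (-xi + G * sum(W[i][j] * x[j] for j in range(len(x))))
            for i, xi in enumerate(x)]

x = [1.0, 0.0]           # initial activity: region 0 perturbed
for _ in range(1000):    # simulate 10 time units
    x = step(x)
```

Even this caricature shows the role of the connectome: activity injected into region 0 propagates to region 1 through W before both relax, and the empirical weights are what a validated macaque connectome supplies to the real model.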

Seminar · Neuroscience · Recording

A discussion on the necessity for Open Source Hardware in neuroscience research

Andre Maia Chagas
University of Sussex
Mar 28, 2021

Research tools are paramount for scientific development: they enable researchers to observe and manipulate natural phenomena, learn their principles, make predictions, and develop new technologies and treatments that improve living standards. Because of their cost and the geographical concentration of the companies that manufacture them, access to these tools is not widely available, hindering the pace of research and the ability of many communities to contribute to science and education and to reap its benefits. One possible solution is to create research tools under the open source ethos, where all documentation (including designs, building instructions, and operating instructions) is made freely available. Dubbed Open Science Hardware (OSH), this production method follows the established and successful principles of open source software and brings many advantages over traditional approaches: economic savings (see Pearce 2020 for potential savings in developing open source research tools), distributed manufacturing, repairability, and higher customizability. It has been greatly facilitated by recent developments in rapid prototyping tools, Internet infrastructure, documentation platforms, and the falling cost of off-the-shelf electronic components. Taken together, these benefits have the potential to make research more inclusive, equitable, distributed and, most importantly, more reliable and reproducible, because: 1) researchers can know their tools' inner workings in minute detail; 2) they can calibrate their tools before every experiment and keep them running in optimal condition every time; 3) given the lower price point, students can be trained in hands-on classes, and several copies of the same instrument can be built, parallelizing data collection and yielding more robust datasets; 4) labs across the world can share exactly the same type of instrument and create collaborative projects with standardized data collection and sharing.

Seminar · Neuroscience

Using Nengo and the Neural Engineering Framework to Represent Time and Space

Terry Stewart
University of Waterloo and National Research Center Canada
Jul 14, 2020

The Neural Engineering Framework (and the associated software tool Nengo) provides a general method for converting algorithms into neural networks with an adjustable level of biological plausibility. I will give an introduction to this approach, and then focus on recent developments that have shown new insights into how brains represent time and space. This will start with the underlying mathematical formulation of ideal methods for representing continuous time and continuous space, then show how implementing these in neural networks can improve machine learning tasks, and finally show how the resulting systems compare to temporal and spatial representations in biological brains.
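The NEF's core recipe — encode a quantity through a population of heterogeneous tuning curves, then solve for linear decoders by least squares — can be sketched as follows. This is not Nengo's API: the rectified-linear neurons, gains, biases, and sample grid are illustrative assumptions, and Nengo itself typically uses more biologically grounded neuron models such as LIF.

```python
# Hedged sketch of NEF-style encoding/decoding of a scalar value
# (hypothetical parameters, not Nengo code).
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def activity(x):
    """Rectified-linear population response to a scalar input x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Solve for linear decoders d minimizing ||A d - x|| over sample points
xs = np.linspace(-1.0, 1.0, 101)
A = np.stack([activity(x) for x in xs])             # shape (101, n_neurons)
decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = activity(0.5) @ decoders                    # decoded estimate of 0.5
```

The heterogeneity of the tuning curves is what makes the linear decode accurate; the same encode/decode machinery extends to vectors and to functions of the represented value.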

ePoster

AI-assisted annotation of rodent behaviors: Collaboration of the human observer and SmartAnnotator software through active learning

Lucas Noldus, Elsbeth van Dam, Loes Ottink, Ruud Tegelenbosch, Marcel van Gerven

FENS Forum 2024

ePoster

A comprehensive software suite for data standardization and archiving

Alessandra Trapani, Paul Adkisson, Mayorquin Heberto, Ben Dichter

FENS Forum 2024

ePoster

NeuroDeblur: A novel software for fast deconvolution of large light-sheet, confocal, or bright-field microscopy stacks

Klaus Becker, Saiedeh Saghafi

FENS Forum 2024

ePoster

SiliFish 3.0: A software tool to model swimming behavior and its application for modeling swimming speed microcircuits in larval zebrafish

Emine Topcu, Tuan V. Bui

FENS Forum 2024