workflow
OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis
In this talk, I will discuss the OpenNeuro FitLins GLM package and illustrate its analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, the tool automates what is otherwise a largely manual process built on in-house scripts for data retrieval, validation, quality control, statistical modeling, and reporting, which can require weeks of effort. The workflow abides by open-science practices, enhancing reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep-preprocessed derivatives, and dynamically creates BIDS Stats Models specifications (run with FitLins) to perform common mass-univariate GLM analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned summaries, all fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, and motor paradigms), reducing analysis time from weeks to hours on high-performance computing systems and enabling researchers to conduct robust single-study, meta-, and mega-analyses of task fMRI data with greatly improved accessibility, standardized reporting, and reproducibility.
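For concreteness, the following Python sketch shows the flavor of what the pipeline automates: writing a BIDS Stats Models specification and handing it to the FitLins command line. The dataset paths, task, and condition names (a flanker task with congruent/incongruent trials) are illustrative assumptions, and the spec fields follow my reading of the BIDS Stats Models standard; in the actual tool, specifications are generated dynamically rather than written by hand.

```python
import json
import subprocess

# Illustrative BIDS Stats Models specification; the task ("flanker") and
# condition names ("congruent"/"incongruent") are hypothetical examples.
model = {
    "Name": "flanker_example",
    "BIDSModelVersion": "1.0.0",
    "Input": {"task": ["flanker"]},
    "Nodes": [
        {
            "Level": "Run",
            "Name": "run_level",
            "GroupBy": ["run", "subject"],
            "Model": {"X": ["trial_type.congruent",
                            "trial_type.incongruent", 1],
                      "Type": "glm"},
            "Contrasts": [{
                "Name": "incongruent_gt_congruent",
                "ConditionList": ["trial_type.incongruent",
                                  "trial_type.congruent"],
                "Weights": [1, -1],
                "Test": "t",
            }],
        },
        {
            "Level": "Subject",
            "Name": "subject_level",
            "GroupBy": ["subject", "contrast"],
            "Model": {"X": [1], "Type": "meta"},
        },
    ],
}
with open("model-flanker_smdl.json", "w") as f:
    json.dump(model, f, indent=2)

# Run FitLins against the BIDS dataset and fMRIPrep derivatives
# (all paths are placeholders).
subprocess.run([
    "fitlins", "/data/bids_dataset", "/data/bids_dataset/derivatives/fitlins",
    "dataset",
    "--model", "model-flanker_smdl.json",
    "--derivatives", "/data/bids_dataset/derivatives/fmriprep",
])
```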
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that fusing subject-specific structural information about the human brain with mathematical dynamic models allows building biologically realistic brain network models with predictive value beyond the explanatory power of either approach alone. The network nodes hold neural population models, derived using mean-field techniques from statistical physics that express ensemble activity via collective variables. Our hybrid approach fuses data-driven and forward-modeling-based techniques and has been successfully applied to explain healthy brain function and to clinical translation, including aging, stroke, and epilepsy. Here we illustrate the workflow with the example of epilepsy: we reconstruct personalized connectivity matrices of epileptic patients using diffusion tensor imaging (DTI). The subset of brain regions generating seizures in patients with refractory partial epilepsy is referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ but may recruit other, healthy brain regions and propagate through large brain networks. Identifying the EZ is crucial for the success of neurosurgery and is one of the historically difficult questions in clinical neuroscience. The latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allow estimation of the EZ, including estimates of confidence and diagnostics of inference performance. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The end-to-end modeling workflow is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
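As a schematic illustration of the modeling core (not the EBRAINS production code), the NumPy sketch below integrates a reduced two-variable Epileptor-style network in which the per-node excitability x0 plays the role of the EZ parameter that the Hamiltonian Monte Carlo inversion estimates. The connectivity, parameter values, and excitability threshold are all illustrative assumptions.

```python
import numpy as np

def simulate_epileptor_network(W, x0, K=1.0, tau=10.0, I_ext=3.1,
                               dt=0.05, n_steps=40000, seed=0):
    """Euler-Maruyama integration of a reduced 2D Epileptor network (schematic).

    W  : (N, N) structural connectivity (e.g., from DTI tractography)
    x0 : (N,) node excitability; in this reduction values above roughly
         -2.05 render a node epileptogenic (illustrative threshold)
    """
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    x = -1.5 * np.ones(N)   # fast discharge variable
    z = 3.0 * np.ones(N)    # slow permittivity variable
    trace = np.empty((n_steps, N))
    for step in range(n_steps):
        # Diffusive coupling: K * sum_j W_ij * (x_j - x_i)
        coupling = K * (W @ x - W.sum(axis=1) * x)
        dx = 1.0 - x**3 - 2.0 * x**2 - z + I_ext
        dz = (4.0 * (x - x0) - z - coupling) / tau
        x = x + dt * dx + np.sqrt(dt) * 0.01 * rng.standard_normal(N)
        z = z + dt * dz
        trace[step] = x
    return trace

# Toy 4-node network with only node 0 set epileptogenic; in the real
# workflow x0 is inferred from patient seizure recordings via HMC.
W = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.5],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.5, 1.0, 0.0]])
x0 = np.array([-1.6, -2.4, -2.4, -2.4])
activity = simulate_epileptor_network(W, x0)
```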
Predicting appearances
Visual appearance is an important factor in product and lighting design, and depends on the combination of form, materials, context, and lighting. Such design spaces are seemingly endless and full of optical as well as perceptual interactions. A systematic approach to navigating this space and predicting the resulting appearance can support designers in their iterative workflow, avoiding time lost to trial and error and offering insight into the optical and perceptual effects at play. It should also allow the artistic freedom to interactively vary the design, and enable easy communication with team members and clients. I will present examples of such approaches via canonical sets, which simplify design spaces in perception-based ways to arrive at intuitive representations, with a focus on light(ing) design and material appearance.
The wonders and complexities of brain microstructure: Enabling biomedical engineering studies combining imaging and models
Brain microstructure plays a key role in driving the transport of drug molecules administered directly to brain tissue, as in convection-enhanced delivery procedures. This study reports the first systematic attempt to characterize the cytoarchitecture of commissural, long-association, and projection fibers, namely the corpus callosum, the fornix, and the corona radiata. Ovine samples from three different subjects were imaged using a scanning electron microscope combined with focused ion beam milling, with particular focus on the axons. For each tract, a 3D reconstruction of relatively large volumes (including a significant number of axons) was performed, and outer axonal ellipticity, outer axonal cross-sectional area, and its relative perimeter were measured. This study [1] provides useful insight into the fibrous organization of the tissue, which can be described as a composite material of elliptical, tortuous, tubular fibers, leading to a workflow that enables accurate simulations of drug delivery incorporating well-resolved microstructural features. As a demonstration of these imaging and reconstruction techniques, our research analyzes the hydraulic permeability of two white matter (WM) areas (corpus callosum and fornix) whose three-dimensional microstructure was reconstructed from the electron microscopy images. Since the white matter structure is mainly composed of elongated, parallel axons, we computed the permeability along the parallel and perpendicular directions using computational fluid dynamics [2]. The results show a statistically significant difference between parallel and perpendicular permeability, with a ratio of about 2 in both white matter structures analyzed, demonstrating their anisotropic behavior. This is in line with experimental results obtained by perfusion of brain matter [3]. Moreover, we find a significant difference between the permeability of the corpus callosum and that of the fornix, suggesting that white matter heterogeneity should also be considered when modeling drug transport in the brain. Our findings, which demonstrate and quantify the anisotropic and heterogeneous character of the white matter, represent a fundamental contribution not only to drug delivery modeling but also to shedding light on interstitial transport mechanisms in the extracellular space. These and many other findings will be discussed during the talk.
1. https://www.researchsquare.com/article/rs-686577/v1
2. https://www.pnas.org/content/118/36/e2105328118
3. https://ieeexplore.ieee.org/abstract/document/9198110
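The permeability estimate itself reduces to applying Darcy's law to the CFD-resolved flow through the reconstructed extracellular space. Below is a minimal sketch of that calculation with placeholder numbers, not the study's measured values.

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Permeability k [m^2] from Darcy's law: Q = k * A * dP / (mu * L).

    Q  : volumetric flow rate through the sample [m^3/s]
    mu : dynamic viscosity of the interstitial fluid [Pa*s]
    L  : sample length along the flow direction [m]
    A  : cross-sectional area normal to the flow [m^2]
    dP : imposed pressure drop across the sample [Pa]
    """
    return Q * mu * L / (A * dP)

# Placeholder values: in the study, the flow rates come from CFD solved
# on the extracellular space reconstructed from electron microscopy.
mu = 1e-3               # ~water at body temperature, Pa*s
L, A = 10e-6, 100e-12   # 10 um long sample, 100 um^2 cross-section
dP = 1.0                # Pa
k_par = darcy_permeability(Q=2.0e-16, mu=mu, L=L, A=A, dP=dP)
k_perp = darcy_permeability(Q=1.0e-16, mu=mu, L=L, A=A, dP=dP)
print(f"k_par/k_perp = {k_par / k_perp:.1f}")  # ~2, as reported for WM
```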
Interpreting the Mechanisms and Meaning of Sensorimotor Beta Rhythms with the Human Neocortical Neurosolver (HNN) Neural Modeling Software
Electro- and magnetoencephalography (EEG/MEG) are the leading methods for non-invasively recording human neural dynamics with millisecond temporal resolution. However, it can be extremely difficult to infer the underlying cellular- and circuit-level origins of these macro-scale signals without simultaneous invasive recordings. This limits the translation of E/MEG into novel principles of information processing, or into new treatment modalities for neural pathologies. To address this need, we developed the Human Neocortical Neurosolver (HNN: https://hnn.brown.edu), a new user-friendly neural modeling tool designed to help researchers and clinicians interpret human imaging data. A unique feature of HNN's model is that it accounts for the biophysics generating the primary electric currents underlying such data, so simulation results are directly comparable to source-localized data. HNN is being built around workflows for studying some of the most commonly measured E/MEG signals, including event-related potentials and low-frequency brain rhythms. In this talk, I will give an overview of this new tool and describe an application to studying the origin and meaning of 15-29 Hz beta-frequency oscillations, known to be important for sensory and motor function. Our data showed that in primary somatosensory cortex these oscillations emerge as transient high-power 'events'. Functionally relevant differences in averaged power reflected a difference in the number of high-power beta events per trial ("rate"), as opposed to changes in event amplitude or duration. These findings were consistent across detection and attention tasks in human MEG, and in local field potentials from mice performing a detection task. HNN modeling led to a new theory of the circuit origin of such beta events and suggested that beta causally impacts perception through layer-specific recruitment of cortical inhibition, with support from invasive recordings in animal models and high-resolution MEG in humans. In total, HNN provides an unprecedented, biophysically principled tool for linking mechanism to meaning in human E/MEG signals.
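The beta-event analysis described here boils down to a time-frequency decomposition followed by a power threshold. The sketch below mirrors that logic with a hand-rolled Morlet transform; the 6x-median threshold echoes the published beta-event criterion, but all parameters are illustrative and this is not HNN's code.

```python
import numpy as np
from scipy.signal import find_peaks

def morlet_power(x, fs, freq, n_cycles=7):
    """Power time course at `freq` via complex Morlet convolution."""
    sigma = n_cycles / (2 * np.pi * freq)       # Gaussian envelope width, s
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t - t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

def beta_event_count(x, fs, f_lo=15, f_hi=29, thresh_factor=6.0):
    """Count transient beta events in one trial (schematic).

    An 'event' is a local maximum of 15-29 Hz wavelet power exceeding
    `thresh_factor` times the median band power.
    """
    tfr = np.array([morlet_power(x, fs, f) for f in range(f_lo, f_hi + 1)])
    band_max = tfr.max(axis=0)   # peak power across the band, per sample
    peaks, _ = find_peaks(band_max, height=thresh_factor * np.median(tfr))
    return len(peaks)

# Synthetic trial: noise plus one injected 21 Hz burst centered at 1.0 s.
fs = 600.0
t = np.arange(int(fs * 2)) / fs
rng = np.random.default_rng(1)
burst = np.exp(-(t - 1.0) ** 2 / (2 * 0.05**2)) * np.sin(2 * np.pi * 21 * t)
print(beta_event_count(rng.standard_normal(t.size) + 5 * burst, fs))  # ~1
```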
Digitization as a driving force for collaboration in neuroscience
Many of the collaborations we encounter in our scientific careers center on a common idea associated with certain resources, such as a dataset, an algorithm, or a model. All partners in a collaboration need to develop a common understanding of these resources and must be able to access them simply and unambiguously to avoid incorrect conclusions, especially in highly cross-disciplinary contexts. While digital computers have assisted scientific workflows in experiment and simulation for many decades, the high degree of heterogeneity in the field has led to a scattered landscape of highly customized, lab-internal solutions for organizing and managing resources on a project-by-project basis. Only with the availability of modern technologies such as the semantic web, platforms for collaborative coding, and data standards that span different disciplines do we have tools at our disposal to make resources increasingly accessible, understandable, and usable. However, without overarching standardization efforts and the adaptation of such technologies to the workflows and needs of individual researchers, their adoption by the neuroscience community will be impeded. From the perspective of computational neuroscience, which inherently depends on leveraging data and methods from across neuroscience for inspiration and validation, I will outline my view of past and present developments towards a more rigorous use of digital resources and how they have improved collaboration, and introduce emerging initiatives to support this process in the future (e.g., EBRAINS http://ebrains.eu, NFDI-Neuro http://www.nfdi-neuro.de).
Reproducible EEG from raw data to publication figures
In this talk I will present recent developments in data sharing, organization, and analysis that make it possible to build fully reproducible workflows. First, I will present the Brain Imaging Data Structure (BIDS) and discuss how it supports building such workflows, showing some new tools to read, import, and create studies from EEG data organized that way. Second, I will present several newly developed tools for reproducible preprocessing and statistical analyses. Although it does take some extra effort, I will argue that it is largely feasible to make most EEG data analysis fully reproducible.
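As one concrete example of the read/import step, the sketch below uses MNE-BIDS to load EEG from a BIDS dataset; the root path, subject, and task entities are placeholders.

```python
from mne_bids import BIDSPath, read_raw_bids

# Placeholder entities: point `root` at a BIDS dataset containing EEG.
bids_path = BIDSPath(root="/data/my_bids_study", subject="01",
                     task="oddball", datatype="eeg", suffix="eeg")
raw = read_raw_bids(bids_path)   # an mne.io.Raw with BIDS metadata attached

# Keeping every preprocessing parameter in version-controlled scripts like
# this one is what makes the final figures regenerable from the raw data.
raw.load_data()
raw.filter(l_freq=0.1, h_freq=40.0)   # example band-pass
print(raw.annotations)                # BIDS events arrive as annotations
```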
Biomedical Image and Genetic Data Analysis with Machine Learning: Applications in Neurology and Oncology
In this presentation I will show the opportunities and challenges of big data analytics with AI techniques in medical imaging, also in combination with genetic and clinical data. Both conventional machine learning techniques, such as radiomics for tumor characterization, and deep learning techniques, for studying brain ageing and prognosis in dementia, will be addressed. The concept of deep imaging, a full integration of medical imaging and machine learning, will also be discussed. Finally, I will address the challenges of successfully integrating these technologies into the daily clinical workflow.
Arkitekt: Streaming analysis and real-time bioimage workflows for microscopy
FENS Forum 2024
A multistep analysis workflow for the classification of cortical LFP events
FENS Forum 2024
Streamlining electrophysiology data analysis: A Python-based workflow for efficient integration and processing
FENS Forum 2024
Towards FAIR neuroscience: An efficient workflow for sharing and integrating data
FENS Forum 2024
Automated Research Workflows for Pose Estimation
Neuromatch 5