MRI scans

Discover seminars, jobs, and research tagged with MRI scans across World Wide.
3 seminars · Updated about 2 years ago
Seminar · Neuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 20, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional magnetic resonance imaging (fMRI), is a formidable task in neuroscience. Existing approaches to fMRI analysis rely on hand-crafted features, but feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that learns brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT on multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experiments show that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential for facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI data.

Speaker: Junbeom Kwon is a research associate in Prof. Jiook Cha's lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916
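For readers curious about the mechanism, here is a minimal sketch of 4D window multi-head self-attention, the core operation the abstract describes. It is an illustration in PyTorch, not the SwiFT implementation: the tensor layout, window sizes, and embedding dimension are assumptions chosen for a toy example.

```python
# Minimal sketch of 4D windowed multi-head self-attention.
# Shapes, window sizes, and dims are illustrative, not the paper's settings.
import torch
import torch.nn as nn

def window_partition_4d(x, ws):
    """Split (B, H, W, D, T, C) into non-overlapping 4D windows.

    Returns (num_windows * B, prod(ws), C) so attention runs per window.
    """
    B, H, W, D, T, C = x.shape
    wh, ww, wd, wt = ws
    x = x.view(B, H // wh, wh, W // ww, ww, D // wd, wd, T // wt, wt, C)
    x = x.permute(0, 1, 3, 5, 7, 2, 4, 6, 8, 9).contiguous()
    return x.view(-1, wh * ww * wd * wt, C)

class Window4DAttention(nn.Module):
    """Multi-head self-attention restricted to local 4D windows."""
    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, H, W, D, T, C) patch embeddings
        windows = window_partition_4d(x, self.window_size)
        out, _ = self.attn(windows, windows, windows)
        return out  # (num_windows * B, window_volume, C)

# Toy usage: a tiny "fMRI" grid of patch embeddings.
x = torch.randn(1, 8, 8, 8, 4, 32)                    # (B, H, W, D, T, C)
attn = Window4DAttention(dim=32, num_heads=4, window_size=(4, 4, 4, 2))
print(attn(x).shape)                                  # torch.Size([16, 128, 32])
```

Restricting attention to local 4D windows is what keeps memory linear in the number of windows rather than quadratic in the full spatiotemporal sequence length.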

Seminar · Neuroscience

Brain charts for the human lifespan

Richard Bethlehem
Director of Neuroimaging, Autism Research Centre, University of Cambridge, United Kingdom
Jan 18, 2022

Over the past few decades, neuroimaging has become a ubiquitous tool in basic research and clinical studies of the human brain. However, no reference standards currently exist to quantify individual differences in neuroimaging metrics over time, in contrast to growth charts for anthropometric traits such as height and weight. Here, we built an interactive resource (www.brainchart.io) to benchmark brain morphology derived from any current or future sample of magnetic resonance imaging (MRI) data. With the goal of basing these reference charts on the largest and most inclusive dataset available, we aggregated 123,984 MRI scans from 101,457 participants aged from 115 days post-conception through 100 postnatal years, across more than 100 primary research studies. Cerebrum tissue volumes and other global or regional MRI metrics were quantified by centile scores, relative to non-linear trajectories of brain structural changes, and rates of change, over the lifespan. Brain charts identified previously unreported neurodevelopmental milestones; showed high stability of individual centile scores over longitudinal assessments; and demonstrated robustness to technical and methodological differences between primary studies. Centile scores showed increased heritability compared to non-centiled MRI phenotypes, and provided a standardised measure of atypical brain structure that revealed patterns of neuroanatomical variation across neurological and psychiatric disorders. In sum, brain charts are an essential first step towards robust quantification of individual deviations from normative trajectories in multiple, commonly used neuroimaging phenotypes. Our collaborative study proves the principle that brain charts are achievable on a global scale over the entire lifespan, and are applicable to the analysis of diverse developmental and clinical effects on human brain structure.
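As a rough illustration of the centile-score idea, the sketch below fits a simple Gaussian normative model (polynomial mean and spread over age) to synthetic data and converts a new measurement into a centile. The actual study fits generalized additive models for location, scale, and shape (GAMLSS); everything here, including the data and the metric name, is a simplified assumption.

```python
# Toy centile (normative) score against an age trajectory.
# Assumes a Gaussian model with polynomial age trends; data are synthetic.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
age = rng.uniform(0, 100, 2000)                           # years
volume = 1200 - 2.5 * age + rng.normal(0, 50, age.size)   # fake MRI metric

# Fit the normative mean trajectory with a low-order polynomial.
mean_coef = np.polyfit(age, volume, deg=3)
mu = np.polyval(mean_coef, age)

# Model the spread as a smooth function of age via squared residuals.
var_coef = np.polyfit(age, (volume - mu) ** 2, deg=2)

def centile(a, v):
    """Centile of value v at age a under the fitted Gaussian model."""
    m = np.polyval(mean_coef, a)
    s = np.sqrt(np.maximum(np.polyval(var_coef, a), 1e-6))
    return norm.cdf((v - m) / s)

print(f"{100 * centile(40.0, 1150.0):.1f}th centile")
```

The same scheme gives the longitudinal stability the abstract mentions: an individual rescanned years later is compared against the model at their new age, so a stable centile means they track their own trajectory.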

Seminar · Open Source · Recording

Introducing YAPiC: An Open Source tool for biologists to perform complex image segmentation with deep learning

Christoph Möhl
Core Research Facilities, German Center for Neurodegenerative Diseases (DZNE), Bonn
Aug 26, 2021

Robust detection of biological structures such as neuronal dendrites in brightfield micrographs, tumor tissue in histological slides, or pathological brain regions in MRI scans is a fundamental task in bio-image analysis. Detecting these structures requires complex decision-making that is often impossible with current image analysis software and is therefore typically performed by humans in a tedious, time-consuming manual procedure. Supervised pixel classification based on Deep Convolutional Neural Networks (DNNs) is currently emerging as the most promising technique for solving such complex region detection tasks. Here, a self-learning artificial neural network is trained with a small set of manually annotated images to eventually identify the trained structures in large image data sets in a fully automated way. While supervised pixel classification based on faster machine learning algorithms such as Random Forests is nowadays part of the standard toolbox of bio-image analysts (e.g. Ilastik), the currently emerging tools based on deep learning are still rarely used. There is also little experience in the community regarding how much training data has to be collected to obtain a reasonable prediction result with deep learning-based approaches.

Our software YAPiC (Yet Another Pixel Classifier) provides an easy-to-use Python and command-line interface and is designed purely for intuitive pixel classification of multidimensional images with DNNs. With the aim of integrating well into the current open-source ecosystem, YAPiC utilizes the Ilastik user interface in combination with a high-performance GPU server for model training and prediction. Numerous research groups at our institute have already applied YAPiC successfully to a variety of tasks. From our experience, a surprisingly small amount of sparse label data is needed to train a classifier that works well for typical bioimaging applications. Not least because of this, YAPiC has become the "standard weapon" of our core facility for detecting objects in hard-to-segment images. We would like to present some use cases, such as cell classification in high-content screening, tissue detection in histological slides, quantification of neural outgrowth in phase-contrast time series, and actin filament detection in transmission electron microscopy.
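To make the sparse-annotation workflow concrete, here is a minimal sketch of supervised pixel classification trained only on labeled pixels. It uses a Random Forest on simple filter-bank features (the Ilastik-style baseline the abstract contrasts with), not YAPiC's deep networks or its actual interface; the image, labels, and feature choices are illustrative assumptions.

```python
# Sketch: pixel classification from sparse labels (Random Forest baseline,
# not YAPiC's DNN pipeline). Image, labels, and features are synthetic.
import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier

def pixel_features(img):
    """Stack per-pixel features: raw intensity, Gaussian blurs, edge strength."""
    feats = [img]
    for sigma in (1, 2, 4):
        feats.append(ndimage.gaussian_filter(img, sigma))
        feats.append(ndimage.gaussian_gradient_magnitude(img, sigma))
    return np.stack(feats, axis=-1)           # (H, W, n_features)

# Synthetic image: bright blob on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0, 0.3, (128, 128))
img[40:80, 40:80] += 1.0

# Sparse annotations: 0 = unlabeled, 1 = background, 2 = foreground.
labels = np.zeros_like(img, dtype=int)
labels[5:15, 5:15] = 1
labels[55:65, 55:65] = 2

X = pixel_features(img)
mask = labels > 0                              # train only on labeled pixels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[mask], labels[mask])

# Predict a full probability map; column 1 is clf.classes_[1] = label 2.
proba = clf.predict_proba(X.reshape(-1, X.shape[-1]))[:, 1].reshape(img.shape)
print(f"mean foreground probability inside blob: {proba[40:80, 40:80].mean():.2f}")
```

A DNN-based tool replaces the hand-built feature stack with learned features, which is what makes the harder-to-segment cases in the abstract tractable, at the cost of GPU training.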