Topic spotlight
Topic · World Wide

voxel

Discover seminars, jobs, and research tagged with voxel across World Wide.
8 curated items · 7 Seminars · 1 ePoster
Updated about 2 years ago
Seminar · Neuroscience

NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping

Andy Jahn
fMRI Lab, University of Michigan
Oct 5, 2023

We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. The authors discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold while still showing sub-threshold results, with opacity proportional to the strength of the effect. Using the NARPS dataset as an example, they use this approach to illustrate that there may in fact be more agreement between researchers than previously thought. With a continuous, "highlighted" approach, it becomes clear that most effects lie in the same location and point in the same direction, compared with an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
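
As a concrete illustration of "highlighting rather than hiding," here is a minimal Python sketch of transparent thresholding on a toy 2D statistic map. The simulated data, threshold value, and opacity rule are illustrative assumptions, not the implementation used in the paper.

```python
# Transparent thresholding on a toy statistic map: sub-threshold voxels stay
# visible with opacity proportional to effect strength, and supra-threshold
# clusters are outlined. All values here are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
stat = rng.normal(size=(64, 64))        # toy voxelwise statistic map
stat[20:30, 20:30] += 3.0               # an injected "effect"

thresh = 2.3                            # assumed significance threshold
alpha = np.clip(np.abs(stat) / thresh, 0.0, 1.0)   # opacity grows with |effect|

fig, ax = plt.subplots()
ax.imshow(np.zeros_like(stat), cmap="gray")          # stand-in anatomical backdrop
ax.imshow(stat, cmap="coolwarm", alpha=alpha)        # highlighted, not hidden
ax.contour(np.abs(stat) >= thresh, levels=[0.5],
           colors="k", linewidths=0.8)               # outline supra-threshold clusters
ax.set_title("Highlighted statistic map")
plt.show()
```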

Seminar · Physics of Life · Recording

Crystallinity characterization of white matter in the human brain

Erin Teich
University of Pennsylvania
May 8, 2022

White matter microstructure underpins cognition and function in the human brain through the facilitation of neuronal communication, and the non-invasive characterization of this structure remains an elusive goal in the neuroscience community. Efforts to assess white matter microstructure are hampered by the sheer amount of information needed for characterization. Current techniques address this problem by representing white matter features with single scalars that are often not easy to interpret. Here, we address these issues by introducing tools from soft matter for the characterization of white matter microstructure. We investigate structure on a mesoscopic scale by analyzing its homogeneity and determining which regions of the brain are structurally homogeneous, or "crystalline" in the context of materials science. We find that crystallinity is a reliable metric that varies across the brain along interpretable lines of anatomical difference. We also parcellate white matter into "crystal grains," or contiguous sets of voxels of high structural similarity, and find overlap with other white matter parcellations. Our results provide new means of assessing white matter microstructure on multiple length scales, and open new avenues of future inquiry.
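
As a loose illustration of the "crystal grain" idea (and not the soft-matter crystallinity metric used in this work), the sketch below scores each voxel by the average similarity of its feature vector to those of its face neighbors, then groups contiguous high-similarity voxels with connected-component labeling. The feature vectors, smoothing, and similarity threshold are placeholders.

```python
# Toy "grain" parcellation: per-voxel neighborhood similarity + connected components.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
features = rng.normal(size=(32, 32, 32, 6))                        # placeholder per-voxel descriptors
features = ndimage.gaussian_filter(features, sigma=(2, 2, 2, 0))   # smooth spatially, not across channels
features /= np.linalg.norm(features, axis=-1, keepdims=True)       # unit-length feature vectors

# Mean cosine similarity with the six face neighbors as a crude homogeneity score.
score = np.zeros(features.shape[:3])
for axis in range(3):
    for shift in (-1, 1):
        score += np.sum(features * np.roll(features, shift, axis=axis), axis=-1)
score /= 6.0

grains, n_grains = ndimage.label(score > 0.95)   # contiguous high-similarity "grains"
print(f"{n_grains} candidate grains")
```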

Seminar · Neuroscience

Multi-modal biomarkers improve prediction of memory function in cognitively unimpaired older adults

Alexandra N. Trelle
Stanford
Mar 21, 2022

Identifying biomarkers that predict current and future cognition may improve estimates of Alzheimer’s disease risk among cognitively unimpaired older adults (CU). In vivo measures of amyloid and tau protein burden and task-based functional MRI measures of core memory mechanisms, such as the strength of cortical reinstatement during remembering, have each been linked to individual differences in memory in CU. This study assesses whether combining CSF biomarkers with fMRI indices of cortical reinstatement improves estimation of memory function in CU, assayed using three unique tests of hippocampal-dependent memory. Participants were 158 CU (90F, aged 60-88 years, CDR=0) enrolled in the Stanford Aging and Memory Study (SAMS). Cortical reinstatement was quantified using multivoxel pattern analysis of fMRI data collected during completion of a paired associate cued recall task. Memory was assayed by associative cued recall, a delayed recall composite, and a mnemonic discrimination task that involved discrimination between studied ‘target’ objects, novel ‘foil’ objects, and perceptually similar ‘lure’ objects. CSF Aβ42, Aβ40, and p-tau181 were measured with the automated Lumipulse G system (N=115). Regression analyses examined cross-sectional relationships between memory performance in each task and a) the strength of cortical reinstatement in the Default Network (comprised of posterior medial, medial frontal, and lateral parietal regions) during associative cued recall and b) CSF Aβ42/Aβ40 and p-tau181, controlling for age, sex, and education. For mnemonic discrimination, linear mixed effects models were used to examine the relationship between discrimination (d’) and each predictor as a function of target-lure similarity. Stronger cortical reinstatement was associated with better performance across all three memory assays. Age and higher CSF p-tau181 were each associated with poorer associative memory and a diminished improvement in mnemonic discrimination as target-lure similarity decreased. When combined in a single model, CSF p-tau181 and Default Network reinstatement strength, but not age, explained unique variance in associative memory and mnemonic discrimination performance, outperforming the single-modality models. Combining fMRI measures of core memory functions with protein biomarkers of Alzheimer’s disease significantly improved prediction of individual differences in memory performance in CU. Leveraging multimodal biomarkers may enhance future prediction of risk for cognitive decline.
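
In outline, the modeling described here is a multiple regression with a CSF biomarker predictor and an fMRI reinstatement predictor plus covariates. The sketch below shows that structure on simulated data; the variable names, distributions, and sample size are placeholders, not the SAMS measures.

```python
# Schematic regression: memory ~ CSF p-tau181 + Default Network reinstatement,
# controlling for age, sex, and education (all data simulated for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 115
df = pd.DataFrame({
    "memory": rng.normal(size=n),            # e.g., associative cued-recall score
    "ptau181": rng.normal(size=n),           # CSF p-tau181 (standardized placeholder)
    "reinstatement": rng.normal(size=n),     # Default Network reinstatement strength
    "age": rng.uniform(60, 88, size=n),
    "sex": rng.integers(0, 2, size=n),
    "education": rng.integers(12, 21, size=n),
})

model = smf.ols("memory ~ ptau181 + reinstatement + age + C(sex) + education",
                data=df).fit()
print(model.summary())
```

The mnemonic-discrimination analyses described above additionally use linear mixed-effects models over target-lure similarity, which this simple cross-sectional sketch does not cover.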

Seminar · Neuroscience · Recording

NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex

Aria Wang
Carnegie Mellon University
Nov 30, 2021

To better understand human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with supervision from natural language. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD), a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance (up to a 0.86 correlation with test data and an R-squared of 0.75) in higher-level visual cortex in humans. Moreover, CLIP representations explain unique variance in these higher-level visual areas compared to models trained with only images or text. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
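
A voxelwise encoding model of this kind typically maps an image-feature matrix to measured voxel responses with regularized regression and scores per-voxel prediction accuracy on held-out images. The sketch below uses random stand-ins for the CLIP embeddings and fMRI responses and assumes ridge regression as the estimator; it mirrors the general recipe rather than the authors' exact pipeline.

```python
# Minimal voxelwise encoding sketch: features -> voxel responses via ridge regression,
# scored by per-voxel correlation on held-out images (all data simulated here).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_images, n_features, n_voxels = 1000, 512, 2000
X = rng.normal(size=(n_images, n_features))            # stand-in for CLIP image embeddings
W = 0.1 * rng.normal(size=(n_features, n_voxels))
Y = X @ W + rng.normal(size=(n_images, n_voxels))      # simulated voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = Ridge(alpha=100.0).fit(X_tr, Y_tr)               # one linear map, fit for all voxels at once
Y_hat = enc.predict(X_te)

# Per-voxel Pearson correlation between predicted and measured held-out responses.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median prediction correlation: {np.median(r):.2f}")
```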

Seminar · Open Source · Recording

Get more from your ISH brain slices with Stalefish

Seb James
Department of Psychology, The University of Sheffield
Oct 12, 2021

The standard method for staining structures in the brain is to slice the brain into 2D sections. Each slice is treated using a technique such as in-situ hybridization to examine the spatial expression of a particular molecule at a given developmental timepoint. Depending on the brain structures being studied, slices can be made coronally, sagittally, or at any angle that is thought to be optimal for analysis. However, assimilating the information presented in the 2D slice images to gain quantitative and informative 3D expression patterns is challenging. Even if expression levels are presented as voxels, to give 3D expression clouds, it can be difficult to compare expression across individuals, and analysing such data requires significant expertise and imagination. In this talk, I will describe a new approach to examining histology slices, in which the user defines the brain structure of interest by drawing curves around it on each slice in a set, along with the depth of tissue from which to sample expression. The sampled 'curves' are then assembled into a 3D surface, which can then be transformed onto a common reference frame for comparative analysis. I will show how other neuroscientists can obtain and use the tool, which is called Stalefish, to analyse their own image data with no (or minimal) changes to their slice preparation workflow.
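
Conceptually, the core step is to sample expression along a user-drawn curve on each slice and stack the sampled profiles by slice to form an unrolled 2D surface for the structure of interest. The sketch below illustrates that step with toy images and placeholder curves; it is not the Stalefish interface, just the sampling-and-stacking idea in outline.

```python
# Sample intensity along a per-slice curve, then stack profiles across slices.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(4)
n_slices, h, w = 20, 128, 128
slices = rng.random((n_slices, h, w))                 # toy ISH slice images

# Placeholder "hand-drawn" curves: one (row, col) polyline of 100 points per slice.
t = np.linspace(0.0, 2.0 * np.pi, 100)
curves = [np.stack([64 + 30 * np.sin(t), 64 + 30 * np.cos(t)]) for _ in range(n_slices)]

surface = np.stack([
    map_coordinates(img, crv, order=1)                # bilinear sampling along the curve
    for img, crv in zip(slices, curves)
])                                                    # shape: (n_slices, n_curve_points)
print(surface.shape)
```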

ePoster

Examining speech disfluency through the analysis of grey matter densities in 5-year-olds using voxel-based morphometry

Ashmeet Jolly, Elmo Pulli, Henry Railo, Elina Mainela-Arnold, Jetro Tuulari

FENS Forum 2024