Voxel
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss a recent paper by Taylor et al. (2023): https://www.sciencedirect.com/science/article/pii/S1053811923002896. The authors discuss the merits of highlighting results instead of hiding them; that is, clearly marking which voxels and clusters pass a given significance threshold while still displaying sub-threshold results, with opacity proportional to the strength of the effect. They use this approach, with the NARPS dataset as an example, to illustrate that there may in fact be more agreement between researchers than previously thought. With a continuous, "highlighted" presentation it becomes clear that most effects fall in the same locations and point in the same direction, which is obscured by an approach that only permits rejecting or not rejecting the null hypothesis. We will also talk about the implications of this approach for creating figures, detecting artifacts, and aiding reproducibility.
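As a rough illustration of the "highlight, don't hide" presentation described above (this is not the authors' code), the sketch below renders a synthetic 2D statistical map in Python, scaling voxel opacity with |z| and outlining the supra-threshold region; the data, threshold value, and colormap are placeholder assumptions.

```python
# Rough illustration of "highlight, don't hide" (not the authors' code).
# Synthetic data stand in for a real z-statistic slice; the threshold and
# colormap are placeholder choices.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
zmap = rng.normal(size=(64, 64))       # placeholder z-statistics
zmap[20:30, 20:30] += 3.0              # an embedded "effect"

thresh = 2.3                           # nominal cluster-forming threshold
alpha = np.clip(np.abs(zmap) / np.abs(zmap).max(), 0.1, 1.0)  # opacity ~ |z|

fig, ax = plt.subplots()
# Fade sub-threshold voxels instead of masking them out entirely.
im = ax.imshow(zmap, cmap="RdBu_r", vmin=-4, vmax=4, alpha=alpha)
# Outline the supra-threshold region so it is still clearly highlighted.
ax.contour((np.abs(zmap) >= thresh).astype(float), levels=[0.5],
           colors="k", linewidths=1)
fig.colorbar(im, ax=ax, label="z")
plt.show()
```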
Crystallinity characterization of white matter in the human brain
White matter microstructure underpins cognition and function in the human brain through the facilitation of neuronal communication, and the non-invasive characterization of this structure remains an elusive goal in the neuroscience community. Efforts to assess white matter microstructure are hampered by the sheer amount of information needed for characterization. Current techniques address this problem by representing white matter features with single scalars that are often not easy to interpret. Here, we address these issues by introducing tools from soft matter for the characterization of white matter microstructure. We investigate structure on a mesoscopic scale by analyzing its homogeneity and determining which regions of the brain are structurally homogeneous, or "crystalline" in the context of materials science. We find that crystallinity is a reliable metric that varies across the brain along interpretable lines of anatomical difference. We also parcellate white matter into "crystal grains," or contiguous sets of voxels of high structural similarity, and find overlap with other white matter parcellations. Our results provide new means of assessing white matter microstructure on multiple length scales, and open new avenues of future inquiry.
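For intuition only, here is a minimal Python sketch (not the paper's method) of the kind of computation involved: a local homogeneity score over a synthetic field of voxelwise fiber orientations, with "grains" grown as connected components of highly homogeneous voxels. The orientation data, neighborhood definition, and cutoff are all assumptions.

```python
# Illustrative sketch only (not the paper's method): a crude local
# "homogeneity" score from voxelwise principal fiber orientations, and
# contiguous "grain"-like regions grown from highly homogeneous voxels.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
vecs = rng.normal(size=(32, 32, 32, 3))
vecs /= np.linalg.norm(vecs, axis=-1, keepdims=True)   # unit orientations

# Mean |cos(angle)| with the six face neighbors, sign-invariant because
# fiber orientations are axial (v and -v are equivalent). np.roll wraps at
# the volume edges, which is a simplification for this toy example.
sims = []
for axis in range(3):
    for shift in (-1, 1):
        nb = np.roll(vecs, shift, axis=axis)
        sims.append(np.abs(np.sum(vecs * nb, axis=-1)))
homogeneity = np.mean(sims, axis=0)                     # shape (32, 32, 32)

# "Grains": connected components of voxels above an arbitrary cutoff.
grains, n_grains = ndimage.label(homogeneity > 0.8)
print(f"{n_grains} grains; mean homogeneity {homogeneity.mean():.2f}")
```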
Multi-modal biomarkers improve prediction of memory function in cognitively unimpaired older adults
Identifying biomarkers that predict current and future cognition may improve estimates of Alzheimer’s disease risk among cognitively unimpaired older adults (CU). In vivo measures of amyloid and tau protein burden and task-based functional MRI measures of core memory mechanisms, such as the strength of cortical reinstatement during remembering, have each been linked to individual differences in memory in CU. This study assesses whether combining CSF biomarkers with fMRI indices of cortical reinstatement improves estimation of memory function in CU, assayed using three distinct tests of hippocampal-dependent memory. Participants were 158 CU (90F, aged 60-88 years, CDR=0) enrolled in the Stanford Aging and Memory Study (SAMS). Cortical reinstatement was quantified using multivoxel pattern analysis of fMRI data collected during completion of a paired-associate cued recall task. Memory was assayed by associative cued recall, a delayed recall composite, and a mnemonic discrimination task that involved discrimination between studied ‘target’ objects, novel ‘foil’ objects, and perceptually similar ‘lure’ objects. CSF Aβ42, Aβ40, and p-tau181 were measured with the automated Lumipulse G system (N=115). Regression analyses examined cross-sectional relationships between memory performance in each task and a) the strength of cortical reinstatement in the Default Network (comprising posterior medial, medial frontal, and lateral parietal regions) during associative cued recall and b) CSF Aβ42/Aβ40 and p-tau181, controlling for age, sex, and education. For mnemonic discrimination, linear mixed effects models were used to examine the relationship between discrimination (d’) and each predictor as a function of target-lure similarity. Stronger cortical reinstatement was associated with better performance across all three memory assays. Older age and higher CSF p-tau181 were each associated with poorer associative memory and a diminished improvement in mnemonic discrimination as target-lure similarity decreased. When combined in a single model, CSF p-tau181 and Default Network reinstatement strength, but not age, explained unique variance in associative memory and mnemonic discrimination performance, outperforming the single-modality models. Combining fMRI measures of core memory functions with protein biomarkers of Alzheimer’s disease significantly improved prediction of individual differences in memory performance in CU. Leveraging multimodal biomarkers may enhance future prediction of risk for cognitive decline.
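As a hedged illustration of the model-comparison logic only (simulated data and hypothetical column names, not the study's analysis code), the sketch below fits single-modality and combined regression models with statsmodels and tests whether the combined model explains additional variance beyond either single-modality model.

```python
# Hedged sketch of the model-comparison logic (hypothetical column names,
# simulated placeholder data; not the study's actual analysis code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 158
df = pd.DataFrame({
    "memory": rng.normal(size=n),           # e.g., associative cued recall score
    "reinstatement": rng.normal(size=n),    # Default Network reinstatement strength
    "ptau181": rng.normal(size=n),          # CSF p-tau181
    "age": rng.uniform(60, 88, size=n),
    "sex": rng.integers(0, 2, size=n),
    "education": rng.normal(16, 2, size=n),
})

covars = "age + sex + education"
m_fmri = smf.ols(f"memory ~ reinstatement + {covars}", df).fit()
m_csf = smf.ols(f"memory ~ ptau181 + {covars}", df).fit()
m_both = smf.ols(f"memory ~ reinstatement + ptau181 + {covars}", df).fit()

# Does the combined model explain additional variance beyond each
# single-modality model? (F-tests of nested models.)
print(m_both.compare_f_test(m_fmri))
print(m_both.compare_f_test(m_csf))
print({"fMRI": m_fmri.rsquared_adj, "CSF": m_csf.rsquared_adj,
       "both": m_both.rsquared_adj})
```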
NMC4 Short Talk: Image embeddings informed by natural language improve predictions and understanding of human higher-level visual cortex
To better characterize human scene understanding, we extracted features from images using CLIP, a neural network model of visual concepts trained with natural language supervision. We then constructed voxelwise encoding models to explain whole-brain responses arising from viewing natural images from the Natural Scenes Dataset (NSD), a large-scale fMRI dataset collected at 7T. Our results reveal that CLIP, as compared to convolution-based image classification models such as ResNet or AlexNet, as well as language models such as BERT, gives rise to representations that enable better prediction performance (up to a 0.86 correlation with test data and an R-squared of 0.75) in higher-level visual cortex in humans. Moreover, CLIP representations explain unique variance in these higher-level visual areas relative to models trained on images or text alone. Control experiments show that the improvement in prediction observed with CLIP is not due to architectural differences (transformer vs. convolution) or to the encoding of image captions per se (vs. single object labels). Together, our results indicate that CLIP and, more generally, multimodal models trained jointly on images and text, may serve as better candidate models of representation in human higher-level visual cortex. The bridge between language and vision provided by jointly trained models such as CLIP also opens up new and more semantically rich ways of interpreting the visual brain.
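The following is a schematic Python sketch of a voxelwise encoding analysis of this general kind, not the authors' pipeline: random arrays stand in for CLIP image embeddings and NSD voxel responses, a ridge model maps features to voxels, and prediction accuracy is scored as the per-voxel correlation on held-out images.

```python
# Hedged sketch of a voxelwise encoding model: ridge regression from image
# embeddings to voxel responses, scored by correlation on held-out images.
# Random arrays stand in for CLIP features and NSD betas; in practice the
# features would come from a pretrained CLIP image encoder.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_images, n_features, n_voxels = 1000, 512, 200
X = rng.normal(size=(n_images, n_features))          # image embeddings
W = rng.normal(size=(n_features, n_voxels)) * 0.1
Y = X @ W + rng.normal(size=(n_images, n_voxels))    # simulated voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# Per-voxel prediction accuracy: correlation of predictions with held-out data.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"median voxel correlation: {np.median(r):.2f}")
```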
Get more from your ISH brain slices with Stalefish
The standard method for staining structures in the brain is to slice the brain into 2D sections. Each slice is treated using a technique such as in-situ hybridization to examine the spatial expression of a particular molecule at a given developmental timepoint. Depending on the brain structures being studied, slices can be made coronally, sagittally, or at any angle that is thought to be optimal for analysis. However, assimilating the information presented in the 2D slice images to gain quantitative and informative 3D expression patterns is challenging. Even if expression levels are presented as voxels to give 3D expression clouds, it can be difficult to compare expression across individuals, and analysing such data requires significant expertise and imagination. In this talk, I will describe a new approach to examining histology slices, in which the user defines the brain structure of interest by drawing curves around it on each slice in a set and specifies the depth of tissue from which to sample expression. The sampled 'curves' are then assembled into a 3D surface, which can be transformed onto a common reference frame for comparative analysis. I will show how other neuroscientists can obtain and use the tool, which is called Stalefish, to analyse their own image data with no (or minimal) changes to their slice preparation workflow.
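For a rough sense of the sampling step (a toy sketch, not Stalefish's implementation), the code below samples image intensity along a hypothetical user-drawn curve in each slice and stacks the samples into an "unwrapped" 2D map; the images, curve coordinates, and sampling density are all made up.

```python
# Toy sketch of the general idea (not Stalefish's implementation): sample
# expression intensity along a hand-drawn curve in each 2D slice, then stack
# the sampled curves across slices to form a 2D "unwrapped surface" map.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(4)
n_slices, h, w, n_samples = 10, 100, 100, 200
slices = rng.random((n_slices, h, w))          # placeholder ISH slice images

# A hypothetical user-drawn curve (here, an arc) re-used for every slice;
# in practice each slice would get its own curve.
t = np.linspace(0, np.pi, n_samples)
rows = 50 + 30 * np.sin(t)
cols = 20 + 60 * (t / np.pi)

surface = np.stack([
    map_coordinates(img, np.vstack([rows, cols]), order=1)   # bilinear sampling
    for img in slices
])                                             # shape (n_slices, n_samples)
print(surface.shape)
```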
Understanding the Function and Dynamics of Organelles through Imaging
Powerful new ways to image the internal structures and complex dynamics of cells are revolutionizing cell biology and biomedical research. In this talk, I will focus on how emerging fluorescent technologies are increasing spatio-temporal resolution dramatically, permitting simultaneous multispectral imaging of multiple cellular components. In addition, results will be discussed from whole-cell milling using focused ion beam scanning electron microscopy (FIB-SEM), which reconstructs the entire cell volume at 4 nm voxel resolution. Using these tools, it is now possible to begin constructing an “organelle interactome”, describing the interrelationships of different cellular organelles as they carry out critical functions. The same tools are also revealing new properties of organelles and their trafficking pathways, and how disruptions of their normal functions due to genetic mutations may contribute to important diseases.
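As a purely illustrative sketch (not taken from the talk), the code below shows one simple way an "organelle interactome" could be tabulated from a labeled segmentation volume: counting face-adjacent voxel pairs whose labels differ; the volume and labels are synthetic placeholders.

```python
# Illustrative sketch (not from the talk): given a 3D volume where each voxel
# carries an organelle label from a FIB-SEM segmentation, count face-adjacent
# voxel pairs with different labels as a crude "contact" measure between
# organelle types, one simple way to tabulate an organelle interactome.
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
labels = rng.integers(0, 4, size=(40, 40, 40))   # 0 = cytosol, 1-3 = organelles

contacts = Counter()
for axis in range(3):
    # Pairs of neighboring voxels along this axis.
    a = np.take(labels, range(labels.shape[axis] - 1), axis=axis)
    b = np.take(labels, range(1, labels.shape[axis]), axis=axis)
    mask = (a != b) & (a > 0) & (b > 0)           # ignore cytosol background
    for x, y in zip(a[mask], b[mask]):
        contacts[(int(min(x, y)), int(max(x, y)))] += 1

print(contacts)  # e.g. counts for label pairs (1, 2), (1, 3), (2, 3)
```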
Keynote talk: Imaging Interacting Organelles to Understand Metabolic Homeostasis
Examining speech disfluency through the analysis of grey matter densities in 5-year-olds using voxel-based morphometry
FENS Forum 2024