ePoster

A new open-source non-verbal semantic memory test reveals intracranial topography of category representation

Da Zhang, Edwina Tran, Jet Vonk, Kaitlin Casaletto, Maria Luisa Gorno-Tempini, Edward Chang, Jon Kleen
FENS Forum 2024 (2024)
Messe Wien Exhibition & Congress Center, Vienna, Austria


Abstract

The neural mechanisms of semantic processing are difficult to delineate from lexical, acoustic, and speech-motor neural activity, since most relevant neuropsychological tests require language. Furthermore, the distinct signal properties of different cortical semantic processing regions may require a more comprehensive analysis of neural signal components than standard metrics (BOLD, high gamma) provide. We developed the Visual-based Semantic Association Task (ViSAT), which evolved from predecessors (Pyramids & Palm Trees Test; Camels & Cactus Test) and is scaled to 100 unique trials using real-life color pictures that avoid prior confounds. We crowdsourced normative data (Amazon MTurk workers; N=54), validated ViSAT in healthy controls (N=24), and administered it to patients with epilepsy, including five undergoing intracranial recordings. We evaluated trial image similarity with a deep neural network (ResNet-18 with image2vec embeddings) and assessed single-electrode semantic encoding using a novel AI transformer approach. Iterative task development achieved high consensus on ViSAT trial answers (>90% consensus in 91% of trials), and non-semantic visual features of the trial images did not drive consensus answers (p=0.577). Electrode locations exhibiting semantic category encoding (95% C.I. permutation test) preliminarily localized to middle/superior temporal and middle basal temporal/temporal pole regions ("manipulable" category), and to lateral/posterior basal temporal cortex and the right angular gyrus ("animal" category). In conclusion, a non-verbal task revealed multiple cortical regions encoding semantic information through diverse neural signal properties. Because ViSAT is newly validated, repeatable/longitudinal (four balanced trial sets), provides both accuracy and reaction time metrics, and is free, we propose it as a strong alternative to previous non-verbal semantic memory tests.
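For readers unfamiliar with the 95% C.I. permutation criterion mentioned above, the sketch below illustrates the general idea on synthetic data: compare an observed between-category difference in a per-electrode neural feature against a null distribution built from shuffled category labels. This is a minimal, hypothetical illustration, not the authors' analysis code; the function name, data, and two-category setup are assumptions.

```python
import numpy as np

def permutation_encoding_test(feature, labels, n_perm=1000, alpha=0.05, seed=0):
    """Illustrative permutation test: does a neural feature differ between two
    semantic categories (e.g. 'manipulable' vs. 'animal')?

    Compares the observed between-category mean difference to the
    (1 - alpha) percentile interval of a null distribution obtained by
    shuffling category labels. Returns (observed, lower, upper, significant).
    """
    rng = np.random.default_rng(seed)
    feature = np.asarray(feature, dtype=float)
    labels = np.asarray(labels)

    # Observed statistic: mean feature difference between the two categories.
    observed = feature[labels == 1].mean() - feature[labels == 0].mean()

    # Null distribution: same statistic under random label permutations.
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(labels)
        null[i] = feature[perm == 1].mean() - feature[perm == 0].mean()

    # 95% interval of the null (alpha = 0.05 by default).
    lower, upper = np.percentile(null, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    significant = not (lower <= observed <= upper)
    return observed, lower, upper, significant
```

An electrode whose observed statistic falls outside the null interval would be flagged as encoding the semantic category, analogous to the criterion described in the abstract.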

Unique ID: fens-24/open-source-non-verbal-semantic-memory-09e38338