data sharing
The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there
Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in establishing causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). By contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of Open Science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data remain scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to build an open-source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data, in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it NeuroCausal (https://neurocausal.github.io).
Algorithmic advances in face matching: Stability of tests in atypical groups
Face matching tests have traditionally been developed to assess human face perception in the neurotypical range, but the methods underlying their development often make these measures difficult to apply in atypical populations (developmental prosopagnosics, super-recognizers) because item difficulty is not adjusted. We recently presented the development of the Oxford Face Matching Test (OFMT), a measure that bases individual item difficulty on the algorithmically derived similarity of the presented stimuli. The measure can be administered online or in the laboratory and shows good discriminability and high test-retest reliability in neurotypical groups. It also shows good validity in separating atypical groups at both ends of the ability spectrum. In this talk, I examine the stability of the OFMT and other traditionally used measures in atypical groups. Beyond the theoretical significance of determining whether test reliability is equivalent in atypical populations, this question matters practically given the concerns involved in retesting the same participants across different lab groups. Theoretical and practical implications for further test development and data sharing are discussed.
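To make the difficulty-calibration idea concrete, here is a minimal Python sketch of how algorithmic similarity between two face images could be mapped to an item-difficulty score. This is an illustration under stated assumptions, not the OFMT's actual pipeline: the embeddings, the difficulty mapping, and the example pairs are all hypothetical stand-ins for the output of some face-recognition model.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def item_difficulty(sim: float, same_identity: bool) -> float:
    """Map algorithmic similarity to a 0-1 difficulty score.

    Hypothetical mapping for illustration: for 'different identity'
    pairs, more similar faces are harder to tell apart; for 'same
    identity' pairs, less similar images of the same face are harder
    to match.
    """
    return sim if not same_identity else 1.0 - sim

# Example: score and rank hypothetical face pairs by difficulty so
# that test items can be sampled evenly across the difficulty range.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(6, 128))   # stand-in for model embeddings
pairs = [(0, 1, False), (2, 3, False), (4, 5, True)]  # (idx_a, idx_b, same_identity)
scored = [
    (i, j, item_difficulty(cosine_similarity(embeddings[i], embeddings[j]), same))
    for i, j, same in pairs
]
scored.sort(key=lambda t: t[2])          # easiest to hardest
```

Calibrating items this way lets a test span the full difficulty range rather than clustering around the neurotypical mean, which is what makes it usable at both ends of the ability spectrum.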