Latest

Seminar · Psychology

Enhancing Qualitative Coding with Large Language Models: Potential and Challenges

Kim Uittenhove & Olivier Mucchiut
AFC Lab / University of Lausanne
Oct 16, 2023

Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the ChatGPT 3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, focusing specifically on data derived from centenarians' experiences during the Covid-19 pandemic, as well as on a systematic review of the centenarian literature. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders at hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to refine the prototype and integrate it with an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on whether and how to integrate these models into this domain.
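
The abstract does not detail the coding pipeline itself; purely as a rough, hypothetical sketch, the snippet below shows how a single interview excerpt might be sent to the gpt-3.5-turbo model through the OpenAI Python client and coded against a predefined list of themes. The theme labels, prompt wording, and helper name code_excerpt are illustrative assumptions, not taken from the SWISS100 study.

# Illustrative sketch only: assumes the openai Python package (v1 client)
# and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical theme labels; a real codebook would come from the researchers.
THEMES = ["social isolation", "health concerns", "resilience", "family contact"]

def code_excerpt(excerpt: str) -> str:
    """Ask the model to assign one or more themes to a qualitative excerpt."""
    prompt = (
        "You are assisting with qualitative thematic coding.\n"
        f"Possible themes: {', '.join(THEMES)}.\n"
        "Return the matching theme(s) and a one-sentence justification.\n\n"
        f"Excerpt: {excerpt}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output eases comparison with human coders
    )
    return response.choices[0].message.content

print(code_excerpt(
    "I could not see my great-grandchildren for months, "
    "but we spoke on the phone every week."
))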

Seminar · Psychology

The future of neuropsychology will be open, transdiagnostic, and FAIR - why it matters and how we can get there

Valentina Borghesani
University of Geneva
Nov 30, 2022

Cognitive neuroscience has witnessed great progress since modern neuroimaging embraced an open science framework, with the adoption of shared principles (Wilkinson et al., 2016), standards (Gorgolewski et al., 2016), and ontologies (Poldrack et al., 2011), as well as practices of meta-analysis (Yarkoni et al., 2011; Dockès et al., 2020) and data sharing (Gorgolewski et al., 2015). However, while functional neuroimaging data provide correlational maps between cognitive functions and activated brain regions, their usefulness in determining causal links between specific brain regions and given behaviors or functions is disputed (Weber et al., 2010; Siddiqi et al., 2022). By contrast, neuropsychological data enable causal inference, highlighting critical neural substrates and opening a unique window into the inner workings of the brain (Price, 2018). Unfortunately, the adoption of open science practices in clinical settings is hampered by several ethical, technical, economic, and political barriers, and as a result, open platforms enabling access to and sharing of clinical (meta)data are scarce (e.g., Larivière et al., 2021). We are working with clinicians, neuroimagers, and software developers to build an open-source platform for the storage, sharing, synthesis, and meta-analysis of human clinical data, in the service of the clinical and cognitive neuroscience community, so that the future of neuropsychology can be transdiagnostic, open, and FAIR. We call it neurocausal (https://neurocausal.github.io).

Seminar · Psychology · Recording

What the fluctuating impact of memory load on decision speed tells us about thinking

Candice C. Morey
Cardiff University
Jul 1, 2021

Previous work with complex memory span tasks, in which simple choice decisions are imposed between presentations of to-be-remembered items, shows that these secondary tasks reduce memory span. It is less clear how reconfiguring and maintaining various amounts of information affects decision speeds. We documented and replicated a non-linear effect of accumulating memory items on concurrent processing judgments, showing that this pattern could be made linear by introducing "lead-in" processing judgments prior to the start of the memory list. With lead-in judgments, there was a large and consistent cost to processing response times upon the introduction of the first item in the memory list, and this cost increased gradually with each further item as the list accumulated. However, once presentation of the list was complete, decision responses sped up rapidly: within a few seconds, decisions were at least as fast as when remembering a single item. This pattern of findings is inconsistent with the idea that merely holding information in mind conflicts with attention-demanding decision tasks. Instead, it is possible that reconfiguring memory items for responding provokes conflict between memory and processing in complex span tasks.
