Attention
Using Fast Periodic Visual Stimulation to measure cognitive function in dementia
Fast periodic visual stimulation (FPVS) has emerged as a promising tool for assessing cognitive function in individuals with dementia. This technique leverages electroencephalography (EEG) to measure brain responses to rapidly presented visual stimuli, offering a non-invasive and objective method for evaluating a range of cognitive functions. Unlike traditional cognitive assessments, FPVS does not rely on behavioural responses, making it particularly suitable for individuals with cognitive impairment. In this talk I will highlight a series of studies that have demonstrated its ability to detect subtle deficits in recognition memory, visual processing and attention in dementia patients using EEG in the lab, at home and in clinic. The method is quick, cost-effective, and scalable, utilizing widely available EEG technology. FPVS holds significant potential as a functional biomarker for early diagnosis and monitoring of dementia, paving the way for timely interventions and improved patient outcomes.
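In frequency-tagging designs of this kind, the brain response is typically quantified as a signal-to-noise ratio in the EEG spectrum: the amplitude at the stimulation frequency divided by the mean amplitude of neighbouring frequency bins. A minimal sketch of that computation (function and parameter names are my own illustration, not the speaker's pipeline, which would also involve epoching, averaging, and artifact rejection):

```python
import numpy as np

def fpvs_snr(eeg, fs, f_target, n_neighbours=10, skip=1):
    """Signal-to-noise ratio of a frequency-tagged EEG response.

    SNR = amplitude at the target frequency bin divided by the mean
    amplitude of `n_neighbours` bins on each side, excluding `skip`
    immediately adjacent bins. Assumes the target is not at the edge
    of the spectrum.
    """
    amp = np.abs(np.fft.rfft(eeg)) / len(eeg)          # amplitude spectrum
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - f_target)))     # bin nearest target
    lo = np.arange(idx - skip - n_neighbours, idx - skip)
    hi = np.arange(idx + skip + 1, idx + skip + 1 + n_neighbours)
    noise = amp[np.concatenate([lo, hi])].mean()       # local noise estimate
    return amp[idx] / noise
```

With a tagged response present, the SNR at the stimulation frequency should stand far above the surrounding noise floor; in published FPVS work this ratio is often computed per electrode and per harmonic.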
A Novel Neurophysiological Approach to Assessing Distractibility within the General Population
Vulnerability to distraction varies across the general population and significantly affects one’s capacity to stay focused on and successfully complete the task at hand, whether at school, on the road, or at work. In this talk, I will begin by discussing how distractibility is typically assessed in the literature and introduce our innovative ERP approach to measuring it. Since distractibility is a cardinal symptom of ADHD, I will present the most widely used paper-and-pencil ADHD screening tool for the general population as external validation. Following that, I will present the Load Theory of Attention and explain how we used perceptual load to test the reliability of our neural marker of distractibility. Finally, I will highlight potential future applications of this marker in clinical and educational settings.
Neural markers of lapses in attention during sustained ‘real-world’ task performance
Lapses in attention are ubiquitous and, unfortunately, the cause of many tragic accidents. One potential solution may be to develop assistance systems which can use objective, physiological signals to monitor attention levels and predict a lapse in attention before it occurs. As it stands, it is unclear which physiological signals are the most reliable markers of inattention, and even less is known about how reliably they will work in a more naturalistic setting. My project aims to address these questions across two experiments: a lab-based experiment and a more ‘real-world’ experiment. In this talk I will present the findings from my lab experiment, in which we combined EEG and pupillometry to detect markers of inattention during two computerised sustained attention tasks. I will then present the methods for my second, more ‘naturalistic’ experiment in which we use the same methods (EEG and pupillometry) to examine whether these markers can still be extracted from noisier data.
Gender, trait anxiety and attentional processing in healthy young adults: is a moderated moderation theory possible?
Three studies conducted as part of PhD work (UNIL) aimed at providing evidence on potential gender differences in how trait anxiety and executive control biases affect behavioral efficacy. Non-clinical samples of young adult males and females performed non-emotional tasks assessing basic attentional functioning (Attention Network Test – Interactions, ANT-I), sustained attention (Test of Variables of Attention, TOVA), and visual recognition abilities (Object in Location Recognition Task, OLRT). Results confirmed the intricate nature of the relationship between gender and trait anxiety in healthy individuals through the lens of their impact on processing efficacy in males and females. The possibility of a gendered theory of trait anxiety biases is discussed.
Enhancing Qualitative Coding with Large Language Models: Potential and Challenges
Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the GPT-3.5 Turbo API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it within an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
Diagnosing dementia using Fastball neurocognitive assessment
Fastball is a novel, fast, passive biomarker of cognitive function that uses cheap, scalable electroencephalography (EEG) technology. It is sensitive to early dementia; independent of language, education, effort, and anxiety; and usable in any setting, including patients’ homes. It can capture a range of cognitive functions, including semantic memory, recognition memory, attention, and visual function. We have shown that Fastball is sensitive to cognitive dysfunction in Alzheimer’s disease and Mild Cognitive Impairment, with data collected in patients’ homes using low-cost portable EEG. We are now preparing for significant scale-up and the validation of Fastball in primary and secondary care.
Understanding and Mitigating Bias in Human & Machine Face Recognition
With the increasing use of automated face recognition (AFR) technologies, it is important to consider whether these systems perform not only accurately, but also equitably, or without “bias”. Despite rising public, media, and scientific attention to this issue, the sources of bias in AFR are not fully understood. This talk will explore how human cognitive biases may impact our assessments of performance differentials in AFR systems and our subsequent use of those systems to make decisions. We’ll also show how, if we adjust our definition of what a “biased” AFR algorithm looks like, we may be able to create algorithms that optimize the performance of a human+algorithm team, not simply the algorithm itself.
The role of top-down mechanisms in gaze perception
Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus averted gaze. On the basis of several recent lines of evidence, I propose that low task relevance of eye features will prevent differences in processing between gaze directions, because the encoding of the eyes will be superficial; differential processing of direct and averted gaze will only occur when the eyes are relevant to the task. To assess the impact of task relevance on the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of eye features is crucial to the direct gaze effect and propose to revisit previous studies that questioned the existence of the direct gaze effect.
This claim will be illustrated with different past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception, in order to understand the impact of context on gaze perception and, more broadly, on social cognition.
The diachronic account of attentional selectivity
Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate but has also led to substantial conceptual confusion (to the point where some attention researchers have proposed abandoning the term ‘attention’ altogether). As an alternative, we offer a “diachronic” framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes: brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive processing (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as “attention”.
What are the consequences of directing attention within working memory?
The role of attention in working memory remains controversial, but there is some agreement on the notion that the focus of attention holds mnemonic representations in a privileged state of heightened accessibility in working memory, resulting in better memory performance for items that receive focused attention during retention. Closely related, representations held in the focus of attention are often observed to be robust and protected from degradation caused by either perceptual interference (e.g., Makovski & Jiang, 2007; van Moorselaar et al., 2015) or decay (e.g., Barrouillet et al., 2007). Recent findings indicate, however, that representations held in the focus of attention are particularly vulnerable to degradation, and thus, appear to be particularly fragile rather than robust (e.g., Hitch et al., 2018; Hu et al., 2014). The present set of experiments aims at understanding the apparent paradox of information in the focus of attention having a protected vs. vulnerable status in working memory. To that end, we examined the effect of perceptual interference on memory performance for information that was held within vs. outside the focus of attention, across different ways of bringing items in the focus of attention and across different time scales.
What the fluctuating impact of memory load on decision speed tells us about thinking
Previous work with complex memory span tasks, in which simple choice decisions are imposed between presentations of to-be-remembered items, shows that these secondary tasks reduce memory span. It is less clear how reconfiguring and maintaining various amounts of information affects decision speed. We documented and replicated a non-linear effect of accumulating memory items on concurrent processing judgments, showing that this pattern could be made linear by introducing "lead-in" processing judgments prior to the start of the memory list. With lead-in judgments, there was a large and consistent cost to processing response times with the introduction of the first item in the memory list, which increased gradually per item as the list accumulated. However, once presentation of the list was complete, decision responses sped up rapidly: within a few seconds, decisions were at least as fast as when remembering a single item. This pattern of findings is inconsistent with the idea that merely holding information in mind conflicts with attention-demanding decision tasks. Instead, it is possible that reconfiguring memory items for responding provokes conflict between memory and processing in complex span tasks.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find the capacity of VWM is well defined by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: It provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
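The sample-size model referred to above makes a simple, essentially parameter-free prediction: if a fixed pool of evidence samples is divided among the N items in a display, each item is represented by roughly K/N samples, so the standard deviation of recall error grows as √N. A toy Monte Carlo sketch of that prediction (assumptions and names are mine; the talk's actual modelling uses the circular diffusion framework and attention-weighted extensions):

```python
import math
import random

def simulate_recall_sd(set_size, total_samples=64, noise_sd=1.0,
                       n_trials=20000, rng=random.Random(1)):
    """Monte Carlo estimate of recall-error SD under a sample-size model.

    Each of `set_size` items receives total_samples // set_size noisy
    evidence samples; the remembered value is their mean, so its SD is
    noise_sd / sqrt(samples per item) = noise_sd * sqrt(set_size / total).
    """
    k = total_samples // set_size          # samples allotted to each item
    errors = []
    for _ in range(n_trials):
        samples = [rng.gauss(0.0, noise_sd) for _ in range(k)]
        errors.append(sum(samples) / k)    # error of the mean estimate
    mean_err = sum(errors) / n_trials
    return math.sqrt(sum((e - mean_err) ** 2 for e in errors) / n_trials)
```

Under this scheme, quadrupling the set size should double the recall-error SD, which is the kind of set-size effect the model captures without free parameters.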
Beyond visual search: studying visual attention with multitarget visual foraging tasks
Visual attention refers to a set of processes allowing selection of relevant and filtering out of irrelevant information in the visual environment. A large amount of research on visual attention has involved visual search paradigms, where observers are asked to report whether a single target is present or absent. However, recent studies have revealed that these classic single-target visual search tasks only provide a snapshot of how attention is allocated in the visual environment, and that multitarget visual foraging tasks may capture the dynamics of visual attention more accurately. In visual foraging, observers are asked to select multiple instances of multiple target types, as fast as they can. A critical question in foraging research concerns the factors driving the next target selection. Most likely, this requires two steps: (1) identifying a set of candidates for the next selection, and (2) selecting the best option among these candidates. After briefly describing the advantages of visual foraging over visual search, I will review recent visual foraging studies testing the influence of several manipulations (e.g., target crypticity, number of items, selection modality) on foraging behaviour. Overall, these studies revealed that the next target selection during visual foraging is determined by the competition between three factors: target value, target proximity, and priming of features. I will explain how the analysis of individual differences in foraging behaviour can provide important information, with the idea that individuals show default internal biases toward value, proximity, and priming that determine their search strategy and behaviour.
The problem of power in single-case neuropsychology
Case-control comparisons are a gold standard method for diagnosing and researching neuropsychological deficits and dissociations at the single-case level. These statistical tests, developed by John Crawford and collaborators, provide quantitative criteria for the classical concepts of deficit, dissociation and double-dissociation. Much attention has been given to the control of Type I (false positive) errors for these tests, but far less to the avoidance of Type II (false negative) errors; that is, to statistical power. I will describe the origins and limits of statistical power for case-control comparisons, showing that there are hard upper limits on power, which have important implications for the design and interpretation of single-case studies. My aim is to stimulate discussion of the inferential status of single-case neuropsychological evidence, particularly with respect to contemporary ideals of open science and study preregistration.
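The standard case-control comparison here is Crawford and Howell's modified t-test, and the power ceiling the abstract alludes to follows directly from it: even with unlimited controls, a single case score remains one noisy draw from its own distribution. A stdlib-only sketch under my own assumptions (one-tailed test for a deficit, case drawn from a population shifted by Δ control SDs):

```python
import math
from statistics import NormalDist

_std = NormalDist()  # standard normal, for the asymptotic calculation

def crawford_t(case_score, control_mean, control_sd, n_controls):
    """Crawford & Howell (1998) modified t statistic: compares one case
    to a small control sample; referred to the t distribution with
    n_controls - 1 degrees of freedom."""
    se = control_sd * math.sqrt((n_controls + 1) / n_controls)
    return (case_score - control_mean) / se

def power_ceiling(deficit_sd, alpha=0.05):
    """Upper limit on power as the control sample grows without bound.
    In that limit t -> z = (case - mu) / sigma ~ N(-deficit_sd, 1), so
    power = P(z < z_crit) = Phi(z_crit + deficit_sd)."""
    z_crit = _std.inv_cdf(alpha)            # one-tailed criterion, ~ -1.645
    return _std.cdf(z_crit + deficit_sd)
```

For a sizeable deficit of two control SDs, the ceiling works out to only about 64% power at α = .05 one-tailed, no matter how many controls are tested; hard limits of this kind are what make the design and interpretation questions raised in the talk pressing.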
Markers of brain connectivity and sleep-dependent restoration: basic research and translation into clinical populations
The human brain is a heavily interconnected structure giving rise to complex functions. While brain functionality is mostly revealed during wakefulness, the sleeping brain might offer another view into physiological and pathological brain connectivity. Furthermore, there is a large body of evidence supporting the idea that sleep mediates plastic changes in brain connectivity. Although brain plasticity depends on environmental input, which is provided in the waking state, disconnection during sleep might be necessary for integrating new information into existing knowledge while at the same time restoring brain efficiency. In this talk, I will present structural, molecular, and electrophysiological markers of brain connectivity and sleep-dependent restoration that we have evaluated using Magnetic Resonance Imaging and electroencephalography in a healthy population. In a second step, I will show how we translated these findings into two clinical populations in which alterations in brain connectivity have been described: the neuropsychiatric disorder attention-deficit/hyperactivity disorder (ADHD) and the neurologic disorder thalamic ischemic stroke.