
Selective Attention


Discover seminars, jobs, and research tagged with selective attention across World Wide.
16 curated items: 9 seminars, 4 positions, 3 ePosters
Position

Prof. Edmund Wascher / Dr. Laura-Isabelle Klatt

Leibniz Research Centre for Working Environment and Human Factors
Dortmund, Germany
Dec 5, 2025

We are seeking to fill a fully funded PhD position (75% TV-L 13, state employees salary scheme) in cognitive neuroscience. The successful applicant will contribute to a project investigating selective attention and working memory processes in a multisensory context. In particular, we are interested in how the auditory and visual systems interact during the deployment of attention in multisensory environments, and how audio-visual information is integrated. To answer these research questions, we primarily use EEG in combination with cutting-edge analysis methods (e.g., multivariate pattern classification). Beyond that, the application of eye-tracking or (functional) MRI is possible within the project. Your responsibilities will include conducting (EEG) experiments, analysing data, preparing manuscripts for publication in peer-reviewed journals, and presenting scientific results at national and international conferences. Official job ad: https://www.ifado.de/ifadoen/careers/current-job-offers/#job3

Position

Thomas Nowotny

University of Sussex
University of Sussex, Falmer, Brighton BN1 9QJ
Dec 5, 2025

You will develop novel active AI algorithms inspired by the rapid and robust learning of insects within the £1.2m EPSRC International Centre-to-Centre Collaboration project “ActiveAI: active learning and selective attention for rapid, robust and efficient AI”, and will work in collaboration with the University of Sheffield and world-leading neuroscientists in Australia. Your primary role will be to develop a new class of ActiveAI controllers for problems in which insects excel but deep learning methods struggle. These problems have one or more of the following characteristics: (i) learning must occur rapidly, (ii) learning samples are few or costly, (iii) computational resources are limited, and (iv) the learning problem changes over time. Insects deal with such complex tasks robustly despite limited computational power because learning is an active process emerging from the interaction of evolved brains, bodies and behaviours. Through a virtuous cycle of modelling and experiments, you will develop insect-inspired models in which behavioural strategies and specialised sensors actively structure sensory input while selective attention drives learning to the most salient information. The cycle of modelling and experiments will be achieved through field work in both Sussex and Australia.

Seminar · Neuroscience

Intrinsic timescales in the visual cortex change with selective attention and reflect spatial connectivity

Attempto Prize Awardee: Roxana Zeraati
IMPRS-MMFD, MPI-BC & University of Tübingen
Oct 30, 2024
Seminar · Neuroscience · Recording

How bilingualism modulates the neural mechanisms of selective attention

Mirjana Bozic
Department of Psychology, University of Cambridge
Jan 31, 2022

Learning and using multiple languages places considerable demands on our cognitive system, and has been shown to modulate the mechanisms of selective attention in both children and adults. Yet the nature of these adaptive changes is still not entirely clear. One possibility is that bilingualism boosts the capacity for selective attention; another is that it leads to a different distribution of this finite resource, aimed at supporting optimal performance under the increased processing demands. I will present a series of studies investigating the nature of modifications of selective attention in bilingualism. Using behavioural and neuroimaging techniques, our data confirm that bilingualism modifies the neural mechanisms of selective attention even in the absence of behavioural differences between monolinguals and bilinguals. They further suggest that, rather than enhanced attentional capacity, these neuroadaptive modifications appear to reflect a redistribution of that capacity, arguably aimed at economising the available resources to support optimal behavioural performance.

Seminar · Neuroscience · Recording

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard
Oct 25, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
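The mechanics described above can be illustrated schematically. The following is a toy Python sketch, not the GS6 implementation: guidance values are invented, the five sources are weighted equally, and a unit-step quitting signal stands in for the diffusion dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items = 8
# Hypothetical guidance maps (one value per display item); the names
# follow the five sources in the abstract, the values are made up.
guidance = {
    "top_down": rng.random(n_items),
    "bottom_up": rng.random(n_items),
    "history": rng.random(n_items),
    "reward": rng.random(n_items),
    "scene": rng.random(n_items),
}
weights = {name: 1.0 for name in guidance}  # assumed equal weighting

# Priority map: weighted sum of the guidance sources.
priority = sum(weights[name] * g for name, g in guidance.items())

# Attention is deployed in priority order; search terminates when an
# accumulating quitting signal reaches a threshold (target-absent response).
target, threshold, quit_signal = 5, 4.0, 0.0
order = np.argsort(priority)[::-1]  # highest priority first
found = False
for item in order:
    if item == target:       # stand-in for matching against the template
        found = True
        break
    quit_signal += 1.0       # each rejected distractor feeds the signal
    if quit_signal >= threshold:
        break
```

In the full model, the per-item recognition step is a diffusion process and several items diffuse asynchronously in parallel; here it is collapsed into a single comparison to keep the priority-map logic visible.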

Seminar · Neuroscience

Understanding the role of prediction in sensory encoding

Jason Mattingley
Monash Biomedical Imaging
Jul 28, 2021

At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding.

Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.

Seminar · Neuroscience · Recording

Active sleep in flies: the dawn of consciousness

Bruno van Swinderen
University of Queensland
Jul 18, 2021

The brain is a prediction machine. Yet the world is never entirely predictable, for any animal. Unexpected events are surprising and this typically evokes prediction error signatures in animal brains. In humans such mismatched expectations are often associated with an emotional response as well. Appropriate emotional responses are understood to be important for memory consolidation, suggesting that valence cues more generally constitute an ancient mechanism designed to potently refine and generalize internal models of the world and thereby minimize prediction errors. On the other hand, abolishing error detection and surprise entirely is probably also maladaptive, as this might undermine the very mechanism that brains use to become better prediction machines. This paradoxical view of brain functions as an ongoing tug-of-war between prediction and surprise suggests a compelling new way to study and understand the evolution of consciousness in animals. I will present approaches to studying attention and prediction in the tiny brain of the fruit fly, Drosophila melanogaster. I will discuss how an ‘active’ sleep stage (termed rapid eye movement – REM – sleep in mammals) may have evolved in the first animal brains as a mechanism for optimizing prediction in motile creatures confronted with constantly changing environments. A role for REM sleep in emotional regulation could thus be better understood as an ancient sleep function that evolved alongside selective attention to maintain an adaptive balance between prediction and surprise. This view of active sleep has some interesting implications for the evolution of subjective awareness and consciousness.

Seminar · Neuroscience · Recording

Decoding the neural processing of speech

Tobias Reichenbach
Friedrich-Alexander-University
Mar 22, 2021

Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech-recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained by applying machine learning to neuroimaging data of humans listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech, such as pitch, amplitude fluctuations and linguistic surprisal, to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to pitch, and at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures can be used to diagnose disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnosis of brain disorders.
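A statistical model of the kind described, relating a speech feature such as the amplitude envelope to a neural measurement, can be sketched as a lagged linear (temporal-response-function-style) ridge regression. The envelope, response kernel, noise level and regularisation value below are all invented for illustration; this is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a speech amplitude envelope and one simulated EEG channel
# generated by convolving the envelope with an assumed response kernel.
n, lags = 1000, 20
envelope = rng.random(n)
true_trf = np.exp(-np.arange(lags) / 5.0)            # assumed kernel
eeg = np.convolve(envelope, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Lagged design matrix: column k holds the envelope delayed by k samples.
X = np.column_stack([np.roll(envelope, k) for k in range(lags)])
X[:lags] = 0.0  # discard samples that wrapped around

# Ridge-regularised least squares: w = (X'X + lam*I)^-1 X'y
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)

# The estimated response function should resemble the true kernel.
r = np.corrcoef(w, true_trf)[0, 1]
```

The same forward-model machinery generalises from the envelope to other features mentioned in the abstract (pitch, linguistic surprisal) by adding further lagged feature columns to the design matrix.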

Seminar · Neuroscience · Recording

The When, Where and What of visual memory formation

Brad Wyble
Pennsylvania State University
Feb 11, 2021

The eyes send a continuous stream of about two million nerve fibers to the brain, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt an understanding of how the visual system makes on-the-fly decisions about how to encode that information. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for temporal segmentation of continuous input. The conclusion of this work is that the visual system has mechanisms for rapidly creating brief episodes of attention that highlight important moments in time, and also separates each episode from temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field and the neural mechanisms that provide enhancement of targets and suppression of highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also highlights how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O’Donnell, Wyble in Prep) provides a mechanism for selectively encoding specific attributes (i.e. color, shape, category) of a visual object to be stored in a consolidated memory representation. The binding pool is akin to a holographic memory system that layers representations of select latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders. Future work looks toward combining these models into a coherent framework for understanding the full measure of on-the-fly attentional mechanisms and how they improve learning.
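The abstract likens the binding pool to a holographic memory that superposes attribute representations and binds them to token placeholders. A toy sketch of that idea using holographic reduced representations (circular-convolution binding) is below; this formalism is an assumption chosen for illustration, not the model's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2048  # dimensionality of the shared pool


def cconv(a, b):
    # Circular convolution via FFT: binds two vectors together.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))


def ccorr(a, b):
    # Circular correlation: approximately unbinds a from b.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))


def vec():
    # Random unit-variance vector standing in for a learned representation.
    return rng.standard_normal(d) / np.sqrt(d)


# Token placeholders and feature (attribute) vectors, all random stand-ins.
token1, token2 = vec(), vec()
red, square, blue, circle = vec(), vec(), vec(), vec()

# One shared pool superposes the token-feature bindings of two objects.
pool = cconv(token1, red + square) + cconv(token2, blue + circle)

# Probing the pool with a token noisily retrieves that object's features.
probe = ccorr(token1, pool)
sims = {name: probe @ v for name, v in
        [("red", red), ("square", square), ("blue", blue), ("circle", circle)]}
```

Probing with `token1` yields high similarity to its own attributes (red, square) and near-zero similarity to the other object's, capturing the layered, superposed character the abstract attributes to the binding pool.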

Seminar · Neuroscience

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard Medical School
Feb 3, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual field” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.

ePoster

A mechanism for selective attention in biophysically realistic Daleian spiking neural networks

Martin Vinck, Marius Schneider

COSYNE 2025

ePoster

Decoding of selective attention to speech in CI patients using linear and non-linear methods

Constantin Jehn, Adrian Kossmann, Anja Hahne, Niki Vavatzanidis, Tobias Reichenbach

FENS Forum 2024

ePoster

The frontal areas involved in nonspatial visual selective attention and retrieval in the human brain

Kristina Drudik, Michael Petrides

FENS Forum 2024