
Eye Movements

Discover seminars, jobs, and research tagged with eye movements across World Wide.
34 curated items: 28 Seminars, 4 ePosters, 2 Positions
Position

Dr Katrin Franke

Center for Integrative Neuroscience, Tübingen University
Tübingen, Germany
Dec 5, 2025

In January 2021, the collaborative research center “Robust Vision – Inference Principles and Neural Mechanisms” (CRC 1233) started its second funding period. Funded by the German Research Foundation (DFG), the CRC brings together more than 20 PIs who jointly address the question of why biological visual systems are so remarkably robust, combining expertise in experimental and computational neuroscience as well as in machine learning and computer vision. As part of this interdisciplinary research initiative, applications are sought for one PhD student position on the topic of “Impacts of eye movements on visual processing: from retina to perception” in the group of Katrin Franke at the Institute for Ophthalmic Research and the Center for Integrative Neuroscience (CIN).

Eye movements result in substantial retinal image shifts. These shifts create continuous spatio-temporal modulations of neural activity, from the photoreceptors all the way to downstream areas. In this project, we will investigate the impact of eye movements on visual processing in the mammalian early visual system. We expect to identify neural mechanisms underlying the robust encoding of visual information in the face of continuous image shifts and to delineate the role of early sensory encoding in eye-movement-related perceptual phenomena.

The PhD student will be responsible for acquiring in vitro two-photon calcium and glutamate recordings from the mouse and primate retina and for the subsequent data analysis. The project will be conducted jointly with a PhD student based in Ziad Hafed’s lab who will focus on electrical recordings of neural activity in primate superior colliculus and V1. Candidates should have strong experimental skills, prior programming experience, and a background in neuroscience; experience with two-photon imaging will be considered an asset. The ability to work in an interdisciplinary team is essential.
The candidate will work at the CIN and the Institute for Ophthalmic Research under the supervision of Katrin Franke, in close collaboration with Ziad Hafed’s and Thomas Euler’s groups at Tübingen University. The PhD student will be enrolled in one of the PhD programs of the renowned Graduate Training Center of Neuroscience at Tübingen University. The position is immediately available, with funding for 3 years (and the option of a 1-year extension). We offer employment with a salary and social benefits based on the collective agreement for public service employees in the academic and science sector (TV-L). The CRC promotes gender equality and therefore particularly encourages female scientists to apply. Preference will be given to equally qualified candidates with disabilities.

Tübingen is a vibrant university city in the south of Germany. Besides the CIN, Tübingen is home to the Hertie Institute for Clinical Brain Research and several institutes of the Max Planck Society, among others. This allows for tremendous exposure to the latest advances in neuroscience, vision/robotics, human-computer interaction, brain-computer interfaces, and more, as well as opportunities for collaborative projects across labs and institutes.

Applications should include a CV, a statement of research motivation and experience, and the names of at least two referees. Please compile your application into a single PDF file and email it to katrin.franke@uni-tuebingen.de.

Position

Prof Laura Busse

LMU Munich
Munich, Germany
Dec 5, 2025

2 PhD positions as part of interdisciplinary collaborations are available in Laura Busse’s lab at the Faculty of Biology of the LMU Munich and Thomas Euler’s lab at the Center for Integrative Neuroscience in Tübingen. The fully funded positions are part of the DFG-funded Collaborative Research Center “Robust Vision: Inference Principles and Neural Mechanisms”. In the project, we will explore the visual input received by the mouse visual system under natural conditions and study how such input is processed along key stages of the early visual system. The project builds on Qiu et al. (2020, bioRxiv) and will include opportunities for recording the visual input encountered by freely behaving mice under naturalistic conditions, statistical analysis of the recorded video material, quantitative assessment of behavior, and measurements (2P calcium imaging / electrophysiology) of neural responses from mouse retina, visual thalamus, and primary visual cortex to naturalistic movies.

One of the positions will be placed in Thomas Euler’s lab (U Tuebingen), with a focus on the retinal aspects of the project. The complementary PhD position in Laura Busse’s lab (LMU Munich) will focus on central vision aspects; the two students will collaborate closely on the development of the recording hardware and the software framework for data analysis and modelling. Both positions offer a thriving scientific environment, structured PhD programs, and numerous opportunities for networking and exchange.

Interested candidates are welcome to establish contact via email to thomas.euler@cin.uni-tuebingen.de and busse@bio.lmu.de. More information about the labs can be found at https://eulerlab.de/ and https://visioncircuitslab.org/. For applications to Thomas Euler’s position within the project, see the instructions on the lab’s webpage (https://eulerlab.de/positions/). For applications to Laura Busse’s position, please visit the LMU Graduate School of Systemic Neuroscience (GSN, http://www.gsn.uni-muenchen.de). The deadline for applications is February 15.

Seminar · Neuroscience

The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing

Yoram Bonneh
Bar-Ilan
Jun 9, 2025
Seminar · Neuroscience · Recording

Altered grid-like coding in early blind people and the role of vision in conceptual navigation

Roberto Bottini
CIMeC, University of Trento
Mar 5, 2025
Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience, and Hertie Institute for Clinical Brain Research University of Tübingen
Feb 11, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience

Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects

Alan Kingstone
University of British Columbia
Nov 18, 2024
Seminar · Neuroscience

Imagining and seeing: two faces of prosopagnosia

Jason Barton
University of British Columbia
Nov 4, 2024
Seminar · Neuroscience

Sensory Consequences of Visual Actions

Martin Rolfs
Humboldt-Universität zu Berlin
Dec 7, 2023

We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.

Seminar · Neuroscience · Recording

From following dots to understanding scenes

Alexander Göttker
Giessen
May 1, 2023
Seminar · Neuroscience · Recording

Direction-selective ganglion cells in primate retina: a subcortical substrate for reflexive gaze stabilization?

Teresa Puthussery
University of California, Berkeley
Jan 22, 2023

To maintain a stable and clear image of the world, our eyes reflexively follow the direction in which a visual scene is moving. Such gaze stabilization mechanisms reduce image blur as we move in the environment. In non-primate mammals, this behavior is initiated by ON-type direction-selective ganglion cells (ON-DSGCs), which detect the direction of image motion and transmit signals to brainstem nuclei that drive compensatory eye movements. However, ON-DSGCs have not yet been functionally identified in primates, raising the possibility that the visual inputs that drive this behavior instead arise in the cortex. In this talk, I will present molecular, morphological and functional evidence for identification of an ON-DSGC in macaque retina. The presence of ON-DSGCs highlights the need to examine the contribution of subcortical retinal mechanisms to normal and aberrant gaze stabilization in the developing and mature visual system. More generally, our findings demonstrate the power of a multimodal approach to study sparsely represented primate RGC types.

Seminar · Neuroscience

Real-world scene perception and search from foveal to peripheral vision

Antje Nuthmann
Kiel University
Oct 23, 2022

A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.

Seminar · Neuroscience · Recording

A neural mechanism for terminating decisions

Gabriel Stine
Shadlen Lab, Columbia University
Sep 20, 2022

The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are well established in association cortex, but the site and mechanism of termination are unknown. Here, we elucidate a mechanism for termination by neurons in the primate superior colliculus. We recorded simultaneously from neurons in lateral intraparietal cortex (LIP) and the superior colliculus (SC) while monkeys made perceptual decisions, reported with eye movements. Single-trial analyses revealed distinct dynamics: LIP tracked the accumulation of evidence on each decision, and SC generated one burst at the end of the decision, occasionally preceded by smaller bursts. We hypothesized that the bursts manifest a threshold mechanism applied to LIP activity to terminate the decision. Focal inactivation of SC produced behavioral effects diagnostic of an impaired threshold sensor, requiring a stronger LIP signal to terminate a decision. The results reveal the transformation from deliberation to commitment.
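
The accumulate-to-threshold mechanism described in the abstract can be illustrated with a toy drift-diffusion simulation (a sketch of the general idea only; the drift, noise, and threshold values below are arbitrary assumptions, not the study's fitted parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_decision(drift, noise=1.0, threshold=25.0, dt=1.0, max_steps=2000):
    """Accumulate noisy evidence (an LIP-like ramp) until a threshold
    sensor (the hypothesized SC role) detects a bound crossing.
    Returns (choice, reaction time in steps)."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if abs(evidence) >= threshold:   # "burst": commit and terminate
            return (1 if evidence > 0 else -1), t
    return 0, max_steps                  # no commitment within the limit

# A higher threshold mimics an impaired threshold sensor (SC inactivation):
# a stronger accumulated signal is needed, so reaction times lengthen.
rts_control = [simulate_decision(0.5)[1] for _ in range(500)]
rts_impaired = [simulate_decision(0.5, threshold=40.0)[1] for _ in range(500)]
print(np.mean(rts_control), np.mean(rts_impaired))
```

In this toy version, raising the threshold reproduces the diagnostic behavioral signature described in the talk: decisions take longer to terminate even though the evidence-accumulation process itself is unchanged.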

Seminar · Psychology

Perception during visual disruptions

Grace Edwards & Lina Teichmann
NIH/NIMH, Laboratory of Brain & Cognition
Jun 12, 2022

Visual perception is experienced as continuous despite frequent disruptions in our visual environment. For example, internal events, such as saccadic eye movements, and external events, such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of the early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine if there is a set of mechanisms which support visual processing during visual disruptions in general.


Seminar · Neuroscience

The dynamics of temporal attention

Rachel Denison
Boston University
Nov 23, 2021

Selection is the hallmark of attention: processing improves for attended items but is relatively impaired for unattended items. It is well known that visual spatial attention changes sensory signals and perception in this selective fashion. In the work I will present, we asked whether and how attentional selection happens across time. First, our experiments revealed that voluntary temporal attention (attention to specific points in time) is selective, resulting in perceptual tradeoffs across time. Second, we measured small eye movements called microsaccades and found that directing voluntary temporal attention increases the stability of the eyes in anticipation of an attended stimulus. Third, we developed a computational model of dynamic attention, which proposes specific mechanisms underlying temporal attention and its selectivity. Lastly, I will mention how we are testing predictions of the model with MEG. Altogether, this research shows how precisely timed voluntary attention helps manage inherent limits in visual processing across short time intervals, advancing our understanding of attention as a dynamic process.

Seminar · Neuroscience · Recording

Space and its computational challenges

Jennifer Groh
Duke University
Nov 17, 2021

How our senses work both separately and together involves rich computational problems. I will discuss the spatial and representational problems faced by the visual and auditory system, focusing on two issues. 1. How does the brain correct for discrepancies in the visual and auditory spatial reference frames? I will describe our recent discovery of a novel type of otoacoustic emission, the eye movement related eardrum oscillation, or EMREO (Gruters et al, PNAS 2018). 2. How does the brain encode more than one stimulus at a time? I will discuss evidence for neural time-division multiplexing, in which neural activity fluctuates across time to allow representations to encode more than one simultaneous stimulus (Caruso et al, Nat Comm 2018). These findings all emerged from experimentally testing computational models regarding spatial representations and their transformations within and across sensory pathways. Further, they speak to several general problems confronting modern neuroscience such as the hierarchical organization of brain pathways and limits on perceptual/cognitive processing.

Seminar · Neuroscience · Recording

The role of high- and low-level factors in smooth pursuit of predictable and random motions

Eileen Kowler
Rutgers
Oct 18, 2021

Smooth pursuit eye movements are among our most intriguing motor behaviors. They are able to keep the line of sight on smoothly moving targets with little or no overt effort or deliberate planning, and they can respond quickly and accurately to changes in the trajectory of motion of targets. Nevertheless, despite these seemingly automatic characteristics, pursuit is highly sensitive to high-level factors, such as the choices made about attention, or beliefs about the direction of upcoming motion. Investigators have struggled for decades with the problem of incorporating both high- and low-level processes into a single coherent model. This talk will present an overview of the current state of efforts to incorporate high- and low-level influences, as well as new observations that add to our understanding of both types of influences. These observations (in contrast to much of the literature) focus on the directional properties of pursuit. Studies will be presented that show: (1) the direction of smooth pursuit of fields of noisy random dots depends on the relative reliability of the sensory signal and the expected motion direction; (2) smooth pursuit shows predictive responses that depend on the interpretation of cues that signal an impending collision; and (3) smooth pursuit during a change in target direction displays kinematic properties consistent with the well-known two-thirds power law. Implications for incorporating high- and low-level factors into the same framework will be discussed.
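
For reference, the two-thirds power law mentioned above states that angular velocity scales with curvature raised to the power 2/3, or equivalently that tangential speed falls off as curvature to the power -1/3. A minimal numeric illustration (the gain constant is arbitrary):

```python
import numpy as np

def tangential_speed(curvature, gain=10.0):
    """Two-thirds power law: v = gain * curvature**(-1/3) (equivalently,
    angular velocity is proportional to curvature**(2/3))."""
    return gain * curvature ** (-1.0 / 3.0)

curvatures = np.array([0.5, 1.0, 2.0, 4.0])
speeds = tangential_speed(curvatures)
# Each doubling of curvature scales speed by 2**(-1/3), roughly 0.794.
print(speeds[1:] / speeds[:-1])
```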

Seminar · Neuroscience

The role of motion in localizing objects

Patrick Cavanagh
Department of Psychological and Brain Research, Dartmouth College
Sep 12, 2021

Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location but neural delays lead to serious errors whenever targets or eyes are moving. Motion may be the problem here but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions where perceived location differs radically from physical location. These illusions help us understand how and where position is coded. We first look at the effects of a target’s simple forward motion on its perceived location. Second, we look at perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprising, fMRI shows that the shifted percept does not emerge in the visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it. Participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability where we see a steady world despite massive displacements during saccades. These motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.

Seminar · Neuroscience · Recording

What are you looking at? Adventures in human gaze behaviour

Benjamin De Haas
Giessen University
Jun 28, 2021
Seminar · Neuroscience · Recording

Neural mechanisms of active vision in the marmoset monkey

Jude Mitchell
University of Rochester
May 11, 2021

Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have similar visual processing and eye movements to those of humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade to fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined if target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion for a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival.
These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY

Seminar · Neuroscience

Understanding "why": The role of causality in cognition

Tobias Gerstenberg
Stanford University
Apr 27, 2021

Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
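
The counterfactual-simulation account can be sketched in a toy world (our own illustrative construction, not the authors' model): a candidate cause is judged causal to the extent that removing it and rerunning a noisy simulation changes the outcome.

```python
import numpy as np

rng = np.random.default_rng(3)

def goes_through_gate(b_velocity, kicked, noise):
    """Toy 1D world: ball B reaches the gate if its (noisy) velocity,
    plus an optional kick from ball A, exceeds a threshold."""
    v = b_velocity + (2.0 if kicked else 0.0) + noise
    return v > 1.5

# Actual world: A kicked B and B went through the gate.
actual = goes_through_gate(0.5, kicked=True, noise=0.0)

# Counterfactual simulations: remove the candidate cause (A's kick) and
# rerun the world many times under simulation noise.
noise = rng.normal(0, 0.5, size=10_000)
counterfactual = goes_through_gate(0.5, kicked=False, noise=noise)

# Causal judgment ~ probability that the outcome would have been different.
judgment = np.mean(counterfactual != actual)
print(round(float(judgment), 2))
```

Here the kick is judged strongly causal because B almost never reaches the gate without it; a kick that made no difference to the outcome would yield a judgment near zero.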

Seminar · Psychology

Exploring Memories of Scenes

Nico Broers
Westfälische Wilhelms-Universität Münster
Mar 24, 2021

State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely based on recollection of specific episodic details but also based on familiarity for an entire scene. I thus highlight current limitations in machine vision models emulating human recognition memory, with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We thus considered the functional role of eye-movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.

Seminar · Neuroscience · Recording

A no-report paradigm reveals that face cells multiplex consciously perceived and suppressed stimuli

Janis Hesse
California Institute of Technology
Feb 25, 2021

Having conscious experience is arguably the most important reason why it matters to us whether we are alive or dead. A powerful paradigm to identify neural correlates of consciousness is binocular rivalry, wherein a constant visual stimulus evokes a varying conscious percept. It has recently been suggested that activity modulations observed during rivalry may represent the act of report rather than the conscious percept itself. Here, we performed single-unit recordings from face patches in macaque inferotemporal (IT) cortex using a novel no-report paradigm in which the animal’s conscious percept was inferred from eye movements. These experiments reveal two new results concerning the neural correlates of consciousness. First, we found that high proportions of IT neurons represented the conscious percept even without active report. Using high-channel recordings, including a new 128-channel Neuropixels-like probe, we were able to decode the conscious percept on single trials. Second, we found that even on single trials, modulation to rivalrous stimuli was weaker than that to unambiguous stimuli, suggesting that cells may encode not only the conscious percept but also the suppressed stimulus. To test this hypothesis, we varied the identity of the suppressed stimulus during binocular rivalry; we found that indeed, we could decode not only the conscious percept but also the suppressed stimulus from neural activity. Moreover, the same cells that were strongly modulated by the conscious percept also tended to be strongly modulated by the suppressed stimulus. Together, our findings indicate that (1) IT cortex possesses a true neural correlate of consciousness even in the absence of report, and (2) this correlate consists of a population code wherein single cells multiplex representation of the conscious percept and veridical physical stimulus, rather than a subset of cells perfectly reflecting consciousness.

Seminar · Neuroscience

How do we find what we are looking for? The Guided Search 6.0 model

Jeremy Wolfe
Harvard Medical School
Feb 3, 2021

The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3: That inhomogeneity imposes spatial constraints on search that are described by three types of “functional visual fields” (FVFs): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider that the internal representation of the search target, the “search template”, is really two templates: a guiding template and a target template. Put these pieces together and you have GS6.
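
The combination of guidance sources into a priority map can be sketched as a weighted sum over scene locations (the map contents and weights below are invented for illustration; in GS6 the actual signals are derived from the stimulus, the task, and search history):

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 16, 16  # hypothetical 16x16 grid of scene locations

# Five hypothetical guidance maps, one value per location, mirroring the
# five sources named in the talk (the random contents are placeholders).
guidance = {
    "top_down":  rng.random((H, W)),   # match to the guiding template
    "bottom_up": rng.random((H, W)),   # local feature contrast
    "history":   rng.random((H, W)),   # priming from previous trials
    "reward":    rng.random((H, W)),   # learned value of locations
    "scene":     rng.random((H, W)),   # syntactic/semantic plausibility
}
weights = {"top_down": 2.0, "bottom_up": 1.0, "history": 0.5,
           "reward": 0.5, "scene": 1.5}  # assumed relative weights

# Priority map = weighted sum of the guidance sources.
priority = sum(w * guidance[k] for k, w in weights.items())

# Attention is deployed to the current peak of the priority map.
next_fixation = np.unravel_index(np.argmax(priority), priority.shape)
print(next_fixation)
```

In the full model the map is dynamic: after each deployment the selected location would be suppressed and the maps updated, so the peak, and hence the next shift of attention, moves over the course of search.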

Seminar · Neuroscience · Recording

Global visual salience of competing stimuli

Alex Hernandez-Garcia
Université de Montréal
Dec 9, 2020

Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of the local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is, their global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images, the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing saccadic reaction time as a function of the global salience of the pair of images.
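
The kind of pairwise model described can be sketched with a Bradley-Terry-style logistic regression on simulated data (our own simplified construction, not the authors' trained model): one coefficient is fit per image from first-saccade choices, and the coefficients play the role of global salience scores.

```python
import numpy as np

rng = np.random.default_rng(2)
n_images, n_trials = 8, 4000

# Hypothetical "true" global salience per image (unknown to the model).
true_salience = rng.normal(size=n_images)

# Simulated trials: image i on the left, j on the right; the first saccade
# goes left with probability sigmoid(s_i - s_j).
pairs = rng.integers(0, n_images, size=(n_trials, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
p_left = 1 / (1 + np.exp(-(true_salience[pairs[:, 0]] - true_salience[pairs[:, 1]])))
went_left = rng.random(len(pairs)) < p_left

# Fit one coefficient per image by logistic regression (gradient ascent);
# the fitted coefficients are the estimated global salience scores.
coef = np.zeros(n_images)
for _ in range(500):
    z = coef[pairs[:, 0]] - coef[pairs[:, 1]]
    err = went_left - 1 / (1 + np.exp(-z))     # prediction error per trial
    grad = np.zeros(n_images)
    np.add.at(grad, pairs[:, 0], err)          # left image pulled up on "left" choices
    np.add.at(grad, pairs[:, 1], -err)         # right image pushed down
    coef += 2.0 * grad / len(pairs)

# The recovered coefficients should rank images like the true salience values.
print(round(float(np.corrcoef(coef, true_salience)[0, 1]), 3))
```

With enough trials the fitted coefficients recover the image ranking, which is the sense in which the model's coefficients serve as a per-image measure of global salience.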

SeminarNeuroscienceRecording

Exploring fine detail: The interplay of attention, oculomotor behavior and visual perception in the fovea

Martina Poletti
University of Rochester
Dec 8, 2020

Outside the foveola, visual acuity and other visual functions gradually deteriorate with increasing eccentricity. Humans compensate for these limitations by relying on a tight link between perception and action; rapid gaze shifts (saccades) occur 2-3 times every second, separating brief “fixation” intervals in which visual information is acquired and processed. During fixation, however, the eye is not immobile. Small eye movements incessantly shift the image on the retina even when the attended stimulus is already foveated, suggesting a much deeper coupling between visual functions and oculomotor activity. Thanks to a combination of techniques allowing for high-resolution recordings of eye position, retinal stabilization, and accurate gaze localization, we examined how attention and eye movements are controlled at this scale. We have shown that during fixation, visual exploration of fine spatial detail unfolds following visuomotor strategies similar to those occurring at a larger scale. This behavior compensates for non-homogeneous visual capabilities within the foveola and is finely controlled by attention, which facilitates processing at selected foveal locations. Ultimately, the limits of high acuity vision are greatly influenced by the spatiotemporal modulations introduced by fixational eye movements. These findings reveal that, contrary to common intuition, placing a stimulus within the foveola is necessary but not sufficient for high visual acuity; fine spatial vision is the outcome of an orchestrated synergy of motor, cognitive, and attentional factors.

SeminarNeuroscienceRecording

Time is of the essence: active sensing in natural vision reveals novel mechanisms of perception

Pedro Maldonado, PhD
Departamento de Neurociencia y BNI, Facultad de Medicina, Universidad de Chile
Nov 29, 2020

In natural vision, active vision refers to the changes in visual input resulting from self-initiated eye movements. In this talk, I will present studies that show that the stimulus-related activity during active vision differs substantially from that occurring during classical flashed-stimuli paradigms. Our results uncover novel and efficient mechanisms that improve visual perception. In a general way, the nervous system appears to engage in sensory modulation mechanisms, precisely timed to self-initiated stimulus changes, thus coordinating neural activity across different cortical areas and serving as a general mechanism for the global coordination of visual perception.

SeminarNeuroscienceRecording

Exploration and expectation: between attention and eye movements

Shlomit Yuval-Greenberg
Tel Aviv University
Nov 9, 2020
SeminarNeuroscienceRecording

A Rare Visuospatial Disorder

Aimee Dollman
University of Cape Town
Aug 25, 2020

Cases with visuospatial abnormalities provide opportunities for understanding the underlying cognitive mechanisms. Three cases of visual mirror-reversal have been reported: AH (McCloskey, 2009), TM (McCloskey, Valtonen, & Sherman, 2006) and PR (Pflugshaupt et al., 2007). This research reports a fourth case, BS -- with focal occipital cortical dysgenesis -- who displays highly unusual visuospatial abnormalities. They initially produced mirror reversal errors similar to those of AH, who -- like the patient in question -- showed a selective developmental deficit. Extensive examination of BS revealed phenomena such as: mirror reversal errors (sometimes affecting only parts of the visual fields) in both horizontal and vertical planes; subjective representation of visual objects and words in distinct left and right visual fields; subjective duplication of objects of visual attention (not due to diplopia); uncertainty regarding the canonical upright orientation of everyday objects; mirror reversals during saccadic eye movements on oculomotor tasks; and failure to integrate visual with other sensory inputs (e.g., they feel themself moving backwards when visual information shows they are moving forward). Fewer errors are produced under certain visual conditions. These and other findings have led the researchers to conclude that BS draws upon a subjective representation of visual space that is structured phenomenally much as it is anatomically in early visual cortex (i.e., rotated through 180 degrees, split into left and right fields, etc.). Despite this, BS functions remarkably well in their everyday life, apparently due to extensive compensatory mechanisms deployed at higher (executive) processing levels beyond the visual modality.

SeminarNeuroscienceRecording

Visual perception and fixational eye movements: microsaccades, drift and tremor

Yasuto Tanaka
Paris Miki Inc. and Osaka University
Jul 6, 2020
ePoster

Orienting eye movements during REM sleep

COSYNE 2022

ePoster

Beyond task-optimized neural models: constraints from eye movements during navigation

Akis Stavropoulos, Kaushik Lakshminarasimhan, Dora Angelaki

COSYNE 2023

ePoster

Exploring gaze movements in lampreys: Insights into vertebrate neural mechanisms for stabilizing and goal-oriented eye movements

Marta Barandela, Carmen Núñez-González, Cecilia Jiménez-López, Manuel A. Pombal, Juan Pérez-Fernández

FENS Forum 2024