
Fixation


Discover seminars, jobs, and research tagged with fixation across World Wide.
14 curated items · 11 Seminars · 3 ePosters
Seminar · Neuroscience

Sensory Consequences of Visual Actions

Martin Rolfs
Humboldt-Universität zu Berlin
Dec 7, 2023

We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they underscore the necessity of studying perception in connection with the motor behavior that shapes its fundamental processes.

Seminar · Neuroscience

Euclidean coordinates are the wrong prior for primate vision

Gary Cottrell
University of California, San Diego (UCSD)
May 9, 2023

The mapping from the visual field to V1 can be approximated by a log-polar transform. In this domain, scale is a left-right shift, and rotation is an up-down shift. When fed into a standard shift-invariant convolutional network, this provides scale and rotation invariance. However, translation invariance is lost. In our model, this is compensated for by multiple fixations on an object. Due to the high concentration of cones in the fovea and the drop-off of resolution in the periphery, the central 10 degrees of visual angle take up about half of V1, with the remaining 170 degrees (or so) taking up the other half. This layout provides the basis for the central and peripheral pathways. Simulations with this model closely match human performance in scene classification, and competition between the pathways leads to the peripheral pathway being used for this task. Remarkably, in spite of the property of rotation invariance, this model can explain the face-inversion effect. We suggest that the standard method of using image coordinates is the wrong prior for models of primate vision.
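The core property of the log-polar domain mentioned above can be verified in a few lines: scaling about the fixation point becomes a pure shift along the log-radius axis, and rotation becomes a shift along the angle axis. A minimal sketch (not the speaker's code; the coordinates are illustrative):

```python
import math

def log_polar(x, y):
    """Map a Cartesian image point (relative to fixation) to log-polar
    coordinates (log radius, angle). Scaling about the origin becomes a
    shift along the log-r axis; rotation becomes a shift along the
    angle axis."""
    r = math.hypot(x, y)
    return math.log(r), math.atan2(y, x)

# Scaling a point by 2 shifts log-r by log(2) and leaves the angle alone,
# which is why a shift-invariant convolution in this domain is
# scale-invariant.
u1, v1 = log_polar(3.0, 4.0)
u2, v2 = log_polar(6.0, 8.0)  # the same point, scaled by 2
```

A plain Cartesian translation of the input, by contrast, changes both coordinates nonlinearly, which is why the model needs multiple fixations to recover translation invariance.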

Seminar · Neuroscience

Demystifying the richness of visual perception

Ruth Rosenholtz
MIT
Oct 19, 2021

Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet they may be quite visible once found. How does one explain this combination of marvelous successes with quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This scheme gains the computation of sophisticated image features at the expense of spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision.
In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
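The growth of pooling regions with eccentricity can be sketched as a linear scaling. The function and constant below are illustrative placeholders (roughly the range suggested by crowding studies), not fitted values from the model:

```python
def pooling_radius(eccentricity_deg, scaling=0.5):
    """Radius (in degrees) of a local region over which image statistics
    are pooled, growing linearly with distance from fixation: small and
    precise near the fovea, large and coarse in the periphery. The
    scaling constant 0.5 is a hypothetical placeholder."""
    return scaling * eccentricity_deg

# Regions near fixation summarize little; peripheral regions summarize a
# lot, trading spatial localization for richer summary statistics.
radii = [pooling_radius(e) for e in (1.0, 5.0, 10.0, 20.0)]
```

Under this scheme, any two images whose pooled statistics match within every region belong to the same equivalence class, which is what the visualizations described above make visible.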

Seminar · Neuroscience · Recording

Neural mechanisms of active vision in the marmoset monkey

Jude Mitchell
University of Rochester
May 11, 2021

Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have visual processing and eye movements similar to those of humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade to fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined whether target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion for a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival.
These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY
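The off-line eye-position correction amounts to re-expressing each stimulus location in retina-centered coordinates using the measured gaze. A minimal sketch under that assumption (coordinate names and units are illustrative, not the lab's actual pipeline):

```python
def retina_centered(stim_xy, gaze_xy):
    """Re-express a screen stimulus location relative to the measured
    gaze position (both in degrees of visual angle), so receptive
    fields can be mapped during free viewing rather than under
    enforced fixation."""
    (sx, sy), (gx, gy) = stim_xy, gaze_xy
    return (sx - gx, sy - gy)

# A stimulus at (5, 3) deg while gaze is at (4, 3) deg lands 1 deg to the
# right of the fovea on the retina.
offset = retina_centered((5.0, 3.0), (4.0, 3.0))
```

Applying this correction frame by frame is what lets spikes be aligned to retinal, rather than screen, coordinates, down to the precision of foveal V1 receptive fields.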

Seminar · Neuroscience

Circuit mechanisms for flexible behaviors

Takaki Komiyama
UC San Diego
Apr 7, 2021

Animals constantly modify their behavior through experience. Flexible behavior is key to our ability to adapt to the ever-changing environment. My laboratory is interested in studying the activity of neuronal ensembles in behaving animals, and how it changes with learning. We have recently set up a paradigm where mice learn to associate sensory information (two different odors) with motor outputs (lick vs no-lick) under head-fixation. We combined this with two-photon calcium imaging, which can monitor the activity of a microcircuit of many tens of neurons simultaneously from a small area of the brain. Imaging the motor cortex during the learning of this task revealed neurons with diverse task-related response types. Intriguingly, different response types were spatially intermingled; even immediately adjacent neurons often had very different response types. As the mouse learned the task under the microscope, the activity coupling of neurons with similar response types specifically increased, even though they are intermingled with neurons with dissimilar response types. This suggests that intermingled subnetworks of functionally related neurons form in a learning-related way, an observation that became possible with our cutting-edge technique combining imaging and behavior. We are working to extend this study. How plastic are neuronal microcircuits during other forms of learning? How plastic are they in other parts of the brain? What are the cellular and molecular mechanisms of the microcircuit plasticity? Are the observed activity and plasticity required for learning? How does the activity of identified individual neurons change over days to weeks? We are asking these questions, combining a variety of techniques including in vivo two-photon imaging, optogenetics, electrophysiology, genetics and behavior.

Seminar · Psychology

Exploring Memories of Scenes

Nico Broers
Westfälische Wilhelms-Universität Münster
Mar 24, 2021

State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely based on recollection of specific episodic details but also based on familiarity for an entire scene. I thus highlight current limitations in machine vision models emulating human recognition memory, with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We thus considered the functional role of eye-movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.

Seminar · Neuroscience · Recording

Global visual salience of competing stimuli

Alex Hernandez-Garcia
Université de Montréal
Dec 9, 2020

Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is its effectiveness in the competition for attention with other stimuli, is a function of the local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is their global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images, the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing the saccadic reaction time as a function of the global salience of the pair of images.
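The pairwise set-up described above can be sketched as a logistic model in which each image carries a global-salience coefficient and a bias term captures the reported left-sided bias. All coefficients below are invented for illustration; this is not the authors' trained model:

```python
import math

def p_first_saccade_left(s_left, s_right, left_bias=0.0):
    """Probability that the first saccade lands on the left image of a
    pair, as a logistic function of the difference between the two
    images' global-salience coefficients plus a left-side bias term."""
    return 1.0 / (1.0 + math.exp(-((s_left - s_right) + left_bias)))

# Equal salience and no bias give 50/50; a more globally salient left
# image (e.g. a close-up face) pulls the first saccade leftward.
p_equal = p_first_saccade_left(1.0, 1.0)
p_face_left = p_first_saccade_left(2.0, 1.0)
```

Fitting such a model to observed first-saccade directions yields one coefficient per image, which is exactly the per-image global-salience measure the abstract describes.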

Seminar · Neuroscience · Recording

Exploring fine detail: The interplay of attention, oculomotor behavior and visual perception in the fovea

Martina Poletti
University of Rochester
Dec 8, 2020

Outside the foveola, visual acuity and other visual functions gradually deteriorate with increasing eccentricity. Humans compensate for these limitations by relying on a tight link between perception and action; rapid gaze shifts (saccades) occur 2-3 times every second, separating brief “fixation” intervals in which visual information is acquired and processed. During fixation, however, the eye is not immobile. Small eye movements incessantly shift the image on the retina even when the attended stimulus is already foveated, suggesting a much deeper coupling between visual functions and oculomotor activity. Thanks to a combination of techniques allowing for high-resolution recordings of eye position, retinal stabilization, and accurate gaze localization, we examined how attention and eye movements are controlled at this scale. We have shown that during fixation, visual exploration of fine spatial detail unfolds following visuomotor strategies similar to those occurring at a larger scale. This behavior compensates for non-homogeneous visual capabilities within the foveola and is finely controlled by attention, which facilitates processing at selected foveal locations. Ultimately, the limits of high acuity vision are greatly influenced by the spatiotemporal modulations introduced by fixational eye movements. These findings reveal that, contrary to common intuition, placing a stimulus within the foveola is necessary but not sufficient for high visual acuity; fine spatial vision is the outcome of an orchestrated synergy of motor, cognitive, and attentional factors.

Seminar · Neuroscience · Recording

Visual perception and fixational eye movements: microsaccades, drift and tremor

Yasuto Tanaka
Paris Miki Inc. and Osaka University
Jul 6, 2020
ePoster

Adult hippocampal neurogenesis, the post-mortem delay, and prolonged aldehyde fixation: The enemies within

Marta Gallardo Caballero, Carla Rodríguez Moreno, Laura Álvarez Méndez, Júlia Terreros Roncal, Miguel Flor García, Elena Moreno Jiménez, Alberto Rábano, María Llorens Martín

FENS Forum 2024

ePoster

Effect of different fixation protocols on human brain tissue preservation and immunogenicity

Bibiána Török, Katalin Zsófia Tóth, Cecília Szekeres-Paraczky, Péter Szocsics, Zsófia Maglóczky

FENS Forum 2024

ePoster

An ergonomic rodent head fixation apparatus for closed-loop cursor control

Halise Erten, Hasan Berke Bilki, Hilal Bulut, Bihter Özhan, Ahsan Ayyaz, Mehmet Kocatürk

FENS Forum 2024