
Visual Information

Topic spotlight
Topic · World Wide

visual information

Discover seminars, jobs, and research tagged with visual information across World Wide.
20 curated items · 19 Seminars · 1 ePoster
20 results
Seminar · Neuroscience

Visual mechanisms for flexible behavior

Marlene Cohen
University of Chicago
Jan 25, 2024

Perhaps the most impressive aspect of the way the brain enables us to act on the sensory world is its flexibility. We can make a general inference about many sensory features (rating the ripeness of mangoes or avocados) and map a single stimulus onto many choices (slicing or blending mangoes). These can be thought of as flexible many-to-one (many features to one inference) and one-to-many (one feature to many choices) mappings from sensory inputs to actions. Both theoretical and experimental investigations of this sort of flexible sensorimotor mapping tend to treat sensory areas as relatively static. Models typically instantiate flexibility through changing interactions (or weights) between units that encode sensory features and those that plan actions. Experimental investigations often focus on association areas involved in decision-making that show pronounced modulations by cognitive processes. I will present evidence that the flexible formatting of visual information in visual cortex can support both generalized inference and choice mapping. Our results suggest that visual cortex mediates many forms of cognitive flexibility that have traditionally been ascribed to other areas or mechanisms. Further, we find that a primary difference between visual and putative decision areas is not what information they encode, but how that information is formatted in the responses of neural populations, which is related to differences in the impact of causally manipulating each area on behavior. This scenario allows for flexibility in the mapping between stimuli and behavior while maintaining stability in the information encoded in each area and in the mappings between groups of neurons.
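
The conventional modeling account that this abstract contrasts with can be made concrete with a toy sketch: a static sensory population feeding different readout weights for different tasks. This is an illustrative reconstruction, not the speaker's model; all names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static sensory population: responses of n_neurons to a stimulus
# characterized by two features (e.g., ripeness and color).
n_neurons = 100
feature_weights = rng.normal(size=(n_neurons, 2))   # fixed sensory encoding

def sensory_response(features):
    """Static population code: the same stimulus always evokes the same response."""
    return feature_weights @ features

# Conventional "flexible readout" account: cognition switches which readout
# weights map the (unchanged) sensory response onto a decision.
readout_task_a = rng.normal(size=n_neurons)  # e.g., judge feature 1
readout_task_b = rng.normal(size=n_neurons)  # e.g., judge feature 2

stimulus = np.array([0.8, -0.3])
r = sensory_response(stimulus)

decision_a = np.sign(readout_task_a @ r)
decision_b = np.sign(readout_task_b @ r)
print(decision_a, decision_b)  # same input, different choices via different weights
```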

Seminar · Neuroscience · Recording

Estimating repetitive spatiotemporal patterns from resting-state brain activity data

Yusuke Takeda
Computational Brain Dynamics Team, RIKEN Center for Advanced Intelligence Project, Japan; Department of Computational Brain Imaging, ATR Neural Information Analysis Laboratories, Japan
Apr 27, 2023

Repetitive spatiotemporal patterns in resting-state brain activities have been widely observed in various species and regions, such as rat and cat visual cortices. Since they resemble the preceding brain activities during tasks, they are assumed to reflect past experiences embedded in neuronal circuits. Moreover, spatiotemporal patterns involving whole-brain activities may also reflect a process that integrates information distributed over the entire brain, such as motor and visual information. Therefore, revealing such patterns may elucidate how this information is integrated to generate consciousness. In this talk, I will introduce our proposed method to estimate repetitive spatiotemporal patterns from resting-state brain activity data and show the spatiotemporal patterns estimated from human resting-state magnetoencephalography (MEG) and electroencephalography (EEG) data. Our analyses suggest that the patterns involved whole-brain propagating activities that reflected a process of integrating information distributed over frequencies and networks. I will also introduce our current attempt to reveal signal flows and their roles in the spatiotemporal patterns using a large dataset.

References:
- Takeda et al., Estimating repetitive spatiotemporal patterns from resting-state brain activity data. NeuroImage (2016); 133:251-65.
- Takeda et al., Whole-brain propagating patterns in human resting-state brain activities. NeuroImage (2021); 245:118711.
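
As rough intuition for what estimating a repetitive spatiotemporal pattern involves, here is a toy correlation-based template search over a simulated recording. The authors' actual method (see the NeuroImage papers above) is more sophisticated; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy resting-state data: n_channels x n_timepoints, with a known
# spatiotemporal pattern embedded at a few onsets (purely illustrative).
n_channels, n_time, pat_len = 8, 2000, 50
pattern = rng.normal(size=(n_channels, pat_len))
data = rng.normal(scale=0.5, size=(n_channels, n_time))
onsets_true = [300, 900, 1500]
for t in onsets_true:
    data[:, t:t + pat_len] += pattern

# Simplest detection idea: slide the template across the recording and
# correlate; peaks in the correlation trace mark candidate repetitions.
scores = np.array([
    np.corrcoef(pattern.ravel(), data[:, t:t + pat_len].ravel())[0, 1]
    for t in range(n_time - pat_len)
])
detected = np.where(scores > 0.5)[0]
print(detected)  # indices clustering near the true onsets
```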

Seminar · Neuroscience

Restructuring cortical feedback circuits

Andreas Keller
Institute of Molecular and Clinical Ophthalmology, Basel
Nov 2, 2022

We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The mechanistic basis for this fundamental perceptual phenomenon has, however, remained obscure. What enables neurons in the visual system to respond to context when the stimulus is not available? While feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have discovered that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving direct sensory input. These signals are likely used to predict input from the feedforward pathway. Preliminary results show that these feedback projections are strongly influenced by experience and learning.

Seminar · Psychology

The role of top-down mechanisms in gaze perception

Nicolas Burra
University of Geneva
Jun 26, 2022

Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus indirect gaze. On the basis of several recent lines of evidence, I propose that when eye features have low task relevance, gaze direction is encoded only superficially, preventing differential processing of gaze directions; differential processing of direct and indirect gaze will only occur when the eyes are relevant to the task. To assess the implication of task relevance for the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of the eye features is crucial to the direct gaze effect, and I propose to revisit previous studies that had questioned the existence of the direct gaze effect. This claim will be illustrated with past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, at a larger scope, on social cognition.
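
For readers unfamiliar with lateralized ERPs, here is a minimal sketch of how a component like the N2pc is conventionally computed: a contralateral-minus-ipsilateral difference wave relative to stimulus side. Channel labels, trial counts, and data below are hypothetical placeholders, not this project's pipeline.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy single-trial EEG: one left-hemisphere and one right-hemisphere channel
# (e.g., PO7/PO8), 200 trials x 600 samples, plus the side of the stimulus.
n_trials, n_time = 200, 600
left_hemi = rng.normal(size=(n_trials, n_time))
right_hemi = rng.normal(size=(n_trials, n_time))
stim_side = rng.choice(["left", "right"], size=n_trials)

# Contralateral = opposite hemisphere to the stimulus; ipsilateral = same side.
contra = np.where(stim_side[:, None] == "left", right_hemi, left_hemi)
ipsi = np.where(stim_side[:, None] == "left", left_hemi, right_hemi)

# Lateralized difference wave, averaged over trials (the N2pc would appear as
# a negative deflection roughly 200-300 ms post-stimulus in real data).
lateralized_wave = (contra - ipsi).mean(axis=0)
print(lateralized_wave.shape)
```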

Seminar · Neuroscience

Perception during visual disruptions

Grace Edwards and Lina Teichmann
National Institute of Mental Health, Laboratory of Brain and Cognition, U.S. Department of Health and Human Services.
Jun 12, 2022

Visual perception seems continuous despite frequent disruptions in our visual environment. For example, internal events, such as saccadic eye movements, and external events, such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of the early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine whether there is a set of mechanisms that supports visual processing during visual disruptions in general.
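
One of the candidate mechanisms named above, extrapolation, can be pictured as carrying an object's state forward through the occlusion interval when no new visual input arrives. The constant-velocity sketch below is a generic illustration, not the speakers' analysis; all numbers are made up.

```python
import numpy as np

# Last visible estimate of the object's position and velocity before occlusion.
dt = 0.05                                               # 50 ms update steps
pos = np.array([0.0, 0.0])
vel = np.array([1.0, 0.5])

# During occlusion there is no measurement, so the internal estimate is pure
# prediction: position is extrapolated at the last observed velocity.
trajectory = []
for step in range(20):                                  # ~1 s of occlusion
    pos = pos + vel * dt
    trajectory.append(pos.copy())

print(trajectory[-1])   # predicted location at reappearance
```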

Seminar · Neuroscience

Feedback controls what we see

Andreas Keller
Institute of Molecular and Clinical Ophthalmology Basel
May 29, 2022

We hardly notice when there is a speck on our glasses; the obstructed visual information seems to be magically filled in. The visual system uses visual context to predict the content of the stimulus. What enables neurons in the visual system to respond to context when the stimulus is not available? In cortex, sensory processing is based on a combination of feedforward information arriving from sensory organs and feedback information that originates in higher-order areas. Whereas feedforward information drives the activity in cortex, feedback information is thought to provide contextual signals that are merely modulatory. We have made the exciting discovery that mouse primary visual cortical neurons are strongly driven by feedback projections from higher visual areas, in particular when their feedforward sensory input from the retina is missing. This drive is so strong that it makes visual cortical neurons fire as much as if they were receiving direct sensory input.

Seminar · Neuroscience · Recording

The Standard Model of the Retina

Markus Meister
Caltech
May 24, 2022

The science of the retina has reached an interesting stage of completion. There now exists a consensus standard model of this neural system - at least in the minds of many researchers - that serves as a baseline against which to evaluate new claims. The standard model links phenomena from molecular biophysics, cell biology, neuroanatomy, synaptic physiology, circuit function, and visual psychophysics. It is further supported by a normative theory explaining the purpose of processing visual information in this way. Most new reports of retinal phenomena fit squarely within the standard model, and major revisions seem increasingly unlikely. Given that our understanding of other brain circuits of comparable complexity is much more rudimentary, it is worth considering an example of what success looks like. In this talk I will summarize what I think are the ingredients that led to this mature understanding of the retina. Equally important, a number of practices and concepts that are currently en vogue in neuroscience were not needed, or were indeed counterproductive. I look forward to debating how these lessons might extend to other areas of brain research.
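
One textbook ingredient of the standard retinal model is the linear-nonlinear (LN) cascade: a ganglion cell approximated as a linear spatiotemporal filter followed by a static nonlinearity. The sketch below is a generic toy LN simulation, not taken from the talk; the filter shape and parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Biphasic temporal filter, 300 ms long at 1 ms resolution (illustrative shape).
t = np.arange(0, 0.3, 0.001)
temporal_filter = np.exp(-t / 0.03) * np.sin(2 * np.pi * t / 0.12)

# Full-field white-noise contrast stimulus.
stimulus = rng.normal(size=5000)

# Linear stage: convolve stimulus with the filter (causal part only).
drive = np.convolve(stimulus, temporal_filter, mode="full")[: len(stimulus)]

def nonlinearity(x, gain=20.0, threshold=1.0):
    """Static rectifying nonlinearity mapping filter output to firing rate."""
    return gain * np.maximum(x - threshold, 0.0)

rate = nonlinearity(drive)   # predicted firing rate, spikes/s
print(rate.mean(), rate.max())
```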

Seminar · Physics of Life

Retinal neurogenesis and lamination: What to become, where to become it and how to move from there!

Caren Norden
Instituto Gulbenkian de Ciência
Mar 24, 2022

The vertebrate retina is an important outpost of the central nervous system, responsible for the perception and transmission of visual information. It consists of five different types of neurons that reproducibly laminate into three layers, a process of crucial importance for the organ’s function. Unsurprisingly, impaired fate decisions as well as impaired neuronal migration and lamination lead to impaired retinal function. However, how these processes are coordinated at the cellular and tissue level, and how variable or robust retinal formation is, remains underexplored. In my lab, we aim to shed light on these questions from different angles, studying differentiation phenomena and their variability on the one hand, and the downstream migration and lamination phenomena on the other. We use zebrafish as our main model system due to its excellent possibilities for live imaging and quantitative developmental biology. More recently, we have also started to use human retinal organoids as a comparative system. We further employ cross-disciplinary approaches to address these issues, combining cell and developmental biology, biomechanics, theory, and computer science. Together, this allows us to integrate cell-level with tissue-wide phenomena and generate an appreciation of the reproducibility and variability of events.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing

Nick Sexton (he/him)
University College London
Nov 30, 2021

A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and layers in a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between a brain region and a model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer akin to a tracer, we used Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70 ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and a brain region to correspond: beyond quantifying shared variance, we must consider the functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, which generalises to novel classes held out from training data. This suggests future possibilities for brain-machine interfaces with high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
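
The substitution test described here is easy to grasp with a toy stand-in network: run a forward pass, but replace one layer's activations with (suitably mapped) brain-derived activity and check whether the decision still comes out. The two-matrix "network" below is purely illustrative and not the authors' model; shapes and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-stage "network"; the point is the substitution logic, not the model.
W1 = rng.normal(size=(64, 128)) / 8     # input -> intermediate layer
W2 = rng.normal(size=(128, 10)) / 11    # intermediate layer -> class scores

def forward(x, substitute_layer1=None):
    """Run the model; optionally replace layer-1 activity with brain-derived
    activity (e.g., voxel responses mapped into the layer's activation space)."""
    h = np.maximum(x @ W1, 0.0)
    if substitute_layer1 is not None:
        h = substitute_layer1           # the substitution test from the abstract
    return (h @ W2).argmax()

x = rng.normal(size=64)
brain_derived = rng.normal(size=128)    # placeholder for mapped brain activity
print(forward(x), forward(x, substitute_layer1=brain_derived))
```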

Seminar · Neuroscience · Recording

Visual Decisions in Natural Action

Mary Hayhoe
University of Texas, Austin
Nov 8, 2021

Natural behavior reveals the way that gaze serves the needs of the current task, and the complex cognitive control mechanisms that are involved. It has become increasingly clear that even the simplest actions involve complex decision processes that depend on an interaction of visual information, knowledge of the current environment, and the intrinsic costs and benefits of action choices. I will explore these ideas in the context of walking in natural terrain, where we are able to recover the 3D structure of the visual environment. We show that subjects choose flexible paths that depend on the flatness of the terrain over the next few steps. Subjects trade off flatness against straightness of their paths towards the goal, indicating a nuanced trade-off between stability and energetic costs, both on the time scale of the next step and under longer-range constraints.
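
A toy way to express the flatness-versus-straightness trade-off is a weighted cost over candidate step sequences. The terrain, weights, and cost terms below are hypothetical and intended only to make the trade-off concrete; this is not the lab's model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical terrain grid: each cell holds a roughness score (higher = less flat).
terrain_roughness = rng.random((50, 50))

def path_cost(path, w_straight=1.0, w_flat=2.0):
    """Weighted cost of a step sequence: extra distance over the straight
    route to the goal (energetic proxy) plus terrain roughness (stability)."""
    steps = np.diff(path, axis=0)
    length = np.abs(steps).sum()                  # total Manhattan path length
    direct = np.abs(path[-1] - path[0]).sum()     # straight-route baseline
    detour = length - direct
    rough = sum(terrain_roughness[r, c] for r, c in path)
    return w_straight * detour + w_flat * rough

# A direct diagonal path; a real walker might accept a small detour through
# flatter cells, which this cost would reward when w_flat is large enough.
straight = np.array([[i, i] for i in range(50)])
print(path_cost(straight))
```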

Seminar · Neuroscience

Memorability: Prioritizing visual information for memory

Wilma Bainbridge
University of Chicago
Jun 27, 2021

There is a surprising consistency in the images we remember and forget – across observers, certain images are intrinsically more memorable than others in spite of our diverse individual experiences. The perception of images at different memorability levels also results in stereotyped patterns in visual and mnemonic regions in the brain, regardless of an individual’s actual memory for that item. In this talk, Dr. Bainbridge will discuss our current neuroscientific understanding of how memorability is represented in patterns in the brain, potentially serving as a signal for how stimulus information is prioritized for eventual memory encoding.

Seminar · Neuroscience · Recording

Neural mechanisms of active vision in the marmoset monkey

Jude Mitchell
University of Rochester
May 11, 2021

Human vision relies on rapid eye movements (saccades) 2-3 times every second to bring peripheral targets to central foveal vision for high-resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have visual processing and eye movements similar to those of humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target-foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high-resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade-to-fixation cycle, with most information localized to brief post-saccadic transients. In a second study, we examined whether target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion of a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival. These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY.
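
The off-line eye-position correction can be pictured as re-referencing each stimulus frame into retina-centered coordinates before standard receptive-field estimation, e.g., a spike-triggered average (STA). The sketch below is a generic illustration of that idea on simulated data, not the lab's actual pipeline; real corrections would use high-resolution eye traces rather than integer pixel offsets.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated experiment: white-noise frames, recorded gaze position per frame,
# and simultaneous spike counts (all placeholders).
n_frames, size = 3000, 31
frames = rng.normal(size=(n_frames, size, size))
gaze = rng.integers(-5, 6, size=(n_frames, 2))   # eye position offsets (px)
spikes = rng.poisson(0.2, size=n_frames)

sta = np.zeros((size, size))
for f in range(n_frames):
    # Shift the frame so the fovea always sits at the array center,
    # converting screen coordinates to retinal coordinates.
    retinal = np.roll(frames[f], shift=tuple(-gaze[f]), axis=(0, 1))
    sta += spikes[f] * retinal
sta /= spikes.sum()   # spike-weighted average stimulus in retinal coordinates
print(sta.shape)
```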

Seminar · Neuroscience

Top-down Modulation in Human Visual Cortex

Mohamed Abdelhack
Washington University in St. Louis
Dec 16, 2020

Human vision flaunts a remarkable ability to recognize objects in the surrounding environment even in the absence of a complete visual representation of those objects. This process is done almost intuitively, and it was not until scientists had to tackle this problem in computer vision that they noticed its complexity. While current artificial vision systems have made great strides, exceeding human level on normal vision tasks, they have yet to achieve a similar level of robustness. One source of the brain's robustness is its extensive connectivity, which is not limited to a feedforward hierarchical pathway (as in current state-of-the-art deep convolutional neural networks) but also comprises recurrent and top-down connections. These connections allow the human brain to enhance the neural representations of degraded images in concordance with meaningful representations stored in memory. The mechanisms by which these different pathways interact are still not understood. In this seminar, studies concerning the effect of recurrent and top-down modulation on the neural representations resulting from viewing blurred images will be presented. Those studies attempted to uncover the role of recurrent and top-down connections in human vision. The results presented challenge the notion of predictive coding as a mechanism for top-down modulation of visual information during natural vision. They show that neural representation enhancement (sharpening) appears to be the more dominant process at different levels of the visual hierarchy. They also show that inference in visual recognition is achieved through a Bayesian process combining incoming visual information with priors from deeper processing regions in the brain.
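
The Bayesian picture in the final sentence has a standard minimal form: for a Gaussian likelihood and prior, the posterior is a reliability-weighted combination of noisy sensory evidence and the top-down prior. The numbers below are arbitrary; this is a generic textbook illustration, not the study's fitted model.

```python
# Gaussian prior (top-down expectation) and noisy observation (blurred image).
prior_mean, prior_var = 1.0, 0.5    # confident prior from deeper regions
obs, obs_var = 0.2, 2.0             # degraded input -> high observation noise

# Standard conjugate-Gaussian update: precisions add, and the posterior mean
# is the precision-weighted average of prior and observation.
posterior_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
posterior_mean = posterior_var * (prior_mean / prior_var + obs / obs_var)

print(posterior_mean, posterior_var)  # pulled toward the prior when input is poor
```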

Seminar · Neuroscience · Recording

Exploring fine detail: The interplay of attention, oculomotor behavior and visual perception in the fovea

Martina Poletti
University of Rochester
Dec 8, 2020

Outside the foveola, visual acuity and other visual functions gradually deteriorate with increasing eccentricity. Humans compensate for these limitations by relying on a tight link between perception and action; rapid gaze shifts (saccades) occur 2-3 times every second, separating brief “fixation” intervals in which visual information is acquired and processed. During fixation, however, the eye is not immobile. Small eye movements incessantly shift the image on the retina even when the attended stimulus is already foveated, suggesting a much deeper coupling between visual functions and oculomotor activity. Thanks to a combination of techniques allowing for high-resolution recordings of eye position, retinal stabilization, and accurate gaze localization, we examined how attention and eye movements are controlled at this scale. We have shown that during fixation, visual exploration of fine spatial detail unfolds following visuomotor strategies similar to those occurring at a larger scale. This behavior compensates for non-homogeneous visual capabilities within the foveola and is finely controlled by attention, which facilitates processing at selected foveal locations. Ultimately, the limits of high-acuity vision are greatly influenced by the spatiotemporal modulations introduced by fixational eye movements. These findings reveal that, contrary to common intuition, placing a stimulus within the foveola is necessary but not sufficient for high visual acuity; fine spatial vision is the outcome of an orchestrated synergy of motor, cognitive, and attentional factors.

Seminar · Neuroscience · Recording

A Rare Visuospatial Disorder

Aimee Dollman
University of Cape Town
Aug 25, 2020

Cases with visuospatial abnormalities provide opportunities for understanding the underlying cognitive mechanisms. Three cases of visual mirror-reversal have been reported: AH (McCloskey, 2009), TM (McCloskey, Valtonen, & Sherman, 2006) and PR (Pflugshaupt et al., 2007). This research reports a fourth case, BS -- with focal occipital cortical dysgenesis -- who displays highly unusual visuospatial abnormalities. They initially produced mirror reversal errors similar to those of AH, who -- like the patient in question -- showed a selective developmental deficit. Extensive examination of BS revealed phenomena such as: mirror reversal errors (sometimes affecting only parts of the visual fields) in both horizontal and vertical planes; subjective representation of visual objects and words in distinct left and right visual fields; subjective duplication of objects of visual attention (not due to diplopia); uncertainty regarding the canonical upright orientation of everyday objects; mirror reversals during saccadic eye movements on oculomotor tasks; and failure to integrate visual with other sensory inputs (e.g., they feel themself moving backwards when visual information shows they are moving forward). Fewer errors are produced under certain visual conditions. These and other findings have led the researchers to conclude that BS draws upon a subjective representation of visual space that is structured phenomenally much as it is anatomically in early visual cortex (i.e., rotated through 180 degrees, split into left and right fields, etc.). Despite this, BS functions remarkably well in their everyday life, apparently due to extensive compensatory mechanisms deployed at higher (executive) processing levels beyond the visual modality.

Seminar · Neuroscience · Recording

What the eye tells the brain: Visual feature extraction in the mouse retina

Katrin Franke
University of Tübingen
Jul 6, 2020

Visual processing begins in the retina: within only two synaptic layers, multiple parallel feature channels emerge, which relay highly processed visual information to different parts of the brain. To functionally characterize these feature channels, we perform calcium and glutamate population activity recordings at different levels of the mouse retina. This allows us to follow the complete visual signal across consecutive processing stages in a systematic way. In my talk, I will summarize our recent findings on the functional diversity of retinal output channels and how they arise within the retinal network. Specifically, I will talk about the role of inhibition and cell-type-specific dendritic processing in generating diverse visual channels. Then, I will focus on how color – a single visual feature – emerges across all retinal processing layers and link our results to behavioral output and the statistics of mouse natural scenes. With our approach, we hope to identify general computational principles of retinal signaling, thereby increasing our understanding of what the eye tells the brain.

ePoster

Frequency tagging in the sensorimotor cortex is enhanced by footstep sounds compared to visual movement information in a walking movement integration task

Marta Matamala-Gomez, Adrià Vilà-Balló, David Cucurell, Ana Tajadura-Jimenez, Antoni Rodriguez-Fornells

FENS Forum 2024