Saliency Map
Attention to visual motion: shaping sensation into perception
Evolution has endowed primates, including humans, with a powerful visual system, seemingly providing us with a detailed perception of our surroundings. In reality, however, the underlying process is one of active filtering, enhancement, and reshaping. For visual motion perception, the dorsal pathway of primate visual cortex, and in particular area MT/V5, is considered to be of critical importance. Combining physiological and psychophysical approaches, we have used the processing and perception of visual motion in area MT/V5 as a model for the interaction of sensory (bottom-up) signals with cognitive (top-down) modulatory influences that characterizes visual perception. Our findings document how this interaction enables visual cortex to actively generate a neural representation of the environment, combining the high-performance sensory periphery with selective modulatory influences to produce an “integrated saliency map” of the environment.
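To make the idea of an “integrated saliency map” concrete, the following is a minimal illustrative sketch, not the authors' model: it assumes that top-down attention acts as a multiplicative gain on bottom-up feature responses (one common way attentional modulation in MT/V5 is modeled), and all array names and values are hypothetical.

```python
import numpy as np

def integrated_saliency(bottom_up, top_down_gain):
    """Combine bottom-up sensory responses with top-down attentional gain.

    bottom_up:     2-D array of stimulus-driven responses (e.g. motion energy).
    top_down_gain: 2-D array of multiplicative attentional gains (>= 0).

    The multiplicative interaction is used here purely for illustration.
    """
    combined = bottom_up * top_down_gain
    # Normalize so the map can be read as relative salience across locations.
    total = combined.sum()
    return combined / total if total > 0 else combined

# Toy example: random motion responses with attention directed to the
# upper-left quadrant (hypothetical values).
rng = np.random.default_rng(0)
bottom_up = rng.random((8, 8))      # stimulus-driven responses
gain = np.ones((8, 8))
gain[:4, :4] = 1.5                  # attended region gets a ~50% response boost
saliency = integrated_saliency(bottom_up, gain)
print(np.unravel_index(saliency.argmax(), saliency.shape))  # most salient location
```

In this sketch the most salient location is biased toward, but not forced into, the attended quadrant, reflecting the interaction of sensory drive with modulatory influence rather than either alone.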
A new computational framework for understanding vision in our brain
Visual attention selects only a tiny fraction of visual input information for further processing. Selection starts in the primary visual cortex (V1), which creates a bottom-up saliency map to guide the fovea to selected visual locations via gaze shifts. This motivates a new framework that views vision as consisting of encoding, selection, and decoding stages, placing selection on center stage. It suggests a massive loss of non-selected information from V1 downstream along the visual pathway. Hence, feedback from downstream visual cortical areas to V1 for better decoding (recognition), through analysis-by-synthesis, should query for additional information and be mainly directed at the foveal region. Accordingly, non-foveal vision is not only poorer in spatial resolution, but also more susceptible to many illusions.
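The encoding–selection–decoding framework can be illustrated with a toy sketch. The code below is an assumption-laden stand-in, not the V1 mechanism described above: bottom-up saliency is approximated by simple center-surround feature contrast, and selection is a winner-take-all choice of the next gaze target. Function names and filter sizes are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def bottom_up_saliency(feature_map, center=3, surround=15):
    """Toy stand-in for bottom-up saliency: salience is how much a
    location's feature response differs from its local surround."""
    center_resp = uniform_filter(feature_map, size=center)
    surround_resp = uniform_filter(feature_map, size=surround)
    return np.abs(center_resp - surround_resp)

def select_gaze_target(saliency_map):
    """Selection stage: winner-take-all over the saliency map,
    returning the location the fovea would be shifted to."""
    return np.unravel_index(saliency_map.argmax(), saliency_map.shape)

# Encoding stage stand-in: a feature map containing one odd-one-out item.
image = np.zeros((64, 64))
image[40:44, 10:14] = 1.0                   # unique, high-contrast item
saliency = bottom_up_saliency(image)
gaze_y, gaze_x = select_gaze_target(saliency)
print(f"gaze shift toward ({gaze_y}, {gaze_x})")  # near the odd-one-out item
# Decoding (recognition) would then operate mainly on the newly foveated region.
```

The point of the sketch is the ordering of stages: only the location that wins the selection stage is brought to the fovea, where subsequent decoding has access to the richest information, consistent with the claim that non-selected, non-foveal input is largely lost downstream.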