Visual Stimulus
A sense without sensors: how non-temporal stimulus features influence the perception and the neural representation of time
Any sensory experience of the world, from the touch of a caress to the smile on a friend’s face, is embedded in time and is often accompanied by a perception of its flow. The perception of time is therefore a peculiar sensory experience, built without dedicated sensors. How the perception of time and the content of a sensory experience interact to give rise to this unique percept is unclear. A few empirical findings demonstrate this interaction: for example, the speed of a moving object or the number of items displayed on a computer screen can bias the perceived duration of those objects. However, to what extent the coding of time is embedded within the coding of the stimulus itself, is sustained by the activity of the same or distinct neural populations, and is subserved by similar or distinct neural mechanisms is far from clear. Addressing these puzzles offers a way to gain insight into the mechanism(s) through which the brain represents the passage of time. In my talk I will present behavioral and neuroimaging studies showing how concurrent changes of visual stimulus duration, speed, visual contrast and numerosity shape and modulate the brain’s and the pupil’s responses and, in the case of numerosity and time, influence the topographic organization of these features along the cortical visual hierarchy.
Hierarchical transformation of visual event timing representations in the human brain: response dynamics in early visual cortex and timing-tuned responses in association cortices
Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. For example, this allows us to follow and interact with the precise timing of speech and sports. Here we investigate how visual event timing is represented and transformed across the brain’s hierarchy: from sensory processing areas, through multisensory integration areas, to frontal action planning areas. We hypothesized that the dynamics of neural responses to sensory events in sensory processing areas allow the derivation of event timing representations. This would allow higher-level processes such as multisensory integration and action planning to use sensory timing information without the need for specialized central pacemakers or processes. Using 7T fMRI and neural model-based analyses, we found responses that monotonically increase in amplitude with visual event duration and frequency, becoming increasingly clear from primary visual cortex to lateral occipital visual field maps. Beginning in area MT/V5, we found a gradual transition from monotonic to tuned responses, with response amplitudes peaking at different event timings at different recording sites. While monotonic response components were limited to the retinotopic location of the visual stimulus, timing-tuned response components were independent of the recording sites’ preferred visual field positions. These tuned responses formed a network of topographically organized timing maps in superior parietal, postcentral and frontal areas. From anterior to posterior timing maps, multiple events were increasingly integrated, response selectivity narrowed, and responses focused increasingly on the middle of the presented timing range.
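As a purely illustrative sketch (not the authors’ actual model), the two response regimes described above can be contrasted with a monotonic amplitude function and a timing-tuned Gaussian; the function names, parameter values, and the log-Gaussian tuning form are assumptions chosen for illustration:

```python
import math

def monotonic_response(duration_s, slope=1.0):
    # Early-visual-cortex regime: response amplitude grows
    # monotonically with event duration (here, a simple linear ramp).
    return slope * duration_s

def tuned_response(duration_s, preferred_s, width=0.3):
    # Association-cortex regime: amplitude peaks at a preferred
    # event timing, modeled as a Gaussian in log(duration).
    z = (math.log(duration_s) - math.log(preferred_s)) / width
    return math.exp(-0.5 * z * z)

# A recording site tuned to 0.5 s responds most to 0.5 s events:
print(tuned_response(0.5, 0.5))                            # 1.0
print(tuned_response(1.0, 0.5) < tuned_response(0.5, 0.5))  # True
```

In this toy picture, narrowing response selectivity along the map gradient corresponds to shrinking the `width` parameter.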
These results suggest that responses to event timing are transformed from the human brain’s sensory areas to the association cortices, with the event’s temporal properties being increasingly abstracted from the response dynamics and locations of early sensory processing. The resulting abstracted representation of event timing is then propagated through areas implicated in multisensory integration and action planning.
How does seeing help listening? Audiovisual integration in Auditory Cortex
Multisensory responses are ubiquitous in so-called unisensory cortex. However, despite their prevalence, we have very little understanding of what, if anything, they contribute to perception. In this talk I will focus on audio-visual integration in auditory cortex. Anatomical tracing studies highlight visual cortex as one source of visual input to auditory cortex. Using cortical cooling, we test the hypothesis that these inputs support audiovisual integration in ferret auditory cortex. Behavioural studies in humans support the idea that visual stimuli can help listeners to parse an auditory scene. This effect is paralleled in single units in auditory cortex, where responses to a sound mixture can be determined by the timing of a visual stimulus, such that sounds that are temporally coherent with a visual stimulus are preferentially represented. Our recent data therefore support the idea that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis, and that visual cortex plays a key role in this process.
Wiring & Rewiring: Experience-Dependent Circuit Development and Plasticity in Sensory Cortices
To build an appropriate representation of the sensory world, neural circuits are wired according to both intrinsic factors and external sensory stimuli. Moreover, brain circuits have the capacity to rewire in response to an altered environment, both during early development and throughout life. In this talk, I will give an overview of my past research on the dynamic processes underlying functional maturation and plasticity in rodent sensory cortices. I will also present data on the current and future research in my lab: the synaptic and circuit mechanisms that mature brain circuits employ to regulate the balance between stability and plasticity. By applying chronic two-photon calcium imaging and closed-loop visual exposure, we studied circuit changes at single-neuron resolution and show that concurrent running with visual stimulation is required to drive neuroplasticity in the adult brain.
Interactions between visual cortical neurons that give rise to conscious perception
I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives at a minimal strength, it will be stored in working memory and can be reported. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay: it acts as a cognitive blackboard, and conscious perception depends on it. Our results inspire new approaches to creating a visual prosthesis for the blind by building a direct interface with the visual brain. I will discuss how high-channel-count interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
A Cortical Circuit for Audio-Visual Predictions
Teamwork makes sensory streams work: our senses work together, learn from each other, and stand in for one another, and the result is perception and understanding. Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli (McGurk and MacDonald, 1976). During audio-visual associative learning, auditory cortex is thought to underlie multi-modal plasticity in visual cortex (McIntosh et al., 1998; Mishra et al., 2007; Zangenehpour and Zatorre, 2010). However, it is not well understood how processing in visual cortex is altered by an auditory stimulus that is predictive of a visual stimulus, or what mechanisms mediate such experience-dependent, audio-visual associations in sensory cortex. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices. We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to an experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be stored in long-range cortical connections and that, with learning, these cross-modal connections function to suppress the responses to predictable input.
Interactions between neurons during visual perception and restoring them in blindness
I will discuss the mechanisms that determine whether a weak visual stimulus will reach consciousness or not. If the stimulus is simple, early visual cortex acts as a relay station that sends the information to higher visual areas. If the stimulus arrives at a minimal strength, it will be stored in working memory. However, during more complex visual perceptions, which for example depend on the segregation of a figure from the background, the role of early visual cortex goes beyond that of a simple relay: it acts as a cognitive blackboard, and conscious perception depends on it. Our results also inspire new approaches to creating a visual prosthesis for the blind by building a direct interface with the visual cortex. I will discuss how high-channel-count interfaces with the visual cortex might be used to restore a rudimentary form of vision in blind individuals.
A no-report paradigm reveals that face cells multiplex consciously perceived and suppressed stimuli
Having conscious experience is arguably the most important reason why it matters to us whether we are alive or dead. A powerful paradigm to identify neural correlates of consciousness is binocular rivalry, wherein a constant visual stimulus evokes a varying conscious percept. It has recently been suggested that activity modulations observed during rivalry may represent the act of report rather than the conscious percept itself. Here, we performed single-unit recordings from face patches in macaque inferotemporal (IT) cortex using a novel no-report paradigm in which the animal’s conscious percept was inferred from eye movements. These experiments reveal two new results concerning the neural correlates of consciousness. First, we found that high proportions of IT neurons represented the conscious percept even without active report. Using high-channel recordings, including a new 128-channel Neuropixels-like probe, we were able to decode the conscious percept on single trials. Second, we found that even on single trials, modulation to rivalrous stimuli was weaker than that to unambiguous stimuli, suggesting that cells may encode not only the conscious percept but also the suppressed stimulus. To test this hypothesis, we varied the identity of the suppressed stimulus during binocular rivalry; we found that indeed, we could decode not only the conscious percept but also the suppressed stimulus from neural activity. Moreover, the same cells that were strongly modulated by the conscious percept also tended to be strongly modulated by the suppressed stimulus. Together, our findings indicate that (1) IT cortex possesses a true neural correlate of consciousness even in the absence of report, and (2) this correlate consists of a population code wherein single cells multiplex representation of the conscious percept and veridical physical stimulus, rather than a subset of cells perfectly reflecting consciousness.
Neural correlates of belief updates in the mouse secondary motor cortex
To make judgments, the brain must be able to infer the state of the world from often incomplete and ambiguous evidence. To probe neural circuits that perform the computations underlying such judgments, we developed a behavioral task for mice that required them to detect sustained increases in the speed of a continuously varying visual stimulus. In this talk, I will present evidence that the responses of secondary motor cortex to stimulus fluctuations in this task are consistent with updates of the animal’s belief that the change has occurred. These results establish a framework for mechanistic inquiries into neural circuits underlying inference during perceptual decision-making.
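The notion of a belief update can be made concrete with a toy Bayesian change-detection model. This is a hedged sketch, not the authors’ analysis: the Gaussian observation model, the per-step hazard rate, and all parameter values are assumptions chosen for illustration.

```python
import math

def belief_update(prior, x, mu_pre, mu_post, sigma, hazard):
    """One Bayesian update of P(change has occurred) after observing speed x.

    Toy model: speed samples are Gaussian with mean mu_pre before the change
    and mu_post after it; the change occurs with per-step hazard rate.
    """
    def gauss(v, mu):
        return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # Prior probability that the change is in effect at this time step.
    p_change = prior + (1.0 - prior) * hazard
    num = p_change * gauss(x, mu_post)
    den = num + (1.0 - p_change) * gauss(x, mu_pre)
    return num / den

# Belief rises sharply once observed speeds jump from ~1.0 to ~2.0:
belief = 0.01
for speed in [1.0, 1.0, 2.0, 2.0, 2.0]:
    belief = belief_update(belief, speed, mu_pre=1.0, mu_post=2.0,
                           sigma=0.3, hazard=0.05)
print(belief > 0.99)  # True
```

A neural signal consistent with such updates would track the jumps in this posterior rather than the raw stimulus speed.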
Natural stimulus encoding in the retina with linear and nonlinear receptive fields
Popular notions of how the retina encodes visual stimuli typically focus on the center-surround receptive fields of retinal ganglion cells, the output neurons of the retina. In this view, the receptive field acts as a linear filter on the visual stimulus, highlighting spatial contrast and providing efficient representations of natural images. Yet, we also know that many ganglion cells respond vigorously to fine spatial gratings that should not activate the linear filter of the receptive field. Thus, ganglion cells may integrate visual signals nonlinearly across space. In this talk, I will discuss how these (and other) nonlinearities relate to the encoding of natural visual stimuli in the retina. Based on electrophysiological recordings of ganglion and bipolar cells from mouse and salamander retina, I will present methods for assessing nonlinear processing in different cell types and examine their importance and potential function under natural stimulation.
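The contrast between linear and nonlinear spatial integration can be illustrated with a toy one-dimensional receptive-field center. The uniform weights, the rectified “subunit” pooling, and the stimulus values are illustrative assumptions, not a model presented in the talk:

```python
# Toy 1D receptive-field center sampled at 8 positions.
grating = [1.0, -1.0] * 4   # fine grating: bright/dark bars within the center
uniform = [1.0] * 8         # full-field luminance increment

weights = [1.0 / 8] * 8     # uniform center weights (toy RF)

def linear(stimulus):
    # Linear spatial integration: weighted sum over the receptive field.
    return sum(w * s for w, s in zip(weights, stimulus))

def subunit(stimulus):
    # Nonlinear integration: rectify each subunit before pooling,
    # as in nonlinear spatial integration by ganglion cells.
    return sum(max(w * s, 0.0) for w, s in zip(weights, stimulus))

print(linear(grating))   # 0.0 -> a linear RF is blind to the fine grating
print(subunit(grating))  # 0.5 -> rectified subunits still signal it
print(linear(uniform))   # 1.0 -> both models respond to a luminance step
```

The vigorous responses of many ganglion cells to fine gratings are the kind of observation this subunit-style nonlinearity is meant to capture.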
Vision in dynamically changing environments
Many visual systems can process information in dynamically changing environments. In general, visual perception scales with changes in the visual stimulus, or contrast, irrespective of background illumination. This is achieved by adaptation. However, visual perception is challenged when adaptation is not fast enough to deal with sudden changes in overall illumination, for example when gaze follows a moving object from bright sunlight into a shaded area. We have recently shown that the visual system of the fly solves this problem by propagating a corrective luminance-sensitive signal to higher processing stages. Using in vivo two-photon imaging and behavioural analyses, we showed that distinct OFF-pathway inputs encode contrast and luminance. The luminance-sensitive pathway is particularly required when processing visual motion in dim light, when pure contrast sensitivity underestimates the salience of a stimulus. Recent work in the lab has addressed the question of how two visual pathways obtain such fundamentally different sensitivities given common photoreceptor input. We are also currently working out the network-based strategies by which luminance- and contrast-sensitive signals are combined to guide appropriate visual behaviour. Together, I will discuss the molecular, cellular, and circuit mechanisms that ensure contrast computation, and therefore robust vision, in fast-changing visual scenes.
Exploiting color space geometry for visual stimulus design across animals
COSYNE 2022