Multisensory Integration
Dr. Jorge Mejias
The Cognitive and Systems Neuroscience Group is seeking a highly qualified and motivated candidate for a doctoral position in computational neuroscience, funded by the recently acquired NWA-ORC Consortium grant. The aim of this Consortium is to understand the fundamental principles our brains use to integrate information in noisy environments and under uncertain conditions, and then to implement those principles in next-generation algorithms for safe autonomous mobility. Within the Consortium, the main objective of this PhD project is to develop a biologically realistic computational model of multi-area brain circuits involved in multisensory perception under uncertainty. The model will be constrained by state-of-the-art neuroanatomical data (such as realistic brain connectivity and multiple cell types), and we will identify and study the biological aspects of the model that contribute to optimal integration of sensory information (following Bayesian and other principles). Model predictions will then be compared to experimental data from collaborators. The project will be supervised by Dr. Jorge Mejias, head of the Computational Neuroscience Lab, and Prof. Dr. Cyriel Pennartz, head of the Cognitive & Systems Neuroscience group. The candidate will also collaborate closely with other computational neuroscientists, experimental neuroscientists, theoreticians and machine learning experts.
You are expected:
- to perform research on multisensory integration and perception using computational neuroscience methods;
- to review relevant literature and acquire knowledge of neurobiology, perception and computational neuroscience;
- to build biologically realistic multi-area computer models of cortical circuits for multisensory perception, and compare their predictions with experimental findings;
- to collaborate with other groups in the Consortium;
- to take part in the teaching effort of the group, including supervision of bachelor's and master's students;
- to write scientific manuscripts and a PhD thesis.
Our offer: a temporary contract for 38 hours per week for the duration of four years (the initial contract will be for a period of 18 months and, after satisfactory evaluation, will be extended to a total duration of four years). This should lead to a dissertation (PhD thesis). We will draft an educational plan that includes attendance of courses and (international) meetings. We also expect you to assist in teaching undergraduate and master's students. Based on a full-time appointment (38 hours per week), the gross monthly salary will range from €2,434 in the first year to €3,111 (scale P) in the last year. This is exclusive of the 8% holiday allowance and 8.3% end-of-year bonus. A favourable tax agreement, the ‘30% ruling’, may apply to non-Dutch applicants. The Collective Labour Agreement of Dutch Universities is applicable.
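As a flavour of the kind of multi-area circuit modelling described in this project, the sketch below couples two threshold-linear firing-rate units, one driven by a visual input and one by an auditory input. The connectivity, time constants and noise level are illustrative assumptions, not the project's actual model.

```python
# A minimal sketch (not the project's model): two reciprocally coupled cortical
# areas, each a single firing-rate unit, receiving noisy visual and auditory input.
# All parameter values are illustrative assumptions.
import numpy as np

def simulate(T=2.0, dt=1e-3, tau=0.02, w_ff=1.0, w_fb=0.5,
             input_v=1.0, input_a=1.0, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r = np.zeros((n, 2))            # r[:, 0] = "visual" area, r[:, 1] = "auditory" area
    f = lambda x: np.maximum(x, 0)  # threshold-linear transfer function
    for t in range(1, n):
        drive_v = input_v + w_fb * r[t - 1, 1] + noise * rng.standard_normal()
        drive_a = input_a + w_ff * r[t - 1, 0] + noise * rng.standard_normal()
        r[t, 0] = r[t - 1, 0] + dt / tau * (-r[t - 1, 0] + f(drive_v))
        r[t, 1] = r[t - 1, 1] + dt / tau * (-r[t - 1, 1] + f(drive_a))
    return r

rates = simulate()
print(rates[-1])  # steady-state rates of the two coupled areas
```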
The Role of Spatial and Contextual Relations of Real-World Objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some of these rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violations of semantic rules influence interval timing, with the duration of scenes containing such violations being overestimated compared to scenes without violations. However, no study has yet investigated whether both semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive accounts. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments to test whether time dilation would be obtained in the presence of either type of scene violation and what role attention plays in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimations were driven by attentional accounts by manipulating the contrast of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in violation processing and, moreover, that their effect on timing is sensitive to attentional manipulations such as target contrast.
Measures and models of multisensory integration in reaction times
First, a new measure of multisensory integration (MI) for reaction times (RTs) is proposed that takes the entire RT distribution into account. Second, we present some recent developments in time-window-of-integration (TWIN) modeling, including a new proposal for the sound-induced flash illusion (SIFI).
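The abstract does not spell out the new whole-distribution measure, so the sketch below shows the classical benchmark such measures build on: Miller's race model inequality, which compares the full cumulative RT distribution on audiovisual trials against the sum of the unisensory distributions. The function names and the synthetic data are assumptions for illustration.

```python
# A hedged illustration of the classical race-model-inequality test (Miller, 1982),
# a standard whole-distribution benchmark for multisensory facilitation of RTs.
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times evaluated on t_grid."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Positive values indicate F_AV(t) > F_A(t) + F_V(t), i.e. integration
    beyond what a parallel race of unisensory channels can produce."""
    f_av = ecdf(rt_av, t_grid)
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return f_av - bound

# Illustrative synthetic data (not real measurements)
rng = np.random.default_rng(1)
t = np.linspace(150, 600, 200)            # milliseconds
rt_a = rng.normal(320, 40, 500)           # auditory-only trials
rt_v = rng.normal(350, 45, 500)           # visual-only trials
rt_av = rng.normal(280, 35, 500)          # audiovisual trials
print(race_model_violation(rt_av, rt_a, rt_v, t).max())
```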
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
Hierarchical transformation of visual event timing representations in the human brain: response dynamics in early visual cortex and timing-tuned responses in association cortices
Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. For example, this allows us to follow and interact with the precise timing of speech and sports. Here we investigate how visual event timing is represented and transformed across the brain’s hierarchy: from sensory processing areas, through multisensory integration areas, to frontal action planning areas. We hypothesized that the dynamics of neural responses to sensory events in sensory processing areas allows derivation of event timing representations. This would allow higher-level processes such as multisensory integration and action planning to use sensory timing information, without the need for specialized central pacemakers or processes. Using 7T fMRI and neural model-based analyses, we found responses that monotonically increase in amplitude with visual event duration and frequency, becoming increasingly clear from primary visual cortex to lateral occipital visual field maps. Beginning in area MT/V5, we found a gradual transition from monotonic to tuned responses, with response amplitudes peaking at different event timings in different recording sites. While monotonic response components were limited to the retinotopic location of the visual stimulus, timing-tuned response components were independent of the recording sites' preferred visual field positions. These tuned responses formed a network of topographically organized timing maps in superior parietal, postcentral and frontal areas. From anterior to posterior timing maps, multiple events were increasingly integrated, response selectivity narrowed, and responses focused increasingly on the middle of the presented timing range. These results suggest that responses to event timing are transformed from the human brain’s sensory areas to the association cortices, with the event’s temporal properties being increasingly abstracted from the response dynamics and locations of early sensory processing. The resulting abstracted representation of event timing is then propagated through areas implicated in multisensory integration and action planning.
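As a rough illustration of the two response classes contrasted in this work, the sketch below implements a monotonic response component and a timing-tuned component with a Gaussian preference in log duration and log frequency. The functional forms and parameter values are assumptions, not the fitted neural models from the study.

```python
# Sketch of the two response components the abstract contrasts: a "monotonic"
# component that grows with event duration/frequency, and a "tuned" component
# peaking at a preferred timing. Forms and parameters are illustrative.
import numpy as np

def monotonic_response(duration, frequency, w_dur=1.0, w_freq=1.0):
    # Response amplitude increases monotonically with both timing parameters
    return w_dur * duration + w_freq * frequency

def tuned_response(duration, frequency, pref_dur, pref_freq, sigma=0.3, gain=1.0):
    # Gaussian tuning around a preferred duration and frequency (log spacing)
    d = (np.log(duration) - np.log(pref_dur)) ** 2
    f = (np.log(frequency) - np.log(pref_freq)) ** 2
    return gain * np.exp(-(d + f) / (2 * sigma ** 2))

# Example: a recording site preferring 0.3 s events at 2 Hz
print(tuned_response(0.3, 2.0, pref_dur=0.3, pref_freq=2.0))   # peak response
print(tuned_response(0.6, 4.0, pref_dur=0.3, pref_freq=2.0))   # weaker response
```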
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
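A minimal sketch of the canonical computation referred to here, assuming the standard reliability-weighted (maximum-likelihood) fusion of two Gaussian cues; the specific models and priors used in the talk are not reproduced.

```python
# Reliability-weighted fusion of two Gaussian cues: each cue is weighted by its
# inverse variance, and the fused estimate is more precise than either cue alone.
# The numbers below are illustrative assumptions.
import numpy as np

def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates, weighting each by its reliability
    (inverse variance). Returns the fused estimate and its variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    mu = w_a * mu_a + w_v * mu_v
    var = 1 / (1 / var_a + 1 / var_v)
    return mu, var

# Example: a reliable visual location cue dominates a noisy auditory one
print(fuse_cues(mu_a=10.0, var_a=9.0, mu_v=0.0, var_v=1.0))  # -> (1.0, 0.9)
```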
What happens to our ability to perceive multisensory information as we age?
Our ability to perceive the world around us can be affected by a number of factors, including the nature of the external information, prior experience of the environment, and the integrity of the underlying perceptual system. A particular challenge for the brain is to maintain coherent perception from information encoded by the peripheral sensory organs, whose function is affected by typical developmental changes across the lifespan. Yet how the brain adapts to the maturation of the senses, as well as to experiential changes in the multisensory environment, is poorly understood. Over the past few years, we have used a range of multisensory tasks to investigate the effect of ageing on the brain's ability to merge sensory inputs. In particular, we have embedded an audio-visual task based on the sound-induced flash illusion (SIFI) into a large-scale, longitudinal study of ageing. Our findings support the idea that the temporal binding window (TBW) is modulated by age and reveal important individual differences in this TBW that may have clinical implications. However, our investigations also suggest that the TBW is experience-dependent, with evidence for both long- and short-term behavioural plasticity. An overview of these findings, including recent evidence on how multisensory integration may be associated with higher-order functions, will be discussed.
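For readers unfamiliar with the temporal binding window, the sketch below models the probability of binding a beep and a flash as a Gaussian function of their asynchrony. The Gaussian form, the window widths and the age labels are illustrative assumptions rather than the study's fitted values.

```python
# Temporal binding window (TBW) illustration: probability that an auditory beep
# and a visual flash are perceived as one event, as a function of their asynchrony.
# Window widths below are assumptions chosen for illustration only.
import numpy as np

def p_binding(asynchrony_ms, tbw_sd_ms):
    """Probability of audiovisual binding given stimulus onset asynchrony."""
    return np.exp(-0.5 * (asynchrony_ms / tbw_sd_ms) ** 2)

soa = np.array([0, 50, 100, 200])          # beep-flash asynchronies in ms
print(p_binding(soa, tbw_sd_ms=80))        # narrower window (e.g. one age group)
print(p_binding(soa, tbw_sd_ms=150))       # wider window (e.g. another age group)
```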
Multisensory Integration: Development, Plasticity, and Translational Applications
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
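A hedged sketch of the counterfactual simulation idea: a cause's strength is approximated by how much the probability of the outcome changes when the candidate cause is removed from noisy re-simulations of the scene. The simulate_world function is a hypothetical stand-in for a physics engine, not the authors' implementation.

```python
# Counterfactual contrast via noisy re-simulation: compare how often the target
# event occurs with and without the candidate cause. simulate_world is a
# hypothetical placeholder for an intuitive-physics rollout.
import random

def simulate_world(include_cause, noise=0.05):
    """Hypothetical noisy rollout: returns True if the target event occurs."""
    p = 0.9 if include_cause else 0.15
    p = min(max(p + random.gauss(0.0, noise), 0.0), 1.0)   # noise in the simulation
    return random.random() < p

def counterfactual_cause_strength(n_samples=10_000):
    # P(outcome with cause) - P(outcome without cause): a simple counterfactual contrast
    with_cause = sum(simulate_world(True) for _ in range(n_samples)) / n_samples
    without_cause = sum(simulate_world(False) for _ in range(n_samples)) / n_samples
    return with_cause - without_cause

print(counterfactual_cause_strength())
```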
Critical Learning Periods for Multisensory Integration in Deep Networks
COSYNE 2023
Examining multisensory integration in weakly electric fish through manipulating sensory salience
FENS Forum 2024
Modality specificity of multisensory integration and decision-making in frontal cortex and superior colliculus
FENS Forum 2024
Neuronal circuit for multisensory integration in higher visual cortex
FENS Forum 2024
Postural constraints affect the optimal weighting of multisensory integration during visuo-manual coordination
FENS Forum 2024