Multisensory
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Multisensory perception in the metaverse
Multisensory computations underlying flavor perception and food choice
Where Are You Moving? Assessing Precision, Accuracy, and Temporal Dynamics in Multisensory Heading Perception Using Continuous Psychophysics
Time perception in film viewing as a function of film editing
Filmmakers and editors have empirically developed techniques to ensure the spatiotemporal continuity of a film's narration. In terms of time, editing techniques (e.g., elliptical, overlapping, or cut minimization) allow for the manipulation of the perceived duration of events as they unfold on screen. More specifically, a scene can be edited to be time compressed, expanded, or real-time in terms of its perceived duration. Despite the consistent application of these techniques in filmmaking, their perceptual outcomes have not been experimentally validated. Given that viewing a film is experienced as a precise simulation of the physical world, the use of cinematic material to examine aspects of time perception allows for experimentation with high ecological validity, while filmmakers gain more insight into how empirically developed techniques influence viewers' perception of time. Here, we investigated how such techniques for temporally manipulating an action affect a scene's perceived duration. Specifically, we presented videos depicting different actions (e.g., a woman talking on the phone), edited according to the techniques applied for temporal manipulation, and asked participants to make verbal estimates of the presented scenes' perceived durations. Analysis of the data revealed that the duration of expanded scenes was significantly overestimated as compared to that of compressed and real-time scenes, as was the duration of real-time scenes as compared to that of compressed scenes. Therefore, our results validate the empirical techniques applied for the modulation of a scene's perceived duration. We also found that the effects of scene type and editing technique on time estimates interacted with the characteristics and the action of the presented scene. Thus, these findings add to the discussion that the content and characteristics of a scene, along with the editing technique applied, can also modulate perceived duration. Our findings are discussed in light of current timing frameworks, as well as attentional saliency algorithms measuring the visual saliency of the presented stimuli.
The Role of Spatial and Contextual Relations of Real-World Objects in Interval Timing
In the real world, object arrangement follows a number of rules. Some of these rules pertain to the spatial relations between objects and scenes (i.e., syntactic rules) and others to the contextual relations (i.e., semantic rules). Research has shown that violation of semantic rules influences interval timing, with the duration of scenes containing such violations being overestimated as compared to scenes with no violations. However, no study has yet investigated whether both semantic and syntactic violations affect timing in the same way. Furthermore, it is unclear whether the effect of scene violations on timing is due to attentional or other cognitive mechanisms. Using an oddball paradigm and real-world scenes with or without semantic and syntactic violations, we conducted two experiments on whether time dilation would be obtained in the presence of any type of scene violation and on the role of attention in any such effect. Our results from Experiment 1 showed that time dilation indeed occurred in the presence of syntactic violations, while time compression was observed for semantic violations. In Experiment 2, we further investigated whether these estimates were driven by attention, using a contrast manipulation of the target objects. The results showed that increased contrast led to duration overestimation for both semantic and syntactic oddballs. Together, our results indicate that scene violations differentially affect timing due to differences in how the violations are processed and, moreover, their effect on timing appears to be sensitive to attentional manipulations such as target contrast.
Measures and models of multisensory integration in reaction times
First, a new measure of multisensory integration (MI) for reaction times (RTs) is proposed that takes the entire RT distribution into account. Second, we present some recent developments in time-window-of-integration (TWIN) modeling, including a new proposal for the sound-induced flash illusion (SIFI).
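As a rough illustration of what a distribution-level measure can look like (an assumption for illustration, not necessarily the measure proposed in the talk), the sketch below compares the empirical CDF of redundant-target RTs against the classic race-model bound and summarizes the violation as an area:

```python
# Illustrative sketch: quantify multisensory gain across the whole RT distribution
# by comparing the audiovisual empirical CDF with the race-model bound
# F_A(t) + F_V(t) (Miller's inequality). Not the speaker's specific measure.
import numpy as np

def ecdf(samples, t_grid):
    """Empirical cumulative distribution function evaluated on a time grid."""
    samples = np.sort(np.asarray(samples))
    return np.searchsorted(samples, t_grid, side="right") / samples.size

def race_model_violation(rt_audio, rt_visual, rt_av, n_points=500):
    """Area (in ms) by which the audiovisual CDF exceeds the race-model bound."""
    all_rt = np.concatenate([rt_audio, rt_visual, rt_av])
    t_grid = np.linspace(all_rt.min(), all_rt.max(), n_points)
    bound = np.minimum(ecdf(rt_audio, t_grid) + ecdf(rt_visual, t_grid), 1.0)
    excess = np.maximum(ecdf(rt_av, t_grid) - bound, 0.0)
    return np.trapz(excess, t_grid)

# Example with simulated reaction times (ms)
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(280, 35, 200)
print(f"Race-model violation area: {race_model_violation(rt_a, rt_v, rt_av):.1f} ms")
```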
Bayesian expectation in the perception of the timing of stimulus sequences
In this virtual journal club, Dr Di Luca will present findings from a series of psychophysical investigations in which he measured sensitivity and bias in the perception of the timing of stimuli. He will show how improved detection with longer sequences and biases in reporting isochrony can be accounted for by optimal statistical predictions. He also found that the timing of stimuli that occasionally deviate from a regularly paced sequence is perceptually distorted to appear more regular; this distortion depends on whether the context in which these sequences are presented is itself regular. Dr Di Luca will present a Bayesian model for the combination of dynamically updated expectations, in the form of an a priori probability, with incoming sensory information. These findings contribute to our understanding of how the brain processes temporal information to shape perceptual experience.
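A minimal sketch, assuming a Gaussian prior centered on perfect isochrony and a Gaussian sensory likelihood (not Dr Di Luca's actual model), of how such an expectation pulls a deviant onset toward regularity:

```python
# Minimal sketch (assumed Gaussian forms) of prior expectations "regularizing"
# the perceived timing of a deviant onset in an otherwise isochronous sequence.

def posterior_onset(measured_deviation_ms, sensory_sd_ms, prior_sd_ms):
    """Posterior mean deviation from perfect isochrony (prior mean = 0 ms).

    The prior reflects the expectation, built up over the sequence, that the
    next onset will fall exactly on the regular beat; the likelihood reflects
    the noisy sensory measurement of the actual onset time.
    """
    w_sensory = prior_sd_ms**2 / (prior_sd_ms**2 + sensory_sd_ms**2)
    return w_sensory * measured_deviation_ms

# A tone arriving 60 ms late is perceived as only ~23 ms late when the
# expectation of regularity is strong (narrow prior).
print(posterior_onset(measured_deviation_ms=60, sensory_sd_ms=50, prior_sd_ms=40))
```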
Multisensory perception, learning, and memory
Making Sense of Our Senses: Multisensory Processes across the Human Lifespan
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
Using Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception
To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab, and others, have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with those of humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one’s perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.
Prosody in the voice, face, and hands changes which words you hear
Speech may be characterized as conveying both segmental information (i.e., about vowels and consonants) as well as suprasegmental information - cued through pitch, intensity, and duration - also known as the prosody of speech. In this contribution, I will argue that prosody shapes low-level speech perception, changing which speech sounds we hear. Perhaps the most notable example of how prosody guides word recognition is the phenomenon of lexical stress, whereby suprasegmental F0, intensity, and duration cues can distinguish otherwise segmentally identical words, such as "PLAto" vs. "plaTEAU" in Dutch. Work from our group showcases the vast variability in how different talkers produce stressed vs. unstressed syllables, while also unveiling the remarkable flexibility with which listeners can learn to handle this between-talker variability. It also emphasizes that lexical stress is a multimodal linguistic phenomenon, with the voice, lips, and even hands conveying stress in concert. In turn, human listeners actively weigh these multisensory cues to stress depending on the listening conditions at hand. Finally, lexical stress is presented as having a robust and lasting impact on low-level speech perception, even down to changing vowel perception. Thus, prosody - in all its multisensory forms - is a potent factor in speech perception, determining what speech sounds we hear.
How the brain uses experience to construct its multisensory capabilities
Multisensory processing of anticipatory and consummatory food cues
Multisensory influences on vision: Sounds enhance and alter visual-perceptual processing
Visual perception is traditionally studied in isolation from other sensory systems, and while this approach has been exceptionally successful, in the real world, visual objects are often accompanied by sounds, smells, tactile information, or taste. How is visual processing influenced by these other sensory inputs? In this talk, I will review studies from our lab showing that a sound can influence the perception of a visual object in multiple ways. In the first part, I will focus on spatial interactions between sound and sight, demonstrating that co-localized sounds enhance visual perception. Then, I will show that these cross-modal interactions also occur at a higher contextual and semantic level, where naturalistic sounds facilitate the processing of real-world objects that match these sounds. Throughout my talk I will explore to what extent sounds not only improve visual processing but also alter perceptual representations of the objects we see. Most broadly, I will argue for the importance of considering multisensory influences on visual perception for a more complete understanding of our visual experience.
Multisensory perception with newly learned sensory skills
It’s All About Motion: Functional organization of the multisensory motion system at 7T
The human middle temporal complex (hMT+) is of crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, asking whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses for visual, tactile and auditory stimuli in hMT+.
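For readers unfamiliar with the distinction, it can be stated compactly (a standard formulation, not necessarily the speaker's specific model): the speed of a drifting stimulus is the ratio of its temporal to spatial frequency, so a directly speed-tuned population responds along iso-speed lines in frequency space, whereas separable tuning factorizes into independent spatial- and temporal-frequency curves.

```latex
v = \frac{TF}{SF}
\qquad
\underbrace{R(SF,\,TF) = f\!\left(\tfrac{TF}{SF}\right)}_{\text{direct speed tuning}}
\qquad
\underbrace{R(SF,\,TF) = g(SF)\,h(TF)}_{\text{separable SF/TF tuning}}
```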
Using multisensory plasticity to rehabilitate vision
Hierarchical transformation of visual event timing representations in the human brain: response dynamics in early visual cortex and timing-tuned responses in association cortices
Quantifying the timing (duration and frequency) of brief visual events is vital to human perception, multisensory integration and action planning. For example, this allows us to follow and interact with the precise timing of speech and sports. Here we investigate how visual event timing is represented and transformed across the brain’s hierarchy: from sensory processing areas, through multisensory integration areas, to frontal action planning areas. We hypothesized that the dynamics of neural responses to sensory events in sensory processing areas allows derivation of event timing representations. This would allow higher-level processes such as multisensory integration and action planning to use sensory timing information, without the need for specialized central pacemakers or processes. Using 7T fMRI and neural model-based analyses, we found responses that monotonically increase in amplitude with visual event duration and frequency, becoming increasingly clear from primary visual cortex to lateral occipital visual field maps. Beginning in area MT/V5, we found a gradual transition from monotonic to tuned responses, with response amplitudes peaking at different event timings in different recording sites. While monotonic response components were limited to the retinotopic location of the visual stimulus, timing-tuned response components were independent of the recording sites' preferred visual field positions. These tuned responses formed a network of topographically organized timing maps in superior parietal, postcentral and frontal areas. From anterior to posterior timing maps, multiple events were increasingly integrated, response selectivity narrowed, and responses focused increasingly on the middle of the presented timing range. These results suggest that responses to event timing are transformed from the human brain’s sensory areas to the association cortices, with the event’s temporal properties being increasingly abstracted from the response dynamics and locations of early sensory processing. The resulting abstracted representation of event timing is then propagated through areas implicated in multisensory integration and action planning.
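As an illustrative sketch only (assumed functional forms, not the authors' fitted neural models), the two response components described above could be written as a monotonic function of event timing versus a Gaussian tuned to a preferred duration and frequency:

```python
# Illustrative functional forms (assumptions for exposition, not fitted models)
# for monotonic vs. timing-tuned response components.
import numpy as np

def monotonic_response(duration_s, frequency_hz, scale=1.0):
    """Amplitude that rises monotonically with event duration and frequency."""
    return scale * duration_s * frequency_hz

def tuned_response(duration_s, frequency_hz, pref_dur_s, pref_freq_hz, sigma=0.3):
    """Gaussian tuning around a preferred duration and frequency (log axes)."""
    d2 = np.log(duration_s / pref_dur_s) ** 2 + np.log(frequency_hz / pref_freq_hz) ** 2
    return np.exp(-d2 / (2 * sigma**2))

# A site tuned to 0.4 s / 2 Hz peaks there and falls off for other timings,
# whereas the monotonic component keeps growing with longer/faster events.
for dur, freq in [(0.2, 1.0), (0.4, 2.0), (0.8, 4.0)]:
    print(dur, freq, monotonic_response(dur, freq),
          round(tuned_response(dur, freq, 0.4, 2.0), 3))
```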
Multisensory interactions in temporal frequency processing
It’s All About Motion: Encoding of speed in the human Middle Temporal cortex
The human middle temporal complex (hMT+) is of crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, asking whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will show that this mechanism plays a role in evaluating multisensory responses for visual, tactile and auditory motion stimuli in hMT+.
The Multisensory Scaffold for Perception and Rehabilitation
Healing the brain via multisensory technologies and using these technologies to better understand the brain
The effect of gravity on the perception of distance and self-motion: a multisensory perspective
Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality - with either HMDs or specially built tilting environments - to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity - or even just perceived orientation - affects your perception of how far away things are (they appear closer when supine or prone). Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone). Adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.
From natural scene statistics to multisensory integration: experiments, models and applications
To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
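The canonical reliability-weighted fusion equations that such Bayesian accounts typically build on (standard results under Gaussian assumptions; the specific models in the talk may differ) are:

```latex
\hat{s} = w_V s_V + w_A s_A,
\qquad
w_V = \frac{\sigma_V^{-2}}{\sigma_V^{-2} + \sigma_A^{-2}},
\quad
w_A = \frac{\sigma_A^{-2}}{\sigma_V^{-2} + \sigma_A^{-2}},
\qquad
\sigma_{VA}^{-2} = \sigma_V^{-2} + \sigma_A^{-2}
```

The fused estimate is pulled toward the more reliable cue, and its variance is never larger than that of either cue alone.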
The vestibular system: a multimodal sense
The vestibular system plays an essential role in everyday life, contributing to a surprising range of functions from reflexes to the highest levels of perception and consciousness. Three orthogonal semicircular canals detect rotational movements of the head, and the otolith organs sense translational acceleration, including the gravitational vertical. But how are vestibular signals encoded by the human brain? We have recently combined innovative methods for eliciting virtual rotation and translation sensations with fMRI to identify brain areas representing vestibular signals. We have identified a bilateral inferior parietal, ventral premotor/anterior insula and prefrontal network and confirmed that these areas reliably carry information about rotation and translation. We have also investigated how vestibular signals are integrated with other sensory cues to generate our perception of the external environment.
What happens to our ability to perceive multisensory information as we age?
Our ability to perceive the world around us can be affected by a number of factors including the nature of the external information, prior experience of the environment, and the integrity of the underlying perceptual system. A particular challenge for the brain is to maintain a coherent perception from information encoded by the peripheral sensory organs, whose function is affected by typical developmental changes across the lifespan. Yet, how the brain adapts to the maturation of the senses, as well as experiential changes in the multisensory environment, is poorly understood. Over the past few years, we have used a range of multisensory tasks to investigate the effect of ageing on the brain’s ability to merge sensory inputs. In particular, we have embedded an audio-visual task based on the sound-induced flash illusion (SIFI) into a large-scale, longitudinal study of ageing. Our findings support the idea that the temporal binding window (TBW) is modulated by age and reveal important individual differences in this TBW that may have clinical implications. However, our investigations also suggest the TBW is experience-dependent, with evidence for both long- and short-term behavioural plasticity. An overview of these findings, including recent evidence on how multisensory integration may be associated with higher-order functions, will be discussed.
NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain
Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
How does seeing help listening? Audiovisual integration in Auditory Cortex
Multisensory responses are ubiquitous in so-called unisensory cortex. However, despite their prevalence, we have very little understanding of what – if anything - they contribute to perception. In this talk I will focus on audio-visual integration in auditory cortex. Anatomical tracing studies highlight visual cortex as one source of visual input to auditory cortex. Using cortical cooling we test the hypothesis that these inputs support audiovisual integration in ferret auditory cortex. Behavioural studies in humans support the idea that visual stimuli can help listeners to parse an auditory scene. This effect is paralleled in single units in auditory cortex, where responses to a sound mixture can be determined by the timing of a visual stimulus such that sounds that are temporally coherent with a visual stimulus are preferentially represented. Our recent data therefore support the idea that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis, and that visual cortex plays a key role in this process.
Conflict in Multisensory Perception
Multisensory perception is often studied through the effects of inter-sensory conflict, such as in the McGurk effect, the Ventriloquist illusion, and the Rubber Hand Illusion. Moreover, Bayesian approaches to cue fusion and causal inference overwhelmingly draw on cross-modal conflict to measure and to model multisensory perception. Given the prevalence of conflict, it is remarkable that accounts of multisensory perception have so far neglected the theory of conflict monitoring and cognitive control, established about twenty years ago. I hope to make a case for the role of conflict monitoring and resolution during multisensory perception. To this end, I will present EEG and fMRI data showing that cross-modal conflict in speech, resulting in either integration or segregation, triggers neural mechanisms of conflict detection and resolution. I will also present data supporting a role of these mechanisms during perceptual conflict in general, using Binocular Rivalry, surrealistic imagery, and cinema. Based on this preliminary evidence, I will argue that it is worth considering the potential role of conflict in multisensory perception and its incorporation in a causal inference framework. Finally, I will raise some potential problems associated with this proposal.
Migraine: a disorder of excitatory-inhibitory balance in multiple brain networks? Insights from genetic mouse models of the disease
Migraine is much more than an episodic headache. It is a complex brain disorder, characterized by a global dysfunction in multisensory information processing and integration. In a third of patients, the headache is preceded by transient sensory disturbances (aura), whose neurophysiological correlate is cortical spreading depression (CSD). The molecular, cellular and circuit mechanisms of the primary brain dysfunctions that underlie migraine onset, susceptibility to CSD and altered sensory processing remain largely unknown and are major open issues in the neurobiology of migraine. Genetic mouse models of a rare monogenic form of migraine with aura provide a unique experimental system to tackle these key unanswered questions. I will describe the functional alterations we have uncovered in the cerebral cortex of genetic mouse models and discuss the insights into the cellular and circuit mechanisms of migraine obtained from these findings.
Development of multisensory perception and attention and their role in audiovisual speech processing
Do you hear what I see: Auditory motion processing in blind individuals
Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the ‘visual’ motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional roles. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by its being a metamodal structure containing “operators that execute a given function or computation regardless of sensory input modality”. Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals - relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
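The separability notion at stake here can be stated compactly (a standard formulation, not the original authors' notation): a separable spatiotemporal filter factorizes into a spatial and a temporal profile, whereas the non-separable filters characteristic of visual motion processing are oriented in space-time, for example a receptive-field profile that translates at velocity v.

```latex
\text{separable:}\quad f(x, t) = g(x)\,h(t)
\qquad\qquad
\text{non-separable (space--time oriented):}\quad f(x, t) = g(x - vt)
```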
Seeing with technology: Exchanging the senses with sensory substitution and augmentation
What is perception? Our sensory systems transduce information about the external world into electrochemical signals that somehow give rise to our conscious experience of our environment. Normally there is too much information to be processed in any given moment, and the mechanisms of attention focus the limited resources of the mind on some information at the expense of other information. My research has advanced from first examining visual perception and attention to now examining how multisensory processing contributes to perception and cognition. There are fundamental constraints on how much information can be processed by the different senses on their own and in combination. Here I will explore information processing from the perspective of sensory substitution and augmentation, and how "seeing" with the ears and tongue can advance fundamental and translational research.
Plasticity and learning in multisensory perception for action
Multisensory Integration: Development, Plasticity, and Translational Applications
Multisensory speech perception
Music training effects on multisensory and cross-sensory transfer processing: from cross-sectional to RCT studies
Multisensory self in spatial navigation
The emergence of a ‘V1-like’ structure for soundscapes representing vision in the adult brain in the absence of visual experience
Multisensory encoding of self-motion in the retrosplenial cortex and beyond
In order to successfully navigate through the environment, animals must accurately estimate the status of their motion with respect to the surrounding scene and objects. In this talk, I will present our recent work on how retrosplenial cortical (RSC) neurons combine vestibular and visual signals to reliably encode the direction and speed of head turns during passive motion and active navigation. I will discuss these data in the context of RSC long-range connectivity and further show our ongoing work on building population-level models of motion representation across cortical and subcortical networks.
Multisensory development and the role of visual experience
Science and technology to understand developmental multisensory processing
Clinical, Cognitive and Neuroscience Insights into Multisensory Processes
Brain (re)organization and sensory deprivation: Recycling the multisensory scaffolding of functional brain networks
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
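A schematic sketch of the counterfactual-simulation idea under toy assumptions (a made-up one-dimensional "world", not the authors' implementation): the causal judgment about ball A is read out as the probability that the outcome would have differed had A been absent, estimated by repeated noisy simulations.

```python
# Toy sketch of a counterfactual simulation account of causal judgment
# (hypothetical 1-D physics and noise model, not the authors' implementation).
import random

def simulate(ball_a_present, rng, noise_sd=0.2):
    """Toy world: ball B goes through the gate unless ball A deflects it."""
    deflection = 1.0 if ball_a_present else 0.0
    final_position = 0.5 + deflection + rng.gauss(0.0, noise_sd)
    return final_position < 1.0  # True = B goes through the gate

def causal_responsibility(n_samples=10_000, seed=0):
    """P(B would have gone through had A been absent), given that A was present
    and B in fact missed; higher values mean A is judged more causal."""
    rng = random.Random(seed)
    hits = sum(simulate(ball_a_present=False, rng=rng) for _ in range(n_samples))
    return hits / n_samples

print(f"Judged causal responsibility of A for the miss: {causal_responsibility():.2f}")
```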
How multisensory perception is shaped by causal inference and serial effects
The effect of gravity on the perception of distance and self-motion
Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality - with either HMDs or specially built tilting environments - to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity - or even just perceived orientation - affects your perception of how far away things are (they appear closer when supine or prone). Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone). Adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.
Applications of Multisensory Facilitation of Learning
In this talk I’ll discuss the translation of findings on multisensory facilitation of learning into cognitive training. I’ll first review some early findings of multisensory facilitation of learning and then discuss how we have been translating these basic science approaches into gamified training interventions to improve cognitive functions. I’ll touch on approaches to training vision, hearing and working memory that we are developing at the UCR Brain Game Center for Mental Fitness and Well-being. I look forward to discussing both the basic science and the complexities of translating approaches from basic science into the more complex frameworks often used in interventions.
Multisensory Perception: Behaviour, Computations and Neural Mechanisms
Our senses are constantly bombarded with a myriad of diverse signals. Transforming this sensory cacophony into a coherent percept of our environment relies on solving two computational challenges: First, we need to solve the causal inference problem - deciding whether signals come from a common cause and thus should be integrated, or come from different sources and be treated independently. Second, when there is a common cause, we should integrate signals across the senses weighted in proportion to their sensory reliabilities. I discuss recent research at the behavioural, computational and neural systems level that investigates how the brain addresses these two computational challenges in multisensory perception.
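A compact sketch of the two computations described above, under assumed zero-mean Gaussian priors and likelihoods (in the spirit of Körding et al., 2007; the specific models discussed in the talk may differ):

```python
# Sketch of Bayesian causal inference for two cues (assumed Gaussian generative
# model with a zero-mean spatial prior; illustrative, not the speaker's code).
import numpy as np

def fuse(x_v, x_a, var_v, var_a):
    """Reliability-weighted average, appropriate when the cues share one cause."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)
    return w_v * x_v + (1 - w_v) * x_a

def p_common(x_v, x_a, var_v, var_a, var_p=100.0, prior_c=0.5):
    """Posterior probability that visual and auditory measurements share a cause."""
    # Likelihood of the pair of measurements under a single shared source
    denom = var_v * var_a + var_v * var_p + var_a * var_p
    quad = (x_v - x_a) ** 2 * var_p + x_v**2 * var_a + x_a**2 * var_v
    like_c1 = np.exp(-0.5 * quad / denom) / (2 * np.pi * np.sqrt(denom))
    # Likelihood under two independent sources
    like_c2 = (
        np.exp(-0.5 * x_v**2 / (var_v + var_p)) / np.sqrt(2 * np.pi * (var_v + var_p))
        * np.exp(-0.5 * x_a**2 / (var_a + var_p)) / np.sqrt(2 * np.pi * (var_a + var_p))
    )
    return like_c1 * prior_c / (like_c1 * prior_c + like_c2 * (1 - prior_c))

# Small audio-visual discrepancy -> integration is likely; large -> segregation.
for x_a in (4.0, 20.0):
    pc = p_common(x_v=2.0, x_a=x_a, var_v=4.0, var_a=16.0)
    print(f"x_a = {x_a:>5}: P(common cause) = {pc:.2f}, "
          f"fused estimate = {fuse(2.0, x_a, 4.0, 16.0):.1f}")
```

The final percept is then typically modeled as a mixture of the fused and segregated estimates weighted by this posterior probability of a common cause.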
Blood is thicker than water
According to Hamilton’s inclusive fitness hypothesis, kinship is an organizing principle of social behavior. Behavioral evidence supporting this hypothesis includes the ability to recognize kin and the adjustment of behavior based on kin preference with respect to altruism, attachment and care for offspring in insect societies. Despite the fundamental importance of kinship behavior, the underlying neural mechanisms are poorly understood. We repeated behavioral experiments by Hepper on the preference of rats for their kin. Consistent with Hepper’s work, we find a developmental time course for kinship behavior, where rats prefer sibling interactions at young ages and express non-sibling preferences at older ages. In probing the brain areas responsible for this behavior, we find that aspiration lesions of the lateral septum, but not control lesions of cingulate cortices, eliminate the behavioral preference of young animals for their siblings and of older rats for non-siblings. We then presented awake and anaesthetized rats with odors and calls of age- and status-matched kin (siblings and mothers) and non-kin (non-siblings and non-mothers) conspecifics, while performing in vivo juxta-cellular and whole-cell patch-clamp recordings in the lateral septum. We find multisensory (olfactory and auditory) neuronal responses, whereby neurons typically responded preferentially but not exclusively to individual social stimuli. Non-kin-odor-responsive neurons were found dorsally, while kin-odor-responsive neurons were located ventrally in the lateral septum. To our knowledge, such an ordered representation of response preferences according to kinship has not been previously observed, and we refer to this organization as nepotopy. Nepotopy could be instrumental in reading out kinship from preferential but not exclusive responses and in the generation of differential behavior according to kinship. Thus, our results are consistent with a role of the lateral septum in organizing mammalian kinship behavior.
Non-feedforward architectures enable diverse multisensory computations
Bernstein Conference 2024
Recurrence in temporal multisensory processing
Bernstein Conference 2024
Investigation of a multilevel multisensory circuit underlying female decision making in Drosophila
COSYNE 2022
Critical Learning Periods for Multisensory Integration in Deep Networks
COSYNE 2023
Topography of multisensory convergence throughout the mouse cortex
COSYNE 2023
An automated behavioral platform for multisensory decision-making in mice
FENS Forum 2024
Behavioral regression in Syn II KO mice: From latent synaptopathy to overt dysfunctions in multisensory social processing
FENS Forum 2024
Examining multisensory integration in weakly electric fish through manipulating sensory salience
FENS Forum 2024
Modality specificity of multisensory integration and decision-making in frontal cortex and superior colliculus
FENS Forum 2024
Multisensory stimulation improves target tracking in zebrafish during rheotaxis
FENS Forum 2024
Neuronal circuit for multisensory integration in higher visual cortex
FENS Forum 2024
A novel multisensory stimulation setup for refuge tracking in weakly electric fish
FENS Forum 2024
Postural constraints affect the optimal weighting of multisensory integration during visuo-manual coordination
FENS Forum 2024
Touching what you see: Multisensory location coding in mouse posterior parietal cortex
FENS Forum 2024
A virtual-reality task to investigate multisensory object recognition in mice
FENS Forum 2024