Cue Integration
Distance-tuned neurons drive specialized path integration calculations in medial entorhinal cortex
During navigation, animals estimate their position using path integration and landmarks, engaging many brain areas. Whether these areas follow specialized or universal cue integration principles remains incompletely understood. We combine electrophysiology with virtual reality to quantify cue integration across thousands of neurons in three navigation-relevant areas: primary visual cortex (V1), retrosplenial cortex (RSC), and medial entorhinal cortex (MEC). Compared with V1 and RSC, path integration influences position estimates more in MEC, and conflicts between path integration and landmarks trigger remapping more readily. Whereas MEC codes position prospectively, V1 codes position retrospectively, and RSC is intermediate between the two. Lowered visual contrast increases the influence of path integration on position estimates only in MEC. These properties are most pronounced in a population of MEC neurons, overlapping with grid cells, tuned to distance run in darkness. These results demonstrate the specialized role that path integration plays in MEC compared with other navigation-relevant cortical areas.
Looking and listening while moving
In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least three reasons why this turns out to be a difficult problem for the brain to solve (and for us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. The literature has neatly sidestepped this problem by preventing eye and head movements, yet self-movement is the norm. Second, self-movement itself creates visual and auditory image motion, so correct interpretation requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (perhaps all) of these problems could be solved by treating the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model from simple judgements of audiovisual motion within a circular array of speakers and LEDs that surrounds the participant while they execute self-controlled head movements.
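To make the "classic cue integration framework" concrete, the sketch below implements the standard reliability-weighted (maximum-likelihood) combination of two Gaussian cues, extended with a correlation term for partially correlated noise. It is an illustration only: the function name and parameter values are assumptions, and the abstract's specific treatment of cue bias is not reproduced here. With rho = 0 it reduces to the familiar independent-cue rule in which each cue is weighted by its relative reliability.

```python
import numpy as np

def combine_cues(s_v, s_a, sigma_v, sigma_a, rho=0.0):
    """Reliability-weighted combination of two Gaussian cue estimates.

    s_v, s_a          : single-cue estimates of the same quantity (e.g. azimuth in degrees)
    sigma_v, sigma_a  : standard deviations of the visual and auditory cue noise
    rho               : correlation between the two noise sources
                        (rho = 0 recovers the classic independent-cue MLE rule)
    Returns the combined estimate and its standard deviation.
    """
    d = sigma_v**2 + sigma_a**2 - 2 * rho * sigma_v * sigma_a
    w_v = (sigma_a**2 - rho * sigma_v * sigma_a) / d   # weight on the visual cue
    w_a = 1.0 - w_v                                    # weight on the auditory cue
    s_hat = w_v * s_v + w_a * s_a                      # combined estimate
    var_hat = sigma_v**2 * sigma_a**2 * (1 - rho**2) / d  # variance of the combined estimate
    return s_hat, np.sqrt(var_hat)

# Illustrative example: vision twice as reliable as audition, modest positive correlation.
s_hat, sd_hat = combine_cues(s_v=2.0, s_a=6.0, sigma_v=1.0, sigma_a=2.0, rho=0.3)
print(s_hat, sd_hat)
```

Note that positive correlation between the cue noises reduces the benefit of combining them: the combined variance shrinks less than in the independent case, and a sufficiently unreliable cue can even receive a negative weight.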