Motion Perception
Prof. Alan A. Stocker
The position is part of the ongoing NSF-funded project ‘Choice-induced biases in human decision-making’, in collaboration with the laboratory of Tobias Donner at the University Medical Center Hamburg, Germany. The goal of the project is to understand how decisions influence both the memory of past evidence (consistency bias) and the evaluation of future evidence (confirmation bias) in human decision-making. The project employs a highly interdisciplinary approach that combines psychophysical and functional neuroimaging (MEG) experiments with theory and computational modeling.
Prof. Jae-Hyun Jung
Postdoc Fellow in virtual reality and mobility studies - Jung Lab (Schepens Eye Research Institute, Harvard Medical School)

Schepens Eye Research Institute/Mass. Eye and Ear, Harvard Medical School has an opening for one full-time postdoc fellow to work with Dr. Jae-Hyun Jung (https://scholar.harvard.edu/jaehyun_jung) in the Mobility Enhancement and Vision Rehabilitation Center of Excellence. The position is initially available for one year, with the possibility of extension for additional years. Applicants should hold a Ph.D. in any area related to visual perception (e.g., vision/neuroscience, computer science, electrical engineering, or optometry), covering any of the following topics: motion perception, mobility simulation, stereoscopic depth perception, attention switching, or contrast/saliency modeling.

The successful candidate will make major contributions to a current NIH-funded project evaluating field expansion in mobility, as well as other pilot projects related to AR/VR devices. Proficiency in programming for experiment design, experience with human subject studies, and good problem-solving skills are required. Experience with VR/AR devices, Unity/Unreal programming, or experience with people with vision impairment would be a plus.

The position is open and available now. Salary will be according to the NIH scale for postdoctoral fellows. The start date is flexible but ideally as soon as possible. Applications will be reviewed until the position is filled. Applications should include a CV, a letter of interest, and the expected date of availability in PDF. Please email applications to Jae-Hyun Jung (jaehyun_jung@meei.harvard.edu).

Schepens Eye Research Institute of Mass. Eye and Ear, Harvard Medical School is located in Boston, with a strong research community of faculty, postdoctoral fellows, and research assistants from interdisciplinary backgrounds. The position also provides the opportunity to participate in the Schepens postdoc/research training program on scientific integrity and other general issues of interest to young scientists, and to develop additional collaborations with the research community at Schepens, which includes multiple Centers of Excellence at Harvard Medical School.
Vocal emotion perception at millisecond speed
The human voice is possibly the most important sound category in the social landscape. Compared with other non-verbal emotion signals, the voice is particularly effective in communicating emotions: it can carry information over large distances and independently of sight. Yet the study of vocal emotion expression and perception is surprisingly less developed than the study of emotion in faces, and its neural and functional correlates remain elusive. Because the voice is a dynamically changing auditory stimulus, temporally sensitive techniques such as EEG are particularly informative. In this talk, the dynamic neurocognitive operations that take place when we listen to vocal emotions will be specified, with a focus on the effects of stimulus type, task demands, and speaker and listener characteristics (e.g., age). These studies suggest that emotional voice perception is not only a matter of how one speaks but also of who speaks and who listens. Implications of these findings for the understanding of psychiatric disorders such as schizophrenia will be discussed.
Neural mechanisms underlying visual and vestibular self-motion perception
Heading perception in crowded environments
Self-motion through a visual world creates an expanding pattern of visual motion called optic flow. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other humans introduce independent motion into the scene. The biological motion of human walkers consists of translation through space and the associated limb articulation, a motion pattern that is regular yet complex. A world full of moving humans is nonrigid and causes heading errors. Limb articulation alone, however, does not perturb the global structure of the flow field and is therefore consistent with the rigidity assumption, so heading estimates based purely on optic flow analysis should be unaffected by it. Nevertheless, we observed heading biases when participants encountered a group of point-light walkers. Our research investigates the interactions between optic flow perception and biological motion perception, and further analyzes the impact of environmental information.
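As a simple illustration of the rigidity assumption this abstract builds on, the sketch below (our own toy example, not the authors' code; the scene parameters and the least-squares recipe are assumptions) simulates the optic flow produced by pure observer translation through a rigid point cloud and recovers heading as the focus of expansion, the image point from which all flow vectors radiate.

```python
# Minimal sketch: heading recovery from rigid-scene optic flow as the focus of
# expansion (FOE). All parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Observer translates with velocity T through a rigid cloud of points.
T = np.array([0.2, 0.0, 1.0])                          # heading: FOE at (Tx/Tz, Ty/Tz) = (0.2, 0.0)
n = 200
x, y = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)    # image positions (focal length 1)
Z = rng.uniform(2.0, 10.0, n)                          # depths of the rigid scene points

# Translational flow for a pinhole camera: every vector points away from the FOE.
u = (-T[0] + x * T[2]) / Z
v = (-T[1] + y * T[2]) / Z

# Estimate the FOE as the least-squares intersection of the lines through each
# image point along its flow direction (minimizing perpendicular distances).
d = np.stack([u, v], axis=1)
d /= np.linalg.norm(d, axis=1, keepdims=True)
P = np.stack([x, y], axis=1)
A = np.zeros((2, 2)); b = np.zeros(2)
for di, pi in zip(d, P):
    Mi = np.eye(2) - np.outer(di, di)  # projector perpendicular to the flow direction
    A += Mi
    b += Mi @ pi
foe = np.linalg.solve(A, b)
print("true heading (FOE):", T[:2] / T[2], "estimated:", foe)
```

Adding independently moving walkers would violate the rigidity assumption that makes this intersection construction valid, which is why such scenes can bias heading estimates.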
Connecting structure and function in early visual circuits
How does the brain interpret signals from the outside world? Walking through a park, you might take for granted the ease with which you can understand what you see. Rather than seeing a series of still snapshots, you are able to see simple, fluid movement — of dogs running, squirrels foraging, or kids playing basketball. You can track their paths and know where they are headed without much thought. “How does this process take place?” asks Rudy Behnia, PhD, a principal investigator at Columbia’s Mortimer B. Zuckerman Mind Brain Behavior Institute. “For most of us, it’s hard to imagine a world where we can’t see motion, shapes, and color; where we can’t have a representation of the physical world in our head.” And yet this representation does not happen automatically — our brain has no direct connection with the outside world. Instead, it interprets information taken in by our senses. Dr. Behnia is studying how the brain builds these representations. As a starting point, she focuses on how we see motion.
Attention to visual motion: shaping sensation into perception
Evolution has endowed primates, including humans, with a powerful visual system, seemingly providing us with a detailed perception of our surroundings. In reality, however, the underlying process is one of active filtering, enhancement, and reshaping. For visual motion perception, the dorsal pathway in primate visual cortex, and in particular area MT/V5, is considered to be of critical importance. Combining physiological and psychophysical approaches, we have used the processing and perception of visual motion and area MT/V5 as a model for the interaction of sensory (bottom-up) signals with cognitive (top-down) modulatory influences that characterizes visual perception. Our findings document how this interaction enables visual cortex to actively generate a neural representation of the environment that combines a high-performance sensory periphery with selective modulatory influences to produce an “integrated saliency map” of the environment.
Looking and listening while moving
In this talk I’ll discuss our recent work on how visual and auditory cues to space are integrated as we move. There are at least three reasons why this turns out to be a difficult problem for the brain to solve (and for us to understand!). First, vision and hearing start off in different coordinates (eye-centred vs head-centred), so they need a common reference frame in which to communicate. The literature has neatly sidestepped this problem by preventing eye and head movements, yet self-movement is the norm. Second, self-movement creates visual and auditory image motion, so correct interpretation requires some form of compensation. Third, vision and hearing encode motion in very different ways: vision contains dedicated motion detectors sensitive to speed, whereas hearing does not. We propose that some (all?) of these problems could be solved by treating the perception of audiovisual space as the integration of separate body-centred visual and auditory cues, the latter formed by integrating image motion with motor system signals and vestibular information. To test this claim, we use a classic cue integration framework, modified to account for cues that are biased and partially correlated. We find good evidence for the model based on simple judgements of audiovisual motion within a circular array of speakers and LEDs that surround the participant while they execute self-controlled head movements.
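For readers unfamiliar with the cue integration framework mentioned above, the sketch below gives a minimal illustration under our own assumptions (it is not the authors' model): reliability-weighted averaging of two independent Gaussian cues, plus the standard minimum-variance weighting when the cue noises are partially correlated. The bias terms and the specific body-centred cues described in the talk are not modelled here.

```python
# Minimal sketch of Gaussian cue combination (illustrative assumptions only).
import numpy as np

def fuse_independent(m_vis, var_vis, m_aud, var_aud):
    """Reliability-weighted average of two independent cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    m = w_vis * m_vis + (1 - w_vis) * m_aud
    var = 1 / (1 / var_vis + 1 / var_aud)
    return m, var

def fuse_correlated(means, cov):
    """Minimum-variance linear combination for correlated, unbiased cues."""
    ones = np.ones(len(means))
    w = np.linalg.solve(cov, ones)
    w /= w.sum()                      # weights sum to 1
    m = w @ np.asarray(means)
    var = w @ cov @ w
    return m, var

# Example: visual cue at 10 deg (sd 2 deg), auditory cue at 14 deg (sd 4 deg).
print(fuse_independent(10.0, 2.0**2, 14.0, 4.0**2))
# Same cues, but with a correlation of 0.3 between their noises.
cov = np.array([[4.0, 0.3 * 2 * 4], [0.3 * 2 * 4, 16.0]])
print(fuse_correlated([10.0, 14.0], cov))
```

In the independent case the fused estimate lands closer to the more reliable visual cue (10.8 deg with variance 3.2); correlated noise shifts the optimal weights and reduces the benefit of combining the cues.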
The role of motion in localizing objects
Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location, but neural delays lead to serious errors whenever targets or eyes are moving. Motion may be the problem here, but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions in which perceived location differs radically from physical location; these illusions help us understand how and where position is coded. We first look at the effects of a target’s simple forward motion on its perceived location. Second, we look at the perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprisingly, fMRI shows that the shifted percept does not emerge in visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it: participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability, where we see a steady world despite massive displacements during saccades. Together, these motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.
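As a toy illustration of how the two motion directions might combine in the double-drift stimulus (an expository assumption, not the authors' model), the sketch below approximates the perceived path direction as the envelope's physical velocity plus a fraction of the internal drift velocity; the gain value is arbitrary.

```python
# Toy illustration (our assumption): perceived double-drift path direction as a
# weighted combination of the envelope's motion and the internal drift.
import numpy as np

def angle_deg(v):
    """Direction of a 2D velocity vector, in degrees from the x-axis."""
    return np.degrees(np.arctan2(v[1], v[0]))

envelope_v = np.array([0.0, 1.0])   # physical path: straight upward
internal_v = np.array([1.0, 0.0])   # internal drift: orthogonal to the path
gain = 0.8                          # assumed weight on the internal motion

perceived_v = envelope_v + gain * internal_v
print(f"physical path: {angle_deg(envelope_v):.0f} deg,"
      f" perceived path: {angle_deg(perceived_v):.0f} deg")
```

With these illustrative numbers the perceived path tilts by roughly 40 degrees toward the internal motion, conveying how large the dissociation between physical and perceived position can become.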
Stereo vision and prey detection in the praying mantis
Praying mantises are the only insects known to have stereo vision. Using a comparative approach, we asked how the mechanisms underlying mantis stereopsis differ from those underlying primate stereo vision. By testing mantises with virtual 3D targets, we showed that mantis stereopsis enables prey capture in complex scenes while relying on different mechanisms from primate stereopsis. My talk will further discuss how stereopsis combines with second-order motion perception to enable the detection of camouflaged prey by mantises, and will highlight the benefits of a comparative approach towards understanding visual cognition.
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old, some new) and findings that have shaped our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, linked by a common focus on the development of visual attentional processing.

(1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and reconcile developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing.

(2) Why do hyperopic infants become strabismic (crossed eyes)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a ‘soft sign’ of subtle cerebral dysfunction.

(3) What do many neurodevelopmental disorders have in common? Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams syndromes, Fragile X, autism, children born prematurely, and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called ‘dorsal stream vulnerability’. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas involved in decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across disorders and between individual children.

Answering these questions provides necessary steps, not only for increasing our scientific understanding of human visual brain development, but also for designing appropriate interventions to help each child achieve their full potential.
Causal inference can explain hierarchical motion perception and is reflected in neural responses in MT
COSYNE 2022
Structure in motion: visual motion perception as online hierarchical inference
COSYNE 2022
Divisive normalization as a mechanism for hierarchical causal inference in motion perception
COSYNE 2023
Bayesian perceptual adaptation in auditory motion perception: A multimodal approach with EEG and pupillometry
FENS Forum 2024