motion processing
It’s All About Motion: Functional organization of the multisensory motion system at 7T
The human middle temporal complex (hMT+) is crucial for detecting and processing the direction and speed of visual motion. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, asking whether hMT+ neuronal populations encode stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will show that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses to visual, tactile and auditory stimuli in hMT+.
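The distinction between direct speed tuning and separable spatial/temporal frequency tuning can be illustrated with a toy model. The sketch below uses hypothetical log-Gaussian tuning curves with made-up preferred frequencies (assumptions for illustration, not the measured ECoG/fMRI data) to show the standard diagnostic: a separable unit has a fixed preferred temporal frequency across spatial frequencies, whereas a speed-tuned unit's preferred temporal frequency shifts so that TF/SF stays constant.

```python
import numpy as np

# Hypothetical log-Gaussian tuning models (illustrative parameters, not fitted data).
# A separable response is the product of independent spatial-frequency (SF) and
# temporal-frequency (TF) tuning curves; a speed-tuned response depends only on
# the ratio speed = TF / SF.

def separable_response(sf, tf, sf_pref=1.0, tf_pref=4.0, bw=1.0):
    """Independent Gaussian tuning in log-SF and log-TF."""
    return (np.exp(-(np.log2(sf / sf_pref) ** 2) / (2 * bw ** 2))
            * np.exp(-(np.log2(tf / tf_pref) ** 2) / (2 * bw ** 2)))

def speed_tuned_response(sf, tf, speed_pref=4.0, bw=1.0):
    """Tuning depends only on speed = TF / SF, not on SF and TF separately."""
    speed = tf / sf
    return np.exp(-(np.log2(speed / speed_pref) ** 2) / (2 * bw ** 2))

# Diagnostic: for a speed-tuned unit the preferred TF shifts with SF so that
# TF/SF stays constant; for a separable unit the preferred TF is fixed.
tfs = np.logspace(-1, 5, 601, base=2)  # 0.5 ... 32 Hz
for sf in (0.5, 1.0, 2.0):
    tf_peak_sep = tfs[np.argmax(separable_response(sf, tfs))]
    tf_peak_spd = tfs[np.argmax(speed_tuned_response(sf, tfs))]
    print(f"SF={sf}: separable TF peak ~{tf_peak_sep:.2f} Hz, "
          f"speed-tuned TF peak ~{tf_peak_spd:.2f} Hz")
```

In this toy model the separable unit peaks near 4 Hz at every spatial frequency, while the speed-tuned unit's TF peak scales proportionally with SF; the abstract's finding corresponds to the first pattern.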
On the contributions of retinal direction selectivity to cortical motion processing in mice
Cells preferentially responding to visual motion in a particular direction are said to be direction-selective, and these were first identified in the primary visual cortex. Since then, direction-selective responses have been observed in the retina of several species, including mice, indicating motion analysis begins at the earliest stage of the visual hierarchy. Yet little is known about how retinal direction selectivity contributes to motion processing in the visual cortex. In this talk, I will present our experimental efforts to narrow this gap in our knowledge. To this end, we used genetic approaches to disrupt direction selectivity in the retina and mapped neuronal responses to visual motion in the visual cortex of mice using intrinsic signal optical imaging and two-photon calcium imaging. In essence, our work demonstrates that direction selectivity computed at the level of the retina causally serves to establish specialized motion responses in distinct areas of the mouse visual cortex. This finding thus compels us to revisit our notions of how the brain builds complex visual representations and underscores the importance of the processing performed in the periphery of sensory systems.
Context-dependent motion processing in the retina
A critical function of sensory systems is to reliably extract ethologically relevant features from the complex natural environment. A classic model to study feature detection is the direction-selective circuit of the mammalian retina. In this talk, I will discuss our recent work on how visual contexts dynamically influence the neural processing of motion signals in the direction-selective circuit in the mouse retina.
Synergy of color and motion vision for detecting approaching objects in Drosophila
I am working on color vision in Drosophila, identifying behaviors that involve color vision and understanding the neural circuits supporting them (Longden 2016). I have a long-term interest in understanding how neural computations operate reliably under changing circumstances, be they external changes in the sensory context, or internal changes of state such as hunger and locomotion. On internal state-modulation of sensory processing, I have shown how hunger alters visual motion processing in blowflies (Longden et al. 2014), and identified a role for octopamine in modulating motion vision during locomotion (Longden and Krapp 2009, 2010). On responses to external cues, I have shown how one kind of uncertainty in the motion of the visual scene is resolved by the fly (Saleem, Longden et al. 2012), and I have identified novel cells for processing translation-induced optic flow (Longden et al. 2017). I like working with colleagues who use different model systems, to get at principles of neural operation that might apply in many species (Ding et al. 2016, Dyakova et al. 2015). I like work motivated by computational principles - my background is computational neuroscience, with a PhD on models of memory formation in the hippocampus (Longden and Willshaw, 2007).
A transdiagnostic data-driven study of children’s behaviour and the functional connectome
Behavioural difficulties are seen as hallmarks of many neurodevelopmental conditions. Differences in functional brain organisation have been observed in these conditions, but little is known about how they are related to a child’s profile of behavioural difficulties. We investigated whether behavioural difficulties are associated with how the brain is functionally organised in an intentionally heterogeneous and transdiagnostic sample of 957 children aged 5-15. We used consensus community detection to derive data-driven profiles of behavioural difficulties and constructed functional connectomes from a subset of 238 children with resting-state functional Magnetic Resonance Imaging (fMRI) data. We identified three distinct profiles of behaviour that were characterised by principal difficulties with hot executive function, cool executive function, and learning. Global organisation of the functional connectome did not differ between the groups, but multivariate patterns of connectivity at the level of Intrinsic Connectivity Networks (ICNs), nodes, and hubs significantly predicted group membership in held-out data. Fronto-parietal connector hubs were under-connected in all groups relative to a comparison sample, and children with hot vs cool executive function difficulties were distinguished by connectivity in ICNs associated with cognitive control, emotion processing, and social cognition. This demonstrates both general and specific neurodevelopmental risk factors in the functional connectome. (https://www.medrxiv.org/content/10.1101/2021.09.15.21262637v1)
An optimal population code for global motion estimation in local direction-selective cells
Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal's own behavior. One prominent example is the encoding of optic flow fields that are generated during self-motion and therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code matched to represent the optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cell types encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly. The four uniformly tuned T4/T5 subtypes described previously represent a local snapshot (Maisak et al., 2013). The encoding of six types of optic flow in the fly, compared to four in mice, may be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matched to the individual ethological constraints of each animal.
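As a rough illustration of how such a population code could support readout of self-motion, the sketch below treats each subtype as a matched filter for the flow produced by motion about one preferred axis and decodes the axis by a response-weighted population vector. The axes, the rectified cosine matched-filter model, and the decoder are all assumptions for illustration, not the measured T4/T5 tuning.

```python
import numpy as np

# Hypothetical preferred self-motion axes (unit vectors in 3D), one per subtype.
# Six axes here loosely mirror the six T4/T5 subtypes; the actual tuning differs.
axes = np.array([[1, 0, 0], [-1, 0, 0],
                 [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1]], dtype=float)

def population_response(self_motion_axis):
    """Matched-filter model: rectified cosine similarity to each preferred axis."""
    return np.maximum(axes @ self_motion_axis, 0.0)

def decode_axis(responses):
    """Population-vector readout: response-weighted sum of preferred axes."""
    v = responses @ axes
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
true_axis = np.array([0.6, 0.8, 0.0])
r = population_response(true_axis) + 0.01 * rng.standard_normal(6)  # noisy responses
print("decoded axis:", np.round(decode_axis(r), 2))  # close to the true axis
```

The point of the sketch is only that a small set of broadly tuned, axis-preferring units suffices to recover an arbitrary self-motion axis; more subtypes (six vs four) give the readout more usable dimensions, consistent with the flight vs walking argument above.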
Do you hear what I see: Auditory motion processing in blind individuals
Perception of object motion is fundamentally multisensory, yet little is known about similarities and differences in the computations that give rise to our experience across senses. Insight can be provided by examining auditory motion processing in early blind individuals. In those who become blind early in life, the 'visual' motion area hMT+ responds to auditory motion. Meanwhile, the planum temporale, associated with auditory motion in sighted individuals, shows reduced selectivity for auditory motion, suggesting competition between cortical areas for functional roles. According to the metamodal hypothesis of cross-modal plasticity developed by Pascual-Leone, the recruitment of hMT+ is driven by it being a metamodal structure containing "operators that execute a given function or computation regardless of sensory input modality". Thus, the metamodal hypothesis predicts that the computations underlying auditory motion processing in early blind individuals should be analogous to visual motion processing in sighted individuals, relying on non-separable spatiotemporal filters. Inconsistent with the metamodal hypothesis, evidence suggests that the computational algorithms underlying auditory motion processing in early blind individuals fail to undergo a qualitative shift as a result of cross-modal plasticity. Auditory motion filters, in both blind and sighted subjects, are separable in space and time, suggesting that the recruitment of hMT+ to extract motion information from auditory input includes a significant modification of its normal computational operations.
Novel Object Detection and Multiplexed Motion Representation in Retinal Bipolar Cells
Detection of motion is essential for survival, but how the visual system processes moving stimuli is not fully understood. Here, based on a detailed analysis of glutamate release from bipolar cells, we outline the rules that govern the representation of object motion in the early processing stages. Our main findings are as follows: (1) Motion processing begins already at the first retinal synapse. (2) The shape and amplitude of motion responses cannot be reliably predicted from bipolar cell responses to stationary objects. (3) Novel objects are represented with enhanced gain, particularly in bipolar cells with transient dynamics. (4) Response amplitude in bipolar cells matches the visual salience reported in humans: suddenly appearing objects > novel motion > existing motion. These findings can be explained by antagonistic interactions in the center-surround receptive field and demonstrate that, despite their simple operational concept, classical center-surround receptive fields enable sophisticated visual computations.
Bipolar cell motion processing in the retina
Is it Autism or Alexithymia? Explaining atypical socioemotional processing
Emotion processing is thought to be impaired in autism and linked to atypical visual exploration of, and arousal modulation to, others' faces and gaze, yet the evidence is equivocal. We propose that, where observed, atypical socioemotional processing is due to alexithymia, a distinct but frequently co-occurring condition that affects emotional self-awareness and interoception. In Study 1 (N = 80), we tested this hypothesis by studying the spatio-temporal dynamics and entropy of eye gaze during emotion processing tasks. Evidence from traditional and novel methods revealed that atypical eye gaze and emotion recognition are best predicted by alexithymia in both autistic and non-autistic individuals. In Study 2 (N = 70), we assessed interoceptive and autonomic signals implicated in socioemotional processing, and found evidence for alexithymia-driven (not autism-driven) effects on gaze and arousal modulation to emotions. We also conducted two large-scale studies (N = 1300), using confirmatory factor-analytic and network modelling, and found evidence that alexithymia and autism are distinct both at the latent level and in their intercorrelations. We argue that: 1) models of socioemotional processing in autism should conceptualise difficulties as intrinsic to alexithymia, and 2) assessment of alexithymia is crucial for diagnosis and personalised interventions in autism.
Predicting the future from the past: Motion processing in the primate retina
The Manookin lab is investigating the structure and function of neural circuits within the retina and developing techniques for treating blindness. Many blinding diseases, such as retinitis pigmentosa, cause death of the rods and cones, but spare other cell types within the retina. Thus, many techniques for restoring visual function following blindness are based on the premise that other cells within the retina remain viable and capable of performing their various roles in visual processing. There are more than 80 different neuronal types in the human retina and these form the components of the specialized circuits that transform the signals from photoreceptors into a neural code responsible for our perception of color, form, and motion, and thus visual experience. The Manookin laboratory is investigating the function and connectivity of neural circuits in the retina using a variety of techniques including electrophysiology, calcium imaging, and electron microscopy. This knowledge is being used to develop more effective techniques for restoring visual function following blindness.
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old, some new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, linked by a common focus on the development of visual attentional processing: (1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of these diverse networks, particularly for motion processing, may explain changes in dynamic responses and reconcile developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing. (2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a 'soft sign' of subtle cerebral dysfunction. (3) What do many neurodevelopmental disorders have in common?
Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams syndromes, Fragile X, autism, children born prematurely and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal stream functions such as visuo-motor co-ordination and attentional control, a cluster we have called 'dorsal stream vulnerability'. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas involved in decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across disorders and between individual children. Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.
Motion processing across visual field locations in zebrafish
Animals are able to perceive self-motion and navigate their environment using optic flow information. They often perform visually guided stabilization behaviors, such as the optokinetic response (OKR) or optomotor response (OMR), in order to maintain their eye and body position relative to the moving surround. But how does the animal manage to perform the appropriate behavioral response, and how are processing tasks divided between the various non-cortical visual brain areas? Experiments have shown that the zebrafish pretectum, which is homologous to the mammalian accessory optic system, is involved in the OKR and OMR. The optic tectum (the superior colliculus in mammals) is involved in the processing of small stimuli, e.g. during prey capture. We have previously shown that many pretectal neurons respond selectively to rotational or translational motion. These neurons are likely detectors for specific optic flow patterns and mediate behavioral choices of the animal based on optic flow information. We investigate the motion feature extraction of brain structures that receive input from retinal ganglion cells to identify the visual computations that underlie behavioral decisions during prey capture, OKR, OMR and other visually mediated behaviors. Our study of receptive fields shows that receptive field sizes differ strongly between the pretectum (large) and tectum (small), and that pretectal responses are diverse and anatomically organized. Since calcium indicators are slow and receptive fields for motion stimuli are difficult to measure, we are also developing novel stimuli and statistical methods to infer the neuronal computations of these visual brain areas.
The inside-out of emotion processing: Evaluating children and adults’ neural correlates from a novel fMRI movie-watching paradigm
FENS Forum 2024