Eyes
Seeing a changing world through the eyes of coral fishes
How fly neurons compute the direction of visual motion
Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: rather, it needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
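To make the delay-and-correlate computation concrete, here is a minimal sketch of a Hassenstein-Reichardt-type detector, in which the delayed signal from one photoreceptor is multiplied with the undelayed signal from its neighbor and the mirror-symmetric arm is subtracted. The low-pass filter and its time constant are illustrative assumptions, not the measured biophysics of the T4/T5 circuit.

```python
import numpy as np

def delay_and_correlate(left, right, dt=0.001, tau=0.05):
    """Toy Hassenstein-Reichardt-style motion detector.

    left, right : luminance signals from two neighboring photoreceptors
    tau         : time constant of the delay (low-pass) filter; an
                  illustrative assumption, not a fitted parameter.
    Returns a signal that is positive for left-to-right motion and
    negative for right-to-left motion.
    """
    def low_pass(x):
        y = np.zeros_like(x, dtype=float)
        alpha = dt / (tau + dt)
        for t in range(1, len(x)):
            y[t] = y[t - 1] + alpha * (x[t] - y[t - 1])
        return y

    # Each arm multiplies the delayed signal of one input with the
    # undelayed signal of its neighbor; subtracting the mirror arm
    # yields a direction-selective (opponent) output.
    return low_pass(left) * right - left * low_pass(right)

# A bright edge sweeping from the left receptor to the right one
t = np.arange(0, 0.5, 0.001)
left_signal = (t > 0.10).astype(float)
right_signal = (t > 0.15).astype(float)
print(delay_and_correlate(left_signal, right_signal).sum() > 0)  # True: rightward
```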
Learning through the eyes and ears of a child
Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual-only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language-only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.
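The abstract does not specify the training objective, so as one hedged illustration of how paired frames and words could yield word-referent mappings, here is a minimal contrastive (CLIP-style) alignment sketch in PyTorch; the encoders are omitted, and the embedding size, batch size and temperature are placeholder assumptions rather than values used in the study.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(image_emb, word_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image and word
    embeddings. Matched pairs sit on the diagonal of the similarity matrix.
    This is one common way to learn word-referent mappings from co-occurring
    frames and utterances; the objective used in the actual study may differ.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    word_emb = F.normalize(word_emb, dim=-1)
    logits = image_emb @ word_emb.t() / temperature
    targets = torch.arange(logits.size(0))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random 64-d embeddings for a batch of 8 frame/word pairs
loss = contrastive_alignment_loss(torch.randn(8, 64), torch.randn(8, 64))
print(loss.item())
```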
Direction-selective ganglion cells in primate retina: a subcortical substrate for reflexive gaze stabilization?
To maintain a stable and clear image of the world, our eyes reflexively follow the direction in which a visual scene is moving. Such gaze stabilization mechanisms reduce image blur as we move in the environment. In non-primate mammals, this behavior is initiated by ON-type direction-selective ganglion cells (ON-DSGCs), which detect the direction of image motion and transmit signals to brainstem nuclei that drive compensatory eye movements. However, ON-DSGCs have not yet been functionally identified in primates, raising the possibility that the visual inputs that drive this behavior instead arise in the cortex. In this talk, I will present molecular, morphological and functional evidence for identification of an ON-DSGC in macaque retina. The presence of ON-DSGCs highlights the need to examine the contribution of subcortical retinal mechanisms to normal and aberrant gaze stabilization in the developing and mature visual system. More generally, our findings demonstrate the power of a multimodal approach to study sparsely represented primate RGC types.
Development and evolution of neuronal connectivity
In most animal species, including humans, commissural axons connect neurons on the left and right sides of the nervous system. In humans, abnormal axon midline crossing during development causes a whole range of neurological disorders, including congenital mirror movements, horizontal gaze palsy, scoliosis and binocular vision deficits. The mechanisms that guide axons across the CNS midline were thought to be evolutionarily conserved, but our recent results suggest that they differ across vertebrates. I will discuss how the laterality of visual projections has changed over vertebrate evolution. In most vertebrates, camera-style eyes contain retinal ganglion cell (RGC) neurons projecting to visual centers on both sides of the brain. However, in fish, RGCs are thought to innervate only the contralateral side. Using 3D imaging and tissue clearing, we found that bilateral visual projections exist in non-teleost fishes. We also found that the developmental program specifying visual system laterality differs between fishes and mammals. We are currently using various strategies to discover genes controlling the development of visual projections. I will also present ongoing work using 3D imaging techniques to study the development of the visual system in human embryos.
Seeing the world through moving photoreceptors - binocular photomechanical microsaccades give the fruit fly hyperacute 3D vision
To move efficiently, animals must continuously work out their x,y,z positions with respect to real-world objects, and many animals have a pair of eyes to achieve this. How photoreceptors actively sample the eyes’ optical image disparity is not understood because this fundamental information-limiting step has not been investigated in vivo over the eyes’ whole sampling matrix. This integrative multiscale study will advance our current understanding of stereopsis from static image disparity comparison to a morphodynamic active sampling theory. It shows how photomechanical photoreceptor microsaccades enable Drosophila superresolution three-dimensional vision and proposes neural computations for accurately predicting these flies’ depth-perception dynamics, limits, and visual behaviors.
Binocular combination of light
The brain combines signals across the eyes. This process is well characterized for the perceptual pathway through V1 that primarily codes contrast, where interocular normalization ensures that responses are approximately equal for monocular and binocular stimulation. But we have much less understanding of how luminance is combined binocularly, both in the cortex and in the subcortical structures that govern pupil diameter. Here I will describe the results of experiments using a novel combined EEG and pupillometry paradigm to simultaneously index binocular combination of luminance flicker in parallel pathways. The results show evidence of a more linear combination process than for spatial contrast, which may reflect different operational constraints in distinct anatomical pathways.
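As a rough illustration of the contrast drawn here, the sketch below compares a normalization-style combination, in which binocular and monocular responses come out nearly equal, with a more linear summation of the kind the luminance data appear to favor; the exponents and constants are arbitrary illustrative choices, not fitted model parameters.

```python
def normalized_combination(c_left, c_right, m=2.0, s=0.1):
    """Interocular-normalization-style combination: each eye's drive is
    divided by the summed drive from both eyes, so binocular responses end
    up roughly equal to monocular ones ("ocularity invariance")."""
    denom = s + c_left + c_right
    return (c_left ** m + c_right ** m) / denom

def linear_combination(c_left, c_right, w=0.5):
    """Near-linear summation, closer to what luminance/pupil responses
    might show if binocular combination is more linear."""
    return w * (c_left + c_right)

# Monocular vs binocular stimulation at unit input
print(normalized_combination(1.0, 0.0), normalized_combination(1.0, 1.0))  # ~equal
print(linear_combination(1.0, 0.0), linear_combination(1.0, 1.0))          # doubles
```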
The role of top-down mechanisms in gaze perception
Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture, and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, recent literature suggests that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus indirect gaze. On the basis of several recent lines of evidence, I propose that when eye features have low task relevance, gaze direction will be encoded only superficially, preventing differences in processing between gaze directions. Differential processing of direct and indirect gaze will only occur when the eyes are relevant to the task. To assess the implication of task relevance for the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of the eye features is crucial to the direct gaze effect and propose to revisit previous studies that had questioned the existence of the effect. This claim will be illustrated with past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, at a larger scope, on social cognition.
Eyes wide shut, brain wide up!
What the fly’s eye tells the fly’s brain…and beyond
Fly Escape Behaviors: Flexible and Modular
We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.
Neural Circuits for Escape
Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.
Automated Single-Fly Screening
We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.
Measuring Neural Activity During Behavior
We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.
Why do some animals have more than two eyes?
The evolution of vision revolutionised animal biology, and eyes have evolved in a stunning array of diverse forms over the past half a billion years. Among these are curious duplicated visual systems, where eyes can be spread across the body and specialised for different tasks. Although it sounds radical, duplicated vision is found in most major groups across the animal kingdom, but remains poorly understood. We will explore how and why animals collect information about their environment in this unusual way, looking at examples from tropical forests to the sea floor, and from ancient arthropods to living jellyfish. Have we been short-changed with just two eyes?
Dr Lauren Sumner-Rooney is a Research Fellow at the OUMNH studying the function and evolution of animal visual systems. Lauren completed her undergraduate degree at Oxford in 2012, and her PhD at Queen’s University Belfast in 2015. She worked as a research technician and science communicator at the Royal Veterinary College (2015-2016) and held a postdoctoral research fellowship at the Museum für Naturkunde, Berlin (2016-2017) before arriving at the Museum in 2017.
The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules
Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface, and by the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. This project aims to address three questions: 1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? 2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? 3) Can bioluminescent organs (called photophores) detect light in addition to emitting light? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will bring monumental contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach K-college aged students through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.
Opponent processing in the expanded retinal mosaic of Nymphalid butterflies
In many butterflies, the ancestral trichromatic insect colour vision, based on UV-, blue- and green-sensitive photoreceptors, is extended with red-sensitive cells. Physiological evidence for red receptors has been missing in nymphalid butterflies, although some species can discriminate red hues well. In eight species from genera Archaeoprepona, Argynnis, Charaxes, Danaus, Melitaea, Morpho, Heliconius and Speyeria, we found a novel class of green-sensitive photoreceptors that have hyperpolarizing responses to stimulation with red light. These green-positive, red-negative (G+R–) cells are allocated to positions R1/2, normally occupied by UV and blue-sensitive cells. Spectral sensitivity, polarization sensitivity and temporal dynamics suggest that the red opponent units (R–) are the basal photoreceptors R9, interacting with R1/2 in the same ommatidia via direct inhibitory synapses. We found the G+R– cells exclusively in butterflies with red-shining ommatidia, which contain longitudinal screening pigments. The implementation of the red colour channel with R9 is different from pierid and papilionid butterflies, where cells R5–8 are the red receptors. The nymphalid red-green opponent channel and the potential for tetrachromacy seem to have been switched on several times during evolution, balancing between the cost of neural processing and the value of extended colour information.
Neural network models of binocular depth perception
Our visual experience of living in a three-dimensional world is created from the information contained in the two-dimensional images projected into our eyes. The overlapping visual fields of the two eyes mean that their images are highly correlated, and that the small differences that are present represent an important cue to depth. Binocular neurons encode this information in a way that both maximises efficiency and optimises disparity tuning for the depth structures that are found in our natural environment. Neural network models provide a clear account of how these binocular neurons encode the local binocular disparity in images. These models can be expanded to multi-layer models that are sensitive to salient features of scenes, such as the orientations and discontinuities between surfaces. These deep neural network models have also shown the importance of binocular disparity for the segmentation of images into separate objects, in addition to the estimation of distance. These results demonstrate the usefulness of machine learning approaches as a tool for understanding biological vision.
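As a minimal, hedged illustration of how local binocular disparity can be read out from paired image patches, the sketch below estimates disparity by cross-correlating windows from the two eyes' images; it is a toy stand-in for the disparity-tuned units in the neural network models described, not the models themselves, and the window size and disparity range are arbitrary.

```python
import numpy as np

def estimate_disparity(left, right, x, y, win=7, max_d=16):
    """Estimate horizontal disparity at (x, y) by sliding a window from the
    left image across the right image and taking the best-correlated shift.
    Toy stand-in for a population of disparity-tuned binocular units."""
    half = win // 2
    patch = left[y - half:y + half + 1, x - half:x + half + 1].ravel()
    patch = patch - patch.mean()
    best_d, best_r = 0, -np.inf
    for d in range(-max_d, max_d + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1].ravel()
        cand = cand - cand.mean()
        denom = np.linalg.norm(patch) * np.linalg.norm(cand)
        r = patch @ cand / denom if denom > 0 else -np.inf
        if r > best_r:
            best_d, best_r = d, r
    return best_d

# Toy stereo pair: the right image is the left image shifted by 3 pixels
rng = np.random.default_rng(0)
left = rng.random((64, 64))
right = np.roll(left, -3, axis=1)
print(estimate_disparity(left, right, x=32, y=32))  # ~3
```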
The dynamics of temporal attention
Selection is the hallmark of attention: processing improves for attended items but is relatively impaired for unattended items. It is well known that visual spatial attention changes sensory signals and perception in this selective fashion. In the work I will present, we asked whether and how attentional selection happens across time. First, our experiments revealed that voluntary temporal attention (attention to specific points in time) is selective, resulting in perceptual tradeoffs across time. Second, we measured small eye movements called microsaccades and found that directing voluntary temporal attention increases the stability of the eyes in anticipation of an attended stimulus. Third, we developed a computational model of dynamic attention, which proposes specific mechanisms underlying temporal attention and its selectivity. Lastly, I will mention how we are testing predictions of the model with MEG. Altogether, this research shows how precisely timed voluntary attention helps manage inherent limits in visual processing across short time intervals, advancing our understanding of attention as a dynamic process.
The role of motion in localizing objects
Everything we see has a location. We know where things are before we know what they are. But how do we know where things are? Receptive fields in the visual system specify location, but neural delays lead to serious errors whenever targets or eyes are moving. Motion may be the problem here, but motion can also be the solution, correcting for the effects of delays and eye movements. To demonstrate this, I will present results from three motion illusions where perceived location differs radically from physical location. These help us understand how and where position is coded. We first look at the effects of a target’s simple forward motion on its perceived location. Second, we look at the perceived location of a target that has internal motion as well as forward motion. The two directions combine to produce an illusory path. This “double-drift” illusion strongly affects perceived position but, surprisingly, not eye movements or attention. Even more surprising, fMRI shows that the shifted percept does not emerge in the visual cortex but is seen instead in the frontal lobes. Finally, we report that a moving frame also shifts the perceived positions of dots flashed within it. Participants report the dot positions relative to the frame, as if the frame were not moving. These frame-induced position effects suggest a link to visual stability, where we see a steady world despite massive displacements during saccades. These motion-based effects on perceived location lead to new insights concerning how and where position is coded in the brain.
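A toy way to express the double-drift combination described above: the perceived path is modeled as the physical path plus an accumulated fraction of the target's internal (pattern) motion. The gain and the example trajectory are free illustrative assumptions, not measured values from these experiments.

```python
import numpy as np

def perceived_path(physical_pos, internal_velocity, dt=0.01, gain=0.7):
    """Toy model of the double-drift illusion: the perceived trajectory is
    the physical trajectory plus an accumulated fraction of the target's
    internal (pattern) motion. 'gain' is an illustrative assumption."""
    drift = gain * np.cumsum(internal_velocity, axis=0) * dt
    return physical_pos + drift

# Target moves straight up while its internal texture drifts rightward
t = np.arange(0, 1, 0.01)
physical = np.stack([np.zeros_like(t), t], axis=1)               # (x, y) path
internal = np.stack([np.ones_like(t), np.zeros_like(t)], axis=1)  # rightward drift
print(perceived_path(physical, internal)[-1])  # endpoint lies right of the true path
```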
Using opsin genes to see through the eyes of a fish
Many animals are highly visual. They view their world through photoreceptors sensitive to different wavelengths of light. Animal survival and optimal behavioral performance may select for varying photoreceptor sensitivities depending on animal habitat or visual tasks. Our goal is to understand what drives visual diversity from both an evolutionary and a molecular perspective. The group of more than 2000 cichlid fish species is an ideal system for examining such diversity. Cichlids are a colorful group of freshwater fishes. They have undergone adaptive radiation throughout Africa and the New World and occur in rivers and lakes that vary in water clarity. They are also behaviorally complex, having diverse behaviors for foraging, mate choice and even parental care. As a result, cichlids have highly diverse visual systems, with cone sensitivities shifting by 30-90 nm between species. Although this group has seven cone opsin genes, individual species differ in which subset of the cone opsins they express. Some species show developmental shifts in opsin expression, switching from shorter to longer wavelength opsins through ontogeny. Other species modify that developmental program to express just one of the sets, causing the large sensitivity differences. Cichlids are therefore natural mutants for opsin expression. We have used cichlid diversity to explore the relationship between visual sensitivities and ecology. We have also exploited the genomic power of the cichlid system to identify genes and mutations that cause opsin expression shifts. Ultimately, our goal is to learn how different cichlid species see the world and whether these differences matter. Behavioral experiments suggest they do indeed use color vision to survive and thrive. Cichlids therefore are a unique model for exploring how visual systems evolve in a changing world.
Evolution of vision - The regular route and shortcuts
Eyes abound in the animal kingdom. Some are as large as basketballs and others are just fractions of a millimetre. Eyes also come in many different types, such as the compound eyes of insects, the mirror eyes of scallops or our own camera-like eyes. Common to all animal eyes is that they serve the same fundamental role of collecting external information for guiding the animal’s behaviour. But behaviours vary tremendously across the animal kingdom, and it turns out this is the key to understanding how eyes evolved. The lecture will take a tour from the first animals that could only sense the presence of light, to those that saw the first crude image of the world, and finally to animals that use acute vision for interacting with other animals. Amazingly, all these stages of eye evolution still exist in animals living today, and this is how we can unravel the evolution of the behaviours that has been the driving force behind eye evolution.
Photovoltaic Restoration of Sight in Age-related Macular Degeneration
Neural mechanisms of active vision in the marmoset monkey
Human vision relies on rapid eye movements (saccades), made 2-3 times every second, to bring peripheral targets to central foveal vision for high-resolution inspection. This rapid sampling of the world defines the perception-action cycle of natural vision and profoundly impacts our perception. Marmosets have visual processing and eye movements similar to those of humans, including a fovea that supports high-acuity central vision. Here, I present a novel approach developed in my laboratory for investigating the neural mechanisms of visual processing using naturalistic free viewing and simple target-foraging paradigms. First, we establish that it is possible to map receptive fields in the marmoset with high precision in visual areas V1 and MT without constraints on fixation of the eyes. Instead, we use an off-line correction for eye position during foraging combined with high-resolution eye tracking. This approach allows us to simultaneously map receptive fields, even at the precision of foveal V1 neurons, while also assessing the impact of eye movements on the visual information encoded. We find that the visual information encoded by neurons varies dramatically across the saccade-to-fixation cycle, with most information localized to brief post-saccadic transients. In a second study we examined whether target selection prior to saccades can predictively influence how foveal visual information is subsequently processed in post-saccadic transients. Because every saccade brings a target to the fovea for detailed inspection, we hypothesized that predictive mechanisms might prime foveal populations to process the target. Using neural decoding from laminar arrays placed in foveal regions of area MT, we find that the direction of motion of a fixated target can be predictively read out from foveal activity even before its post-saccadic arrival. These findings highlight the dynamic and predictive nature of visual processing during eye movements and the utility of the marmoset as a model of active vision. Funding sources: NIH EY030998 to JM, Life Sciences Fellowship to JY
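A schematic of the off-line correction described: stimulus coordinates are shifted into eye-centered coordinates using the tracked gaze position, and the receptive field is then mapped by spike-triggered averaging. This is a simplified sketch of the general approach under assumed data shapes; the actual analysis pipeline is certainly more involved.

```python
import numpy as np

def retinal_coords(stimulus_xy, gaze_xy):
    """Convert screen-coordinate stimulus positions to eye-centered
    (retinal) coordinates using the tracked gaze position on each frame."""
    return stimulus_xy - gaze_xy

def spike_triggered_average(stim_frames, spike_counts):
    """Receptive-field map as the spike-weighted average stimulus frame.
    A toy stand-in for free-viewing RF mapping with offline correction."""
    weights = spike_counts / spike_counts.sum()
    return np.tensordot(weights, stim_frames, axes=1)

# Toy usage: 500 frames of 32x32 noise, spikes driven by one pixel
rng = np.random.default_rng(1)
frames = rng.standard_normal((500, 32, 32))
spikes = rng.poisson(np.clip(frames[:, 10, 20], 0, None))
rf = spike_triggered_average(frames, spikes)
print(np.unravel_index(np.abs(rf).argmax(), rf.shape))  # ~ (10, 20)
```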
Stereo vision in humans and insects
Stereopsis – deriving information about distance by comparing views from the two eyes – is widespread in vertebrates but so far known in only one class of invertebrates, the praying mantids. Understanding a form of stereopsis that has evolved independently in such a different nervous system promises to shed light on the constraints governing any stereo system. Behavioral experiments indicate that insect stereopsis is functionally very different from that studied in vertebrates. Vertebrate stereopsis depends on matching up the pattern of contrast in the two eyes; it works in static scenes, and may have evolved in order to break camouflage rather than to detect distances. Insect stereopsis matches up regions of the image where the luminance is changing; it is insensitive to the detailed pattern of contrast and operates to detect the distance to a moving target. Work from my lab has revealed a network of neurons within the mantis brain which are tuned to binocular disparity, including some that project to early visual areas. This is in contrast to previous theories, which postulated that disparity was computed only at a single, late stage where visual information is passed down to motor neurons. Thus, despite their very different properties, the underlying neural mechanisms supporting vertebrate and insect stereopsis may be computationally more similar than has been assumed.
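To make the two correspondence rules concrete, the sketch below contrasts a vertebrate-style score that correlates the detailed luminance pattern in the two eyes with an insect-style score that asks only whether the same locations are changing over time; the threshold and the toy stimuli are illustrative assumptions, not parameters from the mantis experiments.

```python
import numpy as np

def pattern_match_score(left_patch, right_patch):
    """Vertebrate-style correspondence: correlate the detailed luminance
    pattern in the two eyes' patches."""
    l = left_patch - left_patch.mean()
    r = right_patch - right_patch.mean()
    return float((l * r).sum() / (np.linalg.norm(l) * np.linalg.norm(r) + 1e-9))

def change_match_score(left_now, left_prev, right_now, right_prev, thresh=0.1):
    """Insect-style correspondence (as characterized behaviorally): match
    only locations where luminance is changing, ignoring the detailed
    contrast pattern. Threshold is an illustrative assumption."""
    l_change = np.abs(left_now - left_prev) > thresh
    r_change = np.abs(right_now - right_prev) > thresh
    union = np.logical_or(l_change, r_change).sum()
    return float(np.logical_and(l_change, r_change).sum() / max(union, 1))

# A moving target produces overlapping "change" maps in the two eyes even
# when the underlying contrast patterns differ between the eyes.
rng = np.random.default_rng(2)
left_prev, right_prev = rng.random((16, 16)), rng.random((16, 16))
left_now, right_now = left_prev.copy(), right_prev.copy()
left_now[6:10, 6:10] += 0.5    # the same region brightens in both eyes...
right_now[6:10, 6:10] += 0.5   # ...but the fine patterns remain uncorrelated
print(pattern_match_score(left_now, right_now))                          # low
print(change_match_score(left_now, left_prev, right_now, right_prev))    # 1.0
```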
The Evolution of Looking and Seeing: New Insights from Colorful Jumping Spiders
During communication, alignment between signals and sensors can be critical. Signals are often best perceived from specific angles, and sensory systems can also exhibit strong directional biases. However, we know little about how animals establish and maintain such signaling alignment during communication. To investigate this, we characterized the spatial dynamics of visual courtship signaling in the jumping spider Habronattus pyrrithrix. The male performs forward-facing displays involving complex color and movement patterns, with distinct long- and short-range phases. The female views displays with 2 distinct eye types and can only perceive colors and fine patterns of male displays when they are presented in her frontal field of view. Whether and how courtship interactions produce such alignment between male display and female field of view is unknown. We recorded relative positions and orientations of both actors throughout courtship and established the role of each sex in maintaining signaling alignment. Males always oriented their displays toward the female. However, when females were free to move, male displays were consistently aligned with female principal eyes only during short-range courtship. When female position was fixed, signaling alignment consistently occurred during both phases, suggesting that female movement reduces communication efficacy. When female models were experimentally rotated to face away during courtship, males rarely repositioned themselves to re-align their display. However, males were more likely to present certain display elements after females turned to face them. Thus, although signaling alignment is a function of both sexes, males appear to rely on female behavior for effective communication.
The Blurry Beginnings: What nature’s strangest eyes tell us about the evolution of vision
Our study reveals the most elaborate opsin expression patterns ever described in any animal eye. In mantis shrimp, a pugnacious crustacean renowned for its visual sophistication, we found unexpected retinal expression patterns highlighting the potential for cryptic photoreceptor functional diversity, including single photoreceptors that coexpress opsins from different spectral clades and a single opsin with a putative nonvisual function important in color vision. This study demonstrates the evolutionary potential for increasing visual system functional diversity through opsin gene duplication and diversification, as well as changes in patterns of gene coexpression among photoreceptors and retinula cells. These results have significant implications for the function of other visual systems, particularly in arthropods where large numbers of retinally expressed opsins have been documented.
Young IBRO NextInNeuro Webinar - The retinal basis of colour vision: from fish to humans
Colour vision is based on circuit-level comparison of the signals from spectrally distinct types of photoreceptors. In our own eyes, the presence of three types of cones enables trichromatic colour vision. However, many phylogenetically ‘older’ vertebrates have four or more cone types, and in almost all cases the circuits that enable tetra- or possibly even pentachromatic colour vision are not known. This includes the majority of birds, reptiles, amphibians, and bony fish. In the lab we study neuronal circuits for colour vision in non-mammalian vertebrates, with a focus on zebrafish, a tetrachromatic, surface-dwelling teleost. I will discuss how, in the case of zebrafish, retinal colour computations are implemented in a fundamentally different, and probably much more efficient, way compared to how they are thought to work in humans. I will then highlight how these fish circuits might be linked with those in mammals, possibly providing a new way of thinking about how circuits for colour vision are organized in vertebrates.
How our biases may influence our study of visual modalities: Two tales from the sea
It has long been appreciated (and celebrated) that certain species have sensory capabilities that humans do not share, for example polarization, ultraviolet, and infrared vision. What is less appreciated, however, is that our position as terrestrial human scientists can significantly affect our study of animal senses and signals, even within modalities that we do share. For example, our acute vision can lead us to over-interpret the relevance of fine patterns in animals with coarser vision, and our Cartesian heritage as scientists can lead us to divide sensory modalities into orthogonal parameters (e.g. hue and brightness for color vision), even though this division may not exist within the animal itself. This talk examines two cases from marine visual ecology where a reconsideration of our biases as sharp-eyed Cartesian land mammals can help address questions in visual ecology. The first case examines the enormous variation in visual acuity among animals with image-forming eyes, and focuses on how acknowledging the typically poorer resolving power of animals can help us interpret the function of color patterns in cleaner shrimp and their client fish. The second case examines how the typical human division of polarized light stimuli into angle and degree of polarization is problematic, and how a physiologically relevant interpretation is both closer to the truth and resolves a number of issues, particularly when considering the propagation of polarized light.
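For readers unfamiliar with the parameterization in question, the sketch below computes the conventional degree and angle of linear polarization from four polarizer-filtered intensity measurements via Stokes parameters; it simply makes concrete the angle/degree division the talk argues can be misleading, not the alternative interpretation being proposed.

```python
import numpy as np

def degree_and_angle_of_polarization(I0, I45, I90, I135):
    """Conventional degree and angle of linear polarization from intensity
    measurements behind polarizers at 0, 45, 90 and 135 degrees. This is the
    Cartesian-style parameterization discussed in the talk."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)          # total intensity
    S1 = I0 - I90                                # 0/90 degree contrast
    S2 = I45 - I135                              # 45/135 degree contrast
    dolp = np.sqrt(S1**2 + S2**2) / S0           # degree of linear polarization
    aop = 0.5 * np.degrees(np.arctan2(S2, S1))   # angle of polarization
    return dolp, aop

# Fully polarized light at 30 degrees: Malus's law, I(theta) = cos^2(30deg - theta)
analyzer_angles = np.radians([0, 45, 90, 135])
I = np.cos(np.radians(30) - analyzer_angles) ** 2
print(degree_and_angle_of_polarization(*I))  # approximately (1.0, 30.0)
```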
The When, Where and What of visual memory formation
The eyes send a continuous stream of information to the brain along about two million nerve fibers, but only a fraction of this information is stored as visual memories. This talk will detail three neurocomputational models that attempt to understand how the visual system makes on-the-fly decisions about how to encode that information. First, the STST family of models (Bowman & Wyble 2007; Wyble, Potter, Bowman & Nieuwenstein 2011) proposes mechanisms for temporal segmentation of continuous input. The conclusion of this work is that the visual system has mechanisms for rapidly creating brief episodes of attention that highlight important moments in time, and for separating each episode from temporally adjacent neighbors to benefit learning. Next, the RAGNAROC model (Wyble et al. 2019) describes a decision process for determining the spatial focus (or foci) of attention in a spatiotopic field and the neural mechanisms that provide enhancement of targets and suppression of highly distracting information. This work highlights the importance of integrating behavioral and electrophysiological data to provide empirical constraints on a neurally plausible model of spatial attention. The model also highlights how a neural circuit can make decisions in a continuous space, rather than among discrete alternatives. Finally, the binding pool (Swan & Wyble 2014; Hedayati, O’Donnell, Wyble in prep.) provides a mechanism for selectively encoding specific attributes (i.e. color, shape, category) of a visual object to be stored in a consolidated memory representation. The binding pool is akin to a holographic memory system that superimposes selected latent representations corresponding to different attributes of a given object. Moreover, it can bind features into distinct objects by linking them to token placeholders. Future work looks toward combining these models into a coherent framework for understanding the full measure of on-the-fly attentional mechanisms and how they improve learning.
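As a loose, hedged sketch of the binding-pool idea, the code below superimposes attribute vectors bound to object tokens in one shared pool and retrieves an attribute by correlating the pool with its token; it follows the spirit of Swan & Wyble (2014) but is not their implementation, and all dimensions are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
POOL_SIZE, FEAT_SIZE = 4096, 64
TOKEN_SIZE = POOL_SIZE // FEAT_SIZE

def bind(token, feature):
    """Bind one attribute (feature vector) to an object token via an outer
    product, flattened into the shared pool."""
    return np.outer(token, feature).ravel()

def retrieve(pool, token):
    """Recover the attribute bound to a token by correlating the pool with
    that token (matrix-vector product after unflattening)."""
    return pool.reshape(len(token), -1).T @ token / len(token)

# Two objects, each with one attribute, stored superimposed in a single pool
tokens = rng.standard_normal((2, TOKEN_SIZE))
features = rng.standard_normal((2, FEAT_SIZE))
pool = bind(tokens[0], features[0]) + bind(tokens[1], features[1])

recovered = retrieve(pool, tokens[0])
print(np.corrcoef(recovered, features[0])[0, 1] > 0.9)  # True: attribute recovered
```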
The developing visual brain – answers and questions
We will start our talk with a short video of our research, illustrating methods (some old and new) and findings that have provided our current understanding of how visual capabilities develop in infancy and early childhood. However, our research poses some outstanding questions. We will briefly discuss three issues, which are linked by a common focus on the development of visual attentional processing: (1) How do recurrent cortical loops contribute to development? Cortical selectivity (e.g., to orientation, motion, and binocular disparity) develops in the early months of life. However, these systems are not purely feedforward but depend on parallel pathways, with recurrent feedback loops playing a critical role. The development of diverse networks, particularly for motion processing, may explain changes in dynamic responses and resolve developmental data obtained with different methodologies. One possible role for these loops is in top-down attentional control of visual processing. (2) Why do hyperopic infants become strabismic (cross-eyed)? Binocular interaction is a particularly sensitive area of development. Standard clinical accounts suppose that long-sighted (hyperopic) refractive errors require accommodative effort, putting stress on the accommodation-convergence link that leads to its breakdown and strabismus. Our large-scale population screening studies of 9-month-old infants question this: hyperopic infants are at higher risk of strabismus and impaired vision (amblyopia and impaired attention), but these hyperopic infants often under- rather than over-accommodate. This poor accommodation may reflect poor early attention processing, possibly a ‘soft sign’ of subtle cerebral dysfunction. (3) What do many neurodevelopmental disorders have in common? Despite similar cognitive demands, global motion perception is much more impaired than global static form perception across diverse neurodevelopmental disorders, including Down and Williams syndromes, Fragile X, autism, children born prematurely and infants with perinatal brain injury. These deficits in motion processing are associated with deficits in other dorsal-stream functions such as visuo-motor coordination and attentional control, a cluster we have called ‘dorsal stream vulnerability’. However, our neuroimaging measures related to motion coherence in typically developing children suggest that the critical areas for individual differences in global motion sensitivity are not early motion-processing areas such as V5/MT, but downstream parietal and frontal areas involved in decision processes on motion signals. Although these brain networks may also underlie attentional and visuo-motor deficits, we still do not know when and how these deficits differ across different disorders and between individual children. Answering these questions provides necessary steps, not only in increasing our scientific understanding of human visual brain development, but also in designing appropriate interventions to help each child achieve their full potential.
OpenEyeSim 2.1: Rendering Depth-of-Field and Chromatic Aberration Faster than Real-Time Simulations of Visual Accommodation
Bernstein Conference 2024
Eyes on the future: Unveiling mental simulations as a deliberative decision-making mechanism
FENS Forum 2024