Topic spotlight: eye

Discover seminars, jobs, and research tagged with eye across Neuro.
50 curated items · 50 seminars
Updated 6 months ago

Latest (50 results)
Seminar · Neuroscience · Recording

Seeing a changing world through the eyes of coral fishes

Fabio Cortesi
The University of Queensland
Jun 26, 2025
Seminar · Neuroscience

Neural circuits underlying sleep structure and functions

Antoine Adamantidis
University of Bern
Jun 13, 2025

Sleep is an active state critical for processing emotional memories encoded during waking in both humans and animals. There is a remarkable overlap between the brain structures and circuits active during sleep, particularly rapid eye-movement (REM) sleep, and those encoding emotions. Accordingly, disruptions in sleep quality or quantity, including REM sleep, are often associated with, and precede the onset of, nearly all affective psychiatric and mood disorders. In this context, a major biomedical challenge is to better understand the mechanisms underlying the relationship between (REM) sleep and emotion encoding, in order to improve treatments for mental health. This lecture will summarize our investigation of the cellular and circuit mechanisms underlying sleep architecture, sleep oscillations, and local brain dynamics across sleep-wake states, using electrophysiological recordings combined with single-cell calcium imaging or optogenetics. The presentation will detail the discovery of a 'somato-dendritic decoupling' in prefrontal cortex pyramidal neurons underlying REM sleep-dependent stabilization of optimal emotional memory traces. This decoupling reflects a tonic inhibition at the somas of pyramidal cells occurring simultaneously with a selective disinhibition of their dendritic arbors during REM sleep. Recent findings on REM sleep-dependent subcortical inputs and neuromodulation of this decoupling will be discussed in the context of synaptic plasticity and the optimization of emotional responses in the maintenance of mental health.

Seminar · Neuroscience

The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing

Yoram Bonneh
Bar-Ilan
Jun 10, 2025
Seminar · Neuroscience · Recording

Restoring Sight to the Blind: Effects of Structural and Functional Plasticity

Noelle Stiles
Rutgers University
May 22, 2025

Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains: are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation, due both to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, and behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still-viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.

Seminar · Neuroscience · Recording

Altered grid-like coding in early blind people and the role of vision in conceptual navigation

Roberto Bottini
CIMeC, University of Trento
Mar 6, 2025
Seminar · Neuroscience

Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors

Prof. Dr. Ziad M. Hafed
Werner Reichardt Center for Integrative Neuroscience, and Hertie Institute for Clinical Brain Research University of Tübingen
Feb 12, 2025

The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.

Seminar · Neuroscience · Recording

Dynamics of braille letter perception in blind readers

Santani Teng
Smith-Kettlewell Eye Research Institute
Jan 23, 2025
Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

Seminar · Neuroscience

Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects

Alan Kingstone
University of British Columbia
Nov 19, 2024
Seminar · Neuroscience

Imagining and seeing: two faces of prosopagnosia

Jason Barton
University of British Columbia
Nov 5, 2024
Seminar · Neuroscience

Mapping the Brain’s Visual Representations Using Deep Learning

Katrin Franke
Byers Eye Institute, Department of Ophthalmology, Stanford Medicine
Jun 6, 2024
Seminar · Neuroscience

Stability of visual processing in passive and active vision

Tobias Rose
Institute of Experimental Epileptology and Cognition Research University of Bonn Medical Center
Mar 28, 2024

The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.

Seminar · Neuroscience · Recording

Molecular Characterization of Retinal Cell Types: Insights into Evolutionary Origins and Regional Specializations

Yirong Peng
UCLA Stein Eye Institute
Mar 4, 2024
Seminar · Neuroscience

Sensory Consequences of Visual Actions

Martin Rolfs
Humboldt-Universität zu Berlin
Dec 8, 2023

We use rapid eye, head, and body movements to extract information from a new part of the visual scene upon each new gaze fixation. But the consequences of such visual actions go beyond their intended sensory outcomes. On the one hand, intrinsic consequences accompany movement preparation as covert internal processes (e.g., predictive changes in the deployment of visual attention). On the other hand, visual actions have incidental consequences, side effects of moving the sensory surface to its intended goal (e.g., global motion of the retinal image during saccades). In this talk, I will present studies in which we investigated intrinsic and incidental sensory consequences of visual actions and their sensorimotor functions. Our results provide insights into continuously interacting top-down and bottom-up sensory processes, and they reify the necessity to study perception in connection to motor behavior that shapes its fundamental processes.

Seminar · Neuroscience · Recording

The melanopsin mosaic: exploring the diversity of non-image forming retinal ganglion cells

Ben Sivyer
OHSU, Casey Eye Institute
Oct 30, 2023

In this talk, I will focus on recent work that has uncovered the diversity of intrinsically photosensitive retinal ganglion cells (ipRGCs). These are a unique type of retinal ganglion cell that contains the photopigment melanopsin. ipRGCs are the retinal neurons responsible for driving non-image-forming behaviors and reflexes, such as circadian entrainment and pupil constriction, amongst many others. My lab has recently focused on uncovering the diversity of ipRGCs, their distribution throughout the mammalian retina, and their axon projections in the brain.

Seminar · Neuroscience · Recording

How fly neurons compute the direction of visual motion

Alexander Borst
Max Planck Institute for Biological Intelligence
Oct 9, 2023

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.

Seminar · Neuroscience

Restoring function in advanced disease with photoreceptor cell replacement therapy

Rachael Pearson
King's College London
Jun 13, 2023
Seminar · Neuroscience · Recording

The development of visual experience

Linda Smith
Indiana University Bloomington
Jun 6, 2023

Vision and visual cognition are experience-dependent, with likely multiple sensitive periods, but we know very little about the statistics of visual experience at the scale of everyday life and how they might change with development. By traditional assumptions, the world at the massive scale of daily life presents pretty much the same visual statistics to all perceivers. I will present an overview of our work on ego-centric vision showing that this is not the case. The momentary image received at the eye is spatially selective, dependent on the location, posture, and behavior of the perceiver. If a perceiver’s location, possible postures, and/or preferences for looking at some kinds of scenes over others are constrained, then their sampling of images from the world, and thus the visual statistics at the scale of daily life, could be biased. I will present evidence, with respect to both low-level and higher-level visual statistics, about the developmental changes in the visual input over the first 18 months post-birth.

Seminar · Neuroscience · Recording

From following dots to understanding scenes

Alexander Göttker
Giessen
May 2, 2023
Seminar · Neuroscience

Learning through the eyes and ears of a child

Brenden Lake
NYU
Apr 21, 2023

Young children have sophisticated representations of their visual and linguistic environment. Where do these representations come from? How much knowledge arises through generic learning mechanisms applied to sensory data, and how much requires more substantive (possibly innate) inductive biases? We examine these questions by training neural networks solely on longitudinal data collected from a single child (Sullivan et al., 2020), consisting of egocentric video and audio streams. Our principal findings are as follows: 1) Based on visual only training, neural networks can acquire high-level visual features that are broadly useful across categorization and segmentation tasks. 2) Based on language only training, networks can acquire meaningful clusters of words and sentence-level syntactic sensitivity. 3) Based on paired visual and language training, networks can acquire word-referent mappings from tens of noisy examples and align their multi-modal conceptual systems. Taken together, our results show how sophisticated visual and linguistic representations can arise through data-driven learning applied to one child’s first-person experience.

Seminar · Neuroscience · Recording

Applying Structural Alignment theory to Early Verb Learning

Jane Childers
Trinity University
Feb 2, 2023

Learning verbs is difficult yet critical to acquiring one's native language. Children appear to benefit from seeing multiple events and comparing them to each other, and structural alignment theory provides a good theoretical framework to guide research into how preschool children may compare events as they learn new verbs. The talk will cover six studies of early verb learning that use eye-tracking as well as other behavioral (pointing) procedures, and that test key predictions of structural alignment theory, including the prediction that seeing similar examples before more varied ones helps observers learn how to compare (progressive alignment), and the prediction that very low alignability between events is a cue that those events should be ignored. Whether or how statistical learning may also be at work will be considered.

Seminar · Neuroscience · Recording

Direction-selective ganglion cells in primate retina: a subcortical substrate for reflexive gaze stabilization?

Teresa Puthussery
University of California, Berkeley
Jan 23, 2023

To maintain a stable and clear image of the world, our eyes reflexively follow the direction in which a visual scene is moving. Such gaze stabilization mechanisms reduce image blur as we move in the environment. In non-primate mammals, this behavior is initiated by ON-type direction-selective ganglion cells (ON-DSGCs), which detect the direction of image motion and transmit signals to brainstem nuclei that drive compensatory eye movements. However, ON-DSGCs have not yet been functionally identified in primates, raising the possibility that the visual inputs that drive this behavior instead arise in the cortex. In this talk, I will present molecular, morphological and functional evidence for identification of an ON-DSGC in macaque retina. The presence of ON-DSGCs highlights the need to examine the contribution of subcortical retinal mechanisms to normal and aberrant gaze stabilization in the developing and mature visual system. More generally, our findings demonstrate the power of a multimodal approach to study sparsely represented primate RGC types.

Seminar · Neuroscience · Recording

Visual Perception in Cerebral Visual Impairment (CVI)

Lotfi Merabet
Mass Eye and Ear, Harvard Medical School
Jan 19, 2023
Seminar · Neuroscience · Recording

Mechanisms of relational structure mapping across analogy tasks

Adam Chuderski
Jagiellonian University
Jan 19, 2023

Following the seminal structure mapping theory of Dedre Gentner, the process of mapping the corresponding structures of relations defining two analogs has been understood as a key component of analogy making. However, not without merit, in recent years some semantic, pragmatic, and perceptual aspects of analogy mapping have attracted the primary attention of analogy researchers. For almost a decade, our team has been refocusing on relational structure mapping, investigating its potential mechanisms across various analogy tasks, both abstract (semantically lean) and more concrete (semantically rich), using diverse methods (behavioral, correlational, eye-tracking, EEG). I will present an overview of our main findings. They suggest that structure mapping (1) consists of an incremental construction of the ultimate mental representation, (2) strongly depends on working memory resources and reasoning ability, (3) even if as little as a single trivial relation needs to be represented mentally. Effective mapping (4) is related to the slowest brain rhythm, the delta band (around 2-3 Hz), suggesting its highly integrative nature. Finally, we have developed a new task, Graph Mapping, which involves pure mapping of two explicit relational structures. This task allows for precise investigation and manipulation of the mapping process in experiments, and is one of the best proxies of individual differences in reasoning ability. Structure mapping is as crucial to analogy as Gentner advocated, and perhaps it is crucial to cognition in general.

Seminar · Neuroscience · Recording

Visual prostheses: from the eye to the brain

Diego Ghezzi
École polytechnique fédérale de Lausanne
Jan 10, 2023
Seminar · Neuroscience · Recording

Electronic Visual Prostheses to Treat Blindness

Jim Weiland
University of Michigan
Nov 29, 2022
Seminar · Neuroscience

How fly neurons compute the direction of visual motion

Alexander Borst
Max Planck Institute of Neurobiology - Martinsried
Nov 7, 2022

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Our results obtained in the fruit fly Drosophila demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.

Seminar · Neuroscience

Baby steps to breakthroughs in precision health in neurodevelopmental disorders

Shafali Spurling Jeste
Children's Hospital Los Angeles
Oct 26, 2022
Seminar · Neuroscience

Real-world scene perception and search from foveal to peripheral vision

Antje Nuthmann
Kiel University
Oct 24, 2022

A high-resolution central fovea is a prominent design feature of human vision. But how important is the fovea for information processing and gaze guidance in everyday visual-cognitive tasks? Following on from classic findings for sentence reading, I will present key results from a series of eye-tracking experiments in which observers had to search for a target object within static or dynamic images of real-world scenes. Gaze-contingent scotomas were used to selectively deny information processing in the fovea, parafovea, or periphery. Overall, the results suggest that foveal vision is less important and peripheral vision is more important for scene perception and search than previously thought. The importance of foveal vision was found to depend on the specific requirements of the task. Moreover, the data support a central-peripheral dichotomy in which peripheral vision selects and central vision recognizes.

Seminar · Neuroscience

Development and evolution of neuronal connectivity

Alain Chédotal
Vision Institute, Paris, France
Sep 28, 2022

In most animal species, including humans, commissural axons connect neurons on the left and right sides of the nervous system. In humans, abnormal axon midline crossing during development causes a whole range of neurological disorders, from congenital mirror movements and horizontal gaze palsy to scoliosis and binocular vision deficits. The mechanisms that guide axons across the CNS midline were thought to be evolutionarily conserved, but our recent results suggest that they differ across vertebrates. I will discuss the evolution of visual projection laterality during vertebrate evolution. In most vertebrates, camera-style eyes contain retinal ganglion cell (RGC) neurons projecting to visual centers on both sides of the brain. However, in fish, RGCs were thought to innervate only the contralateral side. Using 3D imaging and tissue clearing, we found that bilateral visual projections exist in non-teleost fishes. We also found that the developmental program specifying visual system laterality differs between fishes and mammals. We are currently using various strategies to discover genes controlling the development of visual projections. I will also present ongoing work using 3D imaging techniques to study the development of the visual system in the human embryo.

Seminar · Neuroscience · Recording

A neural mechanism for terminating decisions

Gabriel Stine
Shadlen Lab, Columbia University
Sep 21, 2022

The brain makes decisions by accumulating evidence until there is enough to stop and choose. Neural mechanisms of evidence accumulation are well established in association cortex, but the site and mechanism of termination are unknown. Here, we elucidate a mechanism for termination by neurons in the primate superior colliculus. We recorded simultaneously from neurons in lateral intraparietal cortex (LIP) and the superior colliculus (SC) while monkeys made perceptual decisions, reported by eye movements. Single-trial analyses revealed distinct dynamics: LIP tracked the accumulation of evidence on each decision, and SC generated one burst at the end of the decision, occasionally preceded by smaller bursts. We hypothesized that the bursts manifest a threshold mechanism applied to LIP activity to terminate the decision. Focal inactivation of SC produced behavioral effects diagnostic of an impaired threshold sensor, requiring a stronger LIP signal to terminate a decision. The results reveal the transformation from deliberation to commitment.

Seminar · Neuroscience · Recording

Seeing the world through moving photoreceptors - binocular photomechanical microsaccades give fruit fly hyperacute 3D-vision

Mikko Juusola
University of Sheffield
Aug 1, 2022

To move efficiently, animals must continuously work out their x,y,z positions with respect to real-world objects, and many animals have a pair of eyes to achieve this. How photoreceptors actively sample the eyes’ optical image disparity is not understood because this fundamental information-limiting step has not been investigated in vivo over the eyes’ whole sampling matrix. This integrative multiscale study will advance our current understanding of stereopsis from static image disparity comparison to a morphodynamic active sampling theory. It shows how photomechanical photoreceptor microsaccades enable Drosophila superresolution three-dimensional vision and proposes neural computations for accurately predicting these flies’ depth-perception dynamics, limits, and visual behaviors.

Seminar · Neuroscience

Binocular combination of light

Daniel H. Baker
University of York (UK)
Jul 14, 2022

The brain combines signals across the eyes. This process is well-characterized for the perceptual anatomical pathway through V1 that primarily codes contrast, where interocular normalization ensures that responses are approximately equal for monocular and binocular stimulation. But we have much less understanding of how luminance is combined binocularly, both in the cortex and in subcortical structures that govern pupil diameter. Here I will describe the results of experiments using a novel combined EEG and pupillometry paradigm to simultaneously index binocular combination of luminance flicker in parallel pathways. The results show evidence of a more linear process than for spatial contrast, that may reflect different operational constraints in distinct anatomical pathways.

Seminar · Neuroscience

Diurnal rhythms of the eye

Rigmor C. Baraas
University of South-Eastern Norway (Norway)
Jun 23, 2022

Do all components of the living human eye have a measurable diurnal rhythm? In this talk I will discuss methodologies and results of studies on adolescents and young adults. I will also touch upon the associations between diurnal rhythms of the eye and behavioral activities.

Seminar · Neuroscience

Eyes wide shut, brain wide up!

Antoine Adamantidis
University of Bern, Department of Neurology, Switzerland
Jun 23, 2022
Seminar · Neuroscience

Using eye tracking to investigate neural circuits in health and disease

Doug Munoz
Director, Centre for Neuroscience Studies & Professor, Biomedical & Molecular Sciences, Psychology & Medicine, Queen's University, Kingston, ON, Canada
Jun 14, 2022
Seminar · Neuroscience

Perception during visual disruptions

Grace Edwards and Lina Teichmann
National Institute of Mental Health, Laboratory of Brain and Cognition, U.S. Department of Health and Human Services.
Jun 13, 2022

Visual perception feels continuous despite frequent disruptions in our visual environment. For example, internal events, such as saccadic eye movements, and external events, such as object occlusion, temporarily prevent visual information from reaching the brain. Combining evidence from these two models of visual disruption (occlusion and saccades), we will describe what information is maintained and how it is updated across the sensory interruption. Lina Teichmann will focus on dynamic occlusion and demonstrate how object motion is processed through perceptual gaps. Grace Edwards will then describe what pre-saccadic information is maintained across a saccade and how it interacts with post-saccadic processing in retinotopically relevant areas of the early visual cortex. Both occlusion and saccades provide a window into how the brain bridges perceptual disruptions. Our evidence thus far suggests a role for extrapolation, integration, and potentially suppression in both models. Combining evidence from these typically separate fields enables us to determine whether there is a set of mechanisms that supports visual processing during visual disruptions in general.

Seminar · Neuroscience · Recording

What the fly’s eye tells the fly’s brain…and beyond

Gwyneth Card
Janelia Research Campus, HHMI
Jun 1, 2022

Fly Escape Behaviors: Flexible and Modular

We have identified a set of escape maneuvers performed by a fly when confronted by a looming object. These escape responses can be divided into distinct behavioral modules. Some of the modules are very stereotyped, as when the fly rapidly extends its middle legs to jump off the ground. Other modules are more complex and require the fly to combine information about both the location of the threat and its own body posture. In response to an approaching object, a fly chooses some varying subset of these behaviors to perform. We would like to understand the neural process by which a fly chooses when to perform a given escape behavior. Beyond an appealing set of behaviors, this system has two other distinct advantages for probing neural circuitry. First, the fly will perform escape behaviors even when tethered such that its head is fixed and neural activity can be imaged or monitored using electrophysiology. Second, using Drosophila as an experimental animal makes available a rich suite of genetic tools to activate, silence, or image small numbers of cells potentially involved in the behaviors.

Neural Circuits for Escape

Until recently, visually induced escape responses have been considered a hardwired reflex in Drosophila. White-eyed flies with deficient visual pigment will perform a stereotyped middle-leg jump in response to a light-off stimulus, and this reflexive response is known to be coordinated by the well-studied giant fiber (GF) pathway. The GFs are a pair of electrically connected, large-diameter interneurons that traverse the cervical connective. A single GF spike results in a stereotyped pattern of muscle potentials on both sides of the body that extends the fly's middle pair of legs and starts the flight motor. Recently, we have found that a fly escaping a looming object displays many more behaviors than just leg extension. Most of these behaviors could not possibly be coordinated by the known anatomy of the GF pathway. Response to a looming threat thus appears to involve activation of numerous different neural pathways, which the fly may decide if and when to employ. Our goal is to identify the descending pathways involved in coordinating these escape behaviors as well as the central brain circuits, if any, that govern their activation.

Automated Single-Fly Screening

We have developed a new kind of high-throughput genetic screen to automatically capture fly escape sequences and quantify individual behaviors. We use this system to perform a high-throughput genetic silencing screen to identify cell types of interest. Automation permits analysis at the level of individual fly movements, while retaining the capacity to screen through thousands of GAL4 promoter lines. Single-fly behavioral analysis is essential to detect more subtle changes in behavior during the silencing screen, and thus to identify more specific components of the contributing circuits than previously possible when screening populations of flies. Our goal is to identify candidate neurons involved in coordination and choice of escape behaviors.

Measuring Neural Activity During Behavior

We use whole-cell patch-clamp electrophysiology to determine the functional roles of any identified candidate neurons. Flies perform escape behaviors even when their head and thorax are immobilized for physiological recording. This allows us to link a neuron's responses directly to an action.

SeminarNeuroscienceRecording

Clinical neuroscience and the heart-brain axis (BACN Mid-career Prize Lecture 2021)

Sarah Garfinkel
Institute of Cognitive Neuroscience, UCL
May 24, 2022

Cognitive and emotional processes are shaped by the dynamic integration of brain and body. A major channel of interoceptive information comes from the heart, where phasic signals are conveyed to the brain to indicate how fast and strong the heart is beating. This talk will discuss how interoceptive processes operate across conscious and unconscious levels to influence emotion and memory. The interoceptive channel is disrupted in distinct ways in individuals with autism and anxiety. Selective interoceptive disturbance is related to symptomatology including dissociation and the transdiagnostic expression of anxiety. Interoceptive training can reduce anxiety, with enhanced interoceptive precision associated with greater insula connectivity following targeted interoceptive feedback. The discrete cardiac effects on emotion and cognition have broad relevance to clinical neuroscience, with implications for peripheral treatment targets and behavioural interventions.

SeminarNeuroscienceRecording

Why do some animals have more than two eyes?

Lauren Sumner-Rooney
Leibniz Institute for Research on Evolution and Biodiversity
May 9, 2022

The evolution of vision revolutionised animal biology, and eyes have evolved in a stunning array of diverse forms over the past half a billion years. Among these are curious duplicated visual systems, where eyes can be spread across the body and specialised for different tasks. Although it sounds radical, duplicated vision is found in most major groups across the animal kingdom, but remains poorly understood. We will explore how and why animals collect information about their environment in this unusual way, looking at examples from tropical forests to the sea floor, and from ancient arthropods to living jellyfish. Have we been short-changed with just two eyes? Dr Lauren Sumner-Rooney is a Research Fellow at the OUMNH studying the function and evolution of animal visual systems. Lauren completed her undergraduate degree at Oxford in 2012, and her PhD at Queen’s University Belfast in 2015. She worked as a research technician and science communicator at the Royal Veterinary College (2015-2016) and held a postdoctoral research fellowship at the Museum für Naturkunde, Berlin (2016-2017) before arriving at the Museum in 2017.

SeminarNeuroscienceRecording

The evolution and development of visual complexity: insights from stomatopod visual anatomy, physiology, behavior, and molecules

Megan Porter
University of Hawaii
May 2, 2022

Bioluminescence, which is rare on land, is extremely common in the deep sea, being found in 80% of the animals living between 200 and 1000 m. These animals rely on bioluminescence for communication, feeding, and/or defense, so the generation and detection of light is essential to their survival. Our present knowledge of this phenomenon has been limited by the difficulty of bringing live deep-sea animals to the surface, and by the lack of proper techniques needed to study this complex system. However, new genomic techniques are now available, and a team with extensive experience in deep-sea biology, vision, and genomics has been assembled to lead this project. The project aims to address three questions: (1) What are the evolutionary patterns of different types of bioluminescence in deep-sea shrimp? (2) How are deep-sea organisms’ eyes adapted to detect bioluminescence? (3) Can bioluminescent organs (called photophores) detect light in addition to emitting it? Findings from this study will provide valuable insight into a complex system vital to communication, defense, camouflage, and species recognition. This study will bring substantial contributions to the fields of deep-sea and evolutionary biology, and immediately improve our understanding of bioluminescence and light detection in the marine environment. In addition to scientific advancement, this project will reach students from kindergarten through college through the development and dissemination of educational tools, a series of molecular and organismal-based workshops, museum exhibits, public seminars, and biodiversity initiatives.

SeminarNeuroscienceRecording

Retinal responses to natural inputs

Fred Rieke
University of Washington
Apr 18, 2022

The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single photon absorptions with high fidelity; single photon signals in photoreceptors, which are only 0.03–0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately, there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors. Thus adaptation is paramount to cone vision. Again, one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
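The physical limit the abstract refers to follows directly from Poisson statistics: if a stimulus delivers on average N photons, the count variance also equals N, so the best achievable signal-to-noise ratio is N/√N = √N. A minimal sketch of this relationship (function names are illustrative, not from the talk):

```python
import math
import random

def shot_noise_snr(mean_photons):
    """For Poisson photon arrivals the variance equals the mean,
    so the best possible SNR is mean / sqrt(mean) = sqrt(mean)."""
    return math.sqrt(mean_photons)

def simulated_snr(mean_photons, trials=200_000, seed=1):
    """Monte Carlo check: draw Poisson counts (Knuth's algorithm,
    adequate for small means) and return the empirical mean / std."""
    rng = random.Random(seed)

    def poisson(lam):
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    counts = [poisson(mean_photons) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean / math.sqrt(var)
```

Doubling detection reliability therefore requires four times as much light, which is why single-photon fidelity in rods matters so much at the dim end of the roughly 12-log-unit operating range.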

SeminarNeuroscienceRecording

Mutation targeted gene therapy approaches to alter rod degeneration and retain cones

Maureen McCall
University of Louisville
Mar 28, 2022

My research uses electrophysiological techniques to evaluate normal retinal function, dysfunction caused by blinding retinal diseases, and the restoration of function using a variety of therapeutic strategies. We can use our understanding of normal retinal function and disease-related changes to construct optimal therapeutic strategies and evaluate how they ameliorate the effects of disease. Retinitis pigmentosa (RP) is a family of blinding eye diseases caused by photoreceptor degeneration. The absence of the cells that form this primary signal leads to blindness. My interest in RP involves the evaluation of therapies to restore vision by replacing degenerated photoreceptors either with (1) new stem or other embryonic cells, manipulated to become photoreceptors, or (2) prosthetic devices that replace the photoreceptor signal with an electronic response to light. Glaucoma is caused by increased intraocular pressure and leads to ganglion cell death, which eliminates the link between the retinal output and central visual processing. We are parsing out the effects of increased intraocular pressure and aging on ganglion cells. Congenital Stationary Night Blindness (CSNB) is a family of diseases in which signaling is eliminated between rod photoreceptors and their postsynaptic targets, rod bipolar cells. This deafferents the retinal circuit that is responsible for vision under dim lighting. My interest in CSNB involves understanding the basic interplay between excitation and inhibition in the retinal circuit and its normal development. Because of the targeted nature of this disease, we are hopeful that a gene therapy approach can be developed to restore night vision. My work utilizes rodent disease models whose mutations mimic those found in human patients.
While molecular manipulation of rodents is a fairly common approach, we have recently developed a mutant NIH miniature swine model of a common form of autosomal dominant RP (Pro23His rhodopsin mutation) in collaboration with the National Swine Resource and Research Center at the University of Missouri. More genetically modified mini-swine models are in the pipeline to examine other retinal diseases.

SeminarNeuroscienceRecording

Taming chaos in neural circuits

Rainer Engelken
Columbia University
Feb 23, 2022

Neural circuits exhibit complex activity patterns, both spontaneously and in response to external stimuli. Information encoding and learning in neural circuits depend on the ability of time-varying stimuli to control spontaneous network activity. In particular, variability arising from the sensitivity to initial conditions of recurrent cortical circuits can limit the information conveyed about the sensory input. Spiking and firing rate network models can exhibit such sensitivity to initial conditions, which is reflected in their dynamic entropy rate and attractor dimensionality computed from their full Lyapunov spectrum. I will show how chaos in both spiking and rate networks depends on biophysical properties of neurons and the statistics of time-varying stimuli. In spiking networks, increasing the input rate or coupling strength aids in controlling the driven target circuit, which is reflected in both a reduced trial-to-trial variability and a decreased dynamic entropy rate. With sufficiently strong input, a transition towards complete network state control occurs. Surprisingly, this transition does not coincide with the transition from chaos to stability but occurs at even larger values of external input strength. Controllability of spiking activity is facilitated when neurons in the target circuit have a sharp spike onset, that is, a high speed at which neurons launch into the action potential. I will also discuss chaos and controllability in firing-rate networks in the balanced state. For these, external control of recurrent dynamics strongly depends on correlations in the input. This phenomenon was studied with a non-stationary dynamic mean-field theory that determines how the activity statistics and the largest Lyapunov exponent depend on frequency and amplitude of the input, recurrent coupling strength, and network size. This shows that uncorrelated inputs facilitate learning in balanced networks.
The results highlight the potential of Lyapunov spectrum analysis as a diagnostic for machine learning applications of recurrent networks. They are also relevant in light of recent advances in optogenetics that allow for time-dependent stimulation of a select population of neurons.

SeminarNeuroscience

Visual and cross-modal plasticity in adult humans

Claudia Lunghi
Laboratoire des Systèmes Perceptifs, Ecole Normale Supérieure & CNRS, Paris, France
Feb 3, 2022

Neuroplasticity is a fundamental property of the nervous system that is maximal early in life, within a specific temporal window called the critical period. However, it is still unclear to what extent the plastic potential of the visual cortex is retained in adulthood. We have revealed residual ocular dominance plasticity in adult humans by showing that short-term monocular deprivation unexpectedly boosts the deprived eye (both at the perceptual and at the neural level), reflecting homeostatic plasticity. This effect is accompanied by a decrease of GABAergic inhibition in the primary visual cortex and can be modulated by non-visual factors (motor activity and motor plasticity). Finally, we have found that cross-modal plasticity is preserved in adult normal-sighted humans, as short-term monocular deprivation can alter early visuo-tactile interactions. Taken together, these results challenge the classical view of a hard-wired adult visual cortex, indicating that homeostatic plasticity can be reactivated in adult humans.

SeminarNeuroscienceRecording

Opponent processing in the expanded retinal mosaic of Nymphalid butterflies

Gregor Belušič
University of Ljubljana
Dec 13, 2021

In many butterflies, the ancestral trichromatic insect colour vision, based on UV-, blue- and green-sensitive photoreceptors, is extended with red-sensitive cells. Physiological evidence for red receptors has been missing in nymphalid butterflies, although some species can discriminate red hues well. In eight species from genera Archaeoprepona, Argynnis, Charaxes, Danaus, Melitaea, Morpho, Heliconius and Speyeria, we found a novel class of green-sensitive photoreceptors that have hyperpolarizing responses to stimulation with red light. These green-positive, red-negative (G+R–) cells are allocated to positions R1/2, normally occupied by UV and blue-sensitive cells. Spectral sensitivity, polarization sensitivity and temporal dynamics suggest that the red opponent units (R–) are the basal photoreceptors R9, interacting with R1/2 in the same ommatidia via direct inhibitory synapses. We found the G+R– cells exclusively in butterflies with red-shining ommatidia, which contain longitudinal screening pigments. The implementation of the red colour channel with R9 is different from pierid and papilionid butterflies, where cells R5–8 are the red receptors. The nymphalid red-green opponent channel and the potential for tetrachromacy seem to have been switched on several times during evolution, balancing between the cost of neural processing and the value of extended colour information.

SeminarNeuroscience

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Helene Schreyer
Gollisch lab, University Medical Center Göttingen, Germany
Dec 9, 2021

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons – the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for studying the assumption of linear spatial integration, yet, because they are buried in the middle of the retina, it is challenging to measure their neural activity. Here, I present bipolar cell data in which I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells.
Overall, I demonstrate that simple linear assumptions do not always apply and more complex models are needed to describe what the retina “tells” the brain.
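The distinction between linear and nonlinear spatial integration that the talk opens with is classically probed with contrast-reversing gratings: a spatially linear cell sums bright and dark half-fields to zero, while rectified subunits respond at every reversal. A toy sketch of that logic (hypothetical names, not the speaker's model):

```python
def receptive_field_response(stimulus, rectify_subunits):
    """Sum two spatial subunits; with rectification each subunit
    transmits only positive contrast (nonlinear integration)."""
    transform = (lambda s: max(s, 0.0)) if rectify_subunits else (lambda s: s)
    return sum(transform(s) for s in stimulus)

# Contrast-reversing grating: two half-fields modulated in antiphase.
phase_a = [+1.0, -1.0]  # left half bright, right half dark
phase_b = [-1.0, +1.0]  # contrast reversed half a cycle later

linear = [receptive_field_response(p, False) for p in (phase_a, phase_b)]
nonlinear = [receptive_field_response(p, True) for p in (phase_a, phase_b)]
# Linear integration cancels in both phases; rectified subunits respond
# to both reversals -- the classic frequency-doubled signature.
```

Under linear integration both phases give zero net drive, whereas the rectified-subunit version responds equally to both reversals; this frequency-doubled response is the standard experimental signature of nonlinear spatial integration.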

SeminarNeuroscienceRecording

Spatial alignment supports visual comparisons

Nina Simms
Northwestern University
Dec 2, 2021

Visual comparisons are ubiquitous, and they can also be an important source for learning (e.g., Gentner et al., 2016; Kok et al., 2013). In science, technology, engineering, and math (STEM), key information is often conveyed through figures, graphs, and diagrams (Mayer, 1993). Comparing within and across visuals is critical for gleaning insight into the underlying concepts, structures, and processes that they represent. This talk addresses how people make visual comparisons and how visual comparisons can be best supported to improve learning. In particular, the talk will present a series of studies exploring the Spatial Alignment Principle (Matlen et al., 2020), derived from Structure-Mapping Theory (Gentner, 1983). Structure-mapping theory proposes that comparisons involve a process of finding correspondences between elements based on structured relationships. The Spatial Alignment Principle suggests that spatially arranging compared figures directly – to support correct correspondences and minimize interference from incorrect correspondences – will facilitate visual comparisons. We find that direct placement can facilitate visual comparison in educationally relevant stimuli, and that it may be especially important when figures are less familiar. We also present complementary evidence illustrating the preponderance of visual comparisons in 7th grade science textbooks.
