
Visual Systems

Topic spotlight · World Wide


Discover seminars, jobs, and research tagged with visual systems across World Wide.
16 curated items · 15 Seminars · 1 ePoster
Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 25, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art, where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in the Upper Palaeolithic cave art of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Neuroscience · Recording

Why do some animals have more than two eyes?

Lauren Sumner-Rooney
Leibniz Institute for Research on Evolution and Biodiversity
May 8, 2022

The evolution of vision revolutionised animal biology, and eyes have evolved in a stunning array of diverse forms over the past half-billion years. Among these are curious duplicated visual systems, where eyes can be spread across the body and specialised for different tasks. Although it sounds radical, duplicated vision is found in most major groups across the animal kingdom, yet it remains poorly understood. We will explore how and why animals collect information about their environment in this unusual way, looking at examples from tropical forests to the sea floor, and from ancient arthropods to living jellyfish. Have we been short-changed with just two eyes?

Dr Lauren Sumner-Rooney is a Research Fellow at the OUMNH studying the function and evolution of animal visual systems. Lauren completed her undergraduate degree at Oxford in 2012, and her PhD at Queen’s University Belfast in 2015. She worked as a research technician and science communicator at the Royal Veterinary College (2015-2016) and held a postdoctoral research fellowship at the Museum für Naturkunde, Berlin (2016-2017) before arriving at the Museum in 2017.

Seminar · Neuroscience · Recording

NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity

Meenakshi Khosla
Massachusetts Institute of Technology
Nov 30, 2021

Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral computational methodologies that make minimal a priori assumptions and bring neuroscience data directly to bear on the model development process are likely to be much more flexible and effective in modeling and understanding tuning properties throughout the visual system. In this study, we develop a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual cortex, achieving novel predictive precision and, in some cases, significantly outperforming state-of-the-art task-optimized models. Next, we ask which kinds of functional properties emerge spontaneously in these response-optimized models. We examine trained networks through structural (feature visualization) as well as functional (feature verbalization) analyses by running 'virtual' fMRI experiments on large-scale probe datasets. Strikingly, although the models receive no category-level supervision and are optimized solely for brain-response prediction from scratch, units in the optimized networks act as detectors for semantic concepts like 'faces' or 'words', providing some of the strongest evidence to date for categorical selectivity in these visual areas. The observed selectivity in model neurons raises another question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism? To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting 'deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together with this new class of data-driven models and novel model-interpretability techniques, our study illustrates that DNN models of visual cortex need not be conceived of as obscure models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
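The core recipe of a response-optimized model (fit a model's parameters solely to predict measured brain responses, then probe the fitted units) can be illustrated with a deliberately tiny sketch. Everything below is hypothetical toy data, not the Natural Scenes Dataset or the authors' networks: a linear readout stands in for the deep network, and plain gradient descent on mean-squared error stands in for the full training pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for (stimulus, fMRI response) pairs:
# 200 "images" described by 50 features, and 10 "voxels".
X = rng.normal(size=(200, 50))
true_W = rng.normal(size=(50, 10))                  # unknown ground-truth mapping
Y = X @ true_W + 0.1 * rng.normal(size=(200, 10))   # noisy measured responses

# "Response-optimized" fitting: the model's only objective is to predict
# the measured responses (gradient descent on mean-squared error).
W = np.zeros((50, 10))
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ W - Y) / len(X)
    W -= lr * grad

# Evaluate predictive accuracy per voxel (Pearson r; a real analysis
# would use held-out stimuli).
pred = X @ W
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(10)]
print(f"mean r = {np.mean(r):.3f}")
```

The interpretability step described in the abstract then amounts to asking what stimuli maximally drive each fitted unit, which requires no labels beyond the brain responses themselves.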

Seminar · Neuroscience · Recording

Target detection in the natural world

Karin Nordstrom
Flinders University
Nov 14, 2021

Animal sensory systems are optimally adapted to those features typically encountered in natural surrounds, thus allowing neurons that have a limited bandwidth to encode almost impossibly large input ranges. Importantly, natural scenes are not random, and peripheral visual systems have therefore evolved to reduce the predictable redundancy. The vertebrate visual cortex is also optimally tuned to the spatial statistics of natural scenes, but much less is known about how the insect brain responds to these. We are redressing this deficiency using several techniques. Olga Dyakova uses exquisite image manipulation to give natural images unnatural image statistics, or vice versa. Marissa Holden then uses these images as stimuli in electrophysiological recordings of neurons in the fly optic lobes, to see how the brain codes for the statistics typically encountered in natural scenes, and Olga Dyakova measures the behavioral optomotor response on our trackball set-up.

Seminar · Neuroscience

An optimal population code for global motion estimation in local direction-selective cells

Miriam Henning
Silies lab, University of Mainz, Germany
Nov 3, 2021

Neuronal computations are matched to optimally encode the sensory information that is available and relevant for the animal. However, the physical distribution of sensory information is often shaped by the animal’s own behavior. One prominent example is the encoding of optic flow fields, which are generated during self-motion and therefore depend on the type of locomotion. How evolution has matched computational resources to the behavioral constraints of an animal is not known. Here we use in vivo two-photon imaging to record from a population of >3,500 local direction-selective cells. Our data show that the local direction-selective T4/T5 neurons in Drosophila form a population code that is matched to represent optic flow fields generated during translational and rotational self-motion of the fly. This coding principle for optic flow is reminiscent of the population code of local direction-selective ganglion cells in the mouse retina, where four direction-selective ganglion cell types encode four different axes of self-motion encountered during walking (Sabbah et al., 2017). However, in flies we find six different subtypes of T4 and T5 cells that, at the population level, represent six axes of self-motion of the fly; the four uniformly tuned T4/T5 subtypes described previously (Maisak et al., 2013) represent a local snapshot of this code. The encoding of six types of optic flow in the fly, as compared to four in mice, might be matched to the higher degrees of freedom encountered during flight. Thus, a population code for optic flow appears to be a general coding principle of visual systems, resulting from convergent evolution but matched to the individual ethological constraints of the animal.
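As a toy illustration of what a population code over preferred directions buys you, the sketch below uses the generic textbook construction of cosine-tuned units decoded with a population vector. The tuning curves, unit count, and decoder here are assumptions for illustration only, not the measured T4/T5 responses from the talk.

```python
import numpy as np

# Six direction-selective units with evenly spaced preferred directions
# and rectified-cosine tuning (a hypothetical, generic construction).
n = 6
prefs = np.arange(n) * 2 * np.pi / n

def responses(theta):
    # Each unit fires in proportion to how well the motion direction
    # theta matches its preferred direction; negative values are clipped.
    return np.maximum(np.cos(theta - prefs), 0.0)

def population_vector(r):
    # Decode direction as the response-weighted sum of preferred directions.
    x = np.sum(r * np.cos(prefs))
    y = np.sum(r * np.sin(prefs))
    return np.arctan2(y, x)

theta = 0.7                                   # true motion direction (rad)
decoded = population_vector(responses(theta))
print(round(decoded, 3))                      # ≈ 0.7
```

Even though each unit is broadly tuned and silent for half of all directions, the population as a whole represents any motion direction unambiguously, which is the sense in which a small set of subtypes can tile the space of self-motion axes.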

Seminar · Neuroscience

What Art can tell us about the Brain

Margaret Livingstone
Harvard
Oct 4, 2021

Artists have been doing experiments on vision longer than neurobiologists. Some major works of art have provided insights as to how we see; some of these insights are so fundamental that they can be understood in terms of the underlying neurobiology. For example, artists have long realized that color and luminance can play independent roles in visual perception. Picasso said, "Colors are only symbols. Reality is to be found in luminance alone." This observation has a parallel in the functional subdivision of our visual systems, where color and luminance are processed separately: color by the evolutionarily newer, primate-specific What system, and luminance by the older, colorblind Where (or How) system. Many techniques developed over the centuries by artists can be understood in terms of the parallel organization of our visual systems. I will explore how the segregation of color and luminance processing is the basis for why some Impressionist paintings seem to shimmer, why some op art paintings seem to move, some principles of Matisse's use of color, and how the Impressionists painted "air". Central and peripheral vision are distinct, and I will show how the differences in resolution across our visual field make the Mona Lisa's smile elusive and produce a dynamic illusion in Pointillist paintings, Chuck Close paintings, and photomosaics. I will explore how artists have figured out important features about how our brains extract relevant information about faces and objects, and I will discuss why learning disabilities may be associated with artistic talent.

Seminar · Neuroscience · Recording

Using opsin genes to see through the eyes of a fish

Karen Carleton
University of Maryland
Jul 25, 2021

Many animals are highly visual. They view their world through photoreceptors sensitive to different wavelengths of light. Animal survival and optimal behavioral performance may select for varying photoreceptor sensitivities depending on animal habitat or visual tasks. Our goal is to understand what drives visual diversity from both an evolutionary and a molecular perspective. The group of more than 2,000 cichlid fish species is an ideal system for examining such diversity. Cichlids are a colorful group of freshwater fishes. They have undergone adaptive radiation throughout Africa and the New World and occur in rivers and lakes that vary in water clarity. They are also behaviorally complex, having diverse behaviors for foraging, mate choice and even parental care. As a result, cichlids have highly diverse visual systems, with cone sensitivities shifting by 30-90 nm between species. Although this group has seven cone opsin genes, individual species differ in which subset of the cone opsins they express. Some species show developmental shifts in opsin expression, switching from shorter- to longer-wavelength opsins through ontogeny. Other species modify that developmental program to express just one of these sets, causing the large sensitivity differences. Cichlids are therefore natural mutants for opsin expression. We have used cichlid diversity to explore the relationship between visual sensitivities and ecology. We have also exploited the genomic power of the cichlid system to identify genes and mutations that cause opsin expression shifts. Ultimately, our goal is to learn how different cichlid species see the world and whether these differences matter. Behavioral experiments suggest they do indeed use color vision to survive and thrive. Cichlids are therefore a unique model for exploring how visual systems evolve in a changing world.

Seminar · Neuroscience

The 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium

Prof Mandyam V. Srinivasan, Dr Stefan Leutenegger, Dr Basil el Jundi, Dr Einat Couzin-Fuchs, Dr Josh Merel, Dr Huai-Ti Lin
May 25, 2021

Join the Department of Bioengineering on the 26th of May at 9:00am for the 2021 Annual Bioengineering Lecture + Bioinspired Guidance, Navigation and Control Symposium. This year's lecture will be given by the distinguished bioengineer and neuroscientist Professor Mandyam V. Srinivasan AM FRS, from the University of Queensland. Professor Srinivasan studies visual systems, particularly those of bees and birds. His research has revealed how flying insects negotiate narrow gaps, regulate the height and speed of flight, estimate distance flown, and orchestrate smooth landings. Apart from enhancing fundamental knowledge, these findings are leading to novel, biologically inspired approaches to the design of guidance systems for unmanned aerial vehicles, with applications in surveillance, security and planetary exploration. Following Professor Srinivasan's lecture will be the Bioinspired GNC Mini Symposium, with guest speakers from Google DeepMind, Imperial College London, the University of Würzburg and the University of Konstanz giving talks on their research into autonomous robot navigation, neural mechanisms of compass orientation in insects, and computational approaches to motor control.

Seminar · Neuroscience · Recording

The Dark Side of Vision: Resolving the Neural Code

Petri Ala-Laurila
Aalto University
Apr 5, 2021

All sensory information – like what we see, hear and smell – gets encoded in spike trains by sensory neurons and sent to the brain. Due to the complexity of neural circuits and the difficulty of quantifying complex animal behavior, it has been exceedingly hard to resolve how the brain decodes these spike trains to drive behavior. We now measure quantal signals originating from sparse photons through the most sensitive neural circuits of the mammalian retina and correlate the retinal output spike trains with precisely quantified behavioral decisions. We use a combination of electrophysiological measurements on the most sensitive ON and OFF retinal ganglion cell types and a novel deep-learning-based technology for tracking the head and body positions of freely moving mice. We show that visually guided behavior relies on information from the retinal ON pathway for the dimmest light increments and on information from the retinal OFF pathway for the dimmest light decrements ("quantal shadows"). Our results show that the division of labor between ON and OFF pathways begins already at starlight levels, supporting distinct, pathway-specific visual computations that drive visually guided behavior. These results have several fundamental consequences for understanding how the brain integrates information across parallel information streams, as well as for understanding the limits of sensory signal processing. In my talk, I will discuss some of the most important consequences, including the extension of this "Quantum Behavior" paradigm from mouse vision to monkey and human visual systems.

Seminar · Neuroscience · Recording

Receptor Costs Determine Retinal Design

Simon Laughlin
University of Cambridge
Jan 24, 2021

Our group is interested in discovering design principles that govern the structure and function of neurons and neural circuits. We record from well-defined neurons, mainly in flies’ visual systems, to measure the molecular and cellular factors that determine relevant measures of performance, such as representational capacity, dynamic range and accuracy. We combine this empirical approach with modelling to see how the basic elements of neural systems (ion channels, second-messenger systems, membranes, synapses, neurons, circuits and codes) combine to determine performance. We are investigating four general problems. How are circuits designed to integrate information efficiently? How do sensory adaptation and synaptic plasticity contribute to efficiency? How do the sizes of neurons and networks relate to energy consumption and representational capacity? To what extent have energy costs shaped neurons, sense organs and brain regions during evolution?

Seminar · Neuroscience · Recording

A Rare Visuospatial Disorder

Aimee Dollman
University of Cape Town
Aug 25, 2020

Cases with visuospatial abnormalities provide opportunities for understanding the underlying cognitive mechanisms. Three cases of visual mirror-reversal have been reported: AH (McCloskey, 2009), TM (McCloskey, Valtonen, & Sherman, 2006) and PR (Pflugshaupt et al., 2007). This research reports a fourth case, BS, who has focal occipital cortical dysgenesis and displays highly unusual visuospatial abnormalities. They initially produced mirror-reversal errors similar to those of AH, who, like BS, showed a selective developmental deficit. Extensive examination of BS revealed phenomena such as: mirror-reversal errors (sometimes affecting only parts of the visual fields) in both horizontal and vertical planes; subjective representation of visual objects and words in distinct left and right visual fields; subjective duplication of objects of visual attention (not due to diplopia); uncertainty regarding the canonical upright orientation of everyday objects; mirror reversals during saccadic eye movements on oculomotor tasks; and failure to integrate visual with other sensory inputs (e.g., they feel themselves moving backwards when visual information shows they are moving forward). Fewer errors are produced under certain visual conditions. These and other findings have led the researchers to conclude that BS draws upon a subjective representation of visual space that is structured phenomenally much as it is anatomically in early visual cortex (i.e., rotated through 180 degrees, split into left and right fields, etc.). Despite this, BS functions remarkably well in everyday life, apparently due to extensive compensatory mechanisms deployed at higher (executive) processing levels beyond the visual modality.

Seminar · Neuroscience · Recording

Motion vision in Drosophila: from single neuron computation to behaviour

Michael Reiser
Janelia Research Campus
May 19, 2020

How nervous systems control behaviour is the main question we seek to answer in neuroscience. Although visual systems have been a popular entry point into the brain, we don’t understand—in any deep sense—how visual perception guides navigation in flies (or any organism). I will present recent progress towards this goal from our lab. We are using anatomical insights from connectomics, genetic methods for labelling and manipulating identified cell types, neurophysiology, behaviour, and computational modeling to explain how the fly brain processes visual motion to regulate behaviour.

Seminar · Neuroscience · Recording

Vision in dynamically changing environments

Marion Silies
Johannes Gutenberg-Universität Mainz, Germany
May 17, 2020

Many visual systems can process information in dynamically changing environments. In general, visual perception scales with changes in the visual stimulus, or contrast, irrespective of background illumination. This is achieved by adaptation. However, visual perception is challenged when adaptation is not fast enough to deal with sudden changes in overall illumination, for example when gaze follows a moving object from bright sunlight into a shaded area. We have recently shown that the fly visual system has found a solution: it propagates a corrective luminance-sensitive signal to higher processing stages. Using in vivo two-photon imaging and behavioural analyses, we showed that distinct OFF-pathway inputs encode contrast and luminance. The luminance-sensitive pathway is particularly required when processing visual motion in dim light, where pure contrast sensitivity underestimates the salience of a stimulus. Recent work in the lab has addressed the question of how two visual pathways obtain such fundamentally different sensitivities, given common photoreceptor input. We are currently working out the network-based strategies by which luminance- and contrast-sensitive signals are combined to guide appropriate visual behaviour. Together, I will discuss the molecular, cellular, and circuit mechanisms that ensure contrast computation, and therefore robust vision, in fast-changing visual scenes.
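The claim that "perception scales with contrast irrespective of background illumination" is commonly formalized as Weber contrast. The toy numbers below (my illustration, not material from the talk) show why a pure contrast code is ambiguous about overall illumination, and hence why a parallel luminance-sensitive signal carries information the contrast pathway discards:

```python
def weber_contrast(stim, background):
    """Weber contrast: stimulus change relative to background luminance."""
    return (stim - background) / background

# The same object viewed in bright sunlight and then in shade:
# overall luminance drops 100-fold, but relative contrast is unchanged.
bright_bg, shade_bg = 100.0, 1.0        # arbitrary luminance units
bright_stim, shade_stim = 50.0, 0.5     # the object reflects half the light

c_bright = weber_contrast(bright_stim, bright_bg)
c_shade = weber_contrast(shade_stim, shade_bg)

# A pure contrast code cannot tell these two scenes apart (both read -0.5),
# even though absolute luminance differs 100-fold; a separate
# luminance-sensitive signal recovers that missing information.
print(c_bright, c_shade, bright_bg / shade_bg)
```

In the dim scene the same contrast rides on far fewer photons, which is one way to read the abstract's point that pure contrast sensitivity underestimates stimulus salience in dim light.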

ePoster

Directly comparing fly and mouse visual systems reveals algorithmic similarities for motion detection

Caitlin Gish, Damon Clark, Juyue Chen, James Fransen, Emilio Salazar-Gatzimas, Bart Borghuis

COSYNE 2023