Topic · Neuro

visual stimuli

23 Seminars · 4 ePosters · 1 Position

Latest

Position · Neuroscience

Kendrick Kay

Center for Magnetic Resonance Research, University of Minnesota
Jan 12, 2026

The lab of Dr. Kendrick Kay at the Center for Magnetic Resonance Research at the University of Minnesota is recruiting one or more postdocs. The lab seeks to integrate broad interdisciplinary insights to understand function in the visual system. One postdoc position is on a newly funded NIH R01 to design and collect a large-scale 7T fMRI dataset that samples a wide range of cognitive tasks on a common set of visual stimuli. The project is being conducted in close collaboration with co-PI Dr. Clayton Curtis (New York University). Activities in this grant include (i) designing, collecting, and analyzing the large-scale neuroimaging dataset, (ii) technical work focused on extending and expanding the GLMsingle analysis method, and/or (iii) other related experimental or modeling work in visual/cognitive neuroscience. Another postdoc position is aimed at integrating fMRI and intracranial EEG measurements during visual tasks (NSD-iEEG) and electrical stimulation. The general goal of this effort is to better understand signaling across the visual hierarchy, from early visual areas to higher-order areas in ventral temporal cortex and frontal/parietal cortex. This project is in collaboration with PI Dr. Dora Hermes (Mayo Clinic).

Seminar · Neuroscience · Recording

Rodents to Investigate the Neural Basis of Audiovisual Temporal Processing and Perception

Ashley Schormans
BrainsCAN, Western University, Canada.
Sep 27, 2023

To form a coherent perception of the world around us, we are constantly processing and integrating sensory information from multiple modalities. In fact, when auditory and visual stimuli occur within ~100 ms of each other, individuals tend to perceive the stimuli as a single event, even though they occurred separately. In recent years, our lab and others have developed rat models of audiovisual temporal perception using behavioural tasks such as temporal order judgments (TOJs) and synchrony judgments (SJs). While these rodent models demonstrate metrics that are consistent with humans (e.g., perceived simultaneity, temporal acuity), we have sought to confirm whether rodents demonstrate the hallmarks of audiovisual temporal perception, such as predictable shifts in their perception based on experience and sensitivity to alterations in neurochemistry. Ultimately, our findings indicate that rats serve as an excellent model to study the neural mechanisms underlying audiovisual temporal perception, which to date remain relatively unknown. Using our validated translational audiovisual behavioural tasks, in combination with optogenetics, neuropharmacology and in vivo electrophysiology, we aim to uncover the mechanisms by which inhibitory neurotransmission and top-down circuits finely control one's perception. This research will significantly advance our understanding of the neuronal circuitry underlying audiovisual temporal perception, and will be the first to establish the role of interneurons in regulating the synchronized neural activity that is thought to contribute to the precise binding of audiovisual stimuli.

Seminar · Neuroscience · Recording

Nature over Nurture: Functional neuronal circuits emerge in the absence of developmental activity

Dániel L. Barabási
Engert lab, MCB Harvard University
Apr 5, 2023

During development, the complex neuronal circuitry of the brain arises from limited information contained in the genome. After the genetic code instructs the birth of neurons, the emergence of brain regions, and the formation of axon tracts, it is believed that neuronal activity plays a critical role in shaping circuits for behavior. Current AI technologies are modeled after the same principle: connections in an initial weight matrix are pruned and strengthened by activity-dependent signals until the network can sufficiently generalize a set of inputs into outputs. Here, we challenge these learning-dominated assumptions by quantifying the contribution of neuronal activity to the development of visually guided swimming behavior in larval zebrafish. Intriguingly, rearing zebrafish in darkness revealed that visual experience has no effect on the emergence of the optomotor response (OMR). We then raised animals under conditions where neuronal activity was pharmacologically silenced from organogenesis onward using the sodium-channel blocker tricaine. Strikingly, after washout of the anesthetic, animals performed swim bouts and responded to visual stimuli with 75% accuracy in the OMR paradigm. After shorter periods of silenced activity, OMR performance stayed above 90% accuracy, calling into question the importance and impact of classical critical periods for visual development. Detailed quantification of the emergence of functional circuit properties by brain-wide imaging experiments confirmed that neuronal circuits came ‘online’ fully tuned, without the requirement for activity-dependent plasticity. Thus, contrary to what you learned on your mother's knee, complex sensory-guided behaviors can be wired up innately by activity-independent developmental mechanisms.

Seminar · Neuroscience

Decoding rapidly presented visual stimuli from prefrontal ensembles without report or post-perceptual processing

Joachim Bellet
Mar 10, 2023

Seminar · Neuroscience · Recording

Connecting performance benefits on visual tasks to neural mechanisms using convolutional neural networks

Grace Lindsay
New York University (NYU)
Dec 7, 2022

Behavioral studies have demonstrated that certain task features reliably enhance classification performance for challenging visual stimuli. These include extended image presentation time and the valid cueing of attention. Here, I will show how convolutional neural networks can be used as a model of the visual system that connects neural activity changes with such performance changes. Specifically, I will discuss how different anatomical forms of recurrence can account for better classification of noisy and degraded images with extended processing time. I will then show how experimentally observed neural activity changes associated with feature attention lead to observed performance changes on detection tasks. I will also discuss the implications these results have for how we identify the neural mechanisms and architectures important for behavior.
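
Since the abstract does not spell out the architecture, here is a minimal, hypothetical sketch of the kind of model being described: a convolutional network with within-layer (lateral) recurrence, where running more recurrent steps stands in for extended stimulus presentation time. All layer sizes and names are illustrative assumptions, not Dr. Lindsay's actual implementation.

```python
import torch
import torch.nn as nn

class RecurrentConvNet(nn.Module):
    """Toy convolutional network with lateral (within-layer) recurrence."""

    def __init__(self, n_classes: int = 10, n_steps: int = 4):
        super().__init__()
        self.feedforward = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.lateral = nn.Conv2d(16, 16, kernel_size=3, padding=1)  # recurrent path
        self.readout = nn.Linear(16, n_classes)
        self.n_steps = n_steps  # more steps ~ longer stimulus presentation time

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.feedforward(x))
        for _ in range(self.n_steps):
            # Each step combines the feedforward drive with the current
            # recurrent state, letting the network iteratively clean up
            # noisy or degraded inputs before the readout.
            h = torch.relu(self.feedforward(x) + self.lateral(h))
        return self.readout(h.mean(dim=(2, 3)))  # global average pool, classify

net = RecurrentConvNet()
noisy_batch = torch.randn(8, 1, 28, 28)  # stand-in for noisy grayscale images
logits = net(noisy_batch)                # shape (8, 10)
```

In a framework like this, feature attention of the kind discussed in the talk is commonly modelled as a multiplicative gain on selected feature channels, which is one way the experimentally observed activity changes can be connected to detection performance.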

Seminar · Neuroscience

It’s All About Motion: Functional organization of the multisensory motion system at 7T

Anna Gaglianese
Laboratory for Investigative Neurophysiology, CHUV, Lausanne & The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
Nov 15, 2022

The human middle temporal complex (hMT+) has crucial biological relevance for the processing and detection of the direction and speed of motion in visual stimuli. In both humans and monkeys, it has been extensively investigated in terms of its retinotopic properties and selectivity for the direction of moving stimuli; however, only in recent years has there been increasing interest in how neurons in MT encode the speed of motion. In this talk, I will explore the proposed mechanism of speed encoding, questioning whether hMT+ neuronal populations encode the stimulus speed directly, or whether they separate motion into its spatial and temporal components. I will characterize how neuronal populations in hMT+ encode the speed of moving visual stimuli using electrocorticography (ECoG) and 7T fMRI. I will illustrate that the neuronal populations measured in hMT+ are not directly tuned to stimulus speed, but instead encode speed through separate and independent spatial and temporal frequency tuning. Finally, I will suggest that this mechanism may play a role in evaluating multisensory responses for visual, tactile and auditory stimuli in hMT+.

Seminar · Neuroscience · Recording

A transcriptomic axis predicts state modulation of cortical interneurons

Stephane Bugeon
Harris & Carandini's lab, UCL
Apr 27, 2022

Transcriptomics has revealed that cortical inhibitory neurons exhibit a great diversity of fine molecular subtypes, but it is not known whether these subtypes have correspondingly diverse activity patterns in the living brain. We show that inhibitory subtypes in primary visual cortex (V1) have diverse correlates with brain state, but that this diversity is organized by a single factor: position along their main axis of transcriptomic variation. We combined in vivo 2-photon calcium imaging of mouse V1 with a novel transcriptomic method to identify mRNAs for 72 selected genes in ex vivo slices. We classified inhibitory neurons imaged in layers 1-3 into a three-level hierarchy of 5 Subclasses, 11 Types, and 35 Subtypes using previously defined transcriptomic clusters. Responses to visual stimuli differed significantly only across Subclasses, with stimuli suppressing cells in the Sncg Subclass while driving cells in the other Subclasses. Modulation by brain state differed at all hierarchical levels but could be largely predicted from the first transcriptomic principal component, which also predicted correlations with simultaneously recorded cells. Inhibitory Subtypes that fired more in resting, oscillatory brain states had less axon in layer 1, narrower spikes, lower input resistance and weaker adaptation (as determined in vitro), and expressed more inhibitory cholinergic receptors. Subtypes firing more during arousal had the opposite properties. Thus, a simple principle may largely explain how diverse inhibitory V1 Subtypes shape state-dependent cortical processing.

Seminar · Neuroscience · Recording

Neural signature for accumulated evidence underlying temporal decisions

Nir Ofir
The Hebrew University of Jerusalem
Dec 16, 2021

Cognitive models of timing often include a pacemaker analogue whose ticks are accumulated to form an internal representation of time, and a threshold that determines when a target duration has elapsed. However, clear EEG manifestations of these abstract components have not yet been identified. We measured the EEG of subjects while they performed a temporal bisection task in which they were asked to categorize visual stimuli as short or long in duration. We report an ERP component whose amplitude depends monotonically on the stimulus duration. The relation between ERP amplitude and stimulus duration can be captured by a simple model, adapted from a known drift-diffusion model of time perception. It includes a noisy accumulator that starts at stimulus onset, and a threshold. If the threshold is reached during stimulus presentation, the stimulus is categorized as "long"; otherwise the stimulus is categorized as "short". At stimulus offset, a response proportional to the distance to the threshold is emitted. This simple model has two parameters and fits both the behavior and the ERP amplitudes recorded in the task. Two subsequent experiments replicate and extend this finding to another modality (touch) as well as to different time ranges (subsecond and suprasecond), establishing the described ERP component as a useful handle on the cognitive processes involved in temporal decisions.
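
Because the abstract states the model explicitly, it can be sketched in a few lines. The following is a hypothetical reconstruction from the abstract alone, with made-up parameter values: a noisy accumulator runs from stimulus onset; reaching threshold before offset yields a "long" categorization, and otherwise the distance remaining to threshold at offset scales the emitted response, the proposed ERP correlate.

```python
import numpy as np

def bisection_trial(duration_s, drift=1.0, noise_sd=0.5, threshold=0.8,
                    dt=0.001, rng=np.random.default_rng()):
    """One temporal-bisection trial with a noisy accumulator (Euler-Maruyama).

    Returns the categorization ('long'/'short') and, for 'short' trials,
    an offset response proportional to the distance left to threshold
    (the putative offset-locked ERP amplitude).
    """
    x = 0.0
    for _ in range(int(duration_s / dt)):
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        if x >= threshold:
            return "long", 0.0          # threshold crossed during the stimulus
    return "short", threshold - x       # offset response ~ distance to threshold

# Longer stimuli cross the threshold more often, so the proportion of
# "long" judgments rises monotonically with duration:
for d in (0.4, 0.8, 1.2):
    n_long = sum(bisection_trial(d)[0] == "long" for _ in range(1000))
    print(f"{d:.1f} s -> P('long') = {n_long / 1000:.2f}")
```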

Seminar · Neuroscience

Nonlinear spatial integration in retinal bipolar cells shapes the encoding of artificial and natural stimuli

Helene Schreyer
Gollisch lab, University Medical Center Göttingen, Germany
Dec 9, 2021

Vision begins in the eye, and what the “retina tells the brain” is a major interest in visual neuroscience. To deduce what the retina encodes (“tells”), computational models are essential. The most important models in the retina currently aim to understand the responses of the retinal output neurons, the ganglion cells. Typically, these models make simplifying assumptions about the neurons in the retinal network upstream of ganglion cells. One important assumption is linear spatial integration. In this talk, I first define what it means for a neuron to be spatially linear or nonlinear and how we can experimentally measure these phenomena. Next, I introduce the neurons upstream of retinal ganglion cells, with a focus on bipolar cells, which are the connecting elements between the photoreceptors (input to the retinal network) and the ganglion cells (output). This pivotal position makes bipolar cells an interesting target for studying the assumption of linear spatial integration, yet because they are buried in the middle of the retina, it is challenging to measure their neural activity. Here, I present bipolar cell data where I ask whether spatial linearity holds under artificial and natural visual stimuli. Through diverse analyses and computational models, I show that bipolar cells are more complex than previously thought and that they can already act as nonlinear processing elements at the level of their somatic membrane potential. Furthermore, through pharmacology and current measurements, I illustrate that the observed spatial nonlinearity arises at the excitatory inputs to bipolar cells. In the final part of my talk, I address the functional relevance of the nonlinearities in bipolar cells through combined recordings of bipolar and ganglion cells, and I show that the nonlinearities in bipolar cells provide high spatial sensitivity to downstream ganglion cells. Overall, I demonstrate that simple linear assumptions do not always apply and that more complex models are needed to describe what the retina “tells” the brain.
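
For readers unfamiliar with the operational definition, the standard test of spatial linearity uses a fine contrast-reversing grating: a linear integrator gives equal-and-opposite responses to the two grating phases (they cancel on average), while rectified-subunit integration responds at both phases (frequency doubling). The toy sketch below, with entirely illustrative numbers and no relation to the actual experiments, contrasts the two cases.

```python
import numpy as np

x = np.linspace(-1, 1, 400)        # 1D space across the receptive field
rf = np.exp(-x**2 / 0.1)           # Gaussian receptive-field sensitivity profile
grating = np.sin(20 * np.pi * x)   # fine grating, several cycles inside the RF

def linear_response(stim):
    """Linear spatial integration: one weighted sum over the whole RF."""
    return rf @ stim

def subunit_response(stim, n_subunits=20):
    """Nonlinear integration: rectify local subunit signals before summing."""
    segments = np.array_split(rf * stim, n_subunits)
    return sum(max(seg.sum(), 0.0) for seg in segments)

# Contrast reversal flips the sign of the grating between frames:
for label, stim in (("phase 0", grating), ("reversed", -grating)):
    print(label, round(linear_response(stim), 3), round(subunit_response(stim), 3))
# The linear responses are equal and opposite (cancelling over a cycle),
# while the rectified subunit responses are positive at both phases.
```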

Seminar · Neuroscience · Recording

How does seeing help listening? Audiovisual integration in Auditory Cortex

Jennifer Bizley
University College London
Dec 2, 2021

Multisensory responses are ubiquitous in so-called unisensory cortex. However, despite their prevalence, we have very little understanding of what, if anything, they contribute to perception. In this talk I will focus on audio-visual integration in auditory cortex. Anatomical tracing studies highlight visual cortex as one source of visual input to auditory cortex. Using cortical cooling, we test the hypothesis that these inputs support audiovisual integration in ferret auditory cortex. Behavioural studies in humans support the idea that visual stimuli can help listeners to parse an auditory scene. This effect is paralleled in single units in auditory cortex, where responses to a sound mixture can be determined by the timing of a visual stimulus, such that sounds that are temporally coherent with a visual stimulus are preferentially represented. Our recent data therefore support the idea that one role for the early integration of auditory and visual signals in auditory cortex is to support auditory scene analysis, and that visual cortex plays a key role in this process.

Seminar · Neuroscience · Recording

Feature selectivity can explain mismatch signals in mouse visual cortex

Tomaso Muzzu
Saleem lab, University College London
Oct 20, 2021

Sensory experience often depends on one’s own actions, including self-motion. Theories of predictive coding postulate that actions are regulated by calculating prediction error, the difference between sensory experience and the expectation based on self-generated actions. Signals consistent with prediction error have been reported in mouse primary visual cortex (V1) when visual flow coupled to running was unexpectedly stopped. Here, we show that such signals can be elicited by visual stimuli uncoupled from the animal’s running. We recorded V1 neurons while presenting drifting gratings that unexpectedly stopped. We found strong responses to visual perturbations, which were enhanced during running. Perturbation responses were strongest in the preferred orientation of individual neurons, and perturbation-responsive neurons were more likely to prefer slow visual speeds. Our results indicate that prediction error signals can be explained by the convergence of known motor and sensory signals, providing a purely sensory and motor explanation for purported mismatch signals.

Seminar · Neuroscience

- CANCELLED -

Selina Solomon
Kohn lab, Albert Einstein College of Medicine; Growth Intelligence, UK
Oct 20, 2021

A recent formulation of predictive coding theory proposes that a subset of neurons in each cortical area encodes sensory prediction errors, the difference between predictions relayed from higher cortex and the sensory input. Here, we test for evidence of prediction errors in spiking activity and local field potentials (LFPs) recorded in primary visual cortex and area V4 of macaque monkeys, and in complementary electroencephalographic (EEG) scalp recordings in human participants. We presented a fixed sequence of visual stimuli on most trials, and violated the expected ordering on a small subset of trials. Under predictive coding theory, pattern-violating stimuli should trigger robust prediction errors, but we found that spiking, LFP and EEG responses to expected and pattern-violating stimuli were nearly identical. Our results challenge the assertion that a fundamental computational motif in sensory cortex is to signal prediction errors, at least those based on predictions derived from temporal patterns of visual stimulation.

Seminar · Neuroscience · Recording

Towards a Theory of Human Visual Reasoning

Ekaterina Shurkova
University of Edinburgh
Oct 14, 2021

Many tasks that are easy for humans are difficult for machines. In particular, while humans excel at tasks that require generalising across problems, machine systems notably struggle. One such task that has received considerable attention is the Synthetic Visual Reasoning Test (SVRT). The SVRT consists of a range of problems where simple visual stimuli must be categorised into one of two categories based on an unknown rule that must be induced. Conventional machine learning approaches perform well only when trained to categorise based on a single rule, and cannot generalise to tasks with additional rules without extensive additional training. Multiple theories of higher-level cognition posit that humans solve such tasks using structured relational representations. Specifically, people learn rules based on structured representations that generalise to novel instances quickly and easily. We believe it is possible to model this approach in a single system which learns all the required relational representations from scratch and performs tasks such as the SVRT in a single run. Here, we present a system which expands the DORA/LISA architecture and augments the existing model with principally novel components, namely (a) visual reasoning based on established theories of recognition by components, and (b) the process of learning complex relational representations by synthesis (in addition to learning by analysis). The proposed augmented model matches human behaviour on SVRT problems. Moreover, the proposed system stands as perhaps a more realistic account of human cognition: rather than using tools that have proven successful in machine learning to inform psychological theorising, we use established psychological theories to inform the development of a machine system.

Seminar · Neuroscience · Recording

What is the function of auditory cortex when it develops in the absence of acoustic input?

Steve Lomber
McGill University
Oct 14, 2021

Cortical plasticity is the neural mechanism by which the cerebrum adapts itself to its environment, while at the same time making it vulnerable to impoverished sensory or developmental experiences. Like the visual system, auditory development passes through a series of sensitive periods in which circuits and connections are established and then refined by experience. Current research is expanding our understanding of cerebral processing and organization in the deaf. In the congenitally deaf, higher-order areas of “deaf” auditory cortex demonstrate significant crossmodal plasticity with neurons responding to visual and somatosensory stimuli. This crucial cerebral function results in compensatory plasticity. Not only can the remaining inputs reorganize to substitute for those lost, but this additional circuitry also confers enhanced abilities to the remaining systems. In this presentation we will review our present understanding of the structure and function of “deaf” auditory cortex using psychophysical, electrophysiological, and connectional anatomy approaches and consider how this knowledge informs our expectations of the capabilities of cochlear implants in the developing brain.

Seminar · Neuroscience

Understanding the role of prediction in sensory encoding

Jason Mattingley
Monash Biomedical Imaging
Jul 29, 2021

At any given moment the brain receives more sensory information than it can use to guide adaptive behaviour, creating the need for mechanisms that promote efficient processing of incoming sensory signals. One way in which the brain might reduce its sensory processing load is to encode successive presentations of the same stimulus in a more efficient form, a process known as neural adaptation. Conversely, when a stimulus violates an expected pattern, it should evoke an enhanced neural response. Such a scheme for sensory encoding has been formalised in predictive coding theories, which propose that recent experience establishes expectations in the brain that generate prediction errors when violated. In this webinar, Professor Jason Mattingley will discuss whether the encoding of elementary visual features is modulated when otherwise identical stimuli are expected or unexpected based upon the history of stimulus presentation. In humans, EEG was employed to measure neural activity evoked by gratings of different orientations, and multivariate forward modelling was used to determine how orientation selectivity is affected for expected versus unexpected stimuli. In mice, two-photon calcium imaging was used to quantify orientation tuning of individual neurons in the primary visual cortex to expected and unexpected gratings. Results revealed enhanced orientation tuning to unexpected visual stimuli, both at the level of whole-brain responses and for individual visual cortex neurons. Professor Mattingley will discuss the implications of these findings for predictive coding theories of sensory encoding. Professor Jason Mattingley is a Laureate Fellow and Foundation Chair in Cognitive Neuroscience at The University of Queensland. His research is directed toward understanding the brain processes that support perception, selective attention and decision-making, in health and disease.

Seminar · Neuroscience

Faces influence saccade programming

Nathalie Guyader
Grenoble Institute of Technology
Jun 9, 2021

Several studies have shown that face stimuli elicit extremely fast and involuntary saccadic responses toward them, relative to other categories of visual stimuli. In this talk, I will mainly focus on recent research from our team that investigated to what extent face stimuli influence the programming and execution of saccades. In this research, two experiments were performed using a saccadic choice task: two images (one with a face, one with a vehicle) were simultaneously displayed in the left and right visual fields of participants, who had to execute a saccade toward the image (Experiment 1) or toward a cross added in the center of the image (Experiment 2) containing a target stimulus (a face or a vehicle). As expected, participants were faster to execute a saccade toward a face than toward a vehicle and made fewer errors. We also observed shorter saccades toward vehicle than face targets, even when participants were explicitly asked to perform their saccades toward a specific location (Experiment 2). Further analyses, which I will detail in the talk, showed that error saccades might be interrupted in mid-flight to initiate a concurrently programmed corrective saccade.

Seminar · Neuroscience · Recording

Natural visual stimuli for mice

Thomas Euler
University of Tübingen
Mar 16, 2021

Seminar · Neuroscience · Recording

Global visual salience of competing stimuli

Alex Hernandez-Garcia
Université de Montréal
Dec 10, 2020

Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of the local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? In this talk, I will present the analysis of a computational model of the global salience of natural images. We trained a machine learning algorithm to learn the direction of the first saccade of participants who freely observed pairs of images. The pairs balanced the combinations of new and already-seen images, as well as task and task-free trials. The coefficients of the model provided a reliable measure of the likelihood of each image to attract the first fixation when seen next to another image, that is, their global salience. For example, images of close-up faces and images containing humans were consistently looked at first and were assigned higher global salience. Interestingly, we found that global salience cannot be explained by the feature-driven local salience of images; the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. This computational model of global salience allows us to analyse multiple other aspects of human visual perception of competing stimuli. In the talk, I will also present our latest results from analysing saccadic reaction time as a function of the global salience of the pair of images.
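
To make the modelling idea concrete: encode each trial as a signed indicator vector over images (+1 for the left image, -1 for the right) and fit a logistic regression to the direction of the first saccade; each image's coefficient is then its global-salience score, and the intercept absorbs the left-sided bias. The sketch below is a simplified, hypothetical version of the approach, run on synthetic data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_images, n_trials = 50, 5000
true_salience = rng.normal(size=n_images)   # latent global salience (synthetic)

# Each trial: image `left` vs image `right`; the first saccade goes left
# with a probability set by the salience difference plus a left-side bias.
left = rng.integers(0, n_images, n_trials)
right = rng.integers(0, n_images, n_trials)
p_left = 1 / (1 + np.exp(-(true_salience[left] - true_salience[right] + 0.3)))
y = rng.random(n_trials) < p_left

# Feature encoding: +1 for the left image, -1 for the right image.
X = np.zeros((n_trials, n_images))
X[np.arange(n_trials), left] += 1.0
X[np.arange(n_trials), right] -= 1.0

model = LogisticRegression(max_iter=1000).fit(X, y)
salience_scores = model.coef_.ravel()       # one global-salience value per image
print(np.corrcoef(salience_scores, true_salience)[0, 1])  # recovers the ranking
print(model.intercept_)                     # captures the left-sided bias
```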

Seminar · Neuroscience

Effects of Corticothalamic Feedback on Geniculate Responses to Naturalistic and Artificial Visual Stimuli

Laura Busse
Division of Neurobiology, Ludwig-Maximilians-Universität München, Germany
Nov 16, 2020

Seminar · Neuroscience · Recording

The emergence of contrast invariance in cortical circuits

Tatjana Tchumatchenko
Max Planck Institute for Brain Research
Nov 13, 2020

Neurons in the primary visual cortex (V1) encode the orientation and contrast of visual stimuli through changes in firing rate (Hubel and Wiesel, 1962). Their activity typically peaks at a preferred orientation and decays to zero at the orientations that are orthogonal to the preferred one. This activity pattern is re-scaled by contrast but its shape is preserved, a phenomenon known as contrast invariance. Contrast-invariant selectivity is also observed at the population level in V1 (Carandini and Sengpiel, 2004). The mechanisms supporting the emergence of contrast invariance at the population level remain unclear. How does the activity of different neurons with diverse orientation selectivity and non-linear contrast sensitivity combine to give rise to contrast-invariant population selectivity? Theoretical studies have shown that in the balanced limit, the properties of single neurons do not determine the population activity (van Vreeswijk and Sompolinsky, 1996). Instead, the synaptic dynamics (Mongillo et al., 2012) as well as the intracortical connectivity (Rosenbaum and Doiron, 2014) shape the population activity in balanced networks. We report that short-term plasticity can change the synaptic strength between neurons as a function of the presynaptic activity, which in turn modifies the population response to a stimulus. Thus, the same circuit can process a stimulus in different ways (linearly, sublinearly, or supralinearly) depending on the properties of the synapses. We found that balanced networks with excitatory-to-excitatory short-term synaptic plasticity cannot be contrast-invariant. Instead, short-term plasticity modifies the network selectivity such that the tuning curves are narrower (broader) for increasing contrast if synapses are facilitating (depressing). Based on these results, we wondered whether balanced networks with plastic synapses (other than short-term) can support the emergence of contrast-invariant selectivity. Mathematically, we found that the only synaptic transformation that supports perfect contrast invariance in balanced networks is a power-law release of neurotransmitter as a function of the presynaptic firing rate (in the excitatory-to-excitatory and in the excitatory-to-inhibitory synapses). We validated this finding using spiking network simulations, where we report contrast-invariant tuning curves when synapses release neurotransmitter following a power-law function of the presynaptic firing rate. In summary, we show that synaptic plasticity controls the type of non-linear network response to stimulus contrast and that it can be a potential mechanism mediating the emergence of contrast invariance in balanced networks with orientation-dependent connectivity. Our results therefore connect the physiology of individual synapses to the network level and may help understand the establishment of contrast-invariant selectivity.
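
The central mathematical claim can be stated compactly. In notation of my own choosing (the talk's actual symbols may differ), contrast invariance means the population tuning curve factorizes into a contrast-dependent gain and a fixed orientation profile, and the result above says that in the balanced network this factorization holds exactly only for a power-law release function:

```latex
% Contrast invariance: the population rate factorizes into a contrast
% gain g(c) and a fixed orientation profile f:
r(\theta, c) = g(c)\, f(\theta - \theta_{\mathrm{pref}})

% Stated condition on the E->E and E->I synapses of the balanced network:
% transmitter release must be a power-law function of the presynaptic rate:
u(r_{\mathrm{pre}}) \propto r_{\mathrm{pre}}^{\alpha}
```

Short-term facilitation and depression are not power laws of the presynaptic rate, which on this account is why they narrow or broaden the tuning curves as contrast increases rather than preserving their shape.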

Seminar · Neuroscience · Recording

Natural visual stimuli for mice

Thomas Euler
University of Tübingen
Jul 17, 2020

During the course of evolution, a species’ environment shapes its sensory abilities, as individuals with better-optimized sensory abilities are more likely to survive and procreate. Adaptations to the statistics of the natural environment can be observed along the early visual pathway and across species. Therefore, characterising the properties of natural environments and studying the representation of natural scenes along the visual pathway is crucial for advancing our understanding of the structure and function of the visual system. In the past 20 years, mice have become an important model in vision research, but the fact that they live in a different environment than primates and have different visual needs is rarely considered. One particular challenge for characterising the mouse’s visual environment is that mice are dichromats with photoreceptors that detect UV light, which typical cameras do not record. This also has consequences for experimental visual stimulation, as the blue channel of computer screens fails to excite mouse UV cone photoreceptors. In my talk, I will describe our approach to recording “colour” footage of the habitat of mice, from the mouse’s perspective, and to studying retinal circuits in the ex vivo retina with natural movies.

Seminar · Neuroscience · Recording

Natural stimulus encoding in the retina with linear and nonlinear receptive fields

Tim Gollisch
University of Goettingen
May 20, 2020

Popular notions of how the retina encodes visual stimuli typically focus on the center-surround receptive fields of retinal ganglion cells, the output neurons of the retina. In this view, the receptive field acts as a linear filter on the visual stimulus, highlighting spatial contrast and providing efficient representations of natural images. Yet, we also know that many ganglion cells respond vigorously to fine spatial gratings that should not activate the linear filter of the receptive field. Thus, ganglion cells may integrate visual signals nonlinearly across space. In this talk, I will discuss how these (and other) nonlinearities relate to the encoding of natural visual stimuli in the retina. Based on electrophysiological recordings of ganglion and bipolar cells from mouse and salamander retina, I will present methods for assessing nonlinear processing in different cell types and examine their importance and potential function under natural stimulation.
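
The two pictures contrasted in the abstract, a single linear receptive-field filter versus nonlinear pooling across space, correspond to what are often called LN (linear-nonlinear) and subunit (LN-LN) cascade models. Here is a minimal sketch with illustrative shapes and nonlinearities of my own choosing, not the models actually fitted in the talk.

```python
import numpy as np

def gaussian(size, sigma, center):
    """2D Gaussian used as a (sub)receptive-field weighting."""
    yy, xx = np.mgrid[:size, :size]
    return np.exp(-((xx - center[0])**2 + (yy - center[1])**2) / (2 * sigma**2))

size = 32
stimulus = np.random.default_rng(1).standard_normal((size, size))  # image patch

# LN model: one linear filter over the whole receptive field, then a
# single output nonlinearity (softplus standing in for spike generation).
rf = gaussian(size, sigma=8, center=(16, 16))
ln_rate = np.log1p(np.exp((rf * stimulus).sum()))

# Subunit (LN-LN) model: small rectified subunits tiling the receptive
# field are summed before the output nonlinearity.
centers = [(cx, cy) for cx in range(4, size, 8) for cy in range(4, size, 8)]
pooled = sum(max((gaussian(size, 2, c) * stimulus).sum(), 0.0) for c in centers)
subunit_rate = np.log1p(np.exp(pooled))

print(ln_rate, subunit_rate)
# A fine grating whose bright and dark bars cancel under the single large
# filter can still drive the subunit model, consistent with the vigorous
# responses to fine gratings mentioned in the abstract.
```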

Seminar · Neuroscience · Recording

Playing the piano with the cortex: role of neuronal ensembles and pattern completion in perception

Rafael Yuste
Columbia University
May 12, 2020

The design of neural circuits, with large numbers of neurons interconnected in vast networks, strongly suggests that they are specifically built to generate emergent functional properties (1). To explore this hypothesis, we have developed two-photon holographic methods to selectively image and manipulate the activity of neuronal populations in 3D in vivo (2). Using them, we find that groups of synchronous neurons (neuronal ensembles) dominate the evoked and spontaneous activity of mouse primary visual cortex (3). Ensembles can be optogenetically imprinted for several days, and some of their neurons trigger the entire ensemble (4). By activating these pattern-completion cells in ensembles involved in visual discrimination paradigms, we can bidirectionally alter behavioural choices (5). Our results demonstrate that ensembles are necessary and sufficient for visual perception and are consistent with the possibility that neuronal ensembles are the functional building blocks of cortical circuits. References: 1. R. Yuste, From the neuron doctrine to neural networks. Nat Rev Neurosci 16, 487-497 (2015). 2. L. Carrillo-Reid, W. Yang, J. E. Kang Miller, D. S. Peterka, R. Yuste, Imaging and optically manipulating neuronal ensembles. Annu Rev Biophys 46, 271-293 (2017). 3. J. E. Miller, I. Ayzenshtat, L. Carrillo-Reid, R. Yuste, Visual stimuli recruit intrinsically generated cortical ensembles. Proc Natl Acad Sci USA 111, E4053-E4061 (2014). 4. L. Carrillo-Reid, W. Yang, Y. Bando, D. S. Peterka, R. Yuste, Imprinting and recalling cortical ensembles. Science 353, 691-694 (2016). 5. L. Carrillo-Reid, S. Han, W. Yang, A. Akrouh, R. Yuste, Controlling visually-guided behaviour by holographic recalling of cortical ensembles. Cell 178, 447-457 (2019). https://doi.org/10.1016/j.cell.2019.05.045

Seminar · Neuroscience

A paradoxical kind of sleep in Drosophila melanogaster

Bruno van Swinderen
University of Queensland
Apr 30, 2020

The dynamic nature of sleep in most animals suggests distinct stages which serve different functions. Genetic sleep induction methods in animal models provide a powerful way to disambiguate these stages and functions, although behavioural methods alone are insufficient to accurately identify what kind of sleep is being engaged. In Drosophila, activation of the dorsal fan-shaped body (dFB) promotes sleep, but it remains unclear what kind of sleep this is, how the rest of the fly brain is behaving, or if any specific sleep functions are being achieved. Here, we developed a method to record calcium activity from thousands of neurons across a volume of the fly brain during dFB-induced sleep, and we compared this to the effects of a sleep-promoting drug. We found that drug-induced spontaneous sleep decreased brain activity and connectivity, whereas dFB sleep was not different from wakefulness. Paradoxically, dFB-induced sleep was found to be even deeper than drug-induced sleep. When we probed the sleeping fly brain with salient visual stimuli, we found that the activity of visually-responsive neurons was blocked by dFB activation, confirming a disconnect from the external environment. Prolonged optogenetic dFB activation nevertheless achieved a significant sleep function, by correcting visual attention defects brought on by sleep deprivation. These results suggest that dFB activation promotes a distinct form of sleep in Drosophila, where brain activity and connectivity remain similar to wakefulness, but responsiveness to external sensory stimuli is profoundly suppressed.

ePoster · Neuroscience

Comparing motor and auditory predictive signals of upcoming visual stimuli

Batel Buaron, Roy Mukamel

FENS Forum 2024

ePoster · Neuroscience

Comparison of acetylcholine release in the mouse cerebral cortex in response to standard visual stimuli vs dynamic virtual reality environment

Julie Azrak, Hossein Sedighi, Jose Daniel Tirado Ramirez, Yulong Li, Elvire Vaucher

FENS Forum 2024

ePoster · Neuroscience

Dynamic and state-dependent switching of behaviour in response to competing visual stimuli in Drosophila

Roshan Kumar Satapathy, Maximilian Joesch

FENS Forum 2024

ePoster · Neuroscience

Temporal integration of audio-visual stimuli in the mouse superior colliculus

Gaia Bianchini, Xavier Cano Ferrer, George Konstantinou, Maria Florencia Iacaruso

FENS Forum 2024

visual stimuli coverage

28 items: 23 Seminars · 4 ePosters · 1 Position