
Regularities

Discover seminars, jobs, and research tagged with regularities across World Wide.
12 seminars · Updated over 1 year ago
Seminar · Neuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 11, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
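The tension the abstract describes can be captured in a toy sketch: a learner with a high learning rate tracks the most recent individual experience, while a learner with a low learning rate averages across experiences and converges on the shared regularity. This is only an illustration of the fast/slow division of labor, not the hippocampal network model from the talk; all values are made up.

```python
# Toy sketch of complementary learning rates (illustrative only; not the
# hippocampal model described in the talk). The "fast" learner is dominated
# by the latest episode; the "slow" learner extracts the regularity.

def update(estimate, observation, lr):
    """One delta-rule step: move the estimate toward the observation."""
    return estimate + lr * (observation - estimate)

# Observations scattered around a hidden regularity (mean = 10),
# with episodic noise on each individual experience.
observations = [12, 9, 11, 8, 10, 13, 9, 10]

fast, slow = 0.0, 0.0
for x in observations:
    fast = update(fast, x, lr=0.9)   # tracks the most recent episode
    slow = update(slow, x, lr=0.1)   # slowly accumulates the shared structure

print(round(fast, 2), round(slow, 2))
```

With more observations the slow learner keeps converging toward the underlying mean, while the fast learner continues to jump with each new episode, which is the trade-off between interference-free episodic recall and generalization that the talk addresses.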

Seminar · Neuroscience · Recording

Sampling the environment with body-brain rhythms

Antonio Criscuolo
Maastricht University
Jan 24, 2023

Since Darwin, comparative research has shown that most animals share basic timing capacities, such as the ability to process temporal regularities and produce rhythmic behaviors. What seems to be more exclusive, however, are the capacities to generate temporal predictions and to display anticipatory behavior at salient time points. These abilities are associated with subcortical structures like the basal ganglia (BG) and cerebellum (CE), which are more developed in humans than in nonhuman animals. In the first research line, we investigated the basic capacities to extract temporal regularities from the acoustic environment and produce temporal predictions. We did so by adopting a comparative and translational approach, making use of a unique EEG dataset including 2 macaque monkeys, 20 healthy young and 11 healthy older participants, and 22 stroke patients, 11 with focal lesions in the BG and 11 in the CE. In the second research line, we holistically explore the functional relevance of body-brain physiological interactions in human behavior. A series of planned studies investigates the functional mechanisms by which body signals (e.g., respiratory and cardiac rhythms) interact with and modulate neurocognitive functions, from rest and sleep states to action and perception. This project supports the effort towards individual profiling: are individuals' timing capacities (e.g., rhythm perception and production) and general behavior (e.g., individual walking and speaking rates) influenced or shaped by body-brain interactions?

Seminar · Neuroscience · Recording

Motor contribution to auditory temporal predictions

Benjamin Morillon
Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes
Dec 13, 2022

Temporal predictions are fundamental instruments for facilitating sensory selection, allowing humans to exploit regularities in the world. Recent evidence indicates that the motor system instantiates predictive timing mechanisms, helping to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream and thus facilitating sensory selection. Accordingly, in the auditory domain, auditory-motor interactions are observed during perception of speech and music, two temporally structured sensory streams. I will present a behavioral and neurophysiological account of this theory and will detail the parameters governing the emergence of this auditory-motor coupling, through a set of behavioral and magnetoencephalography (MEG) experiments.

Seminar · Neuroscience

Synthetic and natural images unlock the power of recurrency in primary visual cortex

Andreea Lazar
Ernst Strüngmann Institute (ESI) for Neuroscience
May 19, 2022

During perception the visual system integrates current sensory evidence with previously acquired knowledge of the visual world. Presumably this computation relies on internal recurrent interactions. We record populations of neurons from the primary visual cortex of cats and macaque monkeys and find evidence for adaptive internal responses to structured stimulation that change on both slow and fast timescales. In the first experiment, we present abstract images, only briefly, a protocol known to produce strong and persistent recurrent responses in the primary visual cortex. We show that repetitive presentations of a large randomized set of images lead to enhanced stimulus encoding on a timescale of minutes to hours. The enhanced encoding preserves the representational details required for image reconstruction and can be detected in post-exposure spontaneous activity. In a second experiment, we show that the encoding of natural scenes across populations of V1 neurons is improved, over a timescale of hundreds of milliseconds, with the allocation of spatial attention. Given the hierarchical organization of the visual cortex, contextual information from the higher levels of the processing hierarchy, reflecting high-level image regularities, can inform the activity in V1 through feedback. We hypothesize that these fast attentional boosts in stimulus encoding rely on recurrent computations that capitalize on the presence of high-level visual features in natural scenes. We design control images dominated by low-level features and show that, in agreement with our hypothesis, the attentional benefits in stimulus encoding vanish. We conclude that, in the visual system, powerful recurrent processes optimize neuronal responses already at the earliest stages of cortical processing.

Seminar · Neuroscience

From natural scene statistics to multisensory integration: experiments, models and applications

Cesare Parise
Oculus VR
Feb 8, 2022

To efficiently process sensory information, the brain relies on statistical regularities in the input. While generally improving the reliability of sensory estimates, this strategy also induces perceptual illusions that help reveal the underlying computational principles. Focusing on auditory and visual perception, in my talk I will describe how the brain exploits statistical regularities within and across the senses for the perception of space and time, and for multisensory integration. In particular, I will show how results from a series of psychophysical experiments can be interpreted in the light of Bayesian Decision Theory, and I will demonstrate how such canonical computations can be implemented in simple and biologically plausible neural circuits. Finally, I will show how such principles of sensory information processing can be leveraged in virtual and augmented reality to overcome display limitations and expand human perception.
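The Bayesian account referenced here has a textbook core: when two senses estimate the same quantity, the statistically optimal combination weights each cue by its reliability (inverse variance). A minimal sketch, with made-up numbers rather than data from the talk:

```python
# Reliability-weighted cue integration, the standard Bayesian model of
# multisensory estimates (illustrative values only).

def integrate(x_a, sigma_a, x_v, sigma_v):
    """Combine auditory and visual location estimates, weighting each cue
    by its reliability (inverse variance). Returns the fused estimate and
    its standard deviation."""
    w_a = 1 / sigma_a**2
    w_v = 1 / sigma_v**2
    x_hat = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    sigma_hat = (1 / (w_a + w_v)) ** 0.5
    return x_hat, sigma_hat

# A noisy auditory cue at 10 deg and a precise visual cue at 0 deg:
x_hat, sigma_hat = integrate(x_a=10, sigma_a=4, x_v=0, sigma_v=2)
print(x_hat, sigma_hat)  # estimate is pulled toward the more reliable visual cue
```

The fused estimate lands near the visual cue (the ventriloquist effect), and its variance is smaller than that of either cue alone, which is the sense in which the strategy "improves the reliability of sensory estimates" while also producing illusions.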

Seminar · Neuroscience

Heartbeat-based auditory regularities induce prediction in human wakefulness and sleep

Marzia de Lucia
Laboratoire de Recherche en Neuroimagerie (LREN), University Hospital (CHUV) and University of Lausanne (UNIL)
Feb 7, 2022

Exposure to sensory regularities in the environment induces the human brain to form expectations about incoming stimuli, a process that remains partially preserved in the absence of consciousness (i.e., during coma and sleep). While regularity often refers to stimuli presented at a fixed pace, we recently explored whether auditory prediction extends to pseudo-regular sequences, where sensory prediction is induced by locking sound onsets to heartbeat signals, and whether it can occur across vigilance states. In a series of experiments in healthy volunteers, we found neural and cardiac evidence of auditory prediction during heartbeat-based auditory regularities in wakefulness and N2 sleep. This process could represent an important mechanism for detecting unexpected stimuli in the environment even in states of limited conscious and attentional resources.

Seminar · Neuroscience

A novel form of retinotopy in area V2 highlights location-dependent feature selectivity in the visual system

Madineh Sedigh-Sarvestani
Max Planck Florida Institute for Neuroscience
Jan 18, 2022

Topographic maps are a prominent feature of brain organization, reflecting local and large-scale representation of the sensory surface. Traditionally, such representations in early visual areas are conceived as retinotopic maps preserving ego-centric retinal spatial location while ensuring that other features of visual input are uniformly represented for every location in space. I will discuss our recent findings of a striking departure from this simple mapping in the secondary visual area (V2) of the tree shrew, one that is best described as a sinusoidal transformation of the visual field. This sinusoidal topography is ideal for achieving uniform coverage in an elongated area like V2, as predicted by mathematical models designed for wiring minimization, and provides a novel explanation for stripe-like patterns of intra-cortical connections and functional response properties in V2. Our findings suggest that cortical circuits flexibly implement solutions to sensory surface representation, with dramatic consequences for large-scale cortical organization. Furthermore, our work challenges the framework of relatively independent encoding of location and features in the visual system, showing instead location-dependent feature sensitivity produced by specialized processing of different features in different spatial locations. In the second part of the talk, I will propose that location-dependent feature sensitivity is a fundamental organizing principle of the visual system that achieves efficient representation of positional regularities in visual input, and reflects the evolutionary selection of sensory and motor circuits to optimally represent behaviorally relevant information. Relevant papers: V2 retinotopy (Sedigh-Sarvestani et al., Neuron, 2021); location-dependent feature sensitivity (Sedigh-Sarvestani et al., under review, 2022).

Seminar · Neuroscience · Recording

NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks

Athena Akrami
University College London
Dec 1, 2021

The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e., without any direct instruction or explicit reward. For example, we effortlessly estimate the average tallness of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilised to shape working memory and decision-making behaviours.

Seminar · Neuroscience · Recording

Neural dynamics of probabilistic information processing in humans and recurrent neural networks

Nuttida Rungratsameetaweemana
Sejnowski lab, The Salk Institute
Oct 5, 2021

In nature, sensory inputs are often highly structured, and statistical regularities of these signals can be extracted to form expectations about future sensorimotor associations, thereby optimizing behavior. One of the fundamental questions in neuroscience concerns the neural computations that underlie this probabilistic sensorimotor processing. Using a recurrent neural network (RNN) model together with human psychophysics and electroencephalography (EEG), the present study investigates circuit mechanisms for processing probabilistic structures of sensory signals to guide behavior. We first constructed and trained a biophysically constrained RNN model to perform a series of probabilistic decision-making tasks similar to paradigms designed for humans. Specifically, the training environment was probabilistic such that one stimulus was more probable than the others. We show that both humans and the RNN model successfully extract information about stimulus probability and integrate this knowledge into their decisions and task strategy in a new environment. Specifically, performance of both humans and the RNN model varied with the degree to which the stimulus probability of the new environment matched the formed expectation. In both cases, this expectation effect was more prominent when the strength of sensory evidence was low, suggesting that like humans, our RNNs placed more emphasis on prior expectation (top-down signals) when the available sensory information (bottom-up signals) was limited, thereby optimizing task performance. Finally, by dissecting the trained RNN model, we demonstrate how competitive inhibition and recurrent excitation form the basis for neural circuitry optimized to perform probabilistic information processing.
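The expectation effect described here, stronger reliance on the prior when evidence is weak, falls out of simple Bayesian arithmetic. A minimal illustration (not the trained RNN from the talk; probabilities are invented):

```python
# Why prior expectation matters more when sensory evidence is weak:
# a simple Bayesian illustration with made-up numbers.

def posterior(prior, likelihood_ratio):
    """P(stimulus A | evidence), given prior P(A) and evidence strength
    expressed as the likelihood ratio P(evidence|A) / P(evidence|B)."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.7  # environment in which stimulus A is more probable

strong = posterior(prior, likelihood_ratio=9.0)  # clear sensory evidence for A
weak = posterior(prior, likelihood_ratio=1.5)    # ambiguous sensory evidence

# Same computations with a flat prior, for comparison:
flat_strong = posterior(0.5, 9.0)
flat_weak = posterior(0.5, 1.5)

# The prior shifts the weak-evidence decision far more than the strong one.
print(weak - flat_weak, strong - flat_strong)
```

When the likelihood ratio is near 1 (ambiguous input), the prior dominates the posterior; when evidence is strong, the prior barely moves the decision, matching the pattern reported for both humans and the RNN.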

Seminar · Neuroscience · Recording

The role of the primate prefrontal cortex in inferring the state of the world and predicting change

Ramon Bartolo
Averbeck lab, National Institute of Mental Health
Sep 7, 2021

In an ever-changing environment, uncertainty is omnipresent. To deal with this, organisms have evolved mechanisms that allow them to take advantage of environmental regularities in order to make decisions robustly and adjust their behavior efficiently, thus maximizing their chances of survival. In this talk, I will present behavioral evidence that animals perform model-based state inference to predict environmental state changes and adjust their behavior rapidly, rather than slowly updating choice values. This model-based inference process can be described using Bayesian change-point models. Furthermore, I will show that neural populations in the prefrontal cortex accurately predict behavioral switches, and that the activity of these populations is associated with Bayesian estimates. In addition, we will see that learning leads to the emergence of a high-dimensional representational subspace that can be reused when the animals re-learn a previously learned set of action-value associations. Altogether, these findings highlight the role of the PFC in representing a belief about the current state of the world.

Seminar · Neuroscience · Recording

Understanding the visual demands of underwater habitats for aquatic animals used in neuroscience research

Tod Thiele and Emily Cooper
Tod Thiele: University of Toronto Scarborough; Emily Cooper: University of California, Berkeley
Jul 9, 2020

Zebrafish and cichlids are popular models in visual neuroscience, due to their amenability to advanced research tools and their diverse set of visually guided behaviours. It is often asserted that animals’ neural systems are adapted to the statistical regularities in their natural environments, but relatively little is known about the visual spatiotemporal features in the underwater habitats that nurtured these fish. To address this gap, we have embarked on an examination of underwater habitats in northeastern India and Lake Tanganyika (Zambia), where zebrafish and cichlids are native. In this talk, we will describe the methods used to conduct a series of field measurements and generate a large and diverse dataset of these underwater habitats. We will present preliminary results suggesting that the demands for visually-guided navigation differ between these underwater habitats and the terrestrial habitats characteristic of other model species.

Seminar · Neuroscience · Recording

Learning from the infant’s point of view

Linda Smith
Indiana University
Jul 7, 2020

Learning depends on both the learning mechanism and the regularities in the training material, yet most research on human and machine learning focuses on discovering the mechanisms that underlie powerful learning. I will present evidence from our research focusing on the statistical structure of infant visual learning environments. The findings suggest that the statistical structure of those learning environments is not like that used in laboratory experiments on visual learning, in machine learning, or in our adult assumptions about how to teach visual categories. The data derive from our use of head cameras and head-mounted eye trackers capturing field-of-view (FOV) experiences in the home as well as in simulated home environments in the laboratory. The participants range from 1 month of age to 24 months. The observed statistical structure offers new insights into the developmental foundations of visual object recognition and suggests a computational rethinking of the problem of visual category formation. The observed environmental statistics also have direct implications for understanding the development of cortical visual systems.