
Fidelity

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with fidelity across World Wide.
20 curated items · 19 Seminars · 1 ePoster
Updated 8 months ago
Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 6, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; no author of the paper will be joining us.

Title: Brain decoding: toward real-time reconstruction of visual perception

Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz), which fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end, and iii) a pretrained image generator. Our results are threefold. First, our MEG decoder shows a 7X improvement in image retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the real-time decoding of the visual processes continuously unfolding within the human brain.

Speaker: Dr. Paul Scotti (Stability AI, MedARC)
Paper link: https://arxiv.org/abs/2310.19812
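
The contrastive objective named in the abstract can be illustrated with a minimal, self-contained sketch (the function name, embedding shapes, and temperature below are illustrative assumptions, not taken from the paper): an InfoNCE-style loss scores matched MEG/image embedding pairs against mismatched ones, which is also what drives an image-retrieval evaluation.

```python
import numpy as np

def info_nce_loss(meg_emb, img_emb, temperature=0.1):
    """Symmetric InfoNCE loss aligning MEG and image embeddings.

    meg_emb, img_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalize so dot products are cosine similarities
    m = meg_emb / np.linalg.norm(meg_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = m @ v.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy with the diagonal (matched pair) as the correct class,
    # averaged over both retrieval directions (MEG->image and image->MEG)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -0.5 * (np.mean(np.diag(log_p)) + np.mean(np.diag(log_p_t)))

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 16))
aligned = info_nce_loss(img + 0.01 * rng.standard_normal((8, 16)), img)
shuffled = info_nce_loss(rng.standard_normal((8, 16)), img)
assert aligned < shuffled  # matched pairs yield a much lower loss
```

Under this kind of objective, retrieval accuracy (does the true image rank first among candidates?) is the natural evaluation, which is the metric behind the reported 7X improvement.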

Seminar · Neuroscience

Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong

Tim Gentner
University of California, San Diego, USA
Nov 8, 2022

Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space.  
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form (spike-time coincidences) amenable to biophysical mechanisms of neural information propagation.
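
The combinatorial code described above can be sketched in toy form (all names and numbers are illustrative, not from the study): binarize each time bin of population spiking into a codeword of active/silent neurons, then compare codewords by how many neurons differ, a distance that can be related to acoustic distances among stimuli.

```python
import numpy as np

def codewords(spike_counts, threshold=1):
    """Binary codewords: which neurons are active in each time bin.

    spike_counts: (n_bins, n_neurons) array of counts per bin.
    """
    return (spike_counts >= threshold).astype(int)

def hamming(a, b):
    """Number of neurons whose active/silent state differs between codewords."""
    return int(np.sum(a != b))

# Toy population: 6 neurons, 3 time bins of spike counts
counts = np.array([[2, 0, 1, 0, 3, 0],
                   [2, 0, 0, 0, 3, 1],
                   [0, 2, 0, 3, 0, 2]])
cw = codewords(counts)
# Bins 0 and 1 share most active neurons; bin 2 is a different pattern
assert hamming(cw[0], cw[1]) < hamming(cw[0], cw[2])
```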

Seminar · Neuroscience

Signal in the Noise: models of inter-trial and inter-subject neural variability

Alex Williams
NYU/Flatiron
Nov 3, 2022

The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
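
The claim that noise-correlation geometry is a meaningful axis of variation, neglected when one compares only mean responses, can be illustrated with a toy simulation (all names and parameters below are invented for illustration, not taken from the talk): two "subjects" share a latent source of trial-to-trial variability, and their noise-correlation matrices end up geometrically similar.

```python
import numpy as np

def noise_corr(trials):
    """Noise-correlation matrix from (n_trials, n_neurons) responses
    to repeats of the same stimulus (mean response subtracted)."""
    return np.corrcoef((trials - trials.mean(axis=0)).T)

def corr_geometry_similarity(c1, c2):
    """Compare two noise-correlation geometries via their off-diagonal entries."""
    iu = np.triu_indices_from(c1, k=1)
    return np.corrcoef(c1[iu], c2[iu])[0, 1]

rng = np.random.default_rng(1)
w = rng.standard_normal(5)  # how a shared latent source drives 5 neurons

def simulate():
    """One 'subject': 200 trials of latent-driven responses plus private noise."""
    latent = rng.standard_normal((200, 1))
    return latent * w + 0.5 * rng.standard_normal((200, 5))

sim = corr_geometry_similarity(noise_corr(simulate()), noise_corr(simulate()))
assert sim > 0.5  # shared latent structure -> similar noise-correlation geometry
```

Mean responses in this toy are zero for both subjects, so a mean-based comparison would see nothing, while the trial-by-trial correlation structure reveals the shared geometry.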

Seminar · Neuroscience · Recording

Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation

Aran Nayebi
MIT
Nov 1, 2022

Studies of the mouse visual system have revealed a variety of visual brain areas in a roughly hierarchical arrangement, together with a multitude of behavioral capacities, ranging from stimulus-reward associations, to goal-directed navigation, to object-centric discriminations. However, an overall understanding of the organization of mouse visual cortex, and of how this organization supports visual behaviors, is still lacking. Here, we take a computational approach to help address these questions, providing a high-fidelity quantitative model of mouse visual cortex. By analyzing factors contributing to model fidelity, we identified key principles underlying the organization of mouse visual cortex. Structurally, we find that comparatively low resolution and shallow structure were both important for model correctness. Functionally, we find that models trained with task-agnostic, unsupervised objective functions based on the concept of contrastive embeddings were substantially better than models trained with supervised objectives. Finally, the unsupervised objective builds a general-purpose visual representation that enables the system to achieve better transfer on out-of-distribution visual scene understanding and reward-based navigation tasks. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes best use of the mouse’s limited resources to create a light-weight, general-purpose visual system – in contrast to the deep, high-resolution, and more task-specific visual system of primates.

Seminar · Neuroscience · Recording

Retinal responses to natural inputs

Fred Rieke
University of Washington
Apr 17, 2022

The research in my lab focuses on sensory signal processing, particularly in cases where sensory systems perform at or near the limits imposed by physics. Photon counting in the visual system is a beautiful example. At its peak sensitivity, the performance of the visual system is limited largely by the division of light into discrete photons. This observation has several implications for phototransduction and signal processing in the retina: rod photoreceptors must transduce single-photon absorptions with high fidelity; single-photon signals in photoreceptors, which are only 0.03–0.1 mV, must be reliably transmitted to second-order cells in the retina; and absorption of a single photon by a single rod must produce a noticeable change in the pattern of action potentials sent from the eye to the brain. My approach is to combine quantitative physiological experiments and theory to understand photon counting in terms of basic biophysical mechanisms. Fortunately, there is more to visual perception than counting photons. The visual system is very adept at operating over a wide range of light intensities (about 12 orders of magnitude). Over most of this range, vision is mediated by cone photoreceptors; thus adaptation is paramount to cone vision. Again, one would like to understand quantitatively how the biophysical mechanisms involved in phototransduction, synaptic transmission, and neural coding contribute to adaptation.
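
The photon-counting limit mentioned above follows directly from Poisson statistics; a minimal sketch of an idealized detector (not a model of the retina) makes the physics concrete:

```python
import math

def p_at_least_one(mean_photons):
    """Poisson probability that a dim flash delivers at least one photon."""
    return 1.0 - math.exp(-mean_photons)

def snr(mean_photons):
    """Signal-to-noise ratio of an ideal photon counter: N / sqrt(N)."""
    return math.sqrt(mean_photons)

# Even an ideal detector misses many very dim flashes (~37% at 1 photon/flash
# on average), and reliability grows only as the square root of photon count.
assert abs(p_at_least_one(1.0) - 0.632) < 0.001
assert snr(100) == 10.0
```

Any real visual system can only approach, never exceed, these limits, which is why near-ideal rod performance is so striking.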

Seminar · Neuroscience

Unravelling bistable perception from human intracranial recordings

Rodica Curtu
UIOWA
Apr 5, 2022

Discovering dynamical patterns in high-fidelity time series is typically a challenging task. In this talk, the time-series data consist of neural recordings taken from the auditory cortex of human subjects who listened to sequences of repeated triplets of tones and reported their perception by pressing a button. Subjects reported spontaneous alternations between two auditory perceptual states (1-stream and 2-streams). We discuss a data-driven method, which leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features that correlate with subject-reported switching between perceptual states.
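
Two stages of the pipeline, time-delayed coordinates and dynamic mode decomposition (DMD), can be sketched on a toy signal (diffusion maps are omitted, and all names and parameters here are illustrative, not from the talk):

```python
import numpy as np

def hankel_embed(x, n_delays):
    """Time-delay (Hankel) embedding of a 1-D signal: stacked shifted copies."""
    T = len(x) - n_delays + 1
    return np.stack([x[i:i + T] for i in range(n_delays)])

def dmd_eigs(X, r=2):
    """Eigenvalues of the rank-r best-fit linear map X[:, t+1] ≈ A X[:, t]
    (exact DMD with SVD truncation)."""
    U, s, Vt = np.linalg.svd(X[:, :-1], full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    A_tilde = U.T @ X[:, 1:] @ Vt.T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Toy "neural" signal: a single oscillation at a known frequency
dt, f = 0.01, 2.0
t = np.arange(0.0, 4.0, dt)
H = hankel_embed(np.sin(2 * np.pi * f * t), n_delays=8)
lead = max(dmd_eigs(H), key=lambda z: z.imag)
freq = abs(np.log(lead).imag) / (2 * np.pi * dt)
assert abs(freq - f) < 0.05  # DMD recovers the oscillation frequency
```

Delay embedding turns a scalar recording into a state-space object on which linear operator methods like DMD become meaningful, which is the same logic applied to the intracranial recordings in the talk.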

Seminar · Psychology

Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?

Timothy J Ricker
University of South Dakota
Nov 21, 2021

When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detailed information for a stimulus are represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detailed and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60-min talk.)
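
The kind of mixture model discussed here can be sketched as a three-component density over recall errors: a fine-detail (von Mises) component centered on the target, a categorical component pulled toward a category prototype, and uniform guessing. The parameter values below are illustrative assumptions, not estimates from the study.

```python
import numpy as np

def vonmises_pdf(x, mu, kappa):
    """Von Mises density on the circle (x, mu in radians)."""
    return np.exp(kappa * np.cos(x - mu)) / (2 * np.pi * np.i0(kappa))

def mixture_pdf(err, p_cat, p_guess, kappa, cat_offset):
    """Density of recall errors: fine-detail memory of the target,
    a categorical pull toward a prototype, and uniform guessing."""
    p_fine = 1.0 - p_cat - p_guess
    return (p_fine * vonmises_pdf(err, 0.0, kappa)
            + p_cat * vonmises_pdf(err, cat_offset, kappa)
            + p_guess / (2 * np.pi))

errs = np.linspace(-np.pi, np.pi, 721)
dens = mixture_pdf(errs, p_cat=0.3, p_guess=0.1, kappa=8.0, cat_offset=0.5)
# The density integrates to one and peaks near the target (error = 0),
# shifted slightly toward the category prototype at 0.5 rad
assert abs((dens.sum() - 0.5 * (dens[0] + dens[-1])) * (errs[1] - errs[0]) - 1.0) < 1e-3
assert abs(errs[np.argmax(dens)]) < 0.2
```

Fitting such a density to reproduction errors is what lets the competing unified-versus-separate representational structures make distinguishable predictions.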

Seminar · Neuroscience

Representation transfer and signal denoising through topographic modularity

Barna Zajzon
Morrison lab, Forschungszentrum Jülich, Germany
Nov 3, 2021

To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupt. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specificities. 
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system’s behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
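
The core contrast, topographic versus unstructured feed-forward projections, can be sketched with a toy rectified-linear network (a drastic simplification of the spiking model; all sizes and noise levels below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate(rate, W, noise=0.2, steps=3):
    """Pass a population rate vector through `steps` feed-forward stages,
    adding independent noise at each stage and rectifying."""
    for _ in range(steps):
        rate = np.maximum(W @ rate + noise * rng.standard_normal(len(rate)), 0)
    return rate

n, groups = 40, 4
labels = np.repeat(np.arange(groups), n // groups)
# Topographic: each group projects onto (and averages within) its own group
topo = (labels[:, None] == labels[None, :]).astype(float) / (n // groups)
# Unstructured: every group mixed together
rand = np.ones((n, n)) / n

stim = (labels == 0).astype(float)  # drive group 0 only
selectivity = lambda r: r[labels == 0].mean() - r[labels != 0].mean()
assert selectivity(propagate(stim, topo)) > selectivity(propagate(stim, rand))
```

In this sketch the topographic map keeps the stimulated group distinct across stages (and within-group averaging suppresses noise), while uniform mixing erases the stimulus identity within one stage.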

Seminar · Neuroscience

Demystifying the richness of visual perception

Ruth Rosenholtz
MIT
Oct 19, 2021

Human vision is full of puzzles. Observers can grasp the essence of a scene in an instant, yet when probed for details they are at a loss. People have trouble finding their keys, yet the keys may be quite visible once found. How does one explain this combination of marvelous successes and quirky failures? I will describe our attempts to develop a unifying theory that brings a satisfying order to multiple phenomena. One key is to understand peripheral vision. A visual system cannot process everything with full fidelity, and therefore must lose some information. Peripheral vision must condense a mass of information into a succinct representation that nonetheless carries the information needed for vision at a glance. We have proposed that the visual system deals with limited capacity in part by representing its input in terms of a rich set of local image statistics, where the local regions grow — and the representation becomes less precise — with distance from fixation. This scheme gains the computation of sophisticated image features at the expense of precise spatial localization of those features. What are the implications of such an encoding scheme? Critical to our understanding has been the use of methodologies for visualizing the equivalence classes of the model. These visualizations allow one to quickly see that many of the puzzles of human vision may arise from a single encoding mechanism. They have suggested new experiments and predicted unexpected phenomena. Furthermore, visualization of the equivalence classes has facilitated the generation of testable model predictions, allowing us to study the effects of this relatively low-level encoding on a wide range of higher-level tasks. Peripheral vision helps explain many of the puzzles of vision, but some remain. By examining the phenomena that cannot be explained by peripheral vision, we gain insight into the nature of additional capacity limits in vision.
In particular, I will suggest that decision processes face general-purpose limits on the complexity of the tasks they can perform at a given time.
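
The eccentricity-dependent pooling idea can be sketched in one dimension (a toy illustration with invented parameters, not the actual texture-statistics model): summary statistics are computed over windows whose width grows with distance from fixation, so peripheral detail is summarized rather than preserved.

```python
import numpy as np

def pooled_stats(image_row, fixation, base=2, slope=0.5):
    """Summarize a 1-D 'image' by local statistics in pooling windows
    whose width grows linearly with eccentricity (distance from fixation)."""
    stats, i = [], 0
    while i < len(image_row):
        width = int(base + slope * abs(i - fixation))
        patch = image_row[i:i + width]
        stats.append((i, patch.mean(), patch.std()))  # (position, mean, contrast)
        i += width
    return stats

row = np.tile([0.0, 1.0], 50)  # fine-grained alternating pattern, 100 "pixels"
summary = pooled_stats(row, fixation=50)
# Far fewer numbers than pixels: wide peripheral windows discard detail
assert len(summary) < len(row)
```

Each peripheral window here reports only mean and contrast, which is exactly the sense in which the scheme keeps feature content while losing feature localization.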

Seminar · Psychology

Categories, language, and visual working memory: how verbal labels change capacity limitations

Alessandra S. Souza
University of Porto, University of Zurich
Aug 10, 2021

The limited capacity of visual working memory constrains the quantity and quality of the information we can store in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, the items to which labels are applied, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.

Seminar · Physics of Life

Spatio-temporal control over near critical-point operation ensures fidelity of bacterial genome partition

Jian Liu
Johns Hopkins
Jul 29, 2021

Seminar · Neuroscience · Recording

Toward a High-fidelity Artificial Retina for Vision Restoration

E.J. Chichilnisky
Stanford University
Jun 16, 2020

Electronic interfaces to the retina represent an exciting development in science, engineering, and medicine – an opportunity to exploit our knowledge of neural circuitry and function to restore or even enhance vision. However, although existing devices demonstrate proof of principle in treating incurable blindness, they produce limited visual function. Some of the reasons for this can be understood based on the precise and specific neural circuitry that mediates visual signaling in the retina. Consideration of this circuitry suggests that future devices may need to operate at single-cell, single-spike resolution in order to mediate naturalistic visual function. I will show large-scale multi-electrode recording and stimulation data from the primate retina indicating that, in some cases, such resolution is possible. I will also discuss cases in which it fails, and propose that we can improve artificial vision in such conditions by incorporating our knowledge of the visual system in bi-directional devices that adapt to the host neural circuitry. Finally, I will introduce the Stanford Artificial Retina Project, aimed at developing a retinal implant that more faithfully reproduces the neural code of the retina, and briefly discuss the implications for scientific investigation and for other neural interfaces of the future.

Seminar · Neuroscience · Recording

The subcellular organization of excitation and inhibition underlying high-fidelity direction coding in the retina

Gautam Awatramani
University of Victoria
May 10, 2020

Understanding how neural circuits in the brain compute information not only requires determining how individual inhibitory and excitatory elements of circuits are wired together, but also a detailed knowledge of their functional interactions. Recent advances in optogenetic techniques and mouse genetics now offer ways to specifically probe the functional properties of neural circuits with unprecedented specificity. Perhaps one of the most heavily interrogated circuits in the mouse brain is one in the retina that is involved in coding direction (reviewed by Mauss et al., 2017; Vaney et al., 2012). In this circuit, direction is encoded by specialized direction-selective (DS) ganglion cells (DSGCs), which respond robustly to objects moving in a ‘preferred’ direction but not in the opposite or ‘null’ direction (Barlow and Levick, 1965). We now know this computation relies on the coordination of three transmitter systems: glutamate, GABA and acetylcholine (ACh). In this talk, I will discuss the synaptic mechanisms that produce the spatiotemporal patterns of inhibition and excitation that are crucial for shaping directional selectivity. Special emphasis will be placed on the role of ACh, as it is unclear whether it is mediated by synaptic or non-synaptic mechanisms, which is in fact a central issue in the CNS. Barlow, H.B., and Levick, W.R. (1965). The mechanism of directionally selective units in rabbit's retina. J Physiol 178, 477-504. Mauss, A.S., Vlasits, A., Borst, A., and Feller, M. (2017). Visual Circuits for Direction Selectivity. Annu Rev Neurosci 40, 211-230. Vaney, D.I., Sivyer, B., and Taylor, W.R. (2012). Direction selectivity in the retina: symmetry and asymmetry in structure and function. Nat Rev Neurosci 13, 194-208

ePoster

A universal power law in visual adaptation: balancing representation fidelity and metabolic cost

Matteo Mariani, Amin S. Moosavi, Dario Ringach, Mario Dipoppa

COSYNE 2025