
Consistency

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with consistency across World Wide.
14 curated seminars · Updated over 1 year ago

Seminar · Psychology

Error Consistency between Humans and Machines as a function of presentation duration

Thomas Klein
Eberhard Karls Universität Tübingen
Jun 30, 2024

Within the last decade, Deep Artificial Neural Networks (DNNs) have emerged as powerful computer vision systems that match or exceed human performance on many benchmark tasks such as image classification. But whether current DNNs are suitable computational models of the human visual system remains an open question: While DNNs have proven to be capable of predicting neural activations in primate visual cortex, psychophysical experiments have shown behavioral differences between DNNs and human subjects, as quantified by error consistency. Error consistency is typically measured by briefly presenting natural or corrupted images to human subjects and asking them to perform an n-way classification task under time pressure. But for how long should stimuli ideally be presented to guarantee a fair comparison with DNNs? Here we investigate the influence of presentation time on error consistency, to test the hypothesis that higher-level processing drives behavioral differences. We systematically vary presentation times of backward-masked stimuli from 8.3ms to 266ms and measure human performance and reaction times on natural, lowpass-filtered and noisy images. Our experiment constitutes a fine-grained analysis of human image classification under both image corruptions and time pressure, showing that even drastically time-constrained humans who are exposed to the stimuli for only two frames, i.e. 16.6ms, can still solve our 8-way classification task with success rates way above chance. We also find that human-to-human error consistency is already stable at 16.6ms.
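In the literature this talk builds on, trial-level error consistency is typically quantified with a Cohen's-kappa-style statistic: observed agreement between two observers' correct/incorrect patterns, corrected for the agreement expected from their accuracies alone. A minimal sketch, assuming two boolean arrays of per-trial correctness (variable names are illustrative):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Trial-level error consistency between two observers (Cohen's-kappa style):
    observed agreement on correct/incorrect trials, corrected for the agreement
    expected from the two accuracies alone. Assumes neither observer is at
    exactly 0% or 100% accuracy."""
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)

    c_obs = np.mean(correct_a == correct_b)        # fraction of trials with the same outcome
    p_a, p_b = correct_a.mean(), correct_b.mean()  # individual accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)      # agreement expected by chance

    return (c_obs - c_exp) / (1 - c_exp)

# toy example: two observers, 8 trials each (1 = correct, 0 = error)
human = [1, 1, 0, 1, 0, 1, 1, 0]
model = [1, 0, 0, 1, 0, 1, 1, 1]
print(error_consistency(human, model))
```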

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Reese Kneeland
Jan 4, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: Brain-optimized inference improves reconstructions of fMRI brain activity

Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas.

Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab.

Paper link: https://arxiv.org/abs/2312.07705
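The inference procedure described in the abstract is an iterative refine-and-select loop. A schematic sketch follows; the function names (base_decoder, diffusion_sample, encoding_model) are hypothetical placeholders rather than the paper's actual API, and the stopping rule below merely stands in for the paper's width criterion.

```python
import numpy as np

def brain_optimized_inference(brain_activity, base_decoder, diffusion_sample,
                              encoding_model, n_iters=10, library_size=16):
    """Schematic of the brain-optimized inference loop described in the abstract:
    refine a seed reconstruction by repeatedly sampling candidate images and
    keeping the one whose predicted brain activity best matches the measurement."""
    seed = base_decoder(brain_activity)          # initial reconstruction
    strength = 1.0                               # stochasticity of the image distribution

    for _ in range(n_iters):
        # sample a small library of images conditioned on the current seed
        library = [diffusion_sample(seed, noise_level=strength)
                   for _ in range(library_size)]

        # score each candidate by how well its predicted activity matches the data
        scores = [np.corrcoef(encoding_model(img), brain_activity)[0, 1]
                  for img in library]
        seed = library[int(np.argmax(scores))]   # best candidate guides the next iteration

        strength *= 0.8                          # progressively narrow the image distribution
        if strength < 0.1:                       # crude stand-in for the paper's width criterion
            break
    return seed
```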

Seminar · Neuroscience

Why are we consistently inconsistent? On the neural mechanisms of behavioural inconsistency

Tobias Hauser
Developmental Computational Psychiatry Lab, University of Tübingen
May 3, 2023

Seminar · Open Source · Recording

GeNN

James Knight
University of Sussex
Mar 22, 2022

Large-scale numerical simulations of brain circuit models are important for identifying hypotheses on brain functions and testing their consistency and plausibility. Similarly, spiking neural networks are also gaining traction in machine learning with the promise that neuromorphic hardware will eventually make them much more energy efficient than classical ANNs. In this session, we will present the GeNN (GPU-enhanced Neuronal Networks) framework, which aims to facilitate the use of graphics accelerators for computational models of large-scale spiking neuronal networks to address the challenge of efficient simulations. GeNN is an open source library that generates code to accelerate the execution of network simulations on NVIDIA GPUs through a flexible and extensible interface, which does not require in-depth technical knowledge from the users. GeNN was originally developed as a pure C++ and CUDA library but, subsequently, we have added a Python interface and OpenCL backend. We will briefly cover the history and basic philosophy of GeNN and show some simple examples of how it is used and how it interacts with other Open Source frameworks such as Brian2GeNN and PyNN.
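As a concrete illustration of the Brian2GeNN route mentioned above, a standard Brian 2 script can be redirected onto GeNN's GPU backend simply by selecting the 'genn' device. This is a minimal sketch only: it assumes brian2genn is installed and a CUDA-capable GPU is available, and the model and parameter values are illustrative.

```python
from brian2 import NeuronGroup, SpikeMonitor, run, ms, set_device
import brian2genn  # registers the 'genn' code-generation device with Brian 2

set_device('genn')  # subsequent Brian 2 code is compiled into a GeNN (CUDA) simulation

# a small leaky integrate-and-fire population driven by a constant input (illustrative values)
eqs = '''
dv/dt = (I - v) / (10*ms) : 1
I : 1 (constant)
'''
neurons = NeuronGroup(1000, eqs, threshold='v > 1', reset='v = 0', method='exact')
neurons.I = 1.5

spikes = SpikeMonitor(neurons)
run(100 * ms)
print(f"{spikes.num_spikes} spikes in 100 ms")
```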

Seminar · Psychology

Commonly used face cognition tests yield low reliability and inconsistent performance: Implications for test design, analysis, and interpretation of individual differences data

Anna Bobak & Alex Jones
University of Stirling & Swansea University
Jan 19, 2022

Unfamiliar face processing (face cognition) ability varies considerably in the general population. However, the means of its assessment are not standardised, and selected laboratory tests vary between studies. It is also unclear whether 1) the most commonly employed tests are reliable, 2) participants show a degree of consistency in their performance, and 3) the face cognition tests broadly measure one underlying ability, akin to general intelligence. In this study, we asked participants to perform eight tests frequently employed in the individual differences literature. We examined the reliability of these tests, relationships between them, consistency in participants’ performance, and used data-driven approaches to determine factors underpinning performance. Overall, our findings suggest that the reliability of these tests is poor to moderate, the correlations between them are weak, the consistency in participant performance across tasks is low, and that performance can be broadly split into two factors: telling faces together, and telling faces apart. We recommend that future studies adjust analyses to account for stimuli (face images) and participants as random factors, routinely assess reliability, and that newly developed tests of face cognition are examined in the context of convergent validity with other commonly used measures of face cognition ability.
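One of the recommendations above, routinely assessing reliability, can be approximated with a simple split-half estimate plus a Spearman-Brown correction. A minimal sketch over a hypothetical participants-by-items accuracy matrix:

```python
import numpy as np

def split_half_reliability(scores, rng=None):
    """Split-half reliability of a test, Spearman-Brown corrected.
    `scores` is a (participants x items) array of item-level accuracies (0/1)."""
    rng = np.random.default_rng(rng)
    n_items = scores.shape[1]
    items = rng.permutation(n_items)
    half_a = scores[:, items[: n_items // 2]].mean(axis=1)  # per-participant score, half A
    half_b = scores[:, items[n_items // 2:]].mean(axis=1)   # per-participant score, half B
    r = np.corrcoef(half_a, half_b)[0, 1]
    return 2 * r / (1 + r)                                  # Spearman-Brown correction

# toy data: 50 participants x 40 items, accuracy driven by a latent ability
rng = np.random.default_rng(0)
ability = rng.normal(0, 1, size=(50, 1))
scores = (rng.normal(0, 1, size=(50, 40)) + ability > 0).astype(float)
print(round(split_half_reliability(scores, rng=0), 2))
```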

Seminar · Neuroscience · Recording

NMC4 Short Talk: Neurocomputational mechanisms of causal inference during multisensory processing in the macaque brain

Guangyao Qi
Institute of Neuroscience, Chinese Academy of Sciences
Dec 2, 2021

Natural perception relies inherently on inferring causal structure in the environment. However, the neural mechanisms and functional circuits that are essential for representing and updating the hidden causal structure during multisensory processing are unknown. To address this, monkeys were trained to infer the probability of a potential common source from visual and proprioceptive signals on the basis of their spatial disparity in a virtual reality system. The proprioceptive drift reported by monkeys demonstrated that they combined historical information and current multisensory signals to estimate the hidden common source and subsequently updated both the causal structure and sensory representation. Single-unit recordings in premotor and parietal cortices revealed that neural activity in premotor cortex represents the core computation of causal inference, characterizing the estimation and update of the likelihood of integrating multiple sensory inputs at a trial-by-trial level. In response to signals from premotor cortex, neural activity in parietal cortex also represents the causal structure and further dynamically updates the sensory representation to maintain consistency with the causal inference structure. Thus, our results indicate how premotor cortex integrates historical information and sensory inputs to infer hidden variables and selectively updates sensory representations in parietal cortex to support behavior. This dynamic loop of frontal-parietal interactions in the causal inference framework may provide the neural mechanism to answer long-standing questions regarding how neural circuits represent hidden structures for body-awareness and agency.
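The causal-inference computation described here is commonly formalised as a Bayesian comparison between a common-cause and a separate-causes account of the two cues. A minimal numerical sketch under Gaussian assumptions (parameter values are illustrative, not fitted to the monkey data):

```python
import numpy as np

def p_common(x_vis, x_prop, sigma_vis=1.0, sigma_prop=2.0,
             sigma_prior=10.0, p_c=0.5):
    """Posterior probability that visual and proprioceptive cues share one source,
    under a standard Gaussian causal-inference model (illustrative parameters)."""
    s = np.linspace(-60, 60, 4001)            # candidate source locations
    ds = s[1] - s[0]
    prior = np.exp(-0.5 * (s / sigma_prior) ** 2) / (np.sqrt(2 * np.pi) * sigma_prior)
    lik_vis = np.exp(-0.5 * ((x_vis - s) / sigma_vis) ** 2) / (np.sqrt(2 * np.pi) * sigma_vis)
    lik_prop = np.exp(-0.5 * ((x_prop - s) / sigma_prop) ** 2) / (np.sqrt(2 * np.pi) * sigma_prop)

    # C = 1: both cues generated by a single source s
    like_c1 = np.sum(lik_vis * lik_prop * prior) * ds
    # C = 2: each cue generated by its own independent source
    like_c2 = (np.sum(lik_vis * prior) * ds) * (np.sum(lik_prop * prior) * ds)

    return like_c1 * p_c / (like_c1 * p_c + like_c2 * (1 - p_c))

# larger spatial disparity between the cues -> lower probability of a common cause
print(p_common(0.0, 2.0), p_common(0.0, 20.0))
```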

Seminar · Psychology

Consistency of Face Identity Processing: Basic & Translational Research

Jeffrey Nador
University of Fribourg
Nov 17, 2021

Previous work looking at individual differences in face identity processing (FIP) has found that most commonly used lab-based performance assessments are unfortunately not sufficiently sensitive on their own for measuring performance in both the upper and lower tails of the general population simultaneously. So more recently, researchers have begun incorporating multiple testing procedures into their assessments. Still, though, the growing consensus seems to be that at the individual level, there is quite a bit of variability between test scores. The overall consequence of this is that extreme scores will still occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine selection of those with superior abilities (i.e. from the upper tail). For starters, we assessed consistency of face matching and recognition in neurotypical controls, and compared them to a sample of super-recognisers (SRs). In terms of face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. Meanwhile, we showed that SRs’ recognition of faces is highly related to memorability for identities, yet effectively unrelated among controls. So overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to get bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of 3 challenging FIP tests tended to show higher consistency, and vice versa. Overall, this suggests that not only is consistency a reasonably good marker of superior FIP abilities, but that it could also present important practical benefits for personnel selection in many other domains of expertise.

Seminar · Neuroscience

Synaptic plasticity controls the emergence of population-wide invariant representations in balanced network models

Tatjana Tchumatchenko
University of Bonn
Nov 9, 2021

The intensity and features of sensory stimuli are encoded in the activity of neurons in the cortex. In the visual and piriform cortices, the stimulus intensity re-scales the activity of the population without changing its selectivity for the stimulus features. The cortical representation of the stimulus is therefore intensity-invariant. This emergence of network invariant representations appears robust to local changes in synaptic strength induced by synaptic plasticity, even though: i) synaptic plasticity can potentiate or depress connections between neurons in a feature-dependent manner, and ii) in networks with balanced excitation and inhibition, synaptic plasticity determines the non-linear network behavior. In this study, we investigate the consistency of invariant representations with a variety of synaptic states in balanced networks. By using mean-field models and spiking network simulations, we show how the synaptic state controls the emergence of intensity-invariant or intensity-dependent selectivity by inducing changes in the network response to intensity. In particular, we demonstrate how facilitating synaptic states can sharpen the network selectivity while depressing states broaden it. We also show how power-law-type synapses permit the emergence of invariant network selectivity and how this plasticity can be generated by a mix of different plasticity rules. Our results explain how the physiology of individual synapses is linked to the emergence of invariant representations of sensory stimuli at the network level.
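As a toy illustration of the distinction between intensity-invariant and intensity-dependent selectivity (a constructed example, not the balanced-network model itself), compare a tuning curve re-scaled multiplicatively by intensity with one passed through a supralinear transfer: only the former preserves selectivity across intensities.

```python
import numpy as np

theta = np.linspace(-np.pi, np.pi, 181)   # stimulus feature (e.g. orientation)
tuning = np.exp(np.cos(theta))            # baseline tuning curve of the population response

def selectivity(r):
    """Simple contrast-style selectivity index of a tuning curve."""
    return (r.max() - r.min()) / (r.max() + r.min())

for c in (0.5, 1.0, 2.0):                 # stimulus intensity
    invariant = c * tuning                # multiplicative re-scaling: shape and selectivity preserved
    sharpened = tuning ** (1 + c)         # supralinear transfer: selectivity grows with intensity
    print(c, round(selectivity(invariant), 2), round(selectivity(sharpened), 2))
```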

Seminar · Neuroscience

Memorability: Prioritizing visual information for memory

Wilma Bainbridge
University of Chicago
Jun 27, 2021

There is a surprising consistency in the images we remember and forget – across observers, certain images are intrinsically more memorable than others in spite of our diverse individual experiences. The perception of images at different memorability levels also results in stereotyped patterns in visual and mnemonic regions in the brain, regardless of an individual’s actual memory for that item. In this talk, Dr. Bainbridge will discuss our current neuroscientific understanding of how memorability is represented in patterns in the brain, potentially serving as a signal for how stimulus information is prioritized for eventual memory encoding.

Seminar · Neuroscience · Recording

Blindspot: Hidden Biases of Good People

Mahzarin Banaji
Harvard University
Apr 15, 2021

Mahzarin Banaji and her colleague coined the term “implicit bias” in the mid-1990s to refer to behavior that occurs without conscious awareness. Today, Professor Banaji is Cabot Professor of Social Ethics in the Department of Psychology at Harvard University, a member of the American Academy of Arts and Sciences, the National Academy of Sciences and has received numerous awards for her scientific contributions. The purpose of the seminar, Blindspot: Hidden Biases of Good People, is to reveal the surprising and even perplexing ways in which we make errors in assessing and evaluating others when we recruit and hire, onboard and promote, lead teams, undertake succession planning, and work on behalf of our clients or the public we serve. It is Professor Banaji’s belief that people intend well and that the inconsistency we see, between values and behavior, comes from a lack of awareness. But because implicit bias is pervasive, we must rely on scientific evidence to “outsmart” our minds. If we do so, we will be more likely to reach the life goals we have chosen for ourselves and to serve better the organizations for which we work.

Seminar · Psychology

Accuracy versus consistency: Investigating face and voice matching abilities

Robin Kramer
University of Lincoln
Mar 17, 2021

Deciding whether two different face photographs or voice samples are from the same person represents a fundamental challenge within applied settings. To date, most research has focussed on average performance in these tests, failing to consider individual differences and within-person consistency in responses. In the current studies, participants completed the same face or voice matching test on two separate occasions, allowing comparison of overall accuracy across the two timepoints as well as consistency in trial-level responses. In both experiments, participants were highly consistent in their performances. In addition, we demonstrated a large association between consistency and accuracy, with the most accurate participants also tending to be the most consistent. This is an important result for applied settings in which organisational groups of super-matchers are deployed in real-world contexts. Being able to reliably identify these high performers based upon only a single test informs recruitment for law enforcement agencies worldwide.
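Trial-level consistency of the kind measured here can be summarised per participant as the agreement between the two sessions' responses and then related to accuracy across participants. A minimal sketch on synthetic data (arrays and parameters are hypothetical, not the study's):

```python
import numpy as np

# hypothetical data: N participants answer the same T matching trials at two timepoints;
# responses are "same"/"different" decisions coded 0/1, truth holds the correct labels
rng = np.random.default_rng(1)
N, T = 100, 60
truth = rng.integers(0, 2, size=T)
skill = rng.uniform(0.55, 0.95, size=(N, 1))   # per-participant probability of a correct response
resp_t1 = np.where(rng.random((N, T)) < skill, truth, 1 - truth)
resp_t2 = np.where(rng.random((N, T)) < skill, truth, 1 - truth)

consistency = (resp_t1 == resp_t2).mean(axis=1)  # per-participant trial-level agreement across sessions
accuracy = ((resp_t1 == truth).mean(axis=1) + (resp_t2 == truth).mean(axis=1)) / 2

# more accurate participants also tend to be more consistent
print("consistency-accuracy correlation:", round(np.corrcoef(consistency, accuracy)[0, 1], 2))
```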

Seminar · Neuroscience · Recording

Neural circuit redundancy, stability, and variability in developmental brain disorders

Cian O'Donnell
University of Bristol
Jun 17, 2020

Despite the consistency of symptoms at the cognitive level, we now know that brain disorders like Autism and Schizophrenia can each arise from mutations in >100 different genes. Presumably there is a convergence of “symptoms” at the level of neural circuits in diagnosed individuals. In this talk I will argue that redundancy in neural circuit parameters implies that we should take a circuit-function rather than circuit-component approach to understanding these disorders. Then I will present our recent empirical work testing a circuit-function theory for Autism: the idea that neural circuits show excess trial-to-trial variability in response to sensory stimuli, and instability in the representations across a timescale of days. For this we analysed in vivo neural population activity data recorded from somatosensory cortex of mouse models of Fragile-X syndrome, a disorder related to autism. Work with Beatriz Mizusaki (Univ of Bristol), Nazim Kourdougli, Anand Suresh, and Carlos Portera-Cailliau (Univ of California, Los Angeles).
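Generic stand-ins for the two quantities named in the abstract, trial-to-trial variability and day-to-day representational stability, might be a Fano factor and a correlation of mean population responses across days; the study's own analyses may differ. A minimal sketch:

```python
import numpy as np

def variability_and_stability(day_a, day_b):
    """day_a, day_b: (trials x neurons) spike-count responses to the same stimulus on two days.
    Returns the mean Fano factor (trial-to-trial variability) and the correlation of the
    mean population response across days (representational stability)."""
    fano = (day_a.var(axis=0) / day_a.mean(axis=0)).mean()
    stability = np.corrcoef(day_a.mean(axis=0), day_b.mean(axis=0))[0, 1]
    return fano, stability

# toy data: 50 neurons with stable mean rates, Poisson trial-to-trial noise
rng = np.random.default_rng(0)
rates = rng.uniform(1, 10, size=50)            # per-neuron mean spike counts
day_a = rng.poisson(rates, size=(200, 50))     # 200 trials x 50 neurons, day 1
day_b = rng.poisson(rates, size=(200, 50))     # same neurons, a few days later
print(variability_and_stability(day_a, day_b)) # Poisson-like variability, high stability
```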

Seminar · Neuroscience · Recording

Multilevel Causal Modeling

Moritz Grosse-Wentrup
University of Vienna
Jun 3, 2020

Complex systems can be modeled at various levels of granularity, e.g., we can model a person at the cognitive level, on the neuronal level, or down to the biochemical level. When multiple models represent the same system at different scales, we would like to be able to reason about the causal effects of interventions on each level in such a way that the models remain consistent across levels. In the first part of this talk, I consider which conditions must be fulfilled for two structural equation models (SEMs) to stand in such a causally consistent relation. In the second part of the talk, I present recent work on learning causally consistent SEMs across multiple levels, distinguishing between bottom-up (micro- to macro-level) and top-down (macro- to micro-level) approaches.
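As a toy illustration of the kind of cross-level consistency at issue (a constructed example, not the talk's formalism), one can simulate a micro-level SEM and a coarse-grained macro-level SEM and check that intervening at either level produces the same downstream effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# micro-level SEM: two neuronal variables X1, X2 cause a behavioural readout Y
def micro(do_x=None):
    # do_x intervenes on the micro constituents so that X1 + X2 = do_x
    x1 = rng.normal(size=n) if do_x is None else np.full(n, do_x / 2)
    x2 = rng.normal(size=n) if do_x is None else np.full(n, do_x / 2)
    return 0.5 * (x1 + x2) + rng.normal(scale=0.1, size=n)

# macro-level SEM: the aggregate M = X1 + X2 causes Y through the same mechanism
def macro(do_m=None):
    m = rng.normal(scale=np.sqrt(2), size=n) if do_m is None else np.full(n, do_m)
    return 0.5 * m + rng.normal(scale=0.1, size=n)

# intervening on the macro variable, or on its micro constituents so they sum to the
# same value, yields (approximately) the same effect on Y: the two SEMs are consistent
print(micro(do_x=2.0).mean(), macro(do_m=2.0).mean())   # both ~1.0
```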

Seminar · Neuroscience · Recording

Neural manifolds for the stable control of movement

Sara Solla
Northwestern University
Apr 28, 2020

Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant co-variation patterns of population activity and define a task specific low dimensional neural manifold. The time-dependent activation of the neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour need to remain stable, and use an alignment method to establish this stability. Once identified, stable latent dynamics allow for the prediction of various behavioural features via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
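The neural-mode and alignment ideas above can be illustrated with off-the-shelf tools: PCA to extract a low-dimensional manifold from population activity, and CCA to align latent dynamics recorded on two different days. This is a minimal sketch on synthetic data; the study's own pipeline may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, n_neurons, n_modes = 500, 80, 5

# synthetic latent dynamics shared across two "days", read out through different mixing matrices
latent = np.cumsum(rng.normal(size=(T, n_modes)), axis=0)
day1 = latent @ rng.normal(size=(n_modes, n_neurons)) + rng.normal(scale=0.5, size=(T, n_neurons))
day2 = latent @ rng.normal(size=(n_modes, n_neurons)) + rng.normal(scale=0.5, size=(T, n_neurons))

# neural modes: dominant covariation patterns of each day's population activity
modes1 = PCA(n_components=n_modes).fit_transform(day1)
modes2 = PCA(n_components=n_modes).fit_transform(day2)

# align the two sets of latent dynamics; high canonical correlations indicate stable dynamics
cca = CCA(n_components=n_modes).fit(modes1, modes2)
a, b = cca.transform(modes1, modes2)
canon_corrs = [np.corrcoef(a[:, i], b[:, i])[0, 1] for i in range(n_modes)]
print(np.round(canon_corrs, 2))
```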