Latest

Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 16, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing models of neural computation that increasingly resemble biological systems both in behavior and in neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly by the choice of comparison measure, which relates specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results even when recovering a known ground-truth model in an idealized setting, leaving open what to conclude from the outcome of a systems comparison under any given methodology. Here, we develop a framework to make explicit and quantitative the effects of both hypothesis-driven aspects, such as details of the architecture of a deep neural network, and methodological choices in a systems comparison setting. Using the learning dynamics of deep neural networks, we demonstrate that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can alter and even invert the conclusions drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case, i.e., the scientific hypothesis under investigation, which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
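As a minimal sketch of the abstract's point about methodology (not material from the talk), two widely used comparison measures, linear predictivity via ridge regression and linear centered kernel alignment (CKA), can rank the same pair of candidate models differently against the same "neural" data. The data, the two candidate models, the helper names `linear_cka` and `linear_predictivity`, and the regularization strength below are all invented for the illustration.

```python
# Illustrative sketch: two common systems-comparison measures applied to the
# same synthetic "neural" data can disagree about which candidate model
# matches it better.
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between response matrices (stimuli x units)."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    cross = np.linalg.norm(Yc.T @ Xc, "fro") ** 2
    return cross / (np.linalg.norm(Xc.T @ Xc, "fro") * np.linalg.norm(Yc.T @ Yc, "fro"))

def linear_predictivity(X, Y, lam=1e-2):
    """Overall R^2 of a ridge regression from model features X to neural responses Y."""
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    W = np.linalg.solve(Xc.T @ Xc + lam * np.eye(Xc.shape[1]), Xc.T @ Yc)
    resid = Yc - Xc @ W
    return 1.0 - (resid ** 2).sum() / (Yc ** 2).sum()

rng = np.random.default_rng(0)
neural = rng.standard_normal((200, 50))                   # hypothetical recordings: 200 stimuli x 50 neurons
model_a = neural @ rng.standard_normal((50, 50))          # invertible linear mixing of the neural responses
model_b = neural + 0.5 * rng.standard_normal((200, 50))   # same responses corrupted by additive noise

for name, model in [("model_a", model_a), ("model_b", model_b)]:
    print(f"{name}: CKA = {linear_cka(model, neural):.3f}, "
          f"predictivity = {linear_predictivity(model, neural):.3f}")
```

One reason the two measures can order candidate models differently: regression-based predictivity is (nearly) invariant to any invertible linear transform of the model features, whereas linear CKA is invariant only to orthogonal transforms and isotropic scaling, so a mixed-but-lossless model and a noisy-but-aligned model trade places depending on the measure.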

Seminar · Neuroscience · Recording

Timing errors and decision making

Fuat Balci
University of Manitoba
Nov 30, 2021

Error monitoring refers to the ability to monitor one's own task performance without explicit feedback. This ability is typically studied in two-alternative forced-choice (2AFC) paradigms. Recent research has shown that humans can also keep track of the magnitude and direction of their errors in different magnitude domains (e.g., numerosity, duration, length). Based on evidence suggesting a shared mechanism for magnitude representations, we aimed to investigate whether metric error monitoring ability is commonly governed across different magnitude domains. Participants reproduced/estimated temporal, numerical, and spatial magnitudes, after which they rated their confidence in their first-order task performance and judged the direction of their reproduction/estimation errors. Participants were also tested in a 2AFC perceptual decision task and provided confidence ratings for their decisions. Results showed that variability in reproductions/estimations and metric error monitoring ability, as measured by combining confidence and error-direction judgements, were positively related across the temporal, spatial, and numerical domains. Metacognitive sensitivity in these metric domains was also positively associated across domains, but not with metacognitive sensitivity in the 2AFC perceptual decision task. In conclusion, the current findings point to a general metric error monitoring ability that is shared across metric domains, with limited generalizability to perceptual decision-making.

Seminar · Neuroscience

Inclusive Human Participant Research

Pollyanna Sheehan, Arnelle Etienne
University of Bristol, Carnegie Mellon University
Jun 23, 2021

Human participant research is in some ways both antithetical and complementary to science. On the one hand, working with human participants provides incredibly rich and complex data with ‘real-world’ ecological validity. On the other, this richness comes from the enormous number of variables that uncontrollably become intertwined with your research interest, potentially limiting the conclusions you can draw from your work. The historical over-representation of white men as research participants, coupled with often overly stringent exclusion criteria, has led to a diversity crisis in human participant research. For our research to be truly inclusive, representative, and generalisable to the rest of the population, our data must be collected from diverse individuals. This session will explore common barriers to diversity in studies with human participants and will provide guidance on how to make sure your own research is accessible and inclusive.
