generalizability
Use case determines the validity of neural systems comparisons
Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems at the level of both behavior and neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of hypothesis-driven aspects—such as details of the architecture of a deep neural network—and of methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.
The organization of neural representations for control
Cognitive control allows us to think and behave flexibly based on our context and goals. Most theories of cognitive control propose a control representation that enables the same input to produce different outputs contingent on contextual factors. In this talk, I will focus on an important property of the control representation's neural code: its representational dimensionality. Dimensionality of a neural representation balances a basic separability/generalizability trade-off in neural computation. This trade-off has important implications for cognitive control. In this talk, I will present initial evidence from fMRI and EEG showing that task representations in the human brain leverage both ends of this trade-off during flexible behavior.
Timing errors and decision making
Error monitoring refers to the ability to monitor one's own task performance without explicit feedback. This ability is typically studied in two-alternative forced-choice (2AFC) paradigms. Recent research has shown that humans can also keep track of the magnitude and direction of errors in different magnitude domains (e.g., numerosity, duration, length). Based on evidence suggesting a shared mechanism for magnitude representations, we aimed to investigate whether metric error monitoring ability is commonly governed across different magnitude domains. Participants reproduced/estimated temporal, numerical, and spatial magnitudes, after which they rated their confidence regarding first-order task performance and judged the direction of their reproduction/estimation errors. Participants were also tested in a 2AFC perceptual decision task and provided confidence ratings regarding their decisions. Results showed that variability in reproductions/estimations and metric error monitoring ability, as measured by combining confidence and error direction judgements, were positively related across temporal, spatial, and numerical domains. Metacognitive sensitivity in these metric domains was also positively associated with each other, but not with metacognitive sensitivity in the 2AFC perceptual decision task. In conclusion, the current findings point to a general metric error monitoring ability that is shared across different metric domains, with limited generalizability to perceptual decision-making.
Inclusive Human Participant Research
Human participant research is somehow both antithetical and complementary to science. On the one hand, working with human participants provides incredibly rich and complex data with 'real-world' ecological validity. On the other, this richness is due to the incredible number of variables which uncontrollably become intertwined with your research interest, potentially limiting the conclusions you can draw from your work. Historical over-representation of white men as research participants, coupled with often overly stringent exclusion criteria, has led to a diversity crisis in human participant research. For our research to be truly inclusive, representative and generalisable to the rest of the population, our data must be collected from diverse individuals. This session will explore common barriers to diversity in studies with human participants, and will provide guidance on how to make sure your own research is accessible and inclusive.
One Instructional Sequence Fits All? A Conceptual Analysis of the Applicability of Concreteness Fading
According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.