ePoster

Interpretable “component-encoding” models for multi-experiment integration

David Skrill, Samuel Norman-Haignere
COSYNE 2025 (2025)
Montreal, Canada




Abstract

A central goal of sensory neuroscience is to build parsimonious computational models that can both predict neural responses to natural stimuli and reveal interpretable functional organization in the brain. Statistical “component” models can learn interpretable, low-dimensional structure across different brain regions and subjects, but lack an explicit “encoding model” that links these components to the stimuli that drive them, and thus cannot generate predictions for new stimuli or generalize across different experiments. The predictive power of sensory encoding models has improved substantially with advances in deep neural network (DNN) modeling, but producing simple and generalizable insights from these models is challenging. To overcome these limitations, we develop “component-encoding models” (CEMs), which approximate neural responses as a weighted sum of a small number of component response dimensions, each approximated by an encoding model. We show in simulations and fMRI data that our CEM framework can infer a small number of interpretable response dimensions across different experiments with non-overlapping stimuli and subjects (unlike standard components) while maintaining or improving the prediction accuracy of standard encoding models.
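To illustrate the core idea, the sketch below fits a linear CEM on simulated data: voxel responses are approximated as stimulus features passed through a small number of component encoding models, then mixed by per-voxel weights. This is a minimal sketch using reduced-rank regression as the fitting procedure; the dimensions, noise level, and fitting method are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (assumptions, not the paper's setup):
# X: stimulus features (n_stimuli x n_features), e.g. from a DNN layer
# Y: measured responses (n_stimuli x n_voxels)
n_stim, n_feat, n_vox, K = 200, 50, 300, 3
X = rng.standard_normal((n_stim, n_feat))
B_true = rng.standard_normal((n_feat, K))  # feature -> component encoding
W_true = rng.standard_normal((K, n_vox))   # component -> voxel weights
Y = X @ B_true @ W_true + 0.1 * rng.standard_normal((n_stim, n_vox))

# One simple way to fit a linear CEM: ordinary least squares,
# then truncate the fitted predictions to rank K (reduced-rank regression).
B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)       # (n_feat x n_vox)
U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
V_k = Vt[:K].T                                       # top-K response dimensions
B = B_ols @ V_k                                      # component encoding model (n_feat x K)
W = V_k.T                                            # per-voxel component weights (K x n_vox)

# Predictions for new stimuli factor through the K components:
components = X @ B        # component responses (n_stimuli x K)
Y_hat = components @ W    # predicted voxel responses
r2 = 1 - np.sum((Y - Y_hat) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
```

Because predictions factor through the shared component space, the same `B` can in principle be applied to stimuli from a different experiment, with only the voxel weights `W` refit per subject.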

Unique ID: cosyne-25/interpretable-component-encoding-61a55a12