ePoster
Distinct neural patterns during categorization learning reflect a switch between strategies
Rebekka Heinen and 3 co-authors
FENS Forum 2024
Messe Wien Exhibition & Congress Center, Vienna, Austria
Presentation
Date TBA
Abstract
When we look at a dolphin, we see grey skin, a long body, and fins – visual features typical of fish. Yet we know that dolphins are mammals. How does the brain learn such exceptions? To address this question, we conducted a functional magnetic resonance imaging (fMRI) study using a categorization learning task. Across five blocks, participants learned to sort stimuli into two categories based on their color compositions while undergoing 3 Tesla fMRI scanning. Each category also contained stimuli that shared most of their features with the other category (exceptions). We fit two learning models, a prototype model and an exemplar model, to participants' learning behavior. From block three onward, the exemplar model outperformed the prototype model, putatively because of the exceptions. Next, we derived participant-specific stimulus similarities from the two models (similarity to the category prototypes vs. similarity to all exemplars). Using a whole-brain searchlight approach with representational similarity analysis (RSA), we tested whether these model-based similarities matched the corresponding neural similarity patterns. Interestingly, the two models matched representational patterns in distinct sets of brain regions: the prototype model corresponded to three frontal and temporal clusters (posterior cingulate, inferior frontal gyrus, middle temporal gyrus), while the exemplar model matched neural similarity patterns in visual areas (lateral occipital cortex, precuneus, lingual gyrus). Our results suggest that exemplar processing engages visual areas, whereas abstract prototype representations rely on frontotemporal regions.
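The abstract does not give the model equations, but the prototype-vs.-exemplar contrast can be sketched with a common formalization (Shepard-style exponential similarity over feature distances, as in generalized context models). The function names, the distance metric, and the sensitivity parameter `c` below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def similarity(x, y, c=1.0):
    # Assumed similarity kernel: exponential decay with city-block distance
    return np.exp(-c * np.abs(np.asarray(x) - np.asarray(y)).sum())

def prototype_similarity(stimulus, category_exemplars, c=1.0):
    # Prototype model: compare the stimulus to the category's average
    # feature vector (its prototype)
    prototype = np.asarray(category_exemplars).mean(axis=0)
    return similarity(stimulus, prototype, c)

def exemplar_similarity(stimulus, category_exemplars, c=1.0):
    # Exemplar model: sum similarity to every stored category member,
    # so a single matching exception can dominate the score
    return float(sum(similarity(stimulus, ex, c) for ex in category_exemplars))
```

Per-stimulus scores like these, computed for each participant, could then serve as the model-based similarity structure that RSA compares against neural pattern similarity. The key behavioral difference is that an exception stimulus stays dissimilar to the opposing prototype yet contributes strongly to the exemplar sum of its own category.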