Semantic Cognition
Semantic Distance and Beyond: Interacting Predictors of Verbal Analogy Performance
Prior studies of A:B::C:D verbal analogies have identified several factors that affect performance, including the semantic similarity between source and target domains (semantic distance), the semantic association between the C-term and incorrect answers (distracter salience), and the type of relation between word pairs (e.g., categorical, compositional, and causal). However, it is unclear how these stimulus properties affect performance when combined. Moreover, how do these item factors interact with individual differences such as crystallized intelligence and creative thinking? Several studies reveal interactions among these item and individual-difference factors that shape verbal analogy performance. For example, a three-way interaction showed that semantic distance and distracter salience had a greater impact on performance for compositional and causal relations than for categorical ones (Jones, Kmiecik, Irwin, & Morrison, 2022). Implications for theories of analogy and future directions are discussed.
The neural basis of flexible semantic cognition (BACN Mid-career Prize Lecture 2022)
Semantic cognition brings meaning to our world – it allows us to make sense of what we see and hear, and to produce adaptive thoughts and behaviour. Since we have a wealth of information about any given concept, our store of knowledge alone is not sufficient for successful semantic cognition; we also need mechanisms that can steer the information we retrieve so that it suits the context or our current goals. This talk traces the neural networks that underpin this flexibility in semantic cognition. It draws on evidence from multiple methods (neuropsychology, neuroimaging, neural stimulation) to show that two interacting heteromodal networks underpin different aspects of flexibility. Regions including anterior temporal cortex and left angular gyrus respond more strongly when semantic retrieval follows highly related concepts or multiple convergent cues; the multivariate responses in these regions correspond to context-dependent aspects of meaning. A second network, centred on left inferior frontal gyrus and left posterior middle temporal gyrus, is associated with controlled semantic retrieval, responding more strongly when weak associations are required or when there is more competition between concepts. This semantic control network is linked to creativity and also captures context-dependent aspects of meaning; however, it specifically shows more similar multivariate responses across trials when association strength is weak, reflecting a common controlled-retrieval state when more unusual associations are the focus. Evidence from neuropsychology, fMRI and TMS suggests that this semantic control network is distinct from multiple-demand cortex, which supports executive control across domains, although challenging semantic tasks recruit both networks. The semantic control network is juxtaposed between regions of the default mode network, which might be sufficient for the retrieval of strong semantic relationships, and multiple-demand regions in the left hemisphere. This suggests that the large-scale organisation of flexible semantic cognition can be understood in terms of cortical gradients capturing systematic functional transitions that are repeated in temporal, parietal and frontal cortex.
Reverse-Engineering the Cortical Architecture for Controlled Semantic Cognition
Predicting Patterns of Similarity Among Abstract Semantic Relations
In this talk, I will present data showing that people’s similarity judgments among word pairs reflect distinctions between abstract semantic relations such as contrast, cause-effect, and part-whole. Further, the extent to which individual participants’ similarity judgments discriminate between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models according to their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in their representation of relations: one model does not represent relations at all, a second represents relations implicitly, and a third represents relations explicitly. Of the three, the explicit-relation model was the best predictor of human similarity judgments, suggesting that explicit relation representation is needed to fully account for human semantic cognition.
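To make the distinction between the first two modeling strategies concrete, here is a minimal sketch, not the models compared in the talk: it scores the similarity of two word pairs (A:B and C:D) either without representing relations at all or by representing relations implicitly as embedding difference vectors. The embeddings, function names, and scoring choices are hypothetical placeholders, and an explicit-relation model would additionally require learned relation representations beyond this sketch.

```python
# Minimal sketch (hypothetical, not the talk's models): two ways to score the
# similarity of the relations in word pairs A:B and C:D from word embeddings.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def no_relation_similarity(a, b, c, d):
    """Baseline with no relation representation: compare individual words
    across pairs (A with C, B with D) and average the word-level similarities."""
    return 0.5 * (cosine(a, c) + cosine(b, d))

def implicit_relation_similarity(a, b, c, d):
    """Implicit relation representation: each pair is summarised by its
    difference vector (B - A, D - C), so the relation is encoded only
    indirectly in the geometry of the word embeddings."""
    return cosine(b - a, d - c)

# Toy example with random vectors standing in for real word embeddings.
rng = np.random.default_rng(0)
a, b, c, d = (rng.normal(size=50) for _ in range(4))
print(no_relation_similarity(a, b, c, d))
print(implicit_relation_similarity(a, b, c, d))
```

A model of the third kind would instead map each word pair to an explicit relation representation (e.g., a vector over learned relation features) and compare pairs in that relation space rather than in the word-embedding space.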