Semantics
Reading Scenes
Dyslexias in words and numbers
Investigating semantics above and beyond language: a clinical and cognitive neuroscience approach
The ability to build, store, and manipulate semantic representations lies at the core of all our (inter)actions. Combining evidence from cognitive neuroimaging and experimental neuropsychology, I study the neurocognitive correlates of semantic knowledge in relation to other cognitive functions, chiefly language. In this talk, I will start by reviewing neuroimaging findings supporting the idea that semantic representations are encoded in distributed yet specialized cortical areas (1) and rapidly recovered (2) according to the requirements of the task at hand (3). I will then focus on studies conducted in neurodegenerative patients, offering a unique window onto the key role played by a structurally and functionally heterogeneous piece of cortex: the anterior temporal lobe (4,5). I will present pathological, neuroimaging, cognitive, and behavioral data illustrating how damage to language-related networks can affect or spare semantic knowledge, as well as possible paths to functional compensation (6,7). Time permitting, we will discuss the neurocognitive dissociation between nouns and verbs (8) and how verb production is differentially impacted by specific language impairments (9).
Bridging clinical and cognitive neuroscience to investigate semantics, above and beyond language
We will explore how neuropsychology can be leveraged to directly test cognitive neuroscience theories, using the case of frontotemporal dementias affecting the language network. Specifically, we will focus on pathological, neuroimaging, and cognitive data from primary progressive aphasia. We will see how these data can help us investigate the reading network, the organisation of semantic knowledge, and the processing of grammatical categories. Time permitting, the end of the talk will cover the temporal dynamics with which semantic dimensions are recovered and the role played by the task.
What do neurons want?
Functional segregation of rostral and caudal hippocampus in associative memory
It has long been established that the hippocampus (HC) plays a crucial role in episodic memory. In contrast to the modular approach, it is now generally assumed that, being a complex structure, the HC performs multiple interconnected functions whose hierarchical organization provides the basis for higher cognitive functions such as semantics-based encoding and retrieval. However, the "where, when, and how" of distinct memory aspects within and outside the HC are still under debate. Here we used a visual associative memory task as a probe to test the hypothesis that the rostral and caudal portions of the human hippocampus are differentially involved in memory encoding, recognition, and associative recall. In epilepsy patients implanted with stereo-EEG, we show that at retrieval the rostral HC is selectively active for recognition memory, whereas the caudal HC is selectively active for associative memory. Low-frequency desynchronization and high-frequency synchronization characterize the temporal dynamics of encoding and retrieval. We therefore describe an anatomical segregation in the hippocampal contributions to associative and recognition memory.
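The spectral signatures mentioned above (low-frequency desynchronization and high-frequency synchronization) boil down to task-versus-baseline band-power contrasts. Below is a minimal sketch of such a contrast, assuming a single sEEG contact, Welch power spectra, and illustrative frequency bands and sampling rate; it is not the study's actual analysis pipeline.

```python
# Minimal sketch (not the study's pipeline): contrasting low-frequency
# desynchronization and high-frequency synchronization at a single sEEG
# contact, using Welch power spectra for a task epoch vs. a baseline epoch.
import numpy as np
from scipy.signal import welch

fs = 1000  # sampling rate in Hz (assumed)

def band_power(signal, fs, band):
    """Average power in a frequency band (Hz) from a Welch periodogram."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    lo, hi = band
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def relative_power_change(task_epoch, baseline_epoch, band):
    """Task-vs-baseline power change; negative values = desynchronization."""
    task = band_power(task_epoch, fs, band)
    base = band_power(baseline_epoch, fs, band)
    return (task - base) / base

# Illustrative synthetic epochs (1 s each) standing in for real sEEG data.
rng = np.random.default_rng(0)
baseline = rng.standard_normal(fs)
task = rng.standard_normal(fs)

low_change = relative_power_change(task, baseline, band=(3, 8))      # e.g. theta
high_change = relative_power_change(task, baseline, band=(60, 160))  # e.g. high gamma
print(f"low-frequency change: {low_change:+.2f}, high-frequency change: {high_change:+.2f}")
```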
The attentional requirement of unconscious processing
The tight relationship between attention and conscious perception has been extensively researched in past decades. However, whether attentional modulation extends to unconscious processes remains largely unknown, particularly for abstract, high-level processing. I will talk about a recent study in which we used the Stroop paradigm to show that task load gates unconscious semantic processing. In a series of psychophysical experiments, unconscious word semantics influenced conscious task performance under the low task load condition, but not under the high task load condition. Intriguingly, with enough practice in the high task load condition, the unconscious effect reemerged. These findings suggest a competition for attentional resources between unconscious and conscious processes, challenging the automaticity account of unconscious processing.
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of the Guided Search model of visual search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. Finally, in Part 3, we will consider the internal representation of what we are searching for; what is often called “the search template”. That search template is really two templates: a guiding template (probably in working memory) and a target template (in long term memory). Put these pieces together and you have GS6.
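As a rough illustration of how the guidance sources described in Part 2 might be combined into a priority map, here is a toy sketch; the item features, weights, and "attend the peak first" deployment rule are assumptions for demonstration, not the published GS6 implementation.

```python
# Toy sketch (assumptions throughout): combining five guidance signals into a
# spatial priority map and deploying attention to items in priority order.
import numpy as np

def priority_map(top_down, bottom_up, history, reward, scene, weights):
    """Weighted sum of the five guidance sources, one value per item/location."""
    sources = np.stack([top_down, bottom_up, history, reward, scene])
    return weights @ sources

rng = np.random.default_rng(1)
n_items = 8
sources = {name: rng.random(n_items) for name in
           ["top_down", "bottom_up", "history", "reward", "scene"]}
weights = np.array([0.5, 0.2, 0.1, 0.1, 0.1])  # assumed relative weightings

prio = priority_map(*sources.values(), weights)
attended_order = np.argsort(prio)[::-1]  # attention goes to the highest-priority item first
print("deployment order (item indices):", attended_order)
```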
Analogical encodings and recodings
This talk will focus on the idea that the kind of similarity driving analogical retrieval is determined by the kind of features encoded about the source and target cue situations. Emphasis will be put on educational perspectives in order to show the influence of world semantics on learners' problem representations and solving strategies, as well as the difficulties arising from semantic incongruence between representations and strategies. Special attention will be given to the recoding of semantically incongruent representations, a crucial step that learners struggle with, in order to illustrate a promising path for going beyond informal strategies.
How do we find what we are looking for? The Guided Search 6.0 model
The talk will give a tour of Guided Search 6.0 (GS6), the latest evolution of Guided Search. Part 1 describes The Mechanics of Search. Because we cannot recognize more than a few items at a time, selective attention is used to prioritize items for processing. Selective attention to an item allows its features to be bound together into a representation that can be matched to a target template in memory or rejected as a distractor. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 msec/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid serial and parallel model. If a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Part 2 elaborates on the five sources of Guidance that are combined into a spatial “priority map” to guide the deployment of attention (hence “guided search”). These are (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g. priming), (4) reward, and (5) scene syntax and semantics. In GS6, the priority map is a dynamic attentional landscape that evolves over the course of search. In part, this is because the visual field is inhomogeneous. Part 3 describes how that inhomogeneity imposes spatial constraints on search, captured by three types of “functional visual field” (FVF): (1) a resolution FVF, (2) an FVF governing exploratory eye movements, and (3) an FVF governing covert deployments of attention. Finally, in Part 4, we will consider the internal representation of the search target: the “search template” is really two templates, a guiding template and a target template. Put these pieces together and you have GS6.
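To make the hybrid serial/parallel mechanics of Part 1 concrete, here is a toy simulation; the selection rate, recognition times, and quitting-signal threshold are invented placeholders, not parameter values from GS6.

```python
# Toy sketch (assumptions throughout, not the published GS6 code): a hybrid
# serial/parallel search loop. Items are selected serially (fast); each selected
# item then undergoes a slower, asynchronous recognition (diffusion) stage, and
# search quits when an accumulating quitting signal reaches threshold.
import heapq
import random

SELECT_EVERY_MS = 50        # assumed rate of serial selection
RECOGNIZE_MS = (150, 300)   # assumed per-item recognition (diffusion) time
QUIT_THRESHOLD = 20.0       # assumed quitting-signal threshold

def guided_search(items, target):
    """items: labels in priority order; returns (target found?, reaction time in ms)."""
    random.seed(0)
    finishing = []            # (finish_time, label) of items still being recognized
    quit_signal, time_ms = 0.0, 0
    for label in items:                       # serial selection, in priority order
        time_ms += SELECT_EVERY_MS
        finish = time_ms + random.uniform(*RECOGNIZE_MS)
        heapq.heappush(finishing, (finish, label))
        # resolve any recognitions that have already completed (asynchronously)
        while finishing and finishing[0][0] <= time_ms:
            done_time, done_label = heapq.heappop(finishing)
            if done_label == target:
                return True, done_time
            quit_signal += 1.0                # each rejected distractor feeds the quitting signal
            if quit_signal >= QUIT_THRESHOLD:
                return False, done_time
    # drain the recognitions still in flight after the last selection
    while finishing:
        done_time, done_label = heapq.heappop(finishing)
        if done_label == target:
            return True, done_time
        quit_signal += 1.0
        if quit_signal >= QUIT_THRESHOLD:
            return False, done_time
    return False, time_ms

print(guided_search(["d1", "d2", "target", "d3"], "target"))
```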
Logical Neural Networks
The work to be presented in this talk proposes a novel framework seamlessly providing key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning). Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable disentangled representation. Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning, including classical first-order logic theorem proving as a special case. The model is end-to-end differentiable, and learning minimizes a novel loss function capturing logical contradiction, yielding resilience to inconsistent knowledge. It also enables the open-world assumption by maintaining bounds on truth values which can have probabilistic semantics, yielding resilience to incomplete knowledge.
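As a toy illustration of the "every neuron is a formula component" idea, the sketch below implements a single weighted real-valued conjunction with lower/upper truth-value bounds, assuming Łukasiewicz-style semantics; it is my own minimal sketch, not the framework's actual API.

```python
# Minimal sketch (my own toy illustration, not the LNN implementation): a
# neuron standing for a weighted real-valued conjunction, with lower/upper
# bounds on truth values so unknown inputs remain representable (open world).
from dataclasses import dataclass

@dataclass
class TruthBounds:
    lower: float = 0.0   # (0.0, 1.0) means "completely unknown"
    upper: float = 1.0

def weighted_and(inputs, weights, bias=1.0):
    """Weighted real-valued AND over bounded truth values (Lukasiewicz-style).

    Each input's falsity (1 - truth) is weighted and subtracted from the bias;
    bounds propagate by evaluating the (monotone) activation at the inputs'
    lower and upper bounds.
    """
    def activation(truths):
        return min(1.0, max(0.0, bias - sum(w * (1.0 - t) for w, t in zip(weights, truths))))
    lower = activation([x.lower for x in inputs])
    upper = activation([x.upper for x in inputs])
    return TruthBounds(lower, upper)

# "A AND B" where A is known true and B is still unknown:
A = TruthBounds(1.0, 1.0)
B = TruthBounds(0.0, 1.0)
print(weighted_and([A, B], weights=[1.0, 1.0]))  # bounds stay wide: the conjunction is unknown
```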
Predicting Patterns of Similarity Among Abstract Semantic Relations
In this talk, I will present data showing that people's similarity judgments among word pairs reflect distinctions between abstract semantic relations like contrast, cause-effect, or part-whole. Further, the extent to which individual participants' similarity judgments discriminated between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models according to their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in their representation of relations: one model does not represent relations at all, a second represents relations implicitly, and a third represents relations explicitly. Of the three, the third model was the best predictor of human similarity judgments, suggesting that explicit relation representation is important to fully account for human semantic cognition.
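As a hypothetical illustration of the modeling distinction (no relation representation vs. an implicit one), the sketch below scores the similarity of two word pairs either from the word vectors alone or from their difference vectors; the random embeddings and these particular instantiations are my assumptions, not the talk's actual models.

```python
# Toy sketch (assumed instantiations, not the talk's models): predicting the
# similarity of two word PAIRS either from the word vectors alone (no relation
# representation) or from difference vectors (an implicit relation
# representation). Random vectors stand in for real word embeddings.
import numpy as np

rng = np.random.default_rng(2)
embed = {w: rng.standard_normal(50) for w in ["hot", "cold", "up", "down", "dog", "puppy"]}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pair_sim_no_relation(p1, p2):
    """Average word-to-word similarity; relations are not represented at all."""
    return (cos(embed[p1[0]], embed[p2[0]]) + cos(embed[p1[1]], embed[p2[1]])) / 2

def pair_sim_implicit_relation(p1, p2):
    """Similarity of difference vectors; the relation is represented implicitly."""
    return cos(embed[p1[0]] - embed[p1[1]], embed[p2[0]] - embed[p2[1]])

contrast_pairs = (("hot", "cold"), ("up", "down"))   # two contrast relations
mixed_pairs = (("hot", "cold"), ("dog", "puppy"))    # contrast vs. category relation
print(pair_sim_no_relation(*contrast_pairs), pair_sim_implicit_relation(*contrast_pairs))
print(pair_sim_no_relation(*mixed_pairs), pair_sim_implicit_relation(*mixed_pairs))
```

With real word embeddings rather than random vectors, only the difference-vector model has any chance of scoring the two contrast pairs as more similar to each other than to the category pair; the point of the sketch is the representational distinction, not a result.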
Schemas: events, spaces, semantics, and development
Understanding and remembering realistic experiences in our everyday lives requires activating many kinds of structured knowledge about the world, including spatial maps, temporal event scripts, and semantic relationships. My recent projects have explored the ways in which we build up this schematic knowledge (within a single experiment and across developmental timescales) and can strategically deploy it to construct event representations that we can store in memory or use to make predictions. I will describe my lab's ongoing work developing new experimental and analysis techniques for conducting functional MRI experiments using narratives, movies, poetry, virtual reality, and "memory experts" to study complex naturalistic schemas.