similarity
Spatial matching tasks for insect minds: relational similarity in bumblebees
Understanding what makes humans unique is a fundamental research drive for comparative psychologists. Cognitive abilities such as theory of mind, cooperation or mental time travel have been considered uniquely human. Despite empirical evidence showing that animals other than humans are capable (to some extent) of these cognitive achievements, findings are still heavily contested. In this context, the ability to abstract relations of similarity has also been considered one of the hallmarks of human cognition. While previous research has shown that other animals (e.g., primates) can attend to relational similarity, less is known about what invertebrates can do. In this talk, I will present a series of spatial matching tasks that were previously used with children and great apes and that I adapted for use with wild-caught bumblebees. The findings from these studies suggest striking similarities between vertebrates and invertebrates in their abilities to attend to relational similarity.
Explaining an asymmetry in similarity and difference judgments
Explicit similarity judgments tend to emphasize relational information more than do difference judgments. In this talk, I propose and test the hypothesis that this asymmetry arises because human reasoners represent the relation different as the negation of the relation same (i.e., as not-same). This proposal implies that processing difference is more cognitively demanding than processing similarity. Both for verbal comparisons between word pairs, and for visual comparisons between sets of geometric shapes, participants completed a triad task in which they selected which of two options was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard, either by virtue of featural similarity or by virtue of relational similarity. On ambiguous trials, one option was more featurally similar (but less relationally similar) to the standard, whereas the other was more relationally similar (but less featurally similar). Given the higher cognitive complexity of assessing relational similarity, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty accurately detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. These results provide convergent evidence for a representational asymmetry between the relations same and different.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual "place cells" fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Implications of Vector-space models of Relational Concepts
Vector-space models are frequently used to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models apply to relational concepts at all? If we use such a model, then one implication is that maximizing surface feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students’ focus on superficial features of the source and target exemplars. However, in contrast to this prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to perceptually different targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data supporting the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of the stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgments, but it also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a group discussion on the utility of vector-space models for relational concepts in general.
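To make the vector-space framing concrete, here is a minimal toy sketch: relational similarity between word pairs is computed as the cosine between embedding-difference vectors, one common vector-space encoding of relations. The 3-d "embeddings" are hand-made for illustration and are not taken from any model used in the talk.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def relation_vector(pair):
    """One common vector-space encoding of a relation: the difference
    between the two concept vectors (as in word-embedding analogies)."""
    a, b = pair
    return [x - y for x, y in zip(a, b)]

# Toy, hand-made 3-d "embeddings" (purely illustrative, not trained).
dog, puppy = [1.0, 0.2, 0.0], [1.0, 0.9, 0.0]
cat, kitten = [0.0, 0.2, 1.0], [0.0, 0.9, 1.0]

# Relational similarity: compare the pair-difference vectors.
r1 = relation_vector((dog, puppy))
r2 = relation_vector((cat, kitten))
print(cosine(r1, r2))  # parallel offsets -> high relational similarity
```

In this encoding, surface (entity-level) similarity and relational similarity are separable quantities, which is what makes the predictions about surface-feature variation testable.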
Roots of Analogy
Can nonhuman animals perceive the relation-between-relations? This intriguing question has been studied over the last 40 years; nonetheless, the extent to which nonhuman species can do so remains controversial. Here, I review empirical evidence suggesting that pigeons, parrots, crows, and baboons join humans in reliably acquiring and transferring relational matching-to-sample (RMTS). Many theorists consider that RMTS captures the essence of analogy, because basic to analogy is appreciating the ‘relation between relations.’ Factors affecting RMTS performance include: prior training experience, the entropy of the sample stimulus, and whether the items that serve as sample stimuli can also serve as choice stimuli.
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
Semantic Distance and Beyond: Interacting Predictors of Verbal Analogy Performance
Prior studies of A:B::C:D verbal analogies have identified several factors that affect performance, including the semantic similarity between source and target domains (semantic distance), the semantic association between the C-term and incorrect answers (distracter salience), and the type of relations between word pairs (e.g., categorical, compositional, and causal). However, it is unclear how these stimulus properties affect performance when utilized together. Moreover, how do these item factors interact with individual differences such as crystallized intelligence and creative thinking? Several studies reveal interactions among these item and individual difference factors impacting verbal analogy performance. For example, a three-way interaction demonstrated that the effects of semantic distance and distracter salience had a greater impact on performance for compositional and causal relations than for categorical ones (Jones, Kmiecik, Irwin, & Morrison, 2022). Implications for analogy theories and future directions are discussed.
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks on complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including syntactic, social, pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings and to gain further insights about them. However, just having a list of these cues is not useful: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Multi-modal biomarkers improve prediction of memory function in cognitively unimpaired older adults
Identifying biomarkers that predict current and future cognition may improve estimates of Alzheimer’s disease risk among cognitively unimpaired older adults (CU). In vivo measures of amyloid and tau protein burden and task-based functional MRI measures of core memory mechanisms, such as the strength of cortical reinstatement during remembering, have each been linked to individual differences in memory in CU. This study assesses whether combining CSF biomarkers with fMRI indices of cortical reinstatement improves estimation of memory function in CU, assayed using three unique tests of hippocampal-dependent memory. Participants were 158 CU (90F, aged 60-88 years, CDR=0) enrolled in the Stanford Aging and Memory Study (SAMS). Cortical reinstatement was quantified using multivoxel pattern analysis of fMRI data collected during completion of a paired associate cued recall task. Memory was assayed by associative cued recall, a delayed recall composite, and a mnemonic discrimination task that involved discrimination between studied ‘target’ objects, novel ‘foil’ objects, and perceptually similar ‘lure’ objects. CSF Aβ42, Aβ40, and p-tau181 were measured with the automated Lumipulse G system (N=115). Regression analyses examined cross-sectional relationships between memory performance in each task and a) the strength of cortical reinstatement in the Default Network (comprised of posterior medial, medial frontal, and lateral parietal regions) during associative cued recall and b) CSF Aβ42/Aβ40 and p-tau181, controlling for age, sex, and education. For mnemonic discrimination, linear mixed effects models were used to examine the relationship between discrimination (d’) and each predictor as a function of target-lure similarity. Stronger cortical reinstatement was associated with better performance across all three memory assays. 
Age and higher CSF p-tau181 were each associated with poorer associative memory and a diminished improvement in mnemonic discrimination as target-lure similarity decreased. When combined in a single model, CSF p-tau181 and Default Network reinstatement strength, but not age, explained unique variance in associative memory and mnemonic discrimination performance, outperforming the single-modality models. Combining fMRI measures of core memory functions with protein biomarkers of Alzheimer’s disease significantly improved prediction of individual differences in memory performance in CU. Leveraging multimodal biomarkers may enhance future prediction of risk for cognitive decline.
Analogical Reasoning with Neuro-Symbolic AI
Knowledge discovery with computers requires a huge amount of search, and analogical reasoning makes such discovery efficient. We therefore propose analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI combines Symbolic AI with artificial neural networks, offering representations that are easy for humans to interpret and robust against data ambiguity and errors. We have implemented analogical reasoning systems with Neuro-Symbolic AI models using word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted previously unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
Interpersonal synchrony of body/brain, Solo & Team Flow
Flow is defined as an altered state of consciousness with intense attention and an enormous sense of pleasure when engaged in a challenging task, first postulated by the late psychologist M. Csikszentmihalyi. The main focus of this talk will be “Team Flow,” but two lines of previous studies in our laboratory form its background. The first is inter-body and inter-brain coordination/synchrony between individuals. Considering the various rhythmic echoing/synchronization phenomena in animal behavior, this could be regarded as the biological, sub-symbolic and implicit origin of social interactions. The second line of precursor research is on the state of Solo Flow in game playing. We employed attenuation of the AEP (Auditory Evoked Potential) to task-irrelevant sound probes as an objective neural indicator of such a Flow state, and found that: 1) the mutual link between the ACC and the TP is critical, and 2) overall, top-down influence is enhanced while bottom-up causality is attenuated. With this background, I will present our latest study of Team Flow in game playing. We found that: 3) the neural correlates of Team Flow are distinctly different from those of Solo Flow and of the non-flow social state, 4) the left medial temporal cortex seems to form an integrative node for Team Flow, receiving input related to the Solo Flow state from the right PFC and input related to the social state from the right IFC, and 5) intra-brain (dis)similarity of brain activity predicts well the (dis)similarity of skills/cognition as well as the affinity for inter-brain coherence.
NMC4 Short Talk: Rank similarity filters for computationally-efficient machine learning on high dimensional data
Real world datasets commonly contain nonlinearly separable classes, requiring nonlinear classifiers. However, these classifiers are less computationally efficient than their linear counterparts, and this inefficiency wastes energy, resources and time. Inspired by the efficiency of the brain, we created a novel type of computationally efficient Artificial Neural Network (ANN) called Rank Similarity Filters. They can be used both to transform and to classify nonlinearly separable datasets with many datapoints and dimensions. The weights of the filters are set using the rank orders of features in a datapoint, or optionally the 'confusion'-adjusted ranks between features (determined from their distributions in the dataset). The activation strength of a filter determines its similarity to other points in the dataset, a measure based on cosine similarity. The activation of many Rank Similarity Filters transforms samples into a new nonlinear space suitable for linear classification (the Rank Similarity Transform, RST). We additionally used this method to create the nonlinear Rank Similarity Classifier (RSC), a fast and accurate multiclass classifier, and the nonlinear Rank Similarity Probabilistic Classifier (RSPC), an extension to the multilabel case. We evaluated the classifiers on multiple datasets: RSC is competitive with existing classifiers but has superior computational efficiency. Code for RST, RSC and RSPC is open source and was written in Python using the popular scikit-learn framework to make it easily accessible (https://github.com/KatharineShapcott/rank-similarity). In future extensions the algorithm can be applied to hardware suited to the parallelization of an ANN (GPUs) or a Spiking Neural Network (neuromorphic computing), with corresponding performance gains. This makes Rank Similarity Filters a promising biologically inspired solution to the problem of efficiently analyzing nonlinearly separable data.
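The rank-then-similarity idea can be illustrated with a minimal sketch (this is not the implementation in the linked repository): rank-transform each sample's features, then compare the resulting rank vectors by cosine similarity, so that samples with the same feature ordering look identical regardless of raw magnitudes.

```python
import numpy as np

def rank_transform(X):
    """Replace each feature value by its rank within the sample
    (argsort of argsort yields 0..d-1 ranks per row)."""
    return np.argsort(np.argsort(X, axis=1), axis=1).astype(float)

def rank_similarity(x, w):
    """Cosine similarity between a rank-transformed sample and a
    rank-valued filter weight vector."""
    return float(x @ w / (np.linalg.norm(x) * np.linalg.norm(w)))

# Two samples whose raw values differ greatly but whose feature *order* matches.
X = np.array([[0.1, 5.0, 2.0],
              [1.0, 90.0, 40.0]])
R = rank_transform(X)
print(rank_similarity(R[0], R[1]))  # identical rank order -> 1.0
```

The invariance to feature scale shown here is what lets a rank-based filter respond to the pattern of a datapoint rather than its magnitudes.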
NMC4 Short Talk: Untangling Contributions of Distinct Features of Images to Object Processing in Inferotemporal Cortex
How do humans perceive everyday objects with varied features, and how do we form these seemingly intuitive and effortless categorical representations? Prior literature focusing on the role of the inferotemporal region (IT) has revealed object category clustering consistent with a predefined semantic structure (superordinate, basic, subordinate). It has, however, been debated whether the neural signals in IT regions reflect such a categorical hierarchy [Wen et al., 2018; Bracci et al., 2017]. Visual attributes of images that correlate with semantic and category dimensions may have confounded these prior results. Our study aimed to address this debate by building and comparing models using the DNN AlexNet to explain the variance in the representational dissimilarity matrix (RDM) of neural signals in the IT region. We found that mid- and high-level perceptual attributes of the DNN model contribute the most to neural RDMs in the IT region. Semantic categories, as in the predefined structure, were moderately correlated with mid to high DNN layers (r = 0.24 to 0.36). Variance partitioning analysis also showed that IT neural representations were mostly explained by DNN layers, while semantic categorical RDMs added little additional information. In light of these results, we propose that future work should focus more on the specific role IT plays in facilitating the extraction and coding of visual features that lead to the emergence of categorical conceptualizations.
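For readers unfamiliar with the RDM machinery, a minimal RSA-style sketch looks like this (the "neural" and "DNN-layer" responses below are randomly generated stand-ins with no relation to the study's data):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of stimuli (rows)."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (a standard RSA score)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(6, 20))             # 6 stimuli, 20 latent features
neural = stimuli @ rng.normal(size=(20, 50))   # simulated "IT" responses, 50 units
model = stimuli @ rng.normal(size=(20, 40))    # simulated DNN-layer responses, 40 units
print(rdm_similarity(rdm(neural), rdm(model))) # shared latent structure -> high score
```

Variance partitioning, as used in the study, extends this idea by regressing the neural RDM on several candidate RDMs at once to separate their unique and shared contributions.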
Achieving Abstraction: Early Competence & the Role of the Learning Context
Children's emerging ability to acquire and apply relational same-different concepts is often cited as a defining feature of human cognition, providing the foundation for abstract thought. Yet young learners often struggle to ignore irrelevant surface features and attend to structural similarity instead. I will argue that young children have, and retain, genuine relational concepts from a young age, but tend to neglect abstract similarity due to a learned bias to attend to objects and their properties. Critically, this account predicts that differences in the structure of children's environmental input should lead to differences in the type of hypotheses they privilege and apply. I will review empirical support for this proposal that has (1) evaluated the robustness of early competence in relational reasoning, (2) identified cross-cultural differences in relational and object bias, and (3) provided evidence that contextual factors play a causal role in relational reasoning. Together, these studies suggest that the development of abstract thought may be more malleable and context-sensitive than initially believed.
A brain circuit for curiosity
Motivational drives are internal states that can differ even across similar interactions with external stimuli. Curiosity, the motivational drive for novelty seeking and investigating the surrounding environment, is as essential and intrinsic to survival as hunger. Curiosity, hunger, and appetitive aggression drive three different goal-directed behaviors—novelty seeking, food eating, and hunting—but these behaviors are composed of similar actions in animals. This similarity of actions has made it challenging to study novelty seeking and to distinguish it from eating and hunting in nonarticulating animals. The brain mechanisms underlying this basic survival drive, curiosity, and novelty-seeking behavior have remained unclear. Despite well-developed techniques for studying mouse brain circuits, the field of motivational behavior contains many controversial and conflicting results, leaving the functions of motivational brain regions such as the zona incerta (ZI) uncertain. The lack of a transparent, nonreinforced, and easily replicable paradigm is one of the main causes of this uncertainty. We therefore chose a simple solution for our research: giving the mouse the freedom to choose what it wants, via a double free-access choice design. By examining mice in an experimental battery of object free-access double-choice (FADC) and social interaction tests—using optogenetics, chemogenetics, calcium fiber photometry, multichannel recording electrophysiology, and multicolor mRNA in situ hybridization—we uncovered a cell type–specific cortico-subcortical brain circuit for curiosity and novelty-seeking behavior. We found in mice that inhibitory neurons in the medial ZI (ZIm) are essential for the decision to investigate an object or a conspecific. These neurons receive excitatory input from the prelimbic cortex that signals the initiation of exploration. This signal is modulated in the ZIm by the level of investigatory motivation.
Increased activity in the ZIm instigates deep investigative action by inhibiting the periaqueductal gray region. A subpopulation of inhibitory ZIm neurons expressing tachykinin 1 (TAC1) modulates the investigatory behavior.
Analogical encodings and recodings
This talk will focus on the idea that the kind of similarity driving analogical retrieval is determined by the kind of features encoded regarding the source and the target cue situations. Emphasis will be put on educational perspectives in order to show the influence of world semantics on learners’ problem representations and solving strategies, as well as the difficulties arising from semantic incongruence between representations and strategies. Special attention will be given to the recoding of semantically incongruent representations, a crucial step that learners struggle with, in order to illustrate a promising path for going beyond informal strategies.
Zero-shot visual reasoning with probabilistic analogical mapping
There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
Probabilistic Analogical Mapping with Semantic Relation Networks
Hongjing Lu will present a new computational model of Probabilistic Analogical Mapping (PAM, developed in collaboration with Nick Ichien and Keith Holyoak) that finds systematic correspondences between inputs generated by machine learning. The model adopts a Bayesian framework for probabilistic graph matching, operating on semantic relation networks constructed from distributed representations of individual concepts (word embeddings created by Word2vec) and of relations between concepts (created by our BART model). We have used PAM to simulate a broad range of phenomena involving analogical mapping by both adults and children. Our approach demonstrates that human-like analogical mapping can emerge from comparison mechanisms applied to rich semantic representations of individual concepts and relations. More details can be found at https://arxiv.org/ftp/arxiv/papers/2103/2103.16704.pdf
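The core idea, finding the node correspondence that best preserves relations between two semantic networks, can be sketched in miniature. The toy below scores every candidate mapping by the summed cosine similarity of embedding-difference vectors (a crude stand-in for BART's learned relation representations) and picks the best by brute force; PAM itself uses Bayesian probabilistic graph matching rather than exhaustive search, and the 2-d "embeddings" here are invented purely for illustration.

```python
from itertools import permutations
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def edge_vec(emb, a, b):
    """Relation vector for an ordered pair of concepts: the embedding
    difference (stand-in for a learned relation representation)."""
    return [x - y for x, y in zip(emb[a], emb[b])]

def best_mapping(source, target, emb):
    """Brute-force graph matching: choose the node correspondence that
    maximizes summed relational similarity over all ordered pairs."""
    n = len(source)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    def score(m):
        return sum(cosine(edge_vec(emb, source[i], source[j]),
                          edge_vec(emb, target[m[i]], target[m[j]]))
                   for i, j in pairs)
    return max(permutations(range(len(target))), key=score)

# Toy 2-d "embeddings" (illustrative only).
emb = {"sun": [1.0, 1.0], "planet": [0.2, 0.8],
       "nucleus": [1.0, 0.0], "electron": [0.2, -0.2]}
m = best_mapping(["sun", "planet"], ["nucleus", "electron"], emb)
print(m)  # (0, 1): sun -> nucleus, planet -> electron
```

Note that the mapping is driven entirely by the relation (difference) vectors, not by which individual concepts are most similar, which is the essence of analogical as opposed to featural matching.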
A role for cognitive maps in metaphors and analogy?
In human and non-human animals, conceptual knowledge is partially organized according to low-dimensional geometries that rely on brain structures and computations involved in spatial representations. Recently, two separate lines of research have investigated cognitive maps, which are associated with the hippocampal formation and resemble world-centered representations of the environment, and image spaces, which are associated with the parietal cortex and resemble self-centered spatial relationships. I will suggest that cognitive maps and image spaces may be two manifestations of a more general propensity of the mind to create low-dimensional internal models, and that they may play a role in analogical reasoning and metaphorical thinking. Finally, I will show some data suggesting that the metaphorical relationship between colors and emotions can be accounted for by the structural alignment of low-dimensional conceptual spaces.
A Connectionist Account of Analogy-Making
Analogy-making is considered to be one of the cognitive processes that is hard to account for in connectionist terms. A number of models have been proposed, but they are either tailored to specific analogical tasks or require complicated mechanisms that don’t fit into the mainstream connectionist modelling paradigm. In this talk I will present a new connectionist account of analogy-making based on the vector approach to representing symbols (VARS). This approach allows representing relational structures of varying complexity by numeric vectors with fixed dimensionality. I will also present a simple and computationally efficient mechanism for aligning VARS representations, which integrates both semantic similarity and structural constraints. The results of a series of simulations will demonstrate that VARS can account for basic analogical phenomena.
Abstract Semantic Relations in Mind, Brain, and Machines
Abstract semantic relations (e.g., category membership, part-whole, antonymy, cause-effect) are central to human intelligence, underlying the distinctively human ability to reason by analogy. I will describe a computational project, BART (Bayesian Analogy with Relational Transformations), that aims to extract explicit representations of abstract semantic relations from non-relational inputs automatically generated by machine learning. BART’s representations predict patterns of typicality and similarity for semantic relations, as well as the similarity of neural signals triggered by semantic relations during analogical reasoning. In this approach, analogy emerges from the ability to learn and compare relations; mapping emerges later from the ability to compare patterns of relations.
Is Rule Learning Like Analogy?
Humans’ ability to perceive and abstract relational structure is fundamental to our learning. It allows us to acquire knowledge ranging from linguistic grammar to spatial knowledge to social structures. How does a learner begin to perceive structure in the world? Why do we sometimes fail to see structural commonalities across events? To begin to answer these questions, I attempt to bridge two large, yet somewhat separate, research traditions in understanding humans’ structural abstraction: rule learning (Marcus et al., 1999) and analogical learning (Gentner, 1989). On the one hand, rule learning research has shown humans’ domain-general ability, and ease, in abstracting structure from limited experience, as early as 7 months of age. On the other hand, analogical learning research has shown robust constraints on structural abstraction: young learners prefer object similarity over relational similarity. To understand this seeming paradox between ease and difficulty, we conducted a series of studies using the classic rule learning paradigm (Marcus et al., 1999) but with an analogical (object vs. relation) twist. Adults were presented with 2-minute streams of sentences or events (syllables or shapes) containing a rule. At test, they had to choose between rule abstraction and object matches, i.e., the same syllable or shape they saw before. Surprisingly, while in the absence of object matches adults were perfectly capable of abstracting the rule, their ability to do so declined sharply when object matches were present. Our initial results suggest that rule learning may be subject to the usual constraints and signatures of analogical learning: a preference for object similarity can dampen rule generalization. Human abstraction, it seems, is concrete at the same time.
Predicting Patterns of Similarity Among Abstract Semantic Relations
In this talk, I will present data showing that people’s similarity judgments among word pairs reflect distinctions between abstract semantic relations like contrast, cause-effect, or part-whole. Further, the extent to which individual participants’ similarity judgments discriminate between abstract semantic relations was linearly associated with both fluid and crystallized verbal intelligence, albeit more strongly with fluid intelligence. Finally, I will compare three models according to their ability to predict these similarity judgments. All models take as input vector representations of individual word meanings, but they differ in their representation of relations: one model does not represent relations at all, a second represents relations implicitly, and a third represents relations explicitly. Across the three, the model with explicit relation representations was the best predictor of human similarity judgments, suggesting that explicit relation representation is needed to fully account for human semantic cognition.
Identifying task-specific dynamics in recurrent neural networks using Dynamical Similarity Analysis
Bernstein Conference 2024
Representational dissimilarity metric spaces for stochastic neural networks
COSYNE 2023
Implicit generative models using Kernel Similarity Matching
COSYNE 2025
Spectral analysis of representational similarity with limited neurons
COSYNE 2025
What Representational Similarity Measures Imply about Decodable Information
COSYNE 2025
How cognitive control is represented in the brain: an EEG Representational Similarity Analysis study
Assessing similarity in functional brain connectivity patterns of parent-child dyads during mentalizing
FENS Forum 2024
Observer-agent kinematic similarity modulates neural activity in regions of the action observation network
FENS Forum 2024
Reflections of minds? Exploring behavioral and neural similarity in friend dyads
FENS Forum 2024