Inferences
Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings
Brain networks exist within the confines of resource limitations. As a result, a brain network must meet the metabolic costs of growing and sustaining itself within its physical space while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs, similar to primate cerebral cortices, naturally converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically efficient mixed-selective code. As all of these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs can serve as model systems that bridge structural and functional research communities and thereby move neuroscientific understanding forward.
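The core constraint can be illustrated with a minimal sketch: a standard recurrent network whose recurrent weights pay an L1 wiring cost scaled by the Euclidean distance between randomly placed 3D unit positions, trained jointly with the task loss. This is not the authors' implementation; the class name, layer sizes, penalty weight, and the random stand-in task below are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): an RNN whose recurrent connections
# incur a wiring cost proportional to the 3D distance between units.
import torch
import torch.nn as nn

class SpatialRNN(nn.Module):
    def __init__(self, n_in=10, n_hidden=100, n_out=4):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)
        # Assign each hidden unit a fixed random position in 3D space.
        coords = torch.rand(n_hidden, 3)
        self.register_buffer("dist", torch.cdist(coords, coords))  # pairwise distances

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])          # decision from the last time step

    def wiring_cost(self):
        # L1 penalty on recurrent weights, scaled by connection length:
        # long-range connections cost more, pushing toward sparse, local wiring.
        w = self.rnn.weight_hh_l0
        return (w.abs() * self.dist).mean()

model = SpatialRNN()
x = torch.randn(8, 20, 10)                     # batch of 8 sequences, 20 time steps
target = torch.randint(0, 4, (8,))             # placeholder task labels
loss = nn.functional.cross_entropy(model(x), target) + 0.1 * model.wiring_cost()
loss.backward()                                # task loss and wiring cost trained jointly
```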
A predictive-processing account of psychosis
There has been increasing interest in the neurocomputational mechanisms underlying psychotic disorders in recent years. One promising approach is based on the theoretical framework of predictive processing, which proposes that inferences regarding the state of the world are made by combining prior beliefs with sensory signals. Delusions and hallucinations are the core symptoms of psychosis and often co-occur. Yet, different predictive-processing alterations have been proposed for these two symptom dimensions, according to which the relative weighting of prior beliefs in perceptual inference is decreased or increased, respectively. I will present recent behavioural, neuroimaging, and computational work that investigated perceptual decision-making under uncertainty and ambiguity to elucidate the changes in predictive processing that may give rise to psychotic experiences. Based on the empirical findings presented, I will provide a more nuanced predictive-processing account that suggests a common mechanism for delusions and hallucinations at low levels of the predictive-processing hierarchy, but still has the potential to reconcile apparently contradictory findings in the literature. This account may help to understand the heterogeneity of psychotic phenomenology and explain changes in symptomatology over time.
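The "relative weighting of prior beliefs" can be made concrete with a textbook Gaussian example (a sketch for exposition, not the computational models used in this work): the inferred percept is a precision-weighted average of the prior expectation and the sensory signal, so raising or lowering the prior's precision pulls the percept toward or away from expectations.

```python
# Illustrative sketch: precision-weighted fusion of a prior belief and a
# sensory signal for a one-dimensional Gaussian percept.
import numpy as np

def posterior(prior_mean, prior_prec, sensory_obs, sensory_prec):
    """Combine prior and likelihood; each is weighted by its precision."""
    w_prior = prior_prec / (prior_prec + sensory_prec)
    post_mean = w_prior * prior_mean + (1 - w_prior) * sensory_obs
    post_prec = prior_prec + sensory_prec
    return post_mean, post_prec

# Same ambiguous input, different prior weighting:
print(posterior(prior_mean=0.0, prior_prec=4.0, sensory_obs=1.0, sensory_prec=1.0))
# strong prior -> percept pulled toward expectation (mean 0.2)
print(posterior(prior_mean=0.0, prior_prec=0.25, sensory_obs=1.0, sensory_prec=1.0))
# weak prior -> percept dominated by the sensory signal (mean 0.8)
```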
Children’s inference of verb meanings: Inductive, analogical and abductive inference
Children need inference in order to learn the meanings of words. They must infer the referent from the situation in which a target word is said. Furthermore, to be able to use the word in other situations, they also need to infer what other referents the word can be generalized to. As verbs refer to relations between arguments, verb learning requires relational analogical inference, which is challenging for young children. To overcome this difficulty, young children recruit a diverse range of cues in their inference of verb meanings, including, but not limited to, syntactic, social, pragmatic, and statistical cues. They also utilize perceptual similarity (object similarity) in progressive alignment to extract relational verb meanings. However, just having a list of these cues is not enough: the cues must be selected, combined, and coordinated to produce the optimal interpretation in a particular context. This process involves abductive reasoning, similar to what scientists do to form hypotheses from a range of facts or evidence. In this talk, I discuss how children use a chain of inferences to learn the meanings of verbs. I consider not only the process of analogical mapping and progressive alignment, but also how children use abductive inference to find the source of analogy and gain insights into the general principles underlying verb learning. I also present recent findings from my laboratory showing that prelinguistic human infants use a rudimentary form of abductive reasoning, which enables the first step of word learning.
Beyond the binding problem: From basic affordances to symbolic thought
Human cognitive abilities seem qualitatively different from the cognitive abilities of other primates, a difference Penn, Holyoak, and Povinelli (2008) attribute to role-based relational reasoning—inferences and generalizations based on the relational roles to which objects (and other relations) are bound, rather than just the features of the objects themselves. Role-based relational reasoning depends on the ability to dynamically bind arguments to relational roles. But dynamic binding cannot be sufficient for relational thinking: Some non-human animals solve the dynamic binding problem, at least in some domains; and many non-human species generalize affordances to completely novel objects and scenes, a kind of universal generalization that likely depends on dynamic binding. If they can solve the dynamic binding problem, then why can they not reason about relations? What are they missing? I will present simulations with the LISA model of analogical reasoning (Hummel & Holyoak, 1997, 2003) suggesting that the missing pieces are multi-role integration (the capacity to combine multiple role bindings into complete relations) and structure mapping (the capacity to map different systems of role bindings onto one another). When LISA is deprived of either of these capacities, it can still generalize affordances universally, but it cannot reason symbolically; granted both abilities, LISA enjoys the full power of relational (symbolic) thought. I speculate that one reason it may have taken relational reasoning so long to evolve is that it required evolution to solve both problems simultaneously, since neither multi-role integration nor structure mapping appears to confer any adaptive advantage over simple role binding on its own.
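To make the distinction concrete, here is a toy illustration of role-filler binding and multi-role integration using tensor-product binding. LISA itself binds roles to fillers via neural synchrony, so this is only a stand-in scheme, and all vectors, role names, and fillers are invented for exposition; structure mapping (aligning two such structures) would be the further step beyond what is shown.

```python
# Toy sketch, not LISA's mechanism: role-filler binding via outer products,
# and multi-role integration by superposing bindings into one relation.
import numpy as np

rng = np.random.default_rng(0)
dim = 128
roles = {name: rng.standard_normal(dim) for name in ["chaser", "chased"]}
fillers = {name: rng.standard_normal(dim) for name in ["dog", "cat", "fox", "mouse"]}

def bind(role, filler):
    # A single dynamic role-filler binding.
    return np.outer(roles[role], fillers[filler])

# Multi-role integration: a complete relational proposition is the
# superposition of all of its role-filler bindings.
chases_dog_cat = bind("chaser", "dog") + bind("chased", "cat")

def unbind(relation, role):
    # Probing the structure with a role vector approximately recovers its
    # filler, because random high-dimensional role vectors are nearly orthogonal.
    probe = roles[role] @ relation
    return max(fillers, key=lambda f: probe @ fillers[f] / np.linalg.norm(fillers[f]))

print(unbind(chases_dog_cat, "chaser"), unbind(chases_dog_cat, "chased"))  # dog cat
```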
Understanding "why": The role of causality in cognition
Humans have a remarkable ability to figure out what happened and why. In this talk, I will shed light on this ability from multiple angles. I will present a computational framework for modeling causal explanations in terms of counterfactual simulations, and several lines of experiments testing this framework in the domain of intuitive physics. The model predicts people's causal judgments about a variety of physical scenes, including dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causal responsibility for a system's stability. It also captures the cognitive processes underlying these judgments as revealed by spontaneous eye-movements. More recently, we have applied our computational framework to explain multisensory integration. I will show how people's inferences about what happened are well-accounted for by a model that integrates visual and auditory evidence through approximate physical simulations.
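The counterfactual-simulation idea can be sketched in a few lines with a toy one-dimensional "physics" (hypothetical numbers, not the authors' model): the causal rating tracks how often noisy forward simulations reach the outcome with the candidate cause present versus removed.

```python
# Toy sketch of counterfactual simulation: how much did the collision cause
# the ball to reach the goal? Estimate by comparing noisy rollouts with and
# without the collision's push. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def reaches_goal(initial_v, push, noise_sd=0.3, steps=20, goal=10.0):
    """Noisy 1D rollout: position advances by a noisy velocity each step."""
    v = initial_v + push
    x = 0.0
    for _ in range(steps):
        x += v + rng.normal(0.0, noise_sd)
    return x >= goal

n = 2000
actual = np.mean([reaches_goal(initial_v=0.35, push=0.25) for _ in range(n)])
counterfactual = np.mean([reaches_goal(initial_v=0.35, push=0.0) for _ in range(n)])

# Causal judgment ~ probability that the outcome hinged on the candidate cause.
print(f"P(goal | collision)    = {actual:.2f}")
print(f"P(goal | no collision) = {counterfactual:.2f}")
print(f"model causal rating    ~ {actual - counterfactual:.2f}")
```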
Structure-mapping in Human Learning
Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.
Precision and Temporal Stability of Directionality Inferences from Group Iterative Multiple Model Estimation (GIMME) Brain Network Models
The Group Iterative Multiple Model Estimation (GIMME) framework has emerged as a promising method for characterizing connections between brain regions in functional neuroimaging data. Two of the most appealing features of this framework are its ability to estimate the directionality of connections between network nodes and its ability to determine whether those connections apply to everyone in a sample (group-level) or just to one person (individual-level). However, there are outstanding questions about the validity and stability of these estimates, including: 1) how recovery of connection directionality is affected by features of data sets such as scan length and autoregressive effects, which may be strong in some imaging modalities (resting state fMRI, fNIRS) but weaker in others (task fMRI); and 2) whether inferences about directionality at the group and individual levels are stable across time. This talk will provide an overview of the GIMME framework and describe relevant results from a large-scale simulation study that assesses directionality recovery under various conditions and a separate project that investigates the temporal stability of GIMME’s inferences in the Human Connectome Project data set. Analyses from these projects demonstrate that estimates of directionality are most precise when autoregressive and cross-lagged relations in the data are relatively strong, and that inferences about the directionality of group-level connections, specifically, appear to be stable across time. Implications of these findings for the interpretation of directional connectivity estimates in different types of neuroimaging data will be discussed.
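The logic of directionality recovery can be illustrated with a small simulation (a sketch of lagged estimation, not the GIMME algorithm itself, which is distributed as the R package gimme): when region A drives region B at a lag, the asymmetry shows up in the cross-lagged regression weights, and stronger autoregressive and cross-lagged effects make it easier to detect. The region names, strengths, and scan length below are illustrative.

```python
# Illustrative sketch: directionality from lagged structure in two simulated
# time series, where A drives B at lag 1 but not vice versa.
import numpy as np

rng = np.random.default_rng(2)
T = 500                        # "scan length" in time points
ar, cross = 0.6, 0.4           # autoregressive and cross-lagged strengths
A = np.zeros(T)
B = np.zeros(T)
for t in range(1, T):
    A[t] = ar * A[t - 1] + rng.normal(0, 1)
    B[t] = ar * B[t - 1] + cross * A[t - 1] + rng.normal(0, 1)

def lag1_cross_weight(y, x_other):
    """Regress y[t] on y[t-1] and the other region at t-1; return the cross weight."""
    X = np.column_stack([y[:-1], x_other[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta[1]

print("A -> B weight:", round(lag1_cross_weight(B, A), 2))   # recovers ~0.4
print("B -> A weight:", round(lag1_cross_weight(A, B), 2))   # ~0, no reverse influence
```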
Using Developmental Trajectories to Understand Change in Children’s Analogical Reasoning
Analogical reasoning is a complex ‘high-level’ cognitive process characterised by making inferences based on analogical comparisons. As with other high-level processes, its development takes place over a protracted time period and is believed to result from changes in multiple ‘lower-level’ systems. In the case of analogical reasoning, changes in systems responsible for conceptual knowledge, task knowledge, inhibition, and working memory have all been causally implicated in development. Whilst there is evidence that each of these systems contributes to development, the relative contribution of each across development, and how they interact with one another, remain largely unanswered questions. In this presentation, I will describe how cross-sectional trajectory analysis can be used as a complementary method to shed light on these questions.
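As a rough illustration of the trajectory logic (with made-up data, not results from the studies described), one can model task performance as a linear function of age and then ask how much additional variance a candidate lower-level system, such as working memory, explains across the same age range.

```python
# Minimal sketch of a cross-sectional trajectory comparison on hypothetical data:
# does working memory explain variance in analogy scores beyond age alone?
import numpy as np

rng = np.random.default_rng(3)
n = 200
age = rng.uniform(5, 12, n)                    # years
wm = 0.5 * age + rng.normal(0, 1, n)           # working memory grows with age
analogy = 2.0 * age + 1.5 * wm + rng.normal(0, 2, n)

def r2(X, y):
    # Ordinary least squares fit with an intercept; return variance explained.
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

print("age only: R^2 =", round(r2(age, analogy), 2))
print("age + WM: R^2 =", round(r2(np.column_stack([age, wm]), analogy), 2))
# The increment in fit indexes working memory's contribution beyond age.
```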
Cortical-like dynamics in recurrent circuits optimized for sampling-based probabilistic inference
Sensory cortices display a suite of ubiquitous dynamical features, such as ongoing noise variability, transient overshoots, and oscillations, that have so far escaped a common, principled theoretical account. We developed a unifying model for these phenomena by training a recurrent excitatory-inhibitory neural circuit model of a visual cortical hypercolumn to perform sampling-based probabilistic inference. The optimized network displayed several key biological properties, including divisive normalization, as well as stimulus-modulated noise variability, inhibition-dominated transients at stimulus onset, and strong gamma oscillations. These dynamical features had distinct functional roles in speeding up inferences and made predictions that we confirmed in novel analyses of awake monkey recordings. Our results suggest that the basic motifs of cortical dynamics emerge as a consequence of efficiently implementing the same computational function, fast sampling-based inference, and they predict further properties of these motifs that can be tested in future experiments.
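The underlying computational principle, sampling-based inference implemented by recurrent stochastic dynamics, can be sketched independently of the trained excitatory-inhibitory circuit: below, simple Langevin-style dynamics whose stationary distribution is a 2D Gaussian posterior, so that activity traced over time constitutes samples from it. All parameters are illustrative and this is not the circuit model from the talk.

```python
# Sketch of sampling-based inference via recurrent stochastic dynamics:
# a Langevin update whose stationary distribution matches a target posterior.
import numpy as np

rng = np.random.default_rng(4)
mu = np.array([1.0, -0.5])                      # posterior mean
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])      # posterior covariance
Sinv = np.linalg.inv(Sigma)

dt, n_steps, burn_in = 0.02, 100_000, 20_000
x = np.zeros(2)
samples = np.empty((n_steps, 2))
for t in range(n_steps):
    # Drift toward high-probability states plus noise (Langevin dynamics).
    x = x - 0.5 * dt * Sinv @ (x - mu) + np.sqrt(dt) * rng.standard_normal(2)
    samples[t] = x

print("sample mean:", samples[burn_in:].mean(axis=0).round(2))        # approximately mu
print("sample cov:\n", np.cov(samples[burn_in:].T).round(2))          # approximately Sigma
```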