Prior Knowledge
Meta-learning functional plasticity rules in neural networks
Synaptic plasticity is known to be a key player in the brain's lifelong learning abilities. However, due to experimental limitations, the nature of the local changes at individual synapses and their link with emerging network-level computations remain unclear. I will present a numerical, meta-learning approach to deduce plasticity rules from neuronal activity data, from prior knowledge about the network's computation, or from both. I will first show how to recover known rules, given a human-designed loss function in rate networks, or directly from data, using an adversarial approach. Then I will present how to scale up this approach to recurrent spiking networks using simulation-based inference.
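As a toy illustration of the general idea (a minimal sketch under assumed forms, not the speaker's implementation), the snippet below parameterizes a local plasticity rule as a linear combination of pre- and postsynaptic terms and meta-optimizes its coefficients so that a small rate network learns a target mapping; the rule form, network size and finite-difference meta-gradient are all assumptions.

```python
# Minimal sketch (assumed rule form and hyperparameters, not the speaker's code):
# a local plasticity rule dW ~ a*pre*post + b*pre + c*post + d is applied in the
# inner loop, and the outer loop meta-optimizes (a, b, c, d) by finite-difference
# descent on the final task loss of a small rate network.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))              # presynaptic rates (samples x neurons)
Y = np.tanh(X @ rng.normal(size=(10, 3)))  # target postsynaptic rates

def inner_train(theta, steps=100, lr=0.01):
    """Apply the parameterized local rule; return the final task loss."""
    a, b, c, d = theta
    W = np.zeros((10, 3))
    for _ in range(steps):
        pre, post = X, np.tanh(X @ W)
        dW = (a * pre.T @ post                       # Hebbian term
              + b * pre.T @ np.ones_like(post)       # presynaptic-only term
              + c * np.ones_like(pre).T @ post       # postsynaptic-only term
              + d) / len(X)                          # constant drift
        W += lr * dW
    return np.mean((np.tanh(X @ W) - Y) ** 2)

theta = rng.normal(scale=0.1, size=4)                # rule coefficients
for _ in range(100):                                 # outer (meta) loop
    grad, eps = np.zeros(4), 1e-3
    for i in range(4):                               # finite-difference meta-gradient
        e = np.zeros(4); e[i] = eps
        grad[i] = (inner_train(theta + e) - inner_train(theta - e)) / (2 * eps)
    theta -= 0.5 * grad
print("meta-learned rule coefficients:", np.round(theta, 3),
      "final loss:", round(float(inner_train(theta)), 4))
```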
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
A model of colour appearance based on efficient coding of natural images
An object’s colour, brightness and pattern are all influenced by its surroundings, and a number of visual phenomena and “illusions” have been discovered that highlight these often dramatic effects. Explanations for these phenomena range from low-level neural mechanisms to high-level processes that incorporate contextual information or prior knowledge. Importantly, few of these phenomena can currently be accounted for when predicting an object’s perceived colour. Here we ask to what extent colour appearance is predicted by a model based on the principle of coding efficiency. The model assumes that the image is encoded by noisy spatio-chromatic filters at one-octave separations, which are either circularly symmetrical or oriented. Each spatial band’s lower threshold is set by the contrast sensitivity function, and the dynamic range of the band is a fixed multiple of this threshold, above which the response saturates. Filter outputs are then reweighted to give equal power in each channel for natural images. We demonstrate that the model fits human behavioural performance in psychophysics experiments, as well as primate retinal ganglion cell responses. Next we systematically test the model’s ability to qualitatively predict over 35 brightness and colour phenomena, with almost complete success. This implies that, contrary to high-level processing explanations, much of colour appearance is potentially attributable to simple mechanisms evolved for efficient coding of natural images, and that the model provides a basis for modelling the vision of humans and other animals.
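The encoding pipeline described above can be sketched roughly as follows; this is an illustrative approximation, not the authors' implementation, and the filter construction, threshold values and dynamic-range multiple are assumptions.

```python
# Illustrative sketch only (not the authors' code): octave-spaced band-pass
# filtering, a threshold-plus-saturation nonlinearity per band, and reweighting
# to equal output power. The thresholds, dynamic-range multiple and filter
# construction below are assumptions.
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def bandpass_stack(img, n_bands=5):
    """Split an image into ~one-octave radial frequency bands via FFT masks."""
    h, w = img.shape
    fy, fx = np.meshgrid(fftfreq(h), fftfreq(w), indexing="ij")
    r = np.sqrt(fx ** 2 + fy ** 2)                   # radial frequency (cycles/pixel)
    F = fft2(img)
    bands = []
    for k in range(n_bands):
        lo, hi = 0.5 / 2 ** (k + 1), 0.5 / 2 ** k    # one-octave annulus
        bands.append(np.real(ifft2(F * ((r >= lo) & (r < hi)))))
    return bands

def encode(img, thresholds, dyn_range=10.0):
    """Threshold, saturate, and reweight each band to equal output power."""
    responses = []
    for band, t in zip(bandpass_stack(img), thresholds):
        r = np.sign(band) * np.clip(np.abs(band) - t, 0.0, dyn_range * t)
        responses.append(r / (np.sqrt(np.mean(r ** 2)) + 1e-9))
    return responses

img = np.random.rand(64, 64)                         # stand-in for a natural image
thresholds = [0.02 * 2 ** k for k in range(5)]       # assumed CSF-like thresholds
out = encode(img, thresholds)
print([round(float(np.sqrt(np.mean(r ** 2))), 3) for r in out])
```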
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically in an unsupervised manner, i.e. without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making processes. Despite the ubiquity of passively learning from the structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised in inferring regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilised to impact working memory and decision-making behaviours.
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can store in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, leading to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, which items the labels are applied to, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Comparing Multiple Strategies to Improve Mathematics Learning and Teaching
Comparison is a powerful process that improves learning in many domains. For over 10 years, my colleagues and I have researched how we can use comparison to support better learning of school mathematics within classroom settings. In five short-term experimental, classroom-based studies, we evaluated comparison of solution methods for supporting mathematics knowledge and tested whether prior knowledge affected effectiveness. We next developed a supplemental Algebra I curriculum and professional development for teachers to integrate Comparison and Explanation of Multiple Strategies (CEMS) in their classrooms and, in two studies, tested the promise of the approach when implemented by teachers. Benefits and challenges emerged in these studies. I will conclude with evidence-based guidelines for effectively supporting comparison and explanation in the classroom. Overall, this program of research illustrates how cognitive science research can guide the design of effective educational materials, as well as the challenges that occur when bridging from cognitive science research to classroom instruction.
Neural dynamics underlying temporal inference
Animals possess the ability to effortlessly and precisely time their actions even though information received from the world is often ambiguous and is inadvertently transformed as it passes through the nervous system. With such uncertainty pervading our nervous systems, we could expect that much of human and animal behavior relies on inference that incorporates an important additional source of information: prior knowledge of the environment. These concepts have long been studied under the framework of Bayesian inference, with substantial corroboration over the last decade that human time perception is consistent with such models. We, however, know little about the neural mechanisms that enable Bayesian signatures to emerge in temporal perception. I will present our work on three facets of this problem: how Bayesian estimates are encoded in neural populations, how these estimates are used to generate time intervals, and how prior knowledge for these tasks is acquired and optimized by neural circuits. We trained monkeys to perform an interval reproduction task and found their behavior to be consistent with Bayesian inference. Using insights from electrophysiology and in silico models, we propose a mechanism by which cortical populations encode Bayesian estimates and utilize them to generate time intervals. Thereafter, I will present a circuit model for how temporal priors can be acquired by cerebellar machinery, leading to estimates consistent with Bayesian theory. Based on electrophysiology and anatomy experiments in rodents, I will provide some support for this model. Overall, these findings attempt to bridge insights from normative frameworks of Bayesian inference with potential neural implementations for the acquisition, estimation, and production of timing behaviors.
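For readers unfamiliar with the Bayesian framing, the snippet below sketches the Bayes-least-squares observer commonly used to model interval reproduction (e.g. Jazayeri & Shadlen, 2010): a noisy measurement with scalar variability is combined with a prior over sample intervals, and the resulting estimates regress toward the middle of the prior. The prior range and noise level are illustrative assumptions, not the parameters of the work described here.

```python
# Worked sketch of a Bayes-least-squares (BLS) observer for interval reproduction
# (e.g. Jazayeri & Shadlen, 2010). The prior range and Weber fraction below are
# illustrative assumptions, not the parameters of the work described here.
import numpy as np

prior_support = np.linspace(600.0, 1000.0, 401)   # candidate sample intervals t_s (ms)
w_m = 0.1                                          # assumed Weber fraction of measurement noise

def likelihood(t_m, t_s):
    """p(t_m | t_s) under scalar (Weber-like) measurement noise."""
    sigma = w_m * t_s
    return np.exp(-0.5 * ((t_m - t_s) / sigma) ** 2) / sigma

def bls_estimate(t_m):
    """Posterior-mean estimate of t_s given a noisy measurement t_m (flat prior)."""
    post = likelihood(t_m, prior_support)          # flat prior: posterior ∝ likelihood
    post /= post.sum()
    return float(np.sum(post * prior_support))

for t_s in (600, 800, 1000):
    # estimates regress toward the middle of the prior, the Bayesian signature
    print(t_s, "ms ->", round(bls_estimate(t_s), 1), "ms")
```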
A computational explanation for domain specificity in the human brain
Many regions of the human brain conduct highly specific functions, such as recognizing faces, understanding language, and thinking about other people’s thoughts. Why might this domain-specific organization be a good design strategy for brains, and what is the origin of domain specificity in the first place? In this talk, I will present recent work testing whether the segregation of face and object perception in human brains emerges naturally from an optimization for both tasks. We trained artificial neural networks on face and object recognition, and found that networks were able to perform both tasks well by spontaneously segregating them into distinct pathways. Critically, the networks had no prior knowledge of the tasks and no task-specific inductive bias. Furthermore, networks jointly optimized on object categorization and on tasks that apparently do not develop specialization in the human brain, such as food or car recognition, showed less task segregation. These results suggest that functional segregation can spontaneously emerge without a task-specific bias, and that the domain-specific organization of the cortex may reflect a computational optimization for the real-world tasks humans solve.
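As a rough illustration of how task segregation can be quantified (not the authors' networks or metric), the sketch below trains one shared hidden layer jointly on two synthetic tasks and then scores each hidden unit by how much silencing it hurts one task versus the other; the architecture, tasks and segregation index are all assumptions.

```python
# Toy sketch (assumed architecture, tasks and metric; not the authors' setup):
# one shared hidden layer is trained jointly on two synthetic tasks, then each
# hidden unit is lesioned to score how selectively it supports task A vs task B.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_samp = 10, 32, 2000
X = rng.normal(size=(n_samp, n_in))
yA = (X[:, :5].sum(axis=1) > 0).astype(float)     # "task A" depends on features 0-4
yB = (X[:, 5:].sum(axis=1) > 0).astype(float)     # "task B" depends on features 5-9

W1 = rng.normal(scale=0.3, size=(n_in, n_hid))
wA = rng.normal(scale=0.3, size=n_hid)
wB = rng.normal(scale=0.3, size=n_hid)

def forward(X, mask=None):
    h = np.tanh(X @ W1)
    if mask is not None:
        h = h * mask                               # lesion: silence selected units
    return h, 1 / (1 + np.exp(-(h @ wA))), 1 / (1 + np.exp(-(h @ wB)))

def loss(p, y):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

lr = 0.1
for _ in range(2000):                              # joint training on both tasks
    h, pA, pB = forward(X)
    dA, dB = (pA - yA) / n_samp, (pB - yB) / n_samp
    dh = np.outer(dA, wA) + np.outer(dB, wB)
    W1 -= lr * X.T @ (dh * (1 - h ** 2))
    wA -= lr * h.T @ dA
    wB -= lr * h.T @ dB

_, pA0, pB0 = forward(X)
baseA, baseB = loss(pA0, yA), loss(pB0, yB)
seg = []
for i in range(n_hid):                             # single-unit lesions
    mask = np.ones(n_hid); mask[i] = 0.0
    _, pA, pB = forward(X, mask)
    dA, dB = loss(pA, yA) - baseA, loss(pB, yB) - baseB
    seg.append((dA - dB) / (abs(dA) + abs(dB) + 1e-9))  # +1: hurts A only, -1: hurts B only
print("mean |segregation index| across hidden units:",
      round(float(np.mean(np.abs(seg))), 2))
```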
Evaluating different facets of category status for promoting spontaneous transfer
Existing accounts of analogical transfer highlight the importance of comparison-based schema abstraction in aiding retrieval of relevant prior knowledge from memory. In this talk, we discuss an alternative view, the category status hypothesis (which states that if knowledge of a target principle is represented as a relational category, it is easier to activate through categorizing, as opposed to cue-based reminding), and briefly review supporting evidence. We then further investigate this hypothesis by designing study tasks that promote different facets of category-level representations and assess their impact on spontaneous analogical transfer. A Baseline group compared two analogous cases; the remaining groups experienced comparison plus another task intended to impact the category status of the knowledge representation. The Intension group read an abstract statement of the principle with a supporting task of generating a new case. The Extension group read two more positive cases with the task of judging whether each exemplified the target principle. The Mapping group read a contrast case with the task of revising it into a positive example of the target principle (thereby providing practice moving in both directions between type and token, i.e., evaluating a given case relative to knowledge and using knowledge to generate a revised case). The results demonstrated that both the Intension and Extension tasks led to transfer improvements over Baseline (with the former demonstrating both improved accessibility of prior knowledge and improved ability to apply relational concepts). Implications for theories of analogical transfer are discussed.
Time perception: how is our judgment of time influenced by regularity and change in stimulus distributions?
To organize various experiences into a coherent mental representation, we need to properly estimate the duration and temporal order of different events. Yet our perception of time is noisy and vulnerable to various illusions. Studying these illusions can elucidate the mechanisms by which the brain perceives time. In this talk, I will review a few studies on how the brain perceives the duration of events and the temporal order between self-generated motion and sensory feedback. Combined with computational models at different levels, these experiments illustrated that the brain incorporates prior knowledge of the statistical distribution of stimulus durations, together with the decay of memory, when estimating the duration of an individual event, and adjusts its perception of temporal order to changes in the statistics of the environment.
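As a minimal sketch of the temporal-order recalibration idea (an assumed exponential-update model, not the speaker's), the snippet below shifts a simultaneity criterion toward the recently experienced action-feedback lag, so that judgments of temporal order track changes in the environment's statistics.

```python
# Minimal sketch (assumed exponential-update model, not the speaker's): the
# criterion for judging action-feedback temporal order is recalibrated toward
# the recently experienced lag, so perceived order tracks changes in the
# statistics of the environment.
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.05                      # assumed recalibration rate
criterion = 0.0                   # current point of subjective simultaneity (ms)

# 300 trials with no injected delay, then 300 trials with feedback delayed by 100 ms
for lag in np.concatenate([np.zeros(300), np.full(300, 100.0)]):
    sensed_lag = lag + rng.normal(scale=20.0)       # noisy measurement of the lag
    criterion += alpha * (sensed_lag - criterion)   # exponential recalibration

test_lag = 50.0                   # probe: feedback arriving 50 ms after the action
print("criterion after adaptation: %.1f ms" % criterion)
print("judged order:", "feedback seems early" if test_lag < criterion else "feedback seems late")
```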
It’s not what you look at that matters, it’s what you see
People frequently interpret the same information differently, based on their prior beliefs and views. This may occur in everyday settings, as when two friends are watching the same movie, but also in more consequential circumstances, such as when people interpret the same news differently based on their political views. The role of subjective knowledge in altering how the brain processes narratives has been explored mainly in controlled settings. I will present two projects that examine the neural mechanisms underlying narrative interpretation “in the wild”: how responses differ between two groups of people who interpret the same narrative in two coherent, but opposing, ways. In the first project, we manipulated participants’ prior knowledge to make them interpret the narrative differently, and found that responses in high-order areas, including the default mode network, language areas and subsets of the mirror neuron system, tend to be similar among people who share the same interpretation but different from people with an opposing interpretation. In contrast to the active manipulation of participants’ interpretation in the first study, in the second (ongoing) project we examine these processes in a more ecological setting. Taking advantage of people’s natural tendency to interpret the world through their own (political) filters, we examine these mechanisms while measuring brain responses to political movie clips. These studies are intended to deepen our understanding of differences in subjective construal processes by mapping their underlying brain mechanisms.
Relational Reasoning in Curricular Knowledge Components
It is a truth universally acknowledged that relational reasoning is important for learning in Science, Technology, Engineering, and Mathematics (STEM) disciplines. However, much research on relational reasoning uses examples unrelated to STEM concepts (understandably, to control for prior knowledge in many cases). In this talk I will discuss how real STEM concepts can be profitably used in relational reasoning research, using fraction concepts in mathematics as an example.