Internal Model
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model biological vision. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its capacity to generalize. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
Generation and use of internal models of the world to guide flexible behavior
A biological model system for studying predictive processing
Despite the increasing recognition of predictive processing in circuit neuroscience, little is known about how it may be implemented in cortical circuits. We set out to develop and characterise a biological model system with layer 5 pyramidal cells at its centre. We aim to gain access to the processes that generate predictions and internal models by controlling, understanding or monitoring everything else: the sensory environment, feedforward and feedback inputs, the cells' integrative properties, their spiking activity and their output. I'll show recent work from the lab establishing such a model system, in terms of both the biology and tool development.
NMC4 Keynote: Formation and update of sensory priors in working memory and perceptual decision making tasks
The world around us is complex, but at the same time full of meaningful regularities. We can detect, learn and exploit these regularities automatically, in an unsupervised manner, i.e., without any direct instruction or explicit reward. For example, we effortlessly estimate the average height of people in a room, or the boundaries between words in a language. These regularities and prior knowledge, once learned, can affect the way we acquire and interpret new information to build and update our internal model of the world for future decision-making. Despite the ubiquity of passive learning from structured information in the environment, the mechanisms that support learning from real-world experience are largely unknown. By combining sophisticated cognitive tasks in humans and rats, neuronal measurements and perturbations in rats, and network modelling, we aim to build a multi-level description of how sensory history is utilised to infer regularities in temporally extended tasks. In this talk, I will specifically focus on a comparative rat and human model, in combination with neural network models, to study how past sensory experiences are utilised to impact working memory and decision-making behaviours.
Learning and updating structured knowledge
During our everyday lives, much of what we experience is familiar and predictable. We typically follow the same morning routine, take the same route to work, and encounter the same colleagues. Every once in a while, however, we encounter a surprising event that violates our expectations. When this happens, it is adaptive to update our internal model of the world in order to make better predictions in the future. The hippocampus is thought to support both the learning of the predictable structure of our environment and the detection and encoding of violations. However, the hippocampus is a complex and heterogeneous structure, composed of different subfields that are thought to subserve different functions. As such, it is not yet known how the hippocampus accomplishes the learning and updating of structured knowledge. Using behavioral methods and high-resolution fMRI, I'll show that during learning of repeated and predicted events, hippocampal subfields differentially integrate and separate event representations, thus learning the structure of ongoing experience. I will then discuss how, when events violate our predictions, communication between hippocampal subfields shifts, potentially allowing for efficient encoding of novel and surprising information. If time permits, I'll present an additional behavioral study showing that violations of predictions promote detailed memories. Together, these studies advance our understanding of how we adaptively learn and update our knowledge.
Active sleep in flies: the dawn of consciousness
The brain is a prediction machine. Yet the world is never entirely predictable, for any animal. Unexpected events are surprising, and this typically evokes prediction-error signatures in animal brains. In humans, such mismatched expectations are often associated with an emotional response as well. Appropriate emotional responses are understood to be important for memory consolidation, suggesting that valence cues more generally constitute an ancient mechanism designed to potently refine and generalize internal models of the world and thereby minimize prediction errors. On the other hand, abolishing error detection and surprise entirely is probably also maladaptive, as this might undermine the very mechanism that brains use to become better prediction machines. This paradoxical view of brain function as an ongoing tug-of-war between prediction and surprise suggests a compelling new way to study and understand the evolution of consciousness in animals. I will present approaches to studying attention and prediction in the tiny brain of the fruit fly, Drosophila melanogaster. I will discuss how an 'active' sleep stage (termed rapid eye movement, or REM, sleep in mammals) may have evolved in the first animal brains as a mechanism for optimizing prediction in motile creatures confronted with constantly changing environments. A role for REM sleep in emotional regulation could thus be better understood as an ancient sleep function that evolved alongside selective attention to maintain an adaptive balance between prediction and surprise. This view of active sleep has some interesting implications for the evolution of subjective awareness and consciousness.
A role for cognitive maps in metaphors and analogy?
In human and non-human animals, conceptual knowledge is partially organized according to low-dimensional geometries that rely on brain structures and computations involved in spatial representation. Recently, two separate lines of research have investigated cognitive maps, which are associated with the hippocampal formation and resemble world-centered representations of the environment, and image spaces, which are associated with the parietal cortex and resemble self-centered spatial relationships. I will suggest that cognitive maps and image spaces may be two manifestations of a more general propensity of the mind to create low-dimensional internal models, and may play a role in analogical reasoning and metaphorical thinking. Finally, I will show some data suggesting that the metaphorical relationship between colors and emotions can be accounted for by the structural alignment of low-dimensional conceptual spaces.
Sparse expansion in cerebellum favours learning speed and performance in the context of motor control
The cerebellum contains more than half of the brain's neurons and is essential for motor control. Its neural circuits have a distinctive architecture, comprising a large, sparse expansion from the input mossy fibres to the granule cell layer. Theories of how cerebellar architectural features relate to cerebellar function have long been formulated, and some of these features have been shown to facilitate pattern separation. However, these theories do not consider the cerebellum's need to learn quickly in order to control smooth and accurate movements. Here, we confront this gap. This talk will show that the expansion to the granule cell layer in the cerebellar cortex improves learning speed and performance in the context of motor control, by considering a cerebellar-like network learning an internal model of a motor apparatus online. By expressing the general form of the learning rate for such a system, this talk will provide a calculation of how increasing the number of granule cells diminishes the effect of noise and increases learning speed. We propose that the particular architecture of cerebellar circuits modifies the geometry of the error function in a way that favours faster learning. Our results illuminate a new link between cerebellar structure and function.
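The qualitative effect of a granule-layer expansion on online learning can be illustrated with a toy simulation. This is a minimal sketch under assumed settings (a fixed random ReLU expansion, a delta-rule readout, and a noisy linear target); the function names and all parameters are illustrative, not the model analysed in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def final_error(m, d=10, steps=1000, lr=0.05, noise=0.1, trials=20):
    """Mean squared error after online delta-rule learning through a
    random expansion of m 'granule cells' (all settings are illustrative)."""
    errs = []
    for _ in range(trials):
        W = rng.standard_normal((m, d)) / np.sqrt(d)  # fixed mossy-fibre -> granule weights
        w_true = rng.standard_normal(d)               # target mapping to be learned online
        v = np.zeros(m)                               # granule -> output weights (plastic)
        for _ in range(steps):
            x = rng.standard_normal(d)
            g = np.maximum(W @ x, 0.0)                # expansion with ReLU nonlinearity
            y = w_true @ x + noise * rng.standard_normal()  # noisy teaching signal
            v -= lr * ((v @ g) - y) * g / m           # normalised delta rule
        X = rng.standard_normal((200, d))             # noiseless probe inputs
        G = np.maximum(X @ W.T, 0.0)
        errs.append(np.mean((G @ v - X @ w_true) ** 2))
    return float(np.mean(errs))

err_small, err_large = final_error(m=20), final_error(m=500)
print(err_small, err_large)
```

In this sketch the larger expansion typically reaches a lower error within the same number of online steps, consistent with the idea that the expansion averages out noise and speeds up learning; the explicit learning-rate calculation presented in the talk is not reproduced here.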
Human voluntary action: from thought to movement
The ability to decide and act autonomously is a distinctive feature of human cognition. From a motor neurophysiology viewpoint, these 'voluntary' actions can be distinguished by the lack of an obvious triggering sensory stimulus: the action is considered a product of thought, rather than a reflex result of a specific input. A reverse engineering approach shows that such actions are caused by neurons of the primary motor cortex, which in turn depend on medial frontal areas, and ultimately on a combination of prefrontal cortical connections and subcortical drive from basal ganglia loops. One traditional marker of voluntary action is the EEG readiness potential (RP), recorded over the frontal cortex prior to voluntary actions. However, the interpretation of this signal remains controversial, and very few experimental studies have attempted to link the RP to the thought processes that lead to voluntary action. In this talk, I will report new studies showing that learning an internal model of the optimal delay at which to act influences the amplitude of the RP. More generally, a scientific understanding of voluntariness and autonomy will require new neurocognitive paradigms connecting thought and action.
Rational thoughts in neural codes
First, we describe a new method for inferring the mental model of an animal performing a natural task. We use probabilistic methods to compute the most likely mental model based on an animal’s sensory observations and actions. This also reveals dynamic beliefs that would be optimal according to the animal’s internal model, and thus provides a practical notion of “rational thoughts.” Second, we construct a neural coding framework by which these rational thoughts, their computational dynamics, and actions can be identified within the manifold of neural activity. We illustrate the value of this approach by training an artificial neural network to perform a generalization of a widely used foraging task. We analyze the network’s behaviour to find rational thoughts, and successfully recover the neural properties that implemented those thoughts, providing a way of interpreting the complex neural dynamics of the artificial brain. Joint work with Zhengwei Wu, Minhae Kwon, Saurabh Daptardar, and Paul Schrater.
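The first step, inferring the most likely mental model from observed actions, can be caricatured with a hypothetical softmax agent and a grid search over a single internal-model parameter. The agent, its parameter `theta`, and the known inverse temperature `beta` are all assumptions for illustration; the actual method infers dynamic beliefs in a foraging task, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 4.0  # softmax inverse temperature, assumed known to the experimenter

def action_probs(theta):
    """Policy implied by an internal model that values the two options
    at theta and 1 - theta (a toy stand-in for a mental model)."""
    v = np.array([theta, 1.0 - theta])
    e = np.exp(beta * v)
    return e / e.sum()

# Simulate an agent whose internal model has theta = 0.7
true_theta = 0.7
actions = rng.choice(2, size=2000, p=action_probs(true_theta))

# Infer the most likely internal model by maximum likelihood over a grid
grid = np.linspace(0.01, 0.99, 99)
loglik = [np.log(action_probs(t)[actions]).sum() for t in grid]
theta_hat = grid[int(np.argmax(loglik))]
print(theta_hat)
```

With enough observed choices, the maximum-likelihood grid search recovers a parameter close to the agent's true `theta`, which is the static analogue of identifying the model whose "rational thoughts" best explain the behaviour.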
Building internal models during periods of rest and sleep
Bernstein Conference 2024
Self-generated vestibular prosthetic input updates forward internal model of self-motion
COSYNE 2023