Internal representations
Hebbian Plasticity Supports Predictive Self-Supervised Learning of Disentangled Representations
Discriminating distinct objects and concepts from sensory stimuli is essential for survival. Our brains accomplish this feat by forming meaningful internal representations in deep sensory networks with plastic synaptic connections. Experience-dependent plasticity presumably exploits temporal contingencies between sensory inputs to build these internal representations. However, the precise mechanisms underlying plasticity remain elusive. We derive a local synaptic plasticity model inspired by self-supervised machine learning techniques that shares a deep conceptual connection to Bienenstock-Cooper-Munro (BCM) theory and is consistent with experimentally observed plasticity rules. We show that our plasticity model yields disentangled object representations in deep neural networks without supervision or implausible negative examples. In response to altered visual experience, our model qualitatively captures neuronal selectivity changes observed in the monkey inferotemporal cortex in vivo. Our work suggests a plausible learning rule to drive learning in sensory networks while making concrete, testable predictions.
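The BCM theory referenced above can be illustrated with a minimal sketch of the classic sliding-threshold rule (a generic textbook BCM formulation, not the authors' derived model; the function name `bcm_step` and all parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def bcm_step(w, x, theta, lr=0.01, tau=100.0):
    """One BCM-style update: dw ∝ x·y·(y − θ), with θ tracking ⟨y²⟩."""
    y = float(w @ x)                       # linear postsynaptic activity
    w = w + lr * x * y * (y - theta)       # Hebbian above θ, anti-Hebbian below
    theta = theta + (y**2 - theta) / tau   # sliding modification threshold
    return w, theta

w = rng.normal(0.0, 0.1, size=4)
theta = 0.0
for _ in range(1000):
    x = rng.normal(size=4)                 # toy presynaptic input pattern
    w, theta = bcm_step(w, x, theta)
```

The sliding threshold is what keeps purely local Hebbian growth stable: strong recent activity raises θ, converting further potentiation into depression.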
Representation transfer and signal denoising through topographic modularity
To prevail in a dynamic and noisy environment, the brain must create reliable and meaningful representations from sensory inputs that are often ambiguous or corrupted. Since only information that permeates the cortical hierarchy can influence sensory perception and decision-making, it is critical that noisy external stimuli are encoded and propagated through different processing stages with minimal signal degradation. Here we hypothesize that stimulus-specific pathways akin to cortical topographic maps may provide the structural scaffold for such signal routing. We investigate whether the feature-specific pathways within such maps, characterized by the preservation of the relative organization of cells between distinct populations, can guide and route stimulus information throughout the system while retaining representational fidelity. We demonstrate that, in a large modular circuit of spiking neurons comprising multiple sub-networks, topographic projections are not only necessary for accurate propagation of stimulus representations, but can also help the system reduce sensory and intrinsic noise. Moreover, by regulating the effective connectivity and local E/I balance, modular topographic precision enables the system to gradually improve its internal representations and increase signal-to-noise ratio as the input signal passes through the network. Such a denoising function arises beyond a critical transition point in the sharpness of the feed-forward projections, and is characterized by the emergence of inhibition-dominated regimes where population responses along stimulated maps are amplified and others are weakened. Our results indicate that this is a generalizable and robust structural effect, largely independent of the underlying model specifics.
Using mean-field approximations, we gain deeper insight into the mechanisms responsible for the qualitative changes in the system's behavior and show that these depend only on the modular topographic connectivity and stimulus intensity. The general dynamical principle revealed by the theoretical predictions suggests that such a denoising property may be a universal, system-agnostic feature of topographic maps, and may lead to a wide range of behaviorally relevant regimes observed under various experimental conditions: maintaining stable representations of multiple stimuli across cortical circuits; amplifying certain features while suppressing others (winner-take-all circuits); and endowing circuits with metastable dynamics (winnerless competition), assumed to be fundamental in a variety of tasks.
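The winner-take-all regime described above can be illustrated with a toy rate model (a generic sketch under simple assumptions, not the paper's spiking network or mean-field theory; `simulate` and all parameter values are hypothetical): modules excite themselves and inhibit each other, so the stimulated module's response is amplified while the others are suppressed.

```python
import numpy as np

def simulate(n_modules=4, w_exc=0.5, w_inh=0.6, steps=200, dt=0.1):
    """Toy winner-take-all dynamics: self-excitation plus lateral inhibition."""
    r = np.zeros(n_modules)                # population rates
    stim = np.zeros(n_modules)
    stim[0] = 1.0                          # drive only the first module
    W = -w_inh * np.ones((n_modules, n_modules))
    np.fill_diagonal(W, w_exc)             # excitatory self-coupling
    for _ in range(steps):
        drive = W @ r + stim
        r = r + dt * (-r + np.maximum(drive, 0.0))  # ReLU rate dynamics
    return r

r = simulate()  # stimulated module settles near 1/(1 - w_exc) = 2; others near 0
```

With these toy parameters the fixed point of the stimulated module is r = 1/(1 − w_exc) = 2, while lateral inhibition keeps the unstimulated modules silent.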
Learning to see Stuff
Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world—like the glossiness of a surface—that cannot be measured by any other sense? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network trained on an artificial environment of surfaces that have different shapes, materials and lighting spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.
Understanding Perceptual Priors with Massive Online Experiments
One of the most important questions in psychology and neuroscience is understanding how the outside world maps to internal representations. Classical psychophysics approaches to this problem have a number of limitations: they mostly study low-dimensional perceptual spaces, and are constrained in the number and diversity of participants and experiments. As ecologically valid perception is rich, high dimensional, contextual, and culturally dependent, these impediments severely bias our understanding of perceptual representations. Recent technological advances—the emergence of so-called “Virtual Labs”—can significantly contribute toward overcoming these barriers. Here I present a number of specific strategies that my group has developed in order to probe representations across a number of dimensions. 1) Massive online experiments can significantly increase the number of participants and experiments that can be carried out in a single study, while also significantly diversifying the participant pool. We have developed a platform, PsyNet, that enables “experiments as code,” whereby the orchestration of computer servers, recruiting, compensation of participants, and data management is fully automated and every experiment can be fully replicated with one command line. I will demonstrate how PsyNet allows us to recruit thousands of participants for each study with a large number of control experimental conditions, significantly increasing our understanding of auditory perception. 2) Virtual lab methods also enable us to run experiments that are nearly impossible in a traditional lab setting. I will demonstrate our development of adaptive sampling, a set of behavioural methods that combine machine learning sampling techniques (Markov chain Monte Carlo) with human interactions and allow us to create high-dimensional maps of perceptual representations with unprecedented resolution.
3) Finally, I will demonstrate how the aforementioned methods can be applied to the study of perceptual priors in both audition and vision, with a focus on our work in cross-cultural research, which studies how perceptual priors are influenced by experience and culture in diverse samples of participants from around the world.
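The adaptive-sampling idea of combining MCMC with human choices can be sketched as follows (a minimal illustration in the spirit of "MCMC with People"-style methods, with a simulated observer standing in for a participant; `choose`, `mcmc_with_people`, and all parameter values are hypothetical):

```python
import math
import random

def choose(a, b, mu=0.0, sigma=1.0):
    """Simulated observer: picks between two stimuli via a Luce choice rule
    over a Gaussian 'prior'. In a real experiment this is a participant's
    two-alternative forced choice."""
    pa = math.exp(-(a - mu) ** 2 / (2 * sigma ** 2))
    pb = math.exp(-(b - mu) ** 2 / (2 * sigma ** 2))
    return a if random.random() < pa / (pa + pb) else b

def mcmc_with_people(n_trials=5000, step=0.5, seed=1):
    """Metropolis-style chain where each accept/reject is a human choice."""
    random.seed(seed)
    x, samples = 3.0, []
    for _ in range(n_trials):
        proposal = x + random.gauss(0.0, step)   # perturb the current stimulus
        x = choose(x, proposal)                  # observer picks preferred one
        samples.append(x)
    return samples

samples = mcmc_with_people()
mean = sum(samples) / len(samples)  # chain mean approximates the prior mean
```

Because the choice rule accepts proposals with probability proportional to their relative preference, the chain's stationary distribution is the observer's internal prior, so the collected samples map that prior directly.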
Making memories in mice
Understanding how the brain uses information is a fundamental goal of neuroscience. Several human disorders (ranging from autism spectrum disorder to PTSD to Alzheimer’s disease) may stem from disrupted information processing. Therefore, this basic knowledge is not only critical for understanding normal brain function, but also vital for the development of new treatment strategies for these disorders. Memory may be defined as the retention over time of internal representations gained through experience, and the capacity to reconstruct these representations at later times. Long-lasting physical brain changes (‘engrams’) are thought to encode these internal representations. The concept of a physical memory trace likely originated in ancient Greece, although it wasn’t until 1904 that Richard Semon coined the term ‘engram’. Despite its long history, finding a specific engram has been challenging, likely because an engram is encoded at multiple levels (epigenetic, synaptic, cell assembly). My lab is interested in understanding how specific neurons are recruited or allocated to an engram, and how neuronal membership in an engram may change over time or with new experience. Here I will describe both older and new unpublished data in our efforts to understand memories in mice.
Extracting heading and goal through structured action
Many flexible behaviors are thought to rely on internal representations of an animal’s spatial relationship to its environment and of the consequences of its actions in that environment. While such representations—e.g. of head direction and value—have been extensively studied, how they are combined to guide behavior is not well understood. I will discuss how we are exploring these questions using a classical visual learning paradigm for the fly. I’ll begin by describing a simple policy that, when tethered to an internal representation of heading, captures structured behavioral variability in this task. I’ll describe how ambiguities in the fly’s visual surroundings affect its perception and, when coupled to this policy, manifest in predictable changes in behavior. Informed by newly-released connectomic data, I’ll then discuss how these computations might be carried out and combined within specific circuits in the fly’s central brain, and how perception and action might interact to shape individual differences in learning performance.
A genetic algorithm to uncover internal representations in biological and artificial brains
COSYNE 2022
Not so griddy: Internal representations of RNNs path integrating more than one agent
COSYNE 2025