Building Blocks
Single-neuron correlates of perception and memory in the human medial temporal lobe
The human medial temporal lobe contains neurons that respond selectively to the semantic contents of a presented stimulus. These "concept cells" may respond to very different pictures of a given person and even to their written or spoken name. Their response latency is far longer than necessary for object recognition, they follow subjective, conscious perception, and they are found in brain regions that are crucial for declarative memory formation. It has thus been hypothesized that they may represent the semantic "building blocks" of episodic memories. In this talk I will present data from single unit recordings in the hippocampus, entorhinal cortex, parahippocampal cortex, and amygdala during paradigms involving object recognition and conscious perception as well as encoding of episodic memories in order to characterize the role of concept cells in these cognitive functions.
Associative memory of structured knowledge
A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows efficient storage of these knowledge structures, such that the memorized structures, as well as their individual building blocks (e.g., events and attributes), can subsequently be retrieved from partial cues. We show that long-term memory of structured knowledge relies on a new principle of computation that goes beyond attractor basins of memory. Finally, we show that our model can be extended to store sequences of memories as single attractors.
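A minimal sketch of the storage-and-recall pipeline described above (the code dimension, the three-pair structures, and the plain Hebbian rule are our own simplified assumptions, not the speaker's model):

```python
import numpy as np

# Toy VSA + associative-memory sketch; all parameters are illustrative.
rng = np.random.default_rng(0)
N = 1000  # number of neurons / code dimension

def rand_bipolar():
    return rng.choice([-1, 1], size=N)

# building blocks: random bipolar codes for events and attributes
events = [rand_bipolar() for _ in range(3)]
attrs = [rand_bipolar() for _ in range(3)]

def make_structure(pairs):
    # bind each (event, attribute) pair by elementwise product,
    # superpose the bound pairs, then binarize the result
    s = sum(events[e] * attrs[a] for e, a in pairs)
    return np.where(s >= 0, 1, -1)

patterns = [make_structure([(0, 0), (1, 1), (2, 2)]),
            make_structure([(0, 1), (1, 2), (2, 0)])]

# store the binarized patterns as fixed points with a Hebbian
# (outer-product) plasticity rule in a recurrent network
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    x = cue.copy()
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# retrieve a whole structure from a partial cue (half the entries zeroed)
cue = patterns[0].copy()
cue[: N // 2] = 0
retrieved = recall(cue)
overlap = retrieved @ patterns[0] / N

# retrieve a building block: unbind with an event code to recover
# a noisy copy of its associated attribute code
attr_overlap = (retrieved * events[0]) @ attrs[0] / N
```

Both the full structure and its constituent attribute should be recoverable, with the unbinding overlap reduced by the superposition noise of the other stored pairs.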
Magnetic Handshake Materials
Biological materials gain complexity from the programmable nature of their components. To manufacture materials with comparable complexity synthetically, we need to create building blocks with low crosstalk so that they only bind to their desired partners. Canonically, these building blocks are made using DNA strands or proteins to achieve specificity. Here we propose a new materials platform, termed Magnetic Handshake Materials, in which we program interactions through designing magnetic dipole patterns. This is a completely synthetic platform, enabled by magnetic printing technology, which is easier to both model theoretically and control experimentally. In this seminar, I will give an overview of the development of the Magnetic Handshake Materials platform, ranging from interaction, assembly to function design.
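A cartoon of the low-crosstalk idea (our own toy model, not the platform's actual magnetics): represent each strip's printed pattern as a sequence of up/down dipoles and score face-to-face binding site by site, so a strip binds strongly only to its complementary partner:

```python
import numpy as np

# Toy crosstalk model (our assumption, not the talk's energy function):
# each strip presents a sequence of +/-1 poles; facing like poles repel
# (+1 per site) and facing opposite poles attract (-1 per site).
def binding_energy(p, q):
    return np.dot(p, q)

rng = np.random.default_rng(1)
n_dipoles = 32

p1 = rng.choice([-1, 1], size=n_dipoles)  # strip 1's dipole pattern
p2 = rng.choice([-1, 1], size=n_dipoles)  # an independently designed strip

strong = binding_energy(p1, -p1)    # designed partner: complement of p1
crosstalk = binding_energy(p1, -p2)  # off-target pairing: weak on average
```

Random patterns give off-target energies of order sqrt(n_dipoles), while the designed pair reaches the minimum energy, which is the same orthogonality logic used for DNA-strand specificity.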
Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication and has been observed at multiple levels of neuronal organization, including membrane potential fluctuations, single-neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituent building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and their interactions) and argue that they are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance through qualitatively different mechanisms.
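The negative-feedback mechanism mentioned above can be illustrated with a standard linearized two-variable membrane model, where a slow feedback current (high-pass) combines with passive membrane dynamics (low-pass) to produce a band-pass impedance profile; the parameter values below are illustrative assumptions only:

```python
import numpy as np

# Linearized membrane with a slow negative-feedback (resonant) current:
#   C dv/dt = -gL v - g1 s + I,   tau1 ds/dt = v - s
# Illustrative parameters (not from any specific case study in the talk).
C, gL = 1.0, 0.1       # capacitance and leak conductance
g1, tau1 = 0.5, 100.0  # feedback conductance and time constant (ms)

f = np.linspace(0.01, 50, 5000)   # input frequency (Hz)
w = 2 * np.pi * f / 1000.0        # angular frequency (rad/ms)

# impedance: amplitude of the voltage response to sinusoidal input current
Z = 1.0 / (gL + 1j * w * C + g1 / (1.0 + 1j * w * tau1))
absZ = np.abs(Z)
f_res = f[np.argmax(absZ)]        # resonant frequency: peak of |Z(f)|
```

With g1 = 0 the profile is purely low-pass; the slow feedback attenuates low frequencies, and the interaction of the two filters yields a peak at an intermediate frequency.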
Brain Basics: A Peek into the Brain!
My talk will be a ‘Neuro 101’, also called ‘Basics of Neuroscience’. I hope to introduce the field of neuroscience and give a brief glimpse into the function, history, and evolution of the brain. I will guide you through questions such as: What is a brain? What are its basic building blocks and functions?
Playing StarCraft and saving the world using multi-agent reinforcement learning!
"This is my C-14 Impaler gauss rifle! There are many like it, but this one is mine!" - A terran marine. If you have never heard of a terran marine before, then you have probably missed out on playing the very engaging and entertaining strategy computer game StarCraft. However, don’t despair, because what we have in store might be even more exciting! In this interactive session, we will walk you through, step by step, how to train a team of terran marines to defeat a team of marines controlled by the built-in game AI in StarCraft II. How will we achieve this? Using multi-agent reinforcement learning (MARL). MARL is a useful framework for building distributed intelligent systems. In MARL, multiple agents are trained to act as individual decision-makers of some larger system while learning to work as a team. We will show you how to use Mava (https://github.com/instadeepai/Mava), a newly released research framework for MARL, to build a multi-agent learning system for StarCraft II. We will provide the necessary guidance, tools, and background to understand the key concepts behind MARL, how to use Mava building blocks to build systems, and how to train a system from scratch. We will conclude the session by briefly sharing various exciting real-world application areas for MARL at InstaDeep, such as large-scale autonomous train navigation and circuit board routing. These are problems that become exponentially more difficult to solve as they scale. Finally, we will argue that many of humanity’s most important practical problems are reminiscent of the ones just described. These include, for example, the need for sustainable management of distributed resources under the pressures of climate change, efficient inventory control and supply routing in critical distribution networks, and robotic teams for rescue missions and exploration.
We believe MARL has enormous potential to be applied in these areas and we hope to inspire you to get excited and interested in MARL and perhaps one day contribute to the field!
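To make the core MARL idea concrete before the session, here is a minimal sketch of independent learners on a toy cooperative game (this is our own illustration, not Mava's API): two agents each keep their own Q-table, receive a shared reward only when they coordinate, and learn to act as a team.

```python
import numpy as np

# Minimal multi-agent RL sketch (illustrative; not Mava's actual API):
# two independent Q-learners on a one-shot cooperative matrix game
# where both agents are rewarded only if they choose the same action.
rng = np.random.default_rng(0)
n_actions = 2
Q = [np.zeros(n_actions), np.zeros(n_actions)]  # one Q-table per agent

def joint_reward(a0, a1):
    return 1.0 if a0 == a1 else 0.0

eps, alpha = 0.1, 0.2  # exploration rate and learning rate
for step in range(2000):
    # each agent acts epsilon-greedily on its own table
    acts = [int(rng.integers(n_actions)) if rng.random() < eps
            else int(np.argmax(q)) for q in Q]
    r = joint_reward(*acts)
    # each agent updates only its own table from the shared reward
    for q, a in zip(Q, acts):
        q[a] += alpha * (r - q[a])

greedy = [int(np.argmax(q)) for q in Q]  # learned joint policy
```

Even with no communication, the shared reward signal is enough for the two learners to settle on a coordinated joint action, which is the simplest instance of agents "learning to work as a team".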
Gap Junction Coupling between Photoreceptors
Simply put, the goal of my research is to describe the neuronal circuitry of the retina. The organization of the mammalian retina is certainly complex, but it is not chaotic. Although there are many cell types, most adhere to a relatively constant morphology and are distributed in non-random mosaics. Furthermore, each cell type ramifies at a characteristic depth in the retina and makes a stereotyped set of synaptic connections. In other words, these neurons form a series of local circuits across the retina. The next step is to identify the simplest and commonest of these repeating neural circuits. They are the building blocks of retinal function. If we think of it in this way, the retina is a fabulous model for the rest of the CNS. We are interested in identifying specific circuits and cell types that support the different functions of the retina. For example, there appear to be specific pathways for rod- and cone-mediated vision. Rods are used under low light conditions, and rod circuitry is specialized for high sensitivity when photons are scarce (e.g., starlight when you are out camping). The hallmark of the rod-mediated system is monochromatic vision. In contrast, the cone circuits are specialized for high acuity and color vision under relatively bright or daylight conditions. Individual neurons may be filled with fluorescent dyes under visual control. This is achieved by impaling the cell with a glass microelectrode using a 3D micromanipulator. We are also interested in the diffusion of dye through coupled neuronal networks in the retina. The dye-filled cells are also combined with antibody labeling to reveal neuronal connections and circuits. This triple-labeled material may be viewed and reconstructed in three dimensions by multi-channel confocal microscopy. We have our own confocal microscope facility in the department, and timeslots are available to students in my lab.
Contrasting neuronal circuits driving reactive and cognitive fear
The last decade in the field of neuroscience has been marked by intense debate on the meaning of the term fear. Whereas some have argued that fear (as well as other emotions) relies on cognitive capacities that are unique to humans, others view it as a negative state constructed from essential building blocks. This latter definition posits that fear states are associated with varying readouts that one could consider to be parallel processes or serial events tied to a specific hierarchy. Within this framework, innate defensive behaviors are considered to be common displays of fear states that lie under the control of hard-wired brain circuits. As a general rule, these defensive behaviors can be classified as either reactive or cognitive along a threat imminence continuum. However, while evidence of the neuronal circuits that lead to these divergent behavioral strategies has accrued over the last decades, most literature has considered these responses in isolation. As a result, important misconceptions have arisen regarding how fear circuits are distributed in the brain and the contribution of specific nodes within these circuits to defensive behaviors. To address these misconceptions, I will conduct a systematic comparison of brain circuits driving the expression of freezing and active avoidance behavior, which I will use as well-studied proxies of reactive and cognitive fear, respectively. In addition, I propose that, by integrating associative information with interoceptive and exteroceptive signals, the central nucleus of the amygdala plays a crucial role in biasing the selection of defensive behaviors.
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens up to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics. As an example, we focus on the ability to execute learned actions in a reliable and stable manner. We hypothesize that the ability to perform a given behavior in a consistent manner requires that the latent dynamics underlying the behavior also be stable. The stable latent dynamics, once identified, allow for the prediction of various behavioral features, using models whose parameters remain fixed over long timespans. We posit that latent cortical dynamics within the manifold are the fundamental and stable building blocks underlying consistent behavioral execution.
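The manifold picture above can be sketched with simulated data (the neuron count, number of latent signals, and noise level are our own toy assumptions): PCA on population activity recovers the dominant covariation patterns (neural modes) and the latent dynamics within the low dimensional manifold.

```python
import numpy as np

# Toy population: 50 neurons driven by 3 shared latent signals plus noise.
rng = np.random.default_rng(0)
T, n_neurons, n_modes = 1000, 50, 3

t = np.linspace(0, 10, T)
latents = np.stack([np.sin(2 * np.pi * f * t) for f in (0.5, 1.0, 1.5)])
mixing = rng.standard_normal((n_neurons, n_modes))   # neural modes
X = mixing @ latents + 0.1 * rng.standard_normal((n_neurons, T))

# PCA via SVD of the mean-centered activity matrix
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = (s ** 2) / np.sum(s ** 2)

# projection onto the leading modes = latent dynamics on the manifold
latent_dynamics = U[:, :n_modes].T @ Xc
```

Because the activity is generated by three shared signals, the first three principal components capture nearly all the variance, which is the sense in which the manifold is "low dimensional" relative to the number of recorded neurons.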
A fresh look at the bird retina
I am working on the vertebrate retina, with a main focus on the mouse and bird retina. Currently my work is focused on three major topics: (1) functional and molecular analysis of electrical synapses in the retina; (2) circuitry and functional role of retinal interneurons (horizontal cells); and (3) circuitry for light-dependent magnetoreception in the bird retina.
Electrical synapses: Electrical synapses (gap junctions) permit fast transmission of electrical signals and passage of metabolites by means of channels which directly connect the cytoplasm of adjoining cells. A functional gap junction channel consists of two hemichannels (one provided by each of the cells), each comprised of a set of six protein subunits, termed connexins. These building blocks exist in a variety of different subtypes, and the connexin composition determines the permeability and gating properties of a gap junction channel, thereby enabling electrical synapses to meet a diversity of physiological requirements. In the retina, various connexins are expressed in different cell types. We study the cellular distribution of different connexins as well as the modulation induced by transmitter action or changes in ambient light levels, which leads to altered electrical coupling properties. We are also interested in exploiting them as a therapeutic avenue for retinal degeneration diseases.
Horizontal cells: Horizontal cells receive excitatory input from photoreceptors and provide feedback inhibition to photoreceptors and feedforward inhibition to bipolar cells. Because of strong electrical coupling, horizontal cells integrate the photoreceptor input over a wide area and are thought to contribute to the antagonistic organization of bipolar cell and ganglion cell receptive fields and to tune the photoreceptor–bipolar cell synapse with respect to the ambient light conditions. However, the extent to which this influence shapes retinal output is unclear, and we aim to elucidate the functional importance of horizontal cells for retinal signal processing by studying various transgenic mouse models.
Retinal circuitry for light-dependent magnetoreception in the bird: We are studying which neuronal cell types and pathways in the bird retina are involved in the processing of magnetic signals. Magnetic information is likely detected in cryptochrome-expressing photoreceptors and leaves the retina through ganglion cell axons that project via the thalamofugal pathway to Cluster N, a part of the visual wulst essential for the avian magnetic compass. Thus, we aim to elucidate the synaptic connections and retinal signaling pathways from putatively magnetosensitive photoreceptors to thalamus-projecting ganglion cells in migratory birds, using neuroanatomical and electrophysiological techniques.
Low Dimensional Manifolds for Neural Dynamics
The ability to simultaneously record the activity of tens up to tens of thousands of neurons has allowed us to analyze the computational role of population activity as opposed to single-neuron activity. Recent work on a variety of cortical areas suggests that neural function may be built on the activation of population-wide activity patterns, the neural modes, rather than on the independent modulation of individual neural activity. These neural modes, the dominant covariation patterns within the neural population, define a low dimensional neural manifold that captures most of the variance in the recorded neural activity. We refer to the time-dependent activation of the neural modes as their latent dynamics, and argue that latent cortical dynamics within the manifold are the fundamental and stable building blocks of neural population activity.
Frustrated Self-Assembly of Non-Euclidean Crystals of Nanoparticles
Self-organized complex structures in nature, e.g., viral capsids, hierarchical biopolymers, and bacterial flagella, offer efficiency, adaptability, robustness, and multi-functionality. Can we program the self-assembly of three-dimensional (3D) complex structures using simple building blocks, and reach a similar or higher level of sophistication in engineered materials? Here we present an analytic theory for the self-assembly of polyhedral nanoparticles (NPs) based on their crystal structures in non-Euclidean space. We show that the unavoidable geometrical frustration of these particle shapes, combined with competing attractive and repulsive interparticle interactions, leads to controllable self-assembly of structures of complex order. Applying this theory to tetrahedral NPs, we find high-yield and enantiopure self-assembly of helicoidal ribbons, exhibiting qualitative agreement with experimental observations. We expect that this theory will offer a general framework for the self-assembly of simple polyhedral building blocks into rich complex morphologies with new material capabilities, such as tunable optical activity, essential for multiple emerging technologies.
Cortical networks for flexible decisions during spatial navigation
My lab seeks to understand how the mammalian brain performs the computations that underlie cognitive functions, including decision-making, short-term memory, and spatial navigation, at the level of the building blocks of the nervous system: cell types and neural populations organized into circuits. We have developed methods to measure, manipulate, and analyze neural circuits across various spatial and temporal scales, including technology for virtual reality, optical imaging, optogenetics, intracellular electrophysiology, molecular sensors, and computational modeling. I will present recent work that uses large-scale calcium imaging to reveal the functional organization of the mouse posterior cortex for flexible decision-making during spatial navigation in virtual reality. I will also discuss work that uses optogenetics and calcium imaging during a variety of decision-making tasks to highlight how cognitive experience and context greatly alter the cortical circuits necessary for navigation decisions.
Mechanical properties of our unstable protein building blocks
Multistable structures - from deployable structures to robots
Multistable structures can reversibly change between multiple stable configurations when a sufficient energetic input is provided. While originally the field focused on understanding what governs the snapping, more recently it has been shown that these systems also provide a powerful platform to design a wide range of smart structures. In this talk, I will first show that pressure-deployable origami structures characterized by two stable configurations provide opportunities for a new generation of large-scale inflatable structures that lock in place after deployment and provide a robust enclosure through their rigid faces. Then, I will demonstrate that the propagation of transition waves in a bistable one-dimensional linkage can be exploited as a robust mechanism to realize structures that can be quickly deployed. Finally, while in the first two examples multistability is harnessed to realize deployable architectures, I will demonstrate that bistable building blocks can also be exploited to design crawling and jumping robots. Unlike previously proposed robots that require complex input control of multiple actuators, a simple, slow input signal suffices to make our system move, as all features required for locomotion are embedded into the architecture of the building blocks.
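The defining property of the bistable building blocks above can be captured with a textbook double-well energy (our illustrative choice of energy function, not the talk's structural model): two stable configurations separated by a barrier that sets the energetic input needed to snap between them.

```python
import numpy as np

# Toy bistable element: double-well energy E(x) = (x^2 - 1)^2 with two
# stable states at x = -1 and x = +1 and a barrier at x = 0.
def energy(x):
    return (x ** 2 - 1.0) ** 2

x = np.linspace(-1.5, 1.5, 3001)
E = energy(x)

# stable configurations: strict local minima of the energy landscape
minima = x[np.where((E[1:-1] < E[:-2]) & (E[1:-1] < E[2:]))[0] + 1]

# energetic input required to snap from one stable state to the other
barrier = energy(0.0) - energy(1.0)
```

Chaining such elements, so that snapping one lowers the barrier for its neighbor, is the basic ingredient behind the transition waves used for rapid deployment and locomotion.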
Monkey Talk – what studies about nonhuman primate vocal communication reveal about the evolution of speech
The evolution of speech is considered to be one of the hardest problems in science. Studies of the communicative abilities of our closest living relatives, the nonhuman primates, aim to contribute to a better understanding of the emergence of this uniquely human capability. Following a brief introduction to the key building blocks that make up the human speech faculty, I will focus on the question of meaning in nonhuman primate vocalizations. While nonhuman primate calls may be highly context-specific, thus giving rise to the notion of ‘referentiality’, comparisons across closely related species suggest that this specificity is evolved rather than learned. Yet, as in humans, the structure of calls varies with arousal and affective state, and there is some evidence for effects of sensory-motor integration in vocal production. Thus, the vocal production of nonhuman primates bears little resemblance to the symbolic and combinatorial features of human speech, although basic production mechanisms are shared. Listeners, in contrast, are able to learn the meaning of new sounds. A recent study using an artificial predator shows that this learning may be extremely rapid. Furthermore, listeners are able to integrate information from multiple sources to make adaptive decisions, which renders the vocal communication system as a whole relatively flexible and powerful. In conclusion, constraints on the side of vocal production, including limits in social cognition and the motivation to share experiences, rather than constraints on the side of the recipient, explain the differences in communicative abilities between humans and other animals.
On temporal coding in spiking neural networks with alpha synaptic function
The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically-plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
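The alpha synaptic transfer function at the heart of the model above can be sketched directly (the kernel form t·exp(-t/tau) is standard; the weights, spike times, and threshold below are our own illustrative values): the membrane potential is a weighted sum of alpha-shaped PSPs, and the output spike time is the first threshold crossing.

```python
import numpy as np

# Alpha-function PSPs and first-threshold-crossing output spike time.
# Parameter values are illustrative, not taken from the paper.
tau, threshold = 1.0, 0.7
t = np.linspace(0, 10, 10001)  # time grid (ms)

def alpha_psp(t, t_spike, weight):
    # alpha kernel: zero before the presynaptic spike, then s * exp(-s/tau)
    s = np.clip(t - t_spike, 0.0, None)
    return weight * s * np.exp(-s / tau)

# membrane potential from two presynaptic spikes with different weights
V = alpha_psp(t, 1.0, 1.5) + alpha_psp(t, 2.0, 1.0)

# the output neuron spikes at the first time V crosses threshold
idx = int(np.argmax(V >= threshold))
t_out = t[idx] if V[idx] >= threshold else None  # None: no output spike
```

Because a later or weaker input shifts this crossing time continuously, the spike time is differentiable with respect to presynaptic spike times and weights, which is what makes backpropagation through the temporal code possible.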
Playing the piano with the cortex: role of neuronal ensembles and pattern completion in perception
The design of neural circuits, with large numbers of neurons interconnected in vast networks, strongly suggests that they are specifically built to generate emergent functional properties (1). To explore this hypothesis, we have developed two-photon holographic methods to selectively image and manipulate the activity of neuronal populations in 3D in vivo (2). Using them, we find that groups of synchronous neurons (neuronal ensembles) dominate the evoked and spontaneous activity of mouse primary visual cortex (3). Ensembles can be optogenetically imprinted for several days, and some of their neurons can trigger the entire ensemble (4). By activating these pattern completion cells in ensembles involved in visual discrimination paradigms, we can bidirectionally alter behavioural choices (5). Our results demonstrate that ensembles are necessary and sufficient for visual perception and are consistent with the possibility that neuronal ensembles are the functional building blocks of cortical circuits. 1. R. Yuste, From the neuron doctrine to neural networks. Nat Rev Neurosci 16, 487-497 (2015). 2. L. Carrillo-Reid, W. Yang, J. E. Kang Miller, D. S. Peterka, R. Yuste, Imaging and optically manipulating neuronal ensembles. Annu Rev Biophys 46, 271-293 (2017). 3. J. E. Miller, I. Ayzenshtat, L. Carrillo-Reid, R. Yuste, Visual stimuli recruit intrinsically generated cortical ensembles. Proc Natl Acad Sci USA 111, E4053-E4061 (2014). 4. L. Carrillo-Reid, W. Yang, Y. Bando, D. S. Peterka, R. Yuste, Imprinting and recalling cortical ensembles. Science 353, 691-694 (2016). 5. L. Carrillo-Reid, S. Han, W. Yang, A. Akrouh, R. Yuste, Controlling visually-guided behaviour by holographic recalling of cortical ensembles. Cell 178, 447-457 (2019). DOI: https://doi.org/10.1016/j.cell.2019.05.045.
Neural manifolds for the stable control of movement
Animals perform learned actions with remarkable consistency for years after acquiring a skill. What is the neural correlate of this stability? We explore this question from the perspective of neural populations. Recent work suggests that the building blocks of neural function may be the activation of population-wide activity patterns: neural modes that capture the dominant covariation patterns of population activity and define a task-specific, low dimensional neural manifold. The time-dependent activation of the neural modes results in latent dynamics. We hypothesize that the latent dynamics associated with the consistent execution of a behaviour need to remain stable, and we use an alignment method to establish this stability. Once identified, stable latent dynamics allow for the prediction of various behavioural features via fixed decoder models. We conclude that latent cortical dynamics within the task manifold are the fundamental and stable building blocks underlying consistent behaviour.
Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation
COSYNE 2023
Sparse Component Analysis: An interpretable dimensionality reduction tool that identifies building blocks of neural computation
Neuromatch 5