Space
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics
Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.
Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades
How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
Altered grid-like coding in early blind people and the role of vision in conceptual navigation
Dimensionality reduction beyond neural subspaces
Over the past decade, neural representations have been studied through the lens of low-dimensional subspaces defined by the co-activation of neurons. However, this view has overlooked other forms of covarying structure in neural activity, including i) condition-specific high-dimensional neural sequences, and ii) representations that change over time due to learning or drift. In this talk, I will present a new framework that extends the classic view towards additional types of covariability that are not constrained to a fixed, low-dimensional subspace. In addition, I will present sliceTCA, a new tensor decomposition that captures and demixes these different types of covariability to reveal task-relevant structure in neural activity. Finally, I will close with some thoughts regarding the circuit mechanisms that could generate mixed covariability. Together this work points to a need to consider new possibilities for how neural populations encode sensory, cognitive, and behavioral variables beyond neural subspaces.
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering the impact of digital change on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
Understanding the complex behaviors of the ‘simple’ cerebellar circuit
Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.
Optogenetic control of Nodal signaling patterns
Embryos issue instructions to their cells in the form of patterns of signaling activity. Within these patterns, the distribution of signaling in time and space directs the fate of embryonic cells. Tools to perturb developmental signaling with high resolution in space and time can help reveal how these patterns are decoded to make appropriate fate decisions. In this talk, I will present new optogenetic reagents and an experimental pipeline for creating designer Nodal signaling patterns in live zebrafish embryos. Our improved optoNodal reagents eliminate dark activity and improve response kinetics, without sacrificing dynamic range. We adapted an ultra-widefield microscopy platform for parallel light patterning in up to 36 embryos and demonstrated precise spatial control over Nodal signaling activity and downstream gene expression. Using this system, we demonstrate that patterned Nodal activation can initiate specification and internalization movements of endodermal precursors. Further, we used patterned illumination to generate synthetic signaling patterns in Nodal signaling mutants, rescuing several characteristic developmental defects. This study establishes an experimental toolkit for systematic exploration of Nodal signaling patterns in live embryos.
Visuomotor learning of location, action, and prediction
Navigating semantic spaces: recycling the brain GPS for higher-level cognition
Humans share with other animals a complex neuronal machinery that evolved to support navigation in physical space, underpinning wayfinding and path integration. In my talk I will present a series of recent neuroimaging studies in humans performed in my lab aimed at investigating the idea that this same neural navigation system (the “brain GPS”) is also used to organize and navigate concepts and memories, and that abstract and spatial representations rely on a common neural fabric. I will argue that this might represent a novel example of “cortical recycling”, in which the neuronal machinery that primarily evolved in lower-level animals to represent relationships between spatial locations and navigate space is, in humans, reused to encode relationships between concepts in an internal abstract representational space of meaning.
Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory
Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.
Neuronal population interactions between brain areas
Most brain functions involve interactions among multiple, distinct areas or nuclei. Yet our understanding of how populations of neurons in interconnected brain areas communicate is in its infancy. Using a population approach, we found that interactions between early visual cortical areas (V1 and V2) occur through a low-dimensional bottleneck, termed a communication subspace. In this talk, I will focus on the statistical methods we have developed for studying interactions between brain areas. First, I will describe Delayed Latents Across Groups (DLAG), designed to disentangle concurrent, bi-directional (i.e., feedforward and feedback) interactions between areas. Second, I will describe an extension of DLAG applicable to three or more areas, and demonstrate its utility for studying simultaneous Neuropixels recordings in areas V1, V2, and V3. Our results provide a framework for understanding how neuronal population activity is gated and selectively routed across brain areas.
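The communication-subspace idea can be made concrete with a minimal reduced-rank regression sketch, the kind of analysis originally used to identify a low-dimensional bottleneck between areas. This is a simplified stand-in, not the DLAG method described above; the function name and toy data are illustrative only.

```python
import numpy as np

def reduced_rank_map(X, Y, rank):
    """Fit a rank-constrained linear map from source-area activity X
    (trials x neurons) to target-area activity Y.

    Sketch of reduced-rank regression: fit ordinary least squares, then
    project the predictions onto their top principal components. Finding
    that a low rank predicts Y nearly as well as the full-rank map is the
    signature of a communication subspace."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V = Vt[:rank].T          # top predictive dimensions in the target area
    return B_ols @ V @ V.T   # rank-limited mapping

# Toy data: "V2" activity generated from "V1" activity through a
# genuinely rank-2 map (all numbers are made up for illustration).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
B_true = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))
Y = X @ B_true
B_hat = reduced_rank_map(X, Y, rank=2)
```

In practice one sweeps `rank` and compares cross-validated prediction error against the full-rank model; the rank at which performance saturates estimates the dimensionality of the communication subspace.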
Neural Mechanisms of Subsecond Temporal Encoding in Primary Visual Cortex
Subsecond timing underlies nearly all sensory and motor activities across species and is critical to survival. While subsecond temporal information has been found across cortical and subcortical regions, it is unclear whether it is generated locally and intrinsically or is a readout of a centralized clock-like mechanism. Indeed, mechanisms of subsecond timing at the circuit level are largely obscure. Primary sensory areas are well suited to address these questions, as they have early access to sensory information and apply minimal processing to it: if temporal information is found in these regions, it is likely to be generated intrinsically and locally. We test this hypothesis by training mice to perform an audio-visual temporal pattern sensory discrimination task while using 2-photon calcium imaging, a technique capable of recording population-level activity at single-cell resolution, to record activity in primary visual cortex (V1). We found significant changes in network dynamics as mice learned the task, from naive to intermediate to expert levels. Changes in network dynamics and behavioral performance are well accounted for by an intrinsic model of timing in which the trajectory of a network through high-dimensional state space represents temporal sensory information. Conversely, while we found evidence of other temporal encoding models, such as oscillatory activity, we did not find that they accounted for increased performance; rather, they were correlated with the intrinsic model itself. These results provide insight into how subsecond temporal information is encoded mechanistically at the circuit level.
Prefrontal mechanisms involved in learning distractor-resistant working memory in a dual task
Working memory (WM) is a cognitive function that allows the short-term maintenance and manipulation of information when it is no longer accessible to the senses. It relies on temporarily storing stimulus features in the activity of neuronal populations. To protect these dynamics from distraction, it has been proposed that pre- and post-distraction population activity decomposes into orthogonal subspaces. If orthogonalization is necessary to avoid WM distraction, it should emerge as performance in the task improves. We sought evidence of the learning of WM orthogonalization, and of its underlying mechanisms, by analyzing calcium imaging data from the prelimbic (PrL) and anterior cingulate (ACC) cortices of mice as they learned to perform an olfactory dual task. The dual task combines an outer Delayed Paired-Association task (DPA) with an inner Go-NoGo task. We examined how neuronal activity reflected the process of protecting the DPA sample information against Go/NoGo distractors. As mice learned the task, we measured the overlap of neural activity with the low-dimensional subspaces that encode sample or distractor odors. Early in training, pre-distraction activity overlapped with both sample and distractor subspaces. Later in training, pre-distraction activity was strictly confined to the sample subspace, resulting in a more robust sample code. To gain mechanistic insight into how these low-dimensional WM representations evolve with learning, we built a recurrent spiking network model of excitatory and inhibitory neurons with low-rank connections. The model links learning to (1) the orthogonalization of sample and distractor WM subspaces and (2) the orthogonalization of each subspace with irrelevant inputs. We validated prediction (1) by measuring the angular distance between the sample and distractor subspaces through learning in the data.
Prediction (2) was validated in PrL through the photoinhibition of ACC to PrL inputs, which induced early-training neural dynamics in well-trained animals. In the model, learning drives the network from a double-well attractor toward a more continuous ring attractor regime. We tested signatures for this dynamical evolution in the experimental data by estimating the energy landscape of the dynamics on a one-dimensional ring. In sum, our study defines network dynamics underlying the process of learning to shield WM representations from distracting tasks.
Multisensory integration in peripersonal space (PPS) for action, perception and consciousness
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial to how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between the remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Epigenetic rewiring in Schinzel-Giedion syndrome
Throughout life, a variety of specialized cells arise to ensure the correct and timely functioning of tissues and organs. Regulation of chromatin in defining specialized genomic regions (e.g. enhancers) plays a key role in developmental transitions from progenitors into cell lineages. These enhancers, properly positioned topologically in 3D space, ultimately guide transcriptional programs. It is becoming clear that several pathologies converge on differential enhancer usage with respect to physiological situations. However, why some regulatory regions are physiologically preferred, while others can emerge in certain conditions, including other fate decisions or diseases, remains obscure. Schinzel-Giedion syndrome (SGS) is a rare disease with symptoms including severe developmental delay, congenital malformations, progressive brain atrophy, intractable seizures, and infantile death. SGS is caused by mutations in the SETBP1 gene that result in the accumulation of SETBP1 and, downstream, of SET. The oncoprotein SET has been found to be part of the histone chaperone complex INHAT, which blocks the activity of histone acetyltransferases, suggesting that SGS may (i) represent a natural model of alternative chromatin regulation and (ii) offer the chance to study downstream (mal)adaptive mechanisms. I will present our work on the characterization of SGS in appropriate experimental models, including iPSC-derived cultures and mouse models.
Beyond Volition
Voluntary actions are actions that agents choose to make. Volition is the set of cognitive processes that implement such choice and initiation. These processes are often held essential to modern societies, because they form the cognitive underpinning for concepts of individual autonomy and individual responsibility. Nevertheless, psychology and neuroscience have struggled to define volition, and have also struggled to study it scientifically. Laboratory experiments on volition, such as those of Libet, have been criticised, often rather naively, as focussing exclusively on meaningless actions, and ignoring the factors that make voluntary action important in the wider world. In this talk, I will first review these criticisms, and then look at extending scientific approaches to volition in three directions that may enrich scientific understanding of volition. First, volition becomes particularly important when the range of possible actions is large and unconstrained - yet most experimental paradigms involve minimal response spaces. We have developed a novel paradigm for eliciting de novo actions through verbal fluency, and used this to estimate the elusive conscious experience of generativity. Second, volition can be viewed as a mechanism for flexibility, by promoting adaptation of behavioural biases. This view departs from the tradition of defining volition by contrasting internally-generated actions with externally-triggered actions, and instead links volition to model-based reinforcement learning. By using the context of competitive games to re-operationalise the classic Libet experiment, we identified a form of adaptive autonomy that allows agents to reduce biases in their action choices. Interestingly, this mechanism seems not to require explicit understanding and strategic use of action selection rules, in contrast to classical ideas about the relation between volition and conscious, rational thought. 
Third, I will consider volition teleologically, as a mechanism for achieving counterfactual goals through complex problem-solving. This perspective gives volition a key role in mediating between understanding and planning on the one hand, and instrumental action on the other. Taken together, these three cognitive phenomena of generativity, flexibility, and teleology may partly explain why volition is such an important cognitive function for the organisation of human behaviour and human flourishing. I will end by discussing how this enriched view of volition can relate to individual autonomy and responsibility.
Are place cells just memory cells? Probably yes
Neurons in the rodent hippocampus appear to encode the position of the animal in physical space during movement. Individual “place cells” fire in restricted sub-regions of an environment, a feature often taken as evidence that the hippocampus encodes a map of space that subserves navigation. But these same neurons exhibit complex responses to many other variables that defy explanation by position alone, and the hippocampus is known to be more broadly critical for memory formation. Here we elaborate and test a theory of hippocampal coding which produces place cells as a general consequence of efficient memory coding. We constructed neural networks that actively exploit the correlations between memories in order to learn compressed representations of experience. Place cells readily emerged in the trained model, due to the correlations in sensory input between experiences at nearby locations. Notably, these properties were highly sensitive to the compressibility of the sensory environment, with place field size and population coding level in dynamic opposition to optimally encode the correlations between experiences. The effects of learning were also strongly biphasic: nearby locations are represented more similarly following training, while locations with intermediate similarity become increasingly decorrelated, both distance-dependent effects that scaled with the compressibility of the input features. Using virtual reality and 2-photon functional calcium imaging in head-fixed mice, we recorded the simultaneous activity of thousands of hippocampal neurons during virtual exploration to test these predictions. Varying the compressibility of sensory information in the environment produced systematic changes in place cell properties that reflected the changing input statistics, consistent with the theory. We similarly identified representational plasticity during learning, which produced a distance-dependent exchange between compression and pattern separation.
These results motivate a more domain-general interpretation of hippocampal computation, one that is naturally compatible with earlier theories on the circuit's importance for episodic memory formation. Work done in collaboration with James Priestley, Lorenzo Posani, Marcus Benna, Attila Losonczy.
Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings
Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is constrained by a sparse connectome. We find that seRNNs, similar to primate cerebral cortices, naturally converge on solving inferences using modular small-world networks, in which functionally similar units spatially configure themselves to utilize an energetically-efficient mixed-selective code. As all these features emerge in unison, seRNNs reveal how many common structural and functional brain motifs are strongly intertwined and can be attributed to basic biological optimization processes. seRNNs can serve as model systems to bridge between structural and functional research communities to move neuroscientific understanding forward.
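The core constraint can be illustrated with a toy wiring cost that penalizes connection strength in proportion to the Euclidean distance between units; minimized alongside a task loss, such a term favors sparse, local connectivity. This is a hedged sketch of the kind of regularization involved, not necessarily the seRNN paper's exact formulation; the function name and coordinates below are mine.

```python
import numpy as np

def spatial_wiring_cost(W, coords, strength=0.01):
    """L1 wiring cost weighted by Euclidean distance between units.

    W      : (n, n) recurrent weight matrix
    coords : (n, 3) unit positions in 3D Euclidean space
    Penalizing |W_ij| * d_ij alongside the task loss makes long-range
    connections expensive, pushing the network toward sparse, spatially
    local connectivity (illustrative sketch only)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return strength * float(np.sum(np.abs(W) * d))

# A long-range connection costs more than a same-strength short-range one.
W = np.array([[0.0, 1.0],
              [0.0, 0.0]])
near = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
far = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
cost_near = spatial_wiring_cost(W, near)
cost_far = spatial_wiring_cost(W, far)
```

During training, the total objective would be the task loss plus this term, so gradient descent trades task performance against wiring economy, the resource limitation the abstract describes.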
Automated generation of face stimuli: Alignment, features and face spaces
I describe a well-tested Python module that performs automated alignment and warping of face images, and some of its advantages over existing solutions. An additional tool I have developed performs automated extraction of facial features, which can be used in a number of interesting ways. I illustrate the value of wavelet-based features with a brief description of two recent studies: perceptual in-painting, and the robustness of the whole-part advantage across a large stimulus set. Finally, I discuss the suitability of various deep learning models for generating stimuli to study perceptual face spaces. I believe those interested in the forensic aspects of face perception may find this talk useful.
Implications of Vector-space models of Relational Concepts
Vector-space models are used frequently to compare similarity and dimensionality among entity concepts. What happens when we apply these models to relational concepts? What is the evidence that such models do apply to relational concepts? If we use such a model, then one implication is that maximizing surface feature variation should improve relational concept learning. For example, in STEM instruction, the effectiveness of teaching by analogy is often limited by students’ focus on superficial features of the source and target exemplars. However, in contrast to the prediction of the vector-space computational model, the strategy of progressive alignment (moving from perceptually similar to different targets) has been suggested to address this issue (Gentner & Hoyos, 2017), and human behavioral evidence has shown benefits from progressive alignment. Here I will present some preliminary data that support the computational approach. Participants were explicitly instructed to match stimuli based on relations while the perceptual similarity of the stimuli varied parametrically. We found that lower perceptual similarity reduced accurate relational matching. This finding demonstrates that perceptual similarity may interfere with relational judgements, but also hints at why progressive alignment may be effective. These are preliminary, exploratory data, and I hope to receive feedback on the framework and to start a discussion in the group on the utility of vector-space models for relational concepts in general.
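One way to make the vector-space framing concrete: represent a relation as a difference between entity embeddings and compare relations by cosine similarity. All vectors below are hypothetical, chosen only to show how a relation can match across pairs whose surface features differ.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical entity embeddings (made-up numbers): the last coordinate
# encodes size, the first two encode surface features of the entity.
big_dog   = np.array([1.0, 0.2, 0.9])
small_dog = np.array([1.0, 0.2, 0.1])
big_cat   = np.array([0.1, 1.0, 0.9])
small_cat = np.array([0.1, 1.0, 0.1])

# A relation as a difference vector: "bigger-than" is identical across
# the two pairs even though the entities themselves differ.
rel_dogs = big_dog - small_dog
rel_cats = big_cat - small_cat
relation_match = cosine(rel_dogs, rel_cats)  # high: same relation
surface_match = cosine(big_dog, big_cat)     # lower: different entities
```

In such a model, varying surface features while holding the relation fixed leaves the relation vectors aligned, which is the intuition behind the prediction that surface variation should aid relational learning.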
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts without any prior visual experience of those concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of the manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
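The prototype decision rule is simple enough to sketch directly. The minimal illustration below uses made-up 2D feature vectors rather than neural recordings, and the function name is mine, not the authors' pipeline.

```python
import numpy as np

def prototype_classify(train_by_class, test_points):
    """Nearest-prototype rule: each concept's prototype is the mean of
    its few training examples in the feature (firing-rate) space; a test
    point is assigned to the class with the closest prototype."""
    labels = sorted(train_by_class)
    protos = np.stack([np.mean(train_by_class[c], axis=0) for c in labels])
    d = np.linalg.norm(test_points[:, None, :] - protos[None, :, :], axis=-1)
    return [labels[i] for i in d.argmin(axis=1)]

# Two toy concept clusters in a 2D feature space, three "shots" each
# (coordinates are invented for illustration).
train = {
    "cat": np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1]]),
    "dog": np.array([[3.0, 3.0], [3.2, 2.9], [2.9, 3.1]]),
}
test = np.array([[0.1, 0.1], [3.1, 3.0]])
pred = prototype_classify(train, test)
```

The geometric theory in the abstract then asks when this rule succeeds: intuitively, when each concept's manifold is tightly circumscribed relative to the distance between prototypes.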
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points is completely determined by two distinct ingredients: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
Modeling shared and variable information encoded in fine-scale cortical topographies
Information is encoded in fine-scale functional topographies that vary from brain to brain. Hyperalignment models information that is shared across brains in a high-dimensional common information space. Hyperalignment transformations project idiosyncratic individual topographies into the common model information space. These transformations contain topographic basis functions, affording estimates of how shared information in the common model space is instantiated in the idiosyncratic functional topographies of individual brains. This new model of the functional organization of cortex – as multiplexed, overlapping basis functions – captures the idiosyncratic conformations of both coarse-scale topographies, such as retinotopy and category selectivity, and fine-scale topographies. Hyperalignment also makes it possible to investigate how the information encoded in fine-scale topographies differs across brains. These individual differences in fine-grained cortical function were not accessible with previous methods.
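Hyperalignment transformations are commonly estimated as orthogonal (Procrustes) maps between a subject's feature space and the common space. A minimal sketch under that assumption, with synthetic data standing in for neural response profiles (the shapes and the noiseless setup are illustrative simplifications):

```python
import numpy as np

def procrustes_transform(subject_data, common_space):
    """Orthogonal matrix R minimizing ||subject_data @ R - common_space||_F.
    Rows are time points (shared stimuli), columns are voxels/features."""
    u, _, vt = np.linalg.svd(subject_data.T @ common_space)
    return u @ vt

# Toy demo: a "subject" whose topography is an orthogonally rotated copy
# of the common-space template, so alignment can recover it exactly.
rng = np.random.default_rng(1)
template = rng.normal(size=(50, 3))           # time points x features
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal "topography"
subject = template @ q.T

R = procrustes_transform(subject, template)
print(np.allclose(subject @ R, template))     # True: shared info recovered
```

Real data are noisy and high-dimensional, so the recovered transformation is approximate, but the columns of R play the role of the topographic basis functions described above.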
Flexible selection of task-relevant features through population gating
Brains can gracefully weed out irrelevant stimuli to guide behavior. This feat is believed to rely on a progressive selection of task-relevant stimuli across the cortical hierarchy, but the specific across-area interactions enabling stimulus selection are still unclear. Here, we propose that population gating, occurring within A1 but controlled by top-down inputs from mPFC, can support across-area stimulus selection. Examining single-unit activity recorded while rats performed an auditory context-dependent task, we found that A1 encoded relevant and irrelevant stimuli along a common dimension of its neural space. Yet, the relevant stimulus encoding was enhanced along an extra dimension. In turn, mPFC encoded only the stimulus relevant to the ongoing context. To identify candidate mechanisms for stimulus selection within A1, we reverse-engineered low-rank RNNs trained on a similar task. Our analyses predicted that two context-modulated neural populations gated their preferred stimulus in opposite contexts, which we confirmed in further analyses of A1. Finally, we show in a two-region RNN how population gating within A1 could be controlled by top-down inputs from PFC, enabling flexible across-area communication despite fixed inter-areal connectivity.
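The population-gating idea can be caricatured as context-dependent gains on two stimulus-preferring populations (scalar stimuli and hard 0/1 gains are simplifying assumptions made here for illustration; the actual mechanism emerges in trained low-rank RNNs):

```python
def gated_readout(stim_a, stim_b, context):
    """Two populations each prefer one stimulus; a top-down contextual
    signal gates which population transmits, so only the relevant
    stimulus reaches the downstream readout."""
    gain_a = 1.0 if context == "A" else 0.0  # top-down gain (hypothetical)
    gain_b = 1.0 - gain_a                    # the other population is gated off
    return gain_a * stim_a + gain_b * stim_b

print(gated_readout(0.8, -0.3, "A"))  # context A passes stimulus A: 0.8
print(gated_readout(0.8, -0.3, "B"))  # context B passes stimulus B: -0.3
```

The key point mirrored here is that selection requires no change in connectivity, only a change in which population is active in each context.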
Neural Dynamics of Cognitive Control
Cognitive control guides behavior by controlling what, where, and how information is represented in the brain. Perhaps the most well-studied form of cognitive control has been ‘attention’, which controls how external sensory stimuli are represented in the brain. In contrast, the neural mechanisms controlling the selection of representations held ‘in mind’, in working memory, are unknown. In this talk, I will present evidence that the prefrontal cortex controls working memory by selectively enhancing and transforming the contents of working memory. In particular, I will show how the neural representation of the content of working memory changes over time, moving between different ‘subspaces’ of the neural population. These dynamics may play a critical role in controlling what and how neural representations are acted upon.
A premotor amodal clock for rhythmic tapping
We recorded and analyzed the population activity of hundreds of neurons in the medial premotor cortex (MPC) of rhesus monkeys performing an isochronous tapping task guided by brief flashing stimuli or auditory tones. The animals showed a strong bias towards visual metronomes, with rhythmic tapping that was more precise and accurate than for auditory metronomes. The population dynamics in state space, as well as the corresponding neural sequences, shared the following properties across modalities: the circular dynamics of the neural trajectories and the neural sequences formed a regenerating loop for every produced interval, producing a relative-time representation; the trajectories converged to a similar state-space region at tapping times, while the moving bumps restarted at this point, resetting the beat-based clock; and the tempo of the synchronized tapping was encoded by a combination of amplitude modulation and temporal scaling of the neural trajectories. In addition, the modality induced a displacement of the neural trajectories into auditory and visual subspaces without greatly altering the timekeeping mechanism. These results suggest that the interaction between the amodal internal representation of pulse within MPC and a modality-specific external input generates a neural rhythmic clock whose dynamics define the temporal execution of tapping with auditory and visual metronomes.
On the link between conscious function and general intelligence in humans and machines
In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has sidestepped spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Building on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. 
Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
The multimodal number sense: spanning space, time, sensory modality, and action
Humans and other animals can rapidly estimate the number of items in a scene, of flashes or tones in a sequence, and of motor actions. Adaptation techniques provide clear evidence in humans for the existence of specialized numerosity mechanisms that make up the number sense. This sense of number is truly general, encoding the numerosity of both spatial arrays and sequential sets, in vision and audition, and interacting strongly with action. The adaptation (cross-sensory and cross-format) acts on sensory mechanisms rather than decisional processes, pointing to a truly general sense of number.
Social Curiosity
In this lecture, I would like to share with the broad audience the empirical results gathered and the theoretical advancements made in the framework of the Lendület project entitled ’The cognitive basis of human sociality’. The main objective of this project was to understand the mechanisms that enable the unique sociality of humans, from the angle of cognitive science. In my talk, I will focus on recent empirical evidence in the study of three fundamental social cognitive functions (social categorization, theory of mind and social learning; mainly from the empirical lenses of developmental psychology) in order to outline a theory that emphasizes the need to consider their interconnectedness. The proposal is that the ability to represent the social world along categories and the capacity to read others’ minds are used in an integrated way to efficiently assess the epistemic states of fellow humans by creating a shared representational space. The emergence of this shared representational space is both the result of and a prerequisite to efficient learning about the physical and social environment.
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, defining a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
The peripheral airways in asthma: significance, assessment, and targeted treatment
The peripheral airways are technically challenging to assess and have been overlooked in the assessment of chronic respiratory diseases such as asthma, in both the clinical and research space. Evidence of the importance of the small airways in asthma is building, and small-airways dysfunction is implicated in poor asthma control, airway hyperresponsiveness, and exacerbation risk. The aim of this research was to complete comprehensive global, regional, and spatial assessments of airway function and ventilation in asthma using physiological and MRI techniques. Specific ventilation imaging (SVI) and phase-resolved functional lung imaging (PREFUL) formed the spatial assessments. SVI uses oxygen as a contrast agent and assesses ventilation heterogeneity from the rate of change of the signal, whereas PREFUL is a completely contrast-free technique that uses Fourier decomposition to determine fractional ventilation.
Analogy and Spatial Cognition: How and Why they matter for STEM learning
“Space is the universal donor for relations” (Gentner, 2014). This quote is the foundation of my talk. I will explore how and why visual representations and analogies are related. I will also explore how considering the relation between analogy and spatial reasoning can shed light on why and how spatial thinking is correlated with learning in STEM fields. For example, I will consider children’s number sense and learning of the number line from the perspective of analogical reasoning.
Theories of consciousness: beyond the first/higher-order distinction
Theories of consciousness are commonly grouped into "first-order" and "higher-order" families. As conventional wisdom has it, many more animals are likely to be conscious if a first-order theory is correct. But two recent developments have put pressure on the first/higher-order distinction. One is the argument (from Shea and Frith) that an effective global workspace mechanism must involve a form of metacognition. The second is Lau's "perceptual reality monitoring" (PRM) theory, a member of the "higher-order" family in which conscious sensory content is not re-represented, only tagged with a temporal index and marked as reliable. I argue that the first/higher-order distinction has become so blurred that it is no longer particularly useful. Moreover, the conventional wisdom about animals should not be trusted. It could be, for example, that the distribution of PRM in the animal kingdom is wider than the distribution of global broadcasting.
Spontaneous Emergence of Computation in Network Cascades
Neuronal network computation, and computation by avalanche-supporting networks, are of interest to the fields of physics, computer science (computation theory as well as statistical and machine learning), and neuroscience. Here we show that the computation of complex Boolean functions arises spontaneously in threshold networks as a function of connectivity and antagonism (inhibition), computed by logic automata (motifs) in the form of computational cascades. We explain the emergent inverse relationship between the computational complexity of the motifs and the probabilities of the functions they compute, and relate it to symmetry in function space. We also show that the optimal fraction of inhibition observed here is consistent with results in computational neuroscience on optimal information processing.
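As a toy instance of a Boolean function computed by a cascade of threshold units with antagonism, the hand-wired motif below carves XOR out of OR using a single inhibitory connection (the wiring is chosen for illustration; in the study such motifs emerge spontaneously rather than being designed):

```python
import numpy as np

def threshold_unit(inputs, weights, theta):
    """Binary threshold gate: fires iff the weighted input reaches theta."""
    return int(np.dot(inputs, weights) >= theta)

def xor_cascade(x1, x2):
    """Two-layer cascade: an excitatory OR unit and an AND unit feed a
    readout; the AND unit inhibits the readout (negative weight), so the
    readout computes XOR = OR AND NOT AND."""
    or_unit  = threshold_unit([x1, x2], [1, 1], 1)   # fires if any input is on
    and_unit = threshold_unit([x1, x2], [1, 1], 2)   # fires only if both are on
    return threshold_unit([or_unit, and_unit], [1, -1], 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_cascade(a, b))
```

XOR is linearly non-separable, so no single threshold unit can compute it; the cascade with inhibition is the minimal structure that can, which is the flavor of complexity the abstract attributes to motifs.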
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
Invariant neural subspaces maintained by feedback modulation
This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.
Exploration-Based Approach for Computationally Supported Design-by-Analogy
Engineering designers practice design-by-analogy (DbA) during concept generation to retrieve knowledge from external sources or memory as inspiration to solve design problems. DbA is a tool for innovation that involves retrieving analogies from a source domain and transferring the knowledge to a target domain. While DbA produces innovative results, designers often come up with analogies by themselves or through serendipitous, random encounters. Computational support systems for searching analogies have been developed to facilitate DbA in systematic design practice. However, many systems have focused on a query-based approach, in which a designer inputs a keyword or a query function and is returned a set of algorithmically determined stimuli. In this presentation, a new analogical retrieval process that leverages a visual interaction technique is introduced. It enables designers to explore a space of analogies, rather than be constrained by what’s retrieved by a query-based algorithm. With an exploration-based DbA tool, designers have the potential to uncover more useful and unexpected inspiration for innovative design solutions.
Peripersonal space (PPS) as a primary interface for self-environment interactions
Peripersonal space (PPS) defines the portion of space where interactions between our body and the external environment are most likely to occur. There is no physical boundary separating PPS from extrapersonal space; rather, PPS is continuously constructed by a dedicated neural system that integrates external stimuli and tactile stimuli on the body as a function of their potential interaction. This mechanism represents a primary interface between the individual and the environment. In this talk, I will present the most recent evidence and highlight the current debate about the neural and computational mechanisms of PPS and its main functions and properties. I will discuss novel data showing how PPS dynamically adapts to optimize body-environment interactions. I will describe a novel electrophysiological paradigm to study and measure PPS, and show how it has been used to search for a basic marker of the potential for self-environment interactions in newborns and in patients with disorders of consciousness. Finally, I will discuss how PPS is also involved in, and in turn shaped by, social interactions. Within this framework, I will discuss how PPS plays a key role in self-consciousness.
Where do problem spaces come from? On metaphors and representational change
The challenges of problem solving do not lie exclusively in how to perform heuristic search; they begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem solving research, for the most part it has been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say on this. In this talk, I will present a study designed to explore this issue through a task whose main challenge lies in finding and refining an adequate problem representation. In this exploratory case study, we investigated how pairs of participants acquaint themselves with a complex spatial transformation task, iterated mental paper folding, over the course of several days. Participants have to understand the geometry of the edges that emerges when repeatedly mentally folding a sheet of paper in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations.
Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.
Heading perception in crowded environments
Self-motion through a visual world creates a pattern of expanding visual motion called optic flow. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other humans introduce independent motion into the scene. The biological motion of human walkers consists of translation through space and the associated limb articulation; the characteristic motion pattern is regular, though complex. A world full of humans moving around is nonrigid, causing heading errors. Limb articulation alone, however, does not perturb the global structure of the flow field and therefore matches the rigidity assumption. If heading is perceived through optic flow analysis, limb articulation alone should not impair heading estimates. Yet we observed heading biases when participants encountered a group of point-light walkers. Our research investigates the interactions between optic flow perception and biological motion perception. We further analyze the impact of environmental information.
An investigation of perceptual biases in spiking recurrent neural networks trained to discriminate time intervals
Magnitude estimation and stimulus discrimination tasks are affected by perceptual biases that cause the stimulus parameter to be perceived as shifted toward the mean of its distribution. These biases have been extensively studied in psychophysics and, more recently and to a lesser extent, with neural activity recordings. New computational techniques allow us to train spiking recurrent neural networks on the tasks used in the experiments. This provides us with another valuable tool with which to investigate the network mechanisms responsible for the biases and how the behavior could be modeled. As an example, in this talk I will consider networks trained to discriminate the durations of temporal intervals. The trained networks exhibited the contraction bias, even though they were trained with a stimulus sequence without temporal correlations. The neural activity during the delay period carried information about the stimuli of the current and previous trials, one of the mechanisms giving rise to the contraction bias. The population activity described trajectories in a low-dimensional space, and their relative locations depended on the prior distribution. The results can be modeled with an ideal observer that, during the delay period, sees a combination of the current and previous stimuli. Finally, I will describe how the neural trajectories in state space encode an estimate of the interval duration. The approach could be applied to other cognitive tasks.
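The ideal-observer account of the contraction bias can be caricatured in a few lines: the percept is a weighted average of the current stimulus and the mean of the stimulus distribution (the prior mean and the weight below are arbitrary illustrative values, not the fitted model from the study):

```python
def contracted_estimate(duration, prior_mean, w_prior=0.3):
    """Ideal-observer-style estimate: a weighted average of the current
    stimulus and the prior mean, which shifts percepts toward the mean
    of the stimulus distribution (the contraction bias)."""
    return (1 - w_prior) * duration + w_prior * prior_mean

prior_mean = 500.0  # ms, hypothetical mean of the interval distribution
print(contracted_estimate(300.0, prior_mean))  # ~360: pulled up toward the mean
print(contracted_estimate(700.0, prior_mean))  # ~640: pulled down toward the mean
```

Short intervals are thus overestimated and long intervals underestimated, which is the behavioral signature the trained spiking networks reproduced.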
In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex
My group pioneered the development of a novel intracortical brain computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control Functional Electrical Stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from the multiple neuron recordings. These signals can be used to provide a stable prediction of the animal’s behavior over periods of many months, and they may also provide the means to implement methods of transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space of a broad range of behaviors, including well-learned, stereotyped movements in the lab, and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without the need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.
The impact of spaceflight on sleep and circadian rhythms
What happens to human sleep and circadian rhythms in space? There are many challenges that affect sleep in space, including unusual patterns of light exposure and the influence of microgravity. This talk will review the causes and consequences of sleep loss and circadian misalignment during spaceflight and will discuss how missions to the Moon and Mars will differ from missions to the International Space Station.
Geometry of sequence working memory in macaque prefrontal cortex
How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
The functional connectome across temporal scales
The view of human brain function has drastically shifted over the last decade, owing to the observation that the majority of brain activity is intrinsic rather than driven by external stimuli or cognitive demands. Specifically, all brain regions continuously communicate in spatiotemporally organized patterns that constitute the functional connectome, with consequences for cognition and behavior. In this talk, I will argue that another shift is underway, driven by new insights from synergistic interrogation of the functional connectome using different acquisition methods. The human functional connectome is typically investigated with functional magnetic resonance imaging (fMRI) that relies on the indirect hemodynamic signal, thereby emphasizing very slow connectivity across brain regions. Conversely, more recent methodological advances demonstrate that fast connectivity within the whole-brain connectome can be studied with real-time methods such as electroencephalography (EEG). Our findings show that combining fMRI with scalp or intracranial EEG in humans, especially when recorded concurrently, paints a rich picture of neural communication across the connectome. Specifically, the connectome comprises both fast, oscillation-based connectivity observable with EEG, as well as extremely slow processes best captured by fMRI. While the fast and slow processes share an important degree of spatial organization, these processes unfold in a temporally independent manner. Our observations suggest that fMRI and EEG may be envisaged as capturing distinct aspects of functional connectivity, rather than intermodal measurements of the same phenomenon. Infraslow fluctuation-based and rapid oscillation-based connectivity of various frequency bands constitute multiple dynamic trajectories through a shared state space of discrete connectome configurations. 
The multitude of flexible trajectories may concurrently enable functional connectivity across multiple independent sets of distributed brain regions.
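The claim that fMRI-scale and EEG-scale connectivity capture distinct aspects of coupling can be illustrated with a deliberately crude toy (our construction, not the talk's analysis pipeline): two simulated regions share a slow drive but have independent fast fluctuations, so a connectivity estimate depends entirely on the timescale at which it is computed.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 20000

def smooth(x, w):
    """Boxcar low-pass filter of window length w."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Shared infraslow drive plus independent fast fluctuations per region.
slow = smooth(rng.normal(size=T), 200)
region_a = slow + 0.5 * rng.normal(size=T)
region_b = slow + 0.5 * rng.normal(size=T)

def conn(x, y):
    """Pearson-correlation connectivity between two time series."""
    return np.corrcoef(x, y)[0, 1]

# Slow-timescale connectivity (an fMRI-like view) is strong...
slow_conn = conn(smooth(region_a, 200), smooth(region_b, 200))
# ...while the residual fast activity (an EEG-timescale view) is uncoupled.
fast_conn = conn(region_a - smooth(region_a, 200),
                 region_b - smooth(region_b, 200))
print(f"slow connectivity: {slow_conn:.2f}, fast connectivity: {fast_conn:.2f}")
```

In the empirical findings both fast and slow connectivity exist but unfold independently; the toy shows only the narrower point that the same signal pair can yield very different connectivity at different temporal scales, which is why multimodal acquisition matters.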
Mechanisms of visual circuit development: aligning topographic maps of space
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has been proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. 
Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
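The core ingredient of the networks above, a low-rank constraint on the recurrent connectivity, can be sketched minimally. This is our simplification, not the study's trained networks: a rank-1 recurrent network whose self-sustained activity is confined to the single dimension spanned by its connectivity vector, while all orthogonal components of the state decay away.

```python
import numpy as np

# Minimal sketch of low-rank recurrent dynamics: J = (g/N) m m^T confines
# the autonomous activity to the one-dimensional subspace spanned by m.
rng = np.random.default_rng(1)
N, g, dt, tau, T = 200, 2.0, 0.1, 1.0, 500

m = rng.normal(size=N)
J = (g / N) * np.outer(m, m)          # rank-1 connectivity

x = rng.normal(size=N)                # random initial condition
for _ in range(T):
    # Euler step of tau * dx/dt = -x + J @ tanh(x)
    x = x + dt / tau * (-x + J @ np.tanh(x))

# Decompose the final state into components along m and orthogonal to it:
# recurrent input is always parallel to m, so orthogonal activity only leaks.
along = (m @ x) / (m @ m) * m
ortho = x - along
print(f"|x_along| = {np.linalg.norm(along):.2f}, "
      f"|x_ortho| = {np.linalg.norm(ortho):.2e}")
```

In the trained networks of the study the low-rank structure is learned and modulated by contextual inputs; this sketch only shows why low-rank connectivity yields low-dimensional dynamics in the first place.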
Turning spikes to space: The storage capacity of tempotrons with plastic synaptic dynamics
Neurons in the brain communicate through action potentials (spikes) that are transmitted through chemical synapses. Over the last decades, the question of how networks of spiking neurons represent and process information has remained an important challenge. Some progress has resulted from a recent family of supervised learning rules (tempotrons) for models of spiking neurons. However, these studies have viewed synaptic transmission as static and characterized synaptic efficacies as scalar quantities that change only on the slow time scales of learning across trials but remain fixed on the fast time scales of information processing within a trial. By contrast, signal transduction at chemical synapses in the brain results from complex molecular interactions between multiple biochemical processes whose dynamics result in substantial short-term plasticity of most connections. Here we study the computational capabilities of spiking neurons whose synapses are dynamic and plastic, such that each individual synapse can learn its own dynamics. We derive tempotron learning rules for current-based leaky integrate-and-fire neurons with different types of dynamic synapses. Introducing ordinal synapses, whose efficacies depend only on the order of input spikes, we establish an upper capacity bound for spiking neurons with dynamic synapses. We compare this bound to independent synapses, static synapses and the well-established phenomenological Tsodyks-Markram model. We show that synaptic dynamics in principle allow the storage capacity of spiking neurons to scale with the number of input spikes and that this increase in capacity can be traded for greater robustness to input noise, such as spike time jitter.
Our work highlights the feasibility of a novel computational paradigm for spiking neural circuits with plastic synaptic dynamics: Rather than being determined by the fixed number of afferents, the dimensionality of a neuron's decision space can be scaled flexibly through the number of input spikes emitted by its input layer.
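The building blocks named above (a current-based leaky integrate-and-fire neuron and Tsodyks-Markram-style short-term dynamics) can be sketched directly. All parameter values here are illustrative assumptions, not the paper's; the point is only that each presynaptic spike releases a fraction of a depletable resource, so successive synaptic efficacies depress within a trial.

```python
import numpy as np

# Illustrative parameters (our assumptions): times in ms.
dt, tau_m, tau_s = 0.1, 10.0, 5.0      # step, membrane, synaptic constants
tau_rec, U = 100.0, 0.5                # TM recovery time, release fraction
v_thresh, v_reset, w = 0.2, 0.0, 2.0   # threshold, reset, synaptic weight

spike_times = np.arange(5.0, 100.0, 10.0)   # regular presynaptic train
n_steps = int(120.0 / dt)

v, I, x = 0.0, 0.0, 1.0                # voltage, syn. current, TM resource
out_spikes, efficacies = [], []
for step in range(n_steps):
    t = step * dt
    if np.any(np.isclose(spike_times, t, atol=dt / 2)):
        efficacies.append(U * x)       # released efficacy this spike
        I += w * U * x                 # inject postsynaptic current
        x -= U * x                     # deplete the synaptic resource
    x += dt * (1.0 - x) / tau_rec      # resource recovery between spikes
    I += dt * (-I / tau_s)             # synaptic current decay
    v += dt * (-v + I) / tau_m         # leaky membrane integration
    if v >= v_thresh:                  # output spike and reset
        out_spikes.append(t)
        v = v_reset

# Short-term depression: each successive efficacy is smaller than the last.
print("efficacies:", np.round(efficacies, 3))
```

Early, strong EPSCs drive the neuron to threshold while the depressed late ones do not, so the synapse's dynamics, not just its scalar weight, shape what the neuron signals about the input train.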
A Panoramic View on Vision
Statistics of natural scenes are not uniform - their structure varies dramatically from ground to sky. It remains unknown whether these non-uniformities are reflected in the large-scale organization of the early visual system and what benefits such adaptations would confer. By deploying an efficient coding argument, we predict that changes in the structure of receptive fields across visual space increase the efficiency of sensory coding. To test this experimentally, we developed a simple, novel imaging system that is indispensable for studies at this scale. In agreement with our predictions, we showed that receptive fields of retinal ganglion cells change their shape along the dorsoventral axis, with a marked surround asymmetry at the visual horizon. Our work demonstrates that, in accordance with principles of efficient coding, the panoramic structure of natural scenes is exploited by the retina across space and cell types.
Integrators in short- and long-term memory
The accumulation and storage of information in memory is a fundamental computation underlying animal behavior. In many brain regions and task paradigms, ranging from motor control to navigation to decision-making, such accumulation is accomplished through neural integrator circuits that enable external inputs to move a system’s population-wide patterns of neural activity along a continuous attractor. In the first portion of the talk, I will discuss our efforts to dissect the circuit mechanisms underlying a neural integrator from a rich array of anatomical, physiological, and perturbation experiments. In the second portion of the talk, I will show how the accumulation and storage of information in long-term memory may also be described by attractor dynamics, but now within the space of synaptic weights rather than neural activity. Altogether, this work suggests a conceptual unification of seemingly distinct short- and long-term memory processes.
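The defining property of a neural integrator, accumulation of input along a continuous attractor with persistence afterwards, can be shown with a one-dimensional toy (a textbook-style sketch under our own assumed parameters, not the circuit model dissected in the talk): the recurrent gain exactly cancels the leak, so the rate variable integrates input pulses and then holds its value.

```python
import numpy as np

def run(w_rec, pulse=0.01, dt=1.0, tau=100.0, steps=600):
    """Euler simulation of dr/dt = (w_rec - 1) * r / tau + I(t)."""
    r, trace = 0.0, []
    for step in range(steps):
        I = pulse if 100 <= step < 200 else 0.0   # transient input pulse
        r += dt * ((w_rec - 1.0) * r / tau + I)
        trace.append(r)
    return np.array(trace)

tuned = run(w_rec=1.0)   # leak exactly cancelled: a line attractor
leaky = run(w_rec=0.5)   # mistuned recurrence: the stored value decays

print(f"tuned integrator holds: r(end) = {tuned[-1]:.3f}")
print(f"leaky circuit forgets:  r(end) = {leaky[-1]:.3f}")
```

The same logic carries over to the talk's second theme: replace the rate variable by synaptic weights and the attractor picture describes persistence in long-term memory rather than neural activity.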
Exact coherent structures and transition to turbulence in a confined active nematic
Active matter describes a class of systems that are maintained far from equilibrium by driving forces acting on the constituent particles. Here I will focus on confined active nematics, which exhibit especially rich flow behavior, ranging from structured patterns in space and time to disordered turbulent flows. To understand this behavior, I will take a deterministic dynamical systems approach, beginning with the hydrodynamic equations for the active nematic. This approach reveals that the infinite-dimensional phase space of all possible flow configurations is populated by Exact Coherent Structures (ECS), which are exact solutions of the hydrodynamic equations with distinct and regular spatiotemporal structure; examples include unstable equilibria, periodic orbits, and traveling waves. The ECS are connected by dynamical pathways called invariant manifolds. The main hypothesis in this approach is that turbulence corresponds to a trajectory meandering in the phase space, transitioning between ECS by traveling on the invariant manifolds. Similar approaches have been successful in characterizing high Reynolds number turbulence of passive fluids. Here, I will present the first systematic study of active nematic ECS and their invariant manifolds and discuss their role in characterizing the phenomenon of active turbulence.
Invariant neural subspaces maintained by feedback modulation
Sensory systems reliably process incoming stimuli in spite of changes in context. Most recent models attribute this context invariance to an extraction of increasingly complex sensory features in hierarchical feedforward networks. Here, we study how context-invariant representations can be established by feedback rather than feedforward processing. We show that feedforward neural networks modulated by feedback can dynamically generate invariant sensory representations. The required feedback can be implemented as a slow and spatially diffuse gain modulation. The invariance is not present on the level of individual neurons, but emerges only on the population level. Mechanistically, the feedback modulation dynamically reorients the manifold of neural activity and thereby maintains an invariant neural subspace in spite of contextual variations. Our results highlight the importance of population-level analyses for understanding the role of feedback in flexible sensory processing.
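A drastically reduced sketch of this idea (our schematic, not the paper's network, which learns a slow and diffuse modulation rather than an exact per-neuron inverse): a contextual gain change rescales each neuron's response, so single-neuron tuning differs across contexts, yet a compensating feedback gain restores the population representation and with it any readout subspace.

```python
import numpy as np

# Schematic: feedback gain modulation undoing a contextual gain change.
rng = np.random.default_rng(2)
n_neurons, n_stim = 50, 20

encoder = rng.normal(size=(n_neurons, 2))          # fixed feedforward tuning
stimuli = rng.normal(size=(2, n_stim))             # 2-D stimulus variable

context_gain = np.exp(rng.normal(size=n_neurons))  # multiplicative context
x_default = encoder @ stimuli                      # responses, context A
x_context = context_gain[:, None] * x_default      # responses, context B

# Feedback sets each neuron's gain to the inverse of the contextual one
# (idealized; the paper's feedback is slow and spatially diffuse).
feedback_gain = 1.0 / context_gain
x_corrected = feedback_gain[:, None] * x_context

print("max deviation after feedback:",
      np.abs(x_corrected - x_default).max())
```

The exact inversion makes the invariance trivial here; the substantive result of the paper is that an approximate, slow, diffuse modulation suffices when invariance is only required at the population-subspace level.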
Robustness in spiking networks: a geometric perspective
Neural systems are remarkably robust against various perturbations, a phenomenon that still requires a clear explanation. Here, we graphically illustrate how neural networks can become robust. We study spiking networks that generate low-dimensional representations, and we show that the neurons’ subthreshold voltages are confined to a convex region in a lower-dimensional voltage subspace, which we call a ‘bounding box.’ Any changes in network parameters (such as number of neurons, dimensionality of inputs, firing thresholds, synaptic weights, or transmission delays) can all be understood as deformations of this bounding box. Using these insights, we show that functionality is preserved as long as perturbations do not destroy the integrity of the bounding box. We suggest that the principles underlying robustness in these networks—low-dimensional representations, heterogeneity of tuning, and precise negative feedback—may be key to understanding the robustness of neural systems at the circuit level.
How does a neuron decide when and where to make a synapse?
Precise synaptic connectivity is a prerequisite for the function of neural circuits, yet individual neurons, taken out of their developmental context, readily form unspecific synapses. How does genetically encoded brain wiring deal with this apparent contradiction? Brain wiring is a developmental growth process that is characterized not only by precision, but also by flexibility and robustness. As in any other growth process, cellular interactions are restricted in space and time. Correspondingly, molecular and cellular interactions are restricted to those that 'get to see' each other during development. This seminar will explore the question of how neurons decide when and where to make synapses, using the Drosophila visual system as a model. New findings reveal that pattern formation during growth and the kinetics of live neuronal interactions restrict synapse formation and partner choice for neurons that are not otherwise prevented from making incorrect synapses in this system. For example, cell biological mechanisms like autophagy as well as developmental temperature restrict inappropriate partner choice through a process of kinetic exclusion that critically contributes to wiring specificity. The seminar will discuss these and other neuronal strategies for deciding when and where to make synapses during developmental growth, strategies that contribute to precise, flexible and robust outcomes in brain wiring.
The effect of gravity on the perception of distance and self-motion: a multisensory perspective
Gravity is a constant in our lives. It provides an internalized reference to which all other perceptions are related. We can experimentally manipulate the relationship between physical gravity and other cues to the direction of “up” using virtual reality - with either HMDs or specially built tilting environments - to explore how gravity contributes to perceptual judgements. The effect of gravity can also be cancelled by running experiments on the International Space Station in low Earth orbit. Changing orientation relative to gravity - or even just perceived orientation - affects your perception of how far away things are (they appear closer when supine or prone). Cancelling gravity altogether has a similar effect. Changing orientation also affects how much visual motion is needed to perceive a particular travel distance (you need less when supine or prone). Adapting to zero gravity has the opposite effect (you need more). These results will be discussed in terms of their practical consequences and the multisensory processes involved, in particular the response to visual-vestibular conflict.
Bootstrapping the auditory space map via an innate circuit
Bernstein Conference 2024
Dimensionality reduction beyond neural subspaces
Bernstein Conference 2024
Navigating through the Latent Spaces in Generative Models
Bernstein Conference 2024
Psychedelic space of neuronal population activity: emerging and disappearing contrastive dimensions
Bernstein Conference 2024
Revisiting efficient representations of space in multiscale place field populations
Bernstein Conference 2024
The space of high-dimensional Dalean, amplifying networks and the trade-off between stability and amplification
Bernstein Conference 2024
A control space for muscle state-dependent cortical influence during naturalistic motor behavior
COSYNE 2022
Exploiting color space geometry for visual stimulus design across animals
COSYNE 2022
Flexible inter-areal computations through low-rank communication subspaces
COSYNE 2022
Holographic activation of neural ensembles reveals both space and feature based cortical microcircuitry
COSYNE 2022
Inferring olfactory space from glomerular response data
COSYNE 2022
Invariant neural subspaces maintained by feedback modulation
COSYNE 2022
Long-term motor learning creates structure within neural space that shapes motor adaptation
COSYNE 2022
“This Is My Spot!”: Social Determinants Regulate Space Utilization in Macaques.
COSYNE 2022
Time-warped state space models for distinguishing movement type and vigor
COSYNE 2022
Turning spikes to space through plastic synaptic dynamics
COSYNE 2022
An attractor model explains space-specific distractor biases in visual working memory
COSYNE 2023
“Attentional fingerprints” in conceptual space: Reliable, individuating patterns of visual attention revealed using natural language modeling
COSYNE 2023
Coherence influences the dimensionality of communication subspaces
COSYNE 2023
On the Context-Dependent Efficient Coding of Olfactory Spaces
COSYNE 2023
Discovery of Linked Neural and Behavioral Subspaces with External Dynamic Components Analysis
COSYNE 2023
A mechanistic model for the formation of globally consistent maps of space in complex environments
COSYNE 2023
Metric space learning in the hippocampus
COSYNE 2023
Reliable neural manifold decoding using low-distortion alignment of tangent spaces
COSYNE 2023
Representational dissimilarity metric spaces for stochastic neural networks
COSYNE 2023
The space of finite ring attractors: from theoretical principles to the fly compass system
COSYNE 2023
State Space Models for Classifying Grid Cell Spatial Sequences
COSYNE 2023
Switching state-space models enable decoding of replay across multiple spatial environments
COSYNE 2023
A vast space of compact strategies for effective decisions
COSYNE 2023
An adaptive state-space control framework for driving decision variables
COSYNE 2025
Backpropagation through space, time and the brain
COSYNE 2025
Brain state and visual stimulation differentially modulate inter-layer communication subspace in V1
COSYNE 2025