Geometry
Sensory cognition
This webinar featured presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging stimuli to cognitive functions.
Why age-related macular degeneration is a mathematically tractable disease
Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
• Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves (see the sketch after this list).
• These steep gradients give rise to large, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
• Widely used clinical imaging technology provides cellular- and subcellular-level information.
• Data are now available at all timescales: clinical, lifespan, and evolutionary.
• Snapshots are available from tissues (histology, analytic chemistry, gene expression).
• A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
• The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there have been decades of public health success.
• Animal and cell model systems are emerging to test these ideas.
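As a toy illustration of the first bullet, the sketch below models a radially symmetric cell-density profile as a difference of Gaussians. This is a generic construction in plain Python/NumPy; all amplitudes and widths are hypothetical placeholders, not values fitted to retinal data.

```python
import numpy as np

def difference_of_gaussians(r, a_center=1.0, s_center=0.3,
                            a_surround=0.4, s_surround=1.2):
    """Radial density profile modeled as a difference of two Gaussians.

    r : radial eccentricity from the foveal center (arbitrary units).
    A narrow, tall center Gaussian minus a broad, shallow surround
    Gaussian; all parameters here are illustrative, not fitted.
    """
    center = a_center * np.exp(-(r / s_center) ** 2)
    surround = a_surround * np.exp(-(r / s_surround) ** 2)
    return center - surround

r = np.linspace(0.0, 3.0, 301)
profile = difference_of_gaussians(r)
print("peak of modeled density profile at r =", r[np.argmax(profile)])
```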
The Geometry of Decision-Making
Running, swimming, or flying through the world, animals are constantly making decisions while on the move—decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial to how space is represented by organisms during spatial decision-making. Employing a range of new technologies, including automated tracking, computational reconstruction of sensory information, and immersive ‘holographic’ virtual reality (VR) for animals, in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt (‘critical’) binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in ‘susceptibility’), even noisy brains are extremely sensitive to very small differences between remaining options (e.g., a very small difference in neuronal activity being in “favor” of one option) near these locations in space-time. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Geometry of concept learning
Understanding the human ability to learn novel concepts from just a few sensory experiences is a fundamental problem in cognitive neuroscience. I will describe recent work with Ben Sorscher and Surya Ganguli (PNAS, October 2022) in which we propose a simple, biologically plausible, and mathematically tractable neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. Discrimination between novel concepts is performed by downstream neurons implementing a ‘prototype’ decision rule, in which a test example is classified according to the nearest prototype constructed from the few training examples. We show that prototype few-shot learning achieves high accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations. We develop a mathematical theory that links few-shot learning to the geometric properties of the neural concept manifolds and demonstrate its agreement with our numerical simulations across different DNNs as well as different layers. Intriguingly, we observe striking mismatches between the geometry of manifolds in intermediate stages of the primate visual pathway and in trained DNNs. Finally, we show that linguistic descriptors of visual concepts can be used to discriminate images belonging to novel concepts, without any prior visual experience of those concepts (a task known as ‘zero-shot’ learning), indicating a remarkable alignment of manifold representations of concepts in the visual and language modalities. I will discuss ongoing efforts to extend this work to other high-level cognitive tasks.
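As a concrete illustration of the prototype decision rule described above, here is a minimal NumPy sketch (my own construction, not the authors' code); random Gaussian feature vectors stand in for IT or DNN representations:

```python
import numpy as np

def prototype_few_shot(train_a, train_b, test):
    """Classify test points by nearest class prototype.

    train_a, train_b : (k, n) arrays of k training examples in an
        n-dimensional feature space (one array per concept).
    test : (m, n) array of test examples.
    Returns 0 where a point is closer to concept A's prototype, else 1.
    """
    proto_a = train_a.mean(axis=0)   # prototype = mean of the few shots
    proto_b = train_b.mean(axis=0)
    d_a = np.linalg.norm(test - proto_a, axis=1)
    d_b = np.linalg.norm(test - proto_b, axis=1)
    return (d_b < d_a).astype(int)

# Toy usage: two Gaussian "concept manifolds" in a 50-d feature space.
rng = np.random.default_rng(0)
mu_a, mu_b = rng.normal(size=50), rng.normal(size=50)
shots_a = mu_a + 0.5 * rng.normal(size=(5, 50))   # 5-shot learning
shots_b = mu_b + 0.5 * rng.normal(size=(5, 50))
test = np.vstack([mu_a + 0.5 * rng.normal(size=(100, 50)),
                  mu_b + 0.5 * rng.normal(size=(100, 50))])
labels = np.repeat([0, 1], 100)
acc = (prototype_few_shot(shots_a, shots_b, test) == labels).mean()
print(f"5-shot prototype accuracy: {acc:.2f}")
```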
Convex neural codes in recurrent networks and sensory systems
Neural activity in many sensory systems is organized on low-dimensional manifolds by means of convex receptive fields. Neural codes in these areas are constrained by this organization, as not every neural code is compatible with convex receptive fields. The same codes are also constrained by the structure of the underlying neural network. In my talk I will attempt to answer the following natural questions: (i) How do recurrent circuits generate codes that are compatible with the convexity of receptive fields? (ii) How can we utilize the constraints imposed by convex receptive fields to understand the underlying stimulus space? To answer question (i), we describe the combinatorics of the steady states and fixed points of recurrent networks that satisfy Dale’s law. It turns out that the combinatorics of the fixed points is completely determined by two distinct ingredients: (a) the connectivity graph of the network and (b) a spectral condition on the synaptic matrix. We give a characterization of exactly which features of connectivity determine the combinatorics of the fixed points. We also find that a generic recurrent network that satisfies Dale's law outputs convex combinatorial codes. To address question (ii), I will describe methods based on ideas from topology and geometry that take advantage of convex receptive field properties to infer the dimension of (non-linear) neural representations. I will illustrate the first method by inferring basic features of the neural representations in the mouse olfactory bulb.
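To make "convex combinatorial code" concrete, here is a small illustrative sketch (my own example, not the speaker's): neurons with interval (convex) receptive fields on a 1-D stimulus space, and the set of binary codewords they generate.

```python
import numpy as np

# Each neuron's receptive field is an interval (a convex set) on [0, 1].
receptive_fields = [(0.0, 0.4), (0.2, 0.7), (0.5, 1.0)]

def codeword(stimulus):
    """Binary vector: which convex receptive fields contain the stimulus."""
    return tuple(int(lo <= stimulus <= hi) for lo, hi in receptive_fields)

# Sweep the stimulus space and collect the combinatorial code.
code = {codeword(s) for s in np.linspace(0.0, 1.0, 1001)}
print(sorted(code))
# Yields (1,0,0), (1,1,0), (0,1,0), (0,1,1), (0,0,1): a convex code.
# A codeword such as (1,0,1) never appears, since intervals 1 and 3 are
# disjoint; convexity constrains which codewords can co-occur.
```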
Universal function approximation in balanced spiking networks through convex-concave boundary composition
The spike-threshold nonlinearity is a fundamental, yet enigmatic, component of biological computation — despite its role in many theories, it has evaded definitive characterisation. Indeed, much classic work has sidestepped spiking by smoothing over the spike threshold or by approximating spiking dynamics with firing-rate dynamics. Here, we take a novel perspective that captures the full potential of spike-based computation. Based on previous studies of the geometry of efficient spike-coding networks, we consider a population of neurons with low-rank connectivity, allowing us to cast each neuron’s threshold as a boundary in a space of population modes, or latent variables. Each neuron divides this latent space into subthreshold and suprathreshold areas. We then demonstrate how a network of inhibitory (I) neurons forms a convex, attracting boundary in the latent coding space, and a network of excitatory (E) neurons forms a concave, repellent boundary. Finally, we show how the combination of the two yields stable dynamics at the crossing of the E and I boundaries, and can be mapped onto a constrained optimization problem. The resultant EI networks are balanced, inhibition-stabilized, and exhibit asynchronous irregular activity, thereby closely resembling cortical networks of the brain. Moreover, we demonstrate how such networks can be tuned to either suppress or amplify noise, and how the composition of inhibitory convex and excitatory concave boundaries can result in universal function approximation. Our work puts forth a new theory of biologically-plausible computation in balanced spiking networks, and could serve as a novel framework for scalable and interpretable computation with spikes.
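For intuition about thresholds as boundaries in a latent space, here is a minimal simulation in the spirit of efficient spike-coding networks (a generic textbook-style construction under simplifying assumptions, not the authors' model): an inhibitory population whose spikes keep a 2-D latent readout inside a convex boundary while tracking a target trajectory.

```python
import numpy as np

N, dim = 40, 2
dt, tau = 1e-3, 0.02
# Decoding weights: each neuron's spike moves the latent readout a small step.
angles = 2 * np.pi * np.arange(N) / N
D = 0.1 * np.stack([np.cos(angles), np.sin(angles)])   # (dim, N)
thresholds = 0.5 * np.sum(D**2, axis=0)  # each threshold defines a boundary

T = 5000
x_hat = np.zeros(dim)
n_spikes, errors = 0, []
for t in range(T):
    x = np.array([np.sin(0.002 * t), np.cos(0.002 * t)])  # target trajectory
    V = D.T @ (x - x_hat)           # voltages = projections of coding error
    i = int(np.argmax(V - thresholds))
    if V[i] > thresholds[i]:        # latent state crossed neuron i's boundary
        x_hat += D[:, i]            # its spike jumps the readout back inside
        n_spikes += 1               # (at most one spike per step, for simplicity)
    x_hat *= (1 - dt / tau)         # leak: readout decays between spikes
    errors.append(np.linalg.norm(x - x_hat))
print(f"mean tracking error {np.mean(errors):.3f}, total spikes {n_spikes}")
```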
Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
Understanding the nature of neural representation is a central challenge of neuroscience. One common approach to this challenge is to compute receptive fields by correlating neural activity with external variables drawn from sensory signals. But these receptive fields are only meaningful to the experimenter, not the organism, because only the experimenter has access to both the neural activity and knowledge of the external variables. To understand neural representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory-driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli, as in spatial navigation. In this talk, I will discuss recent work from my lab examining the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems capture invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, we find that topological relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity across the neural population expresses an intrinsic representational geometry that mirrors the natural, extrinsic stimulus space. Combinatorial patterns across this intrinsic space directly represent complex vocal communication signals, do not require computation of receptive fields, and are in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations, rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, by measuring geometric properties in neural population data, and estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across different brain areas and task modalities, as demonstrated in our work and that of others, ranging from the visual cortex to parietal cortex to hippocampus, and from calcium imaging to electrophysiology to fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically-constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
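As a toy version of the capacity analysis mentioned above (a generic Cover-style simulation, not the speaker's manifold-capacity code), the sketch below estimates the fraction of random dichotomies of P random points in N dimensions that a perceptron can realize; the classical capacity shows up as the drop near P = 2N:

```python
import numpy as np

rng = np.random.default_rng(0)

def separable(X, y, epochs=500):
    """Crude linear-separability check via the perceptron learning rule."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errs = 0
        for x, t in zip(X, y):
            if t * (x @ w) <= 0:   # misclassified: update the weights
                w += t * x
                errs += 1
        if errs == 0:
            return True            # converged: the dichotomy is realizable
    return False

N = 20
for P in [20, 30, 40, 50, 60]:
    X = rng.normal(size=(P, N))
    frac = np.mean([separable(X, rng.choice([-1, 1], size=P))
                    for _ in range(20)])
    print(f"P={P}, P/N={P/N:.1f}: fraction separable ~ {frac:.2f}")
```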
Membrane mechanics meet minimal manifolds
Changes in the geometry and topology of self-assembled membranes underlie diverse processes across cellular biology and engineering. Similar to lipid bilayers, monolayer colloidal membranes studied by the Sharma (IISc Bangalore) and Dogic (UCSB) Labs have in-plane fluid-like dynamics and out-of-plane bending elasticity, but their open edges and micron length scale provide a tractable system to study the equilibrium energetics and dynamic pathways of membrane assembly and reconfiguration. First, we discuss how doping colloidal membranes with short miscible rods transforms disk-shaped membranes into saddle-shaped minimal surfaces with complex edge structures. Theoretical modeling demonstrates that their formation is driven by increasing positive Gaussian modulus, which in turn is controlled by the fraction of short rods. Further coalescence of saddle-shaped surfaces leads to exotic topologically distinct structures, including shapes similar to catenoids, tri-noids, four-noids, and higher order structures. We then mathematically explore the mechanics of these catenoid-like structures subject to an external axial force and elucidate their intimate connection to two problems whose solutions date back to Euler: the shape of an area-minimizing soap film and the buckling of a slender rod under compression. A perturbation theory argument directly relates the tensions of membranes to the stability properties of minimal surfaces. We also investigate the effects of including a Gaussian curvature modulus, which, for small enough membranes, causes the axial force to diverge as the ring separation approaches its maximal value.
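The connection to Euler's soap-film problem can be made quantitative in a few lines of SciPy (an independent numerical check of classical theory, not the speakers' analysis): an area-minimizing film spanning two coaxial rings of radius R at separation 2h exists only while min over c of c·cosh(h/c) is at most R, which fails beyond h/R of roughly 0.663.

```python
import numpy as np
from scipy.optimize import minimize_scalar, brentq

def min_neck(h):
    """Smallest ring radius admitting a catenoid r(z) = c*cosh(z/c) with
    r(+/-h) = R; i.e., the minimum over the neck parameter c of c*cosh(h/c)."""
    res = minimize_scalar(lambda c: c * np.cosh(h / c),
                          bounds=(1e-6, 10.0), method="bounded")
    return res.fun

# Critical separation for rings of radius R = 1: solve min_neck(h) = 1.
h_crit = brentq(lambda h: min_neck(h) - 1.0, 0.1, 1.0)
print(f"catenoid solution exists up to h/R ~ {h_crit:.4f}")   # ~ 0.6627
```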
Where do problem spaces come from? On metaphors and representational change
The challenges of problem solving do not lie exclusively in how to perform heuristic search; they begin with how we understand a given task: how we cognitively represent the task domain and its components can determine how quickly someone is able to progress towards a solution, whether advanced strategies can be discovered, or even whether a solution is found at all. While this challenge of constructing and changing representations was acknowledged early on in problem solving research, it has for the most part been sidestepped by focussing on simple, well-defined problems whose representation is almost fully determined by the task instructions. Thus, the established theory of problem solving as heuristic search in problem spaces has little to say on this. In this talk, I will present a study designed to explore this issue, whose main challenge is finding and refining an adequate problem representation. This exploratory case study investigated how pairs of participants acquaint themselves with a complex spatial transformation task in the domain of iterated mental paper folding over the course of several days. Participants have to understand the geometry of edges that arises when a sheet of paper is repeatedly folded mentally, in alternating directions, without the use of external aids. Faced with the difficulty of handling increasingly complex folds in light of limited cognitive capacity, participants are forced to look for ways to represent folds more efficiently. In a qualitative analysis of video recordings of the participants' behaviour, the development of their conceptualisation of the task domain was traced over the course of the study, focussing especially on their use of gesture and the spontaneous occurrence and use of metaphors in the construction of new representations. Based on these observations, I will conclude the talk with several theoretical speculations regarding the roles of metaphor and cognitive capacity in representational change.
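For readers unfamiliar with the task domain, the crease pattern of iterated folding is easy to generate. The snippet below is background only, not part of the study, and shows the standard regular paperfolding sequence for same-direction folds; the alternating-direction folds used in the study give a closely related variant.

```python
def crease_sequence(n_folds):
    """Mountain/valley creases after folding a strip in half n times
    (same direction each time), then unfolding. 1 = valley, 0 = mountain.
    Each fold maps the sequence s to s + [1] + reverse(flip(s))."""
    s = []
    for _ in range(n_folds):
        s = s + [1] + [1 - c for c in reversed(s)]
    return s

for n in range(1, 5):
    print(n, crease_sequence(n))
# 1 [1]
# 2 [1, 1, 0]
# 3 [1, 1, 0, 1, 1, 0, 0]
# 4 [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
```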
Efficient Random Codes in a Shallow Neural Network
Efficient coding has served as a guiding principle in understanding the neural code. To date, however, it has been explored mainly in the context of peripheral sensory cells with simple tuning curves. By contrast, ‘deeper’ neurons such as grid cells come with more complex tuning properties which imply a different, yet highly efficient, strategy for representing information. I will show that a highly efficient code is not specific to a population of neurons with finely tuned response properties: it emerges robustly in a shallow network with random synapses. Here, the geometry of population responses implies that optimality emerges from a tradeoff between two qualitatively different types of error: ‘local’ errors (common to classical neural population codes) and ‘global’ (or ‘catastrophic’) errors. This tradeoff leads to efficient compression of information from a high-dimensional representation to a low-dimensional one. After describing the theoretical framework, I will use it to re-interpret recordings of motor cortex in behaving monkeys. Our framework addresses the encoding of (sensory) information; if time allows, I will comment on ongoing work that focuses on decoding from the perspective of efficient coding.
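The distinction between 'local' and 'catastrophic' errors can be illustrated with a toy random code (an illustrative construction under assumptions of my own, not the speaker's model): a scalar stimulus encoded by a shallow layer of random smooth tuning curves and decoded by template matching.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_grid = 10, 1000
theta_grid = np.linspace(0, 1, n_grid)

# Random smooth tuning curves: random mixtures of a few Fourier modes.
freqs = np.arange(1, 4)
A = rng.normal(size=(N, len(freqs)))
B = rng.normal(size=(N, len(freqs)))
templates = (A @ np.cos(2*np.pi*np.outer(freqs, theta_grid))
             + B @ np.sin(2*np.pi*np.outer(freqs, theta_grid)))  # (N, n_grid)

def decode(response):
    """Template matching: nearest noiseless population response."""
    return theta_grid[np.argmin(((templates.T - response)**2).sum(axis=1))]

sigma, errors = 1.0, []
for _ in range(2000):
    theta = rng.uniform()
    idx = np.argmin(np.abs(theta_grid - theta))
    r = templates[:, idx] + sigma * rng.normal(size=N)
    errors.append(abs(decode(r) - theta))
errors = np.array(errors)
print("median |error|:", np.median(errors))              # small, 'local' errors
print("fraction |error| > 0.25:", (errors > 0.25).mean())  # rare 'global' ones
```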
New prospects in shape morphing sheets: unexplored pathways, 4D printing, and autonomous actuation
Living organisms have mastered the dynamic control of stresses within sheets to induce shape transformation and locomotion. For instance, the spatiotemporal pattern of action potential in a heart yields a dynamical stress field leading to shape changes and biological function. Such structures inspired the development of theoretical tools and responsive materials alike. Yet, present attempts to mimic their rich dynamics and phenomenology in autonomous synthetic matter are still very limited. In this talk, I will present several complementing innovations toward this goal: novel shaping mechanisms that were overlooked by previous research, new fabrication techniques for programmable matter via 4D printing of gel structures, and most prominently, the first autonomous shape morphing membranes. The dynamical control over the geometry of the material is a prevalent theme in all of these achievements. In particular, the latter system demonstrates localized deformations, induced by a pattern-forming chemical reaction, that prescribe the patterns of curvature, leading to global shape evolution. Together, these developments present a route for modeling and producing fully autonomous soft membranes mimicking some of the locomotive capabilities of living organisms.
Geometry of sequence working memory in macaque prefrontal cortex
How the brain stores a sequence in memory remains largely unknown. We investigated the neural code underlying sequence working memory using two-photon calcium imaging to record thousands of neurons in the prefrontal cortex of macaque monkeys memorizing and then reproducing a sequence of locations after a delay. We discovered a regular geometrical organization: The high-dimensional neural state space during the delay could be decomposed into a sum of low-dimensional subspaces, each storing the spatial location at a given ordinal rank, which could be generalized to novel sequences and explain monkey behavior. The rank subspaces were distributed across large overlapping neural groups, and the integration of ordinal and spatial information occurred at the collective level rather than within single neurons. Thus, a simple representational geometry underlies sequence working memory.
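The reported decomposition is easy to mimic in a toy model (a synthetic illustration of the claimed geometry, with my own arbitrary dimensions, not the study's analysis): population states built as a sum of rank-specific subspaces, so that a decoder for the item at one rank generalizes to unseen sequences.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_loc, n_rank = 200, 6, 3
loc_vectors = rng.normal(size=(n_loc, 8))          # 8-d spatial code
P = rng.normal(size=(n_rank, n_neurons, 8)) / 8    # one subspace per rank

def population_state(seq):
    """Delay activity = sum over ranks of (rank subspace @ location code)."""
    return sum(P[r] @ loc_vectors[loc] for r, loc in enumerate(seq))

seqs = [tuple(rng.choice(n_loc, size=n_rank, replace=False))
        for _ in range(300)]
X = np.array([population_state(s) for s in seqs])
X += 0.1 * rng.normal(size=X.shape)
y = np.array([s[0] for s in seqs])                 # decode rank-1 location

# Train a least-squares linear decoder on half the sequences, test on the rest.
onehot = np.eye(n_loc)[y[:150]]
W, *_ = np.linalg.lstsq(X[:150], onehot, rcond=None)
acc = (np.argmax(X[150:] @ W, axis=1) == y[150:]).mean()
print(f"rank-1 location decoding on novel sequences: {acc:.2f}")
```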
Spatial uncertainty provides a unifying account of navigation behavior and grid field deformations
To localize ourselves in an environment for spatial navigation, we rely on vision and self-motion inputs, which only provide noisy and partial information. It is unknown how the resulting uncertainty affects navigation behavior and neural representations. Here we show that spatial uncertainty underlies key effects of environmental geometry on navigation behavior and grid field deformations. We develop an ideal observer model, which continually updates probabilistic beliefs about its allocentric location by optimally combining noisy egocentric visual and self-motion inputs via Bayesian filtering. This model directly yields predictions for navigation behavior and also predicts neural responses under population coding of location uncertainty. We simulate this model numerically under manipulations of a major source of uncertainty, environmental geometry, and support our simulations by analytic derivations for its most salient qualitative features. We show that our model correctly predicts a wide range of experimentally observed effects of the environmental geometry and its change on homing response distribution and grid field deformation. Thus, our model provides a unifying, normative account for the dependence of homing behavior and grid fields on environmental geometry, and identifies the unavoidable uncertainty in navigation as a key factor underlying these diverse phenomena.
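The core of the ideal-observer model, Bayesian filtering of noisy self-motion and visual cues, reduces in a linear-Gaussian 1-D setting to a Kalman filter. The sketch below is a minimal stand-in under assumptions of my own, far simpler than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
T, q, r = 200, 0.1, 0.5      # steps, self-motion noise var, visual noise var

x = 0.0                      # true allocentric position (1-D)
mu, var = 0.0, 1.0           # Gaussian belief about position
for t in range(T):
    v = rng.normal(0, 0.3)                     # intended self-motion
    x += v + rng.normal(0, np.sqrt(q))         # true motion is noisy
    mu, var = mu + v, var + q                  # predict: belief drifts, widens
    z = x + rng.normal(0, np.sqrt(r))          # noisy visual landmark reading
    k = var / (var + r)                        # Kalman gain
    mu, var = mu + k * (z - mu), (1 - k) * var # update: combine cues optimally
print(f"final error {abs(x - mu):.3f}, posterior sd {np.sqrt(var):.3f}")
```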
Parametric control of flexible timing through low-dimensional neural manifolds
Biological brains possess an exceptional ability to infer relevant behavioral responses to a wide range of stimuli from only a few examples. This capacity to generalize beyond the training set has proven particularly challenging to realize in artificial systems. How neural processes enable this capacity to extrapolate to novel stimuli is a fundamental open question. A prominent but underexplored hypothesis suggests that generalization is facilitated by a low-dimensional organization of collective neural activity, yet evidence for the underlying neural mechanisms remains wanting. Combining network modeling, theory and neural data analysis, we tested this hypothesis in the framework of flexible timing tasks, which rely on the interplay between inputs and recurrent dynamics. We first trained recurrent neural networks on a set of timing tasks while minimizing the dimensionality of neural activity by imposing low-rank constraints on the connectivity, and compared the performance and generalization capabilities with networks trained without any constraint. We then examined the trained networks, characterized the dynamical mechanisms underlying the computations, and verified their predictions in neural recordings. Our key finding is that low-dimensional dynamics strongly increases the ability to extrapolate to inputs outside of the range used in training. Critically, this capacity to generalize relies on controlling the low-dimensional dynamics by a parametric contextual input. We found that this parametric control of extrapolation was based on a mechanism where tonic inputs modulate the dynamics along non-linear manifolds in activity space while preserving their geometry. Comparisons with neural recordings in the dorsomedial frontal cortex of macaque monkeys performing flexible timing tasks confirmed the geometric and dynamical signatures of this mechanism. Altogether, our results tie together a number of previous experimental findings and suggest that the low-dimensional organization of neural dynamics plays a central role in generalizable behaviors.
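A minimal example of the low-rank constraint used in such models (a generic rank-one rate network of my own, not the trained networks from the talk): connectivity J = m nᵀ / N confines the dynamics to a low-dimensional manifold, and a tonic contextual input shifts the latent state parametrically.

```python
import numpy as np

rng = np.random.default_rng(5)
N, dt, T = 500, 0.1, 300
m, n = rng.normal(size=N), rng.normal(size=N)
J = np.outer(m, n) / N                      # rank-one connectivity
I_ctx = rng.normal(size=N)                  # direction of tonic contextual input

def simulate(c):
    """Rate dynamics dx/dt = -x + J*tanh(x) + c*I_ctx (Euler integration)."""
    x = np.zeros(N)
    kappa = []                              # latent variable: overlap with m
    for _ in range(T):
        x += dt * (-x + J @ np.tanh(x) + c * I_ctx)
        kappa.append(m @ x / N)
    return np.array(kappa)

for c in [0.0, 0.5, 1.0, 2.0]:              # sweep the contextual input
    print(f"c={c}: final latent state kappa = {simulate(c)[-1]:+.3f}")
```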
Deforming the metric of cognitive maps distorts memory
Environmental boundaries anchor cognitive maps that support memory. However, trapezoidal boundary geometry distorts the regular firing patterns of entorhinal grid cells, which have been proposed to provide a metric for cognitive maps. Here, we test the impact of trapezoidal boundary geometry on human spatial memory using immersive virtual reality. Consistent with the reduced regularity of grid patterns in rodents and with a grid-cell model based on the eigenvectors of the successor representation, human positional memory was degraded in a trapezoidal compared to a square environment, an effect particularly pronounced in the trapezoid’s narrow part. Congruent with spatial-frequency changes of eigenvector grid patterns, distance estimates between remembered positions were persistently biased, revealing distorted memory maps that explained behavior better than the objective maps. Our findings demonstrate that environmental geometry affects human spatial memory similarly to rodent grid cell activity, strengthening the putative link between grid cells and behavior along with their cognitive functions beyond navigation.
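The successor-representation grid model referenced above is compact enough to sketch (a standard construction in the spirit of successor-representation models, with parameters of my own choosing): eigenvectors of the successor representation of a random walk in a bounded arena form periodic, grid-like maps, and deforming the boundary deforms them accordingly.

```python
import numpy as np

n = 20                                   # n x n square arena
states = [(i, j) for i in range(n) for j in range(n)]
index = {s: k for k, s in enumerate(states)}

# Transition matrix of an unbiased random walk that respects the walls.
T = np.zeros((n*n, n*n))
for (i, j), k in index.items():
    nbrs = [(i+di, j+dj) for di, dj in [(1,0), (-1,0), (0,1), (0,-1)]
            if (i+di, j+dj) in index]
    for s in nbrs:
        T[k, index[s]] = 1 / len(nbrs)

gamma = 0.995
M = np.linalg.inv(np.eye(n*n) - gamma * T)    # successor representation
# Top eigenvectors of (symmetrized) M form low-frequency periodic maps;
# masking the arena to a trapezoid would reshape these patterns.
vals, vecs = np.linalg.eigh((M + M.T) / 2)
grid_map = vecs[:, -5].reshape(n, n)          # one periodic eigenvector map
print("eigenvector map shape:", grid_map.shape)  # plot with imshow to inspect
```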
The Geometry of Decision-Making
Choosing among spatially distributed options is a central challenge for animals, from deciding among alternative potential food sources or refuges, to choosing with whom to associate. Here, using an integrated theoretical and experimental approach (employing immersive Virtual Reality), with both invertebrate and vertebrate models—the fruit fly, desert locust and zebrafish—we consider the recursive interplay between movement and collective vectorial integration in the brain during decision-making regarding options (potential ‘targets’) in space. We reveal that the brain repeatedly breaks multi-choice decisions into a series of abrupt (critical) binary decisions in space-time where organisms switch, spontaneously, from averaging vectorial information among, to suddenly excluding one of, the remaining options. This bifurcation process repeats until only one option—the one ultimately selected—remains. Close to each bifurcation the ‘susceptibility’ of the system exhibits a sharp increase, inevitably causing small differences among the remaining options to become amplified; a property that both comes ‘for free’ and is highly desirable for decision-making. This mechanism facilitates highly effective decision-making, and is shown to be robust both to the number of options available, and to context, such as whether options are static (e.g. refuges) or mobile (e.g. other animals). In addition, we find evidence that the same geometric principles of decision-making occur across scales of biological organisation, from neural dynamics to animal collectives, suggesting they are fundamental features of spatiotemporal computation.
Learning the structure and investigating the geometry of complex networks
Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality. Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousands of graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images. Graph dimensionality. Dimension is a fundamental property of objects and the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, here we present a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
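In the spirit of the hcga approach, each graph can be summarized by a vector of interpretable features and fed to any standard classifier. The sketch below uses plain networkx and a tiny feature set of my own; it is not the hcga API.

```python
import networkx as nx
import numpy as np

def graph_features(G):
    """A small, interpretable feature vector for one graph."""
    degrees = [d for _, d in G.degree()]
    return np.array([
        G.number_of_nodes(),
        G.number_of_edges(),
        nx.density(G),
        nx.average_clustering(G),
        nx.transitivity(G),
        np.mean(degrees),
        np.max(degrees),
        nx.number_connected_components(G),
    ])

# Toy dataset: random graphs from two different generative families.
graphs = ([nx.erdos_renyi_graph(50, 0.1, seed=i) for i in range(20)]
          + [nx.barabasi_albert_graph(50, 3, seed=i) for i in range(20)])
X = np.array([graph_features(G) for G in graphs])
y = np.array([0] * 20 + [1] * 20)
print(X.shape, "- feature matrix ready for any supervised classifier")
```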
Locomotion of Helicobacter pylori: Cell geometry and active confinement
Sparse expansion in cerebellum favours learning speed and performance in the context of motor control
The cerebellum contains more than half of the brain’s neurons and is essential for motor control. Its neural circuits have a distinctive architecture, comprising a large, sparse expansion from the input mossy fibres to the granule cell layer. Theories of how cerebellar architectural features relate to cerebellar function have long been formulated, and it has been shown that some of these features can facilitate pattern separation. However, these theories do not consider the need for the cerebellum to learn quickly in order to control smooth and accurate movements. Here, we confront this gap. This talk will show that the expansion to the granule cell layer in the cerebellar cortex improves learning speed and performance in the context of motor control, by considering a cerebellar-like network learning an internal model of a motor apparatus online. By expressing the general form of the learning rate for such a system, the talk will show how increasing the number of granule cells diminishes the effect of noise and increases learning speed. We propose that the particular architecture of cerebellar circuits modifies the geometry of the error function in a way that favours faster learning. These results illuminate a new link between cerebellar structure and function.
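A toy version of the argument (my own illustrative simulation with arbitrary parameters, not the study's model): online delta-rule learning of a target function from a random granule-like expansion, where larger expansions tend to average out noise and reduce late-training error.

```python
import numpy as np

rng = np.random.default_rng(7)
M = 10                                     # mossy-fibre input dimension
w_true = rng.normal(size=M)                # "motor apparatus" to be learned

def online_error(N, steps=2000, lr=0.05, noise=0.5):
    """Delta rule on a readout from N granule-like units h = relu(Jx)."""
    J = rng.normal(size=(N, M)) / np.sqrt(M)   # random expansion weights
    w = np.zeros(N)
    errs = []
    for _ in range(steps):
        x = rng.normal(size=M)
        h = np.maximum(J @ x, 0) / np.sqrt(N)  # normalized expansion layer
        target = w_true @ x + noise * rng.normal()
        err = target - w @ h
        w += lr * err * h                      # online delta rule
        errs.append(err**2)
    return np.mean(errs[-200:])                # late-training error

for N in [20, 100, 500, 2000]:
    print(f"N={N:5d} granule-like units: late MSE ~ {online_error(N):.3f}")
```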
A geometric framework to predict structure from function in neural networks
The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
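To illustrate the basic computation, here is a minimal sketch under the simplifying assumption that all specified responses are positive, so the rectification is inactive and the steady-state condition is linear. It is not the paper's full analysis, and stability of the fixed points is not checked; each neuron's incoming weights simply solve a small linear system, with lstsq returning one point in the affine solution space.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, K = 5, 8, 6          # neurons, inputs, specified steady-state conditions
X = rng.uniform(0.5, 1.5, size=(n, K))    # desired responses (all positive)
U = rng.uniform(0.5, 1.5, size=(m, K))    # the inputs that should evoke them

# With all x > 0, steady states of x = relu(Wx + Bu) satisfy x = Wx + Bu.
# Row i of [W | B] solves a K-equation linear system; lstsq returns the
# minimum-norm member of the affine space of exact solutions (K <= n + m).
A = np.vstack([X, U]).T                   # (K, n+m) design matrix
WB = np.vstack([np.linalg.lstsq(A, X[i], rcond=None)[0] for i in range(n)])
W, B = WB[:, :n], WB[:, n:]

# Verify the specified responses are indeed fixed points of the network.
print(np.allclose(np.maximum(W @ X + B @ U, 0), X))   # True
```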
Tutorial talk: Lipid bilayers and the geometry of surfaces
Linking dimensionality to computation in neural networks
The link between behavior, learning and the underlying connectome is a fundamental open problem in neuroscience. In my talk I will show how a theory that bridges these three levels (animal behavior, learning and network connectivity) can be developed based on the geometrical properties of neural activity. The central tool in my approach is the dimensionality of neural activity. I will link complex animal behavior to the geometry of neural representations, specifically their dimensionality; I will then show how learning shapes changes in these geometrical properties and how local connectivity properties can further regulate them. As a result, I will explain how the complexity of neural representations emerges from both behavioral demands (top-down approach) and learning or connectivity features (bottom-up approach). These results on neural dynamics and representations are built from analyses of neural recordings, by means of theoretical and computational tools that blend dynamical systems, artificial intelligence and statistical physics approaches.
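The dimensionality measure most commonly used in this line of work is the participation ratio of the activity covariance. This snippet shows the standard definition; the implementation is mine, not the speaker's.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of activity X (samples x neurons):
    PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over covariance eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum()**2 / (lam**2).sum()

rng = np.random.default_rng(9)
# Activity confined to a 5-d subspace of 100 neurons, plus faint noise.
latent = rng.normal(size=(2000, 5))
X = latent @ rng.normal(size=(5, 100)) + 0.1 * rng.normal(size=(2000, 100))
print(f"participation ratio ~ {participation_ratio(X):.1f}")
# near the latent dimensionality of 5
```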
Motor Cortex in Theory and Practice
A central question in motor physiology has been whether motor cortex activity resembles muscle activity, and if not, why not? Over fifty years, extensive observations have failed to provide a concise answer, and the topic remains much debated. To provide a different perspective, we employed a novel behavioral paradigm that affords extensive comparison between time-evolving neural and muscle activity. Single motor-cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘trajectory tangling’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of trajectory tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Remarkably, we were able to predict motor cortex activity from muscle activity alone, by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling. Our results argue that motor cortex embeds descending commands in additional structure that ensures low tangling, and thus noise-robustness. The dominant structure in motor cortex may thus serve not a representational function (encoding specific variables) but a computational function: ensuring that outgoing commands can be generated reliably. Our results establish the utility of an emerging approach: understanding the structure of neural activity based on properties of population geometry that flow from normative principles such as noise robustness.
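The tangling metric itself is simple to compute, following the published definition from this line of work (implementation details and test curves here are my own): Q(t) is the largest ratio of derivative difference to state difference across all pairs of times.

```python
import numpy as np

def tangling(X, dt=1.0, alpha=None):
    """Trajectory tangling Q(t) = max_t' |dx(t)-dx(t')|^2 /
    (|x(t)-x(t')|^2 + alpha), for states X with shape (time, dims)."""
    dX = np.gradient(X, dt, axis=0)                  # state derivatives
    if alpha is None:
        alpha = 0.1 * np.mean(np.sum(X**2, axis=1))  # small regularizer
    diff_x = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    diff_dx = ((dX[:, None, :] - dX[None, :, :])**2).sum(-1)
    return (diff_dx / (diff_x + alpha)).max(axis=1)

t = np.linspace(0, 2*np.pi, 200)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)     # smooth rotation: low Q
figure8 = np.stack([np.sin(t), np.sin(2*t)], axis=1)  # self-crossing: high Q
print("max Q, circle  :", tangling(circle, dt=t[1]-t[0]).max().round(2))
print("max Q, figure-8:", tangling(figure8, dt=t[1]-t[0]).max().round(2))
```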
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed against robustness, and which depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
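The lazy/rich distinction can be seen in a minimal experiment (a toy demonstration of this known phenomenon, with my own arbitrary settings, not the talk's simulations): the same two-layer network trained from large versus small initial weights changes its hidden representation by very different relative amounts.

```python
import numpy as np

rng = np.random.default_rng(10)
d, h, n, lr, steps = 10, 100, 200, 0.01, 2000
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d)                      # a simple regression task

def train(scale):
    """Plain gradient descent on a two-layer tanh network."""
    W1 = scale * rng.normal(size=(h, d)) / np.sqrt(d)
    w2 = scale * rng.normal(size=h) / np.sqrt(h)
    W1_init = W1.copy()
    for _ in range(steps):
        H = np.tanh(X @ W1.T)                   # (n, h) hidden activity
        err = H @ w2 - y
        grad_w2 = H.T @ err / n
        grad_W1 = ((err[:, None] * w2) * (1 - H**2)).T @ X / n
        w2 -= lr * grad_w2
        W1 -= lr * grad_W1
    return np.linalg.norm(W1 - W1_init) / np.linalg.norm(W1_init)

print(f"large init (lazy): relative change in W1 = {train(3.0):.4f}")
print(f"small init (rich): relative change in W1 = {train(0.1):.4f}")
```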
Endless forms most beautiful: how to program materials using geometry, topology and singularities
The dream of programmable matter is to create materials whose physical properties (shape, moduli, response to perturbations, etc.) can be changed on the fly. For many years, my group has been thinking about how to program flat sheets that fold up into three dimensional shapes, most recently by exploiting the principles of origami design. Unfortunately, a combinatorial explosion of folding pathways makes robust folding particularly challenging. In this talk, I will discuss how this pluripotency arises from the topology of the configuration space. This suggests a broader understanding of a larger class of materials spanning from folding forms to spring networks to mechanical structures that perform computational logic.
Transport and dispersion of active particles in complex porous media
Understanding the transport of microorganisms and self-propelled particles in porous media has important consequences in human health as well as for microbial ecology. In this work, we explore models for the dispersion of active particles in both periodic and random porous media. In a first problem, we analyze the long-time transport properties in a dilute system of active Brownian particles swimming in a periodic lattice in the presence of an external flow. Using generalized Taylor dispersion theory, we calculate the mean transport velocity and dispersion dyadic and explain their dependence on flow strength, swimming activity and geometry. In a second approach, we address the case of run-and-tumble particles swimming through unstructured porous media composed of randomly distributed circular pillars. There, we show that the long-time dispersion is described by a universal hindrance function that depends on the medium porosity and ratio of the swimmer run length to the pillar size. An asymptotic expression for the hindrance function is derived in dilute media, and its extension to semi-dilute and dense media is obtained using stochastic simulations. We conclude by discussing the role of hydrodynamic interactions and swimmer concentration effects.
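A stripped-down version of the second model is easy to simulate (a toy run-and-tumble walker among random pillars, with my own parameters and collision rule; the paper's analysis is far more complete): hindrance appears as a reduced long-time diffusivity relative to free space.

```python
import numpy as np

rng = np.random.default_rng(11)
L, n_pillars, R = 50.0, 120, 1.0         # periodic box, pillar count, radius
pillars = rng.uniform(0, L, size=(n_pillars, 2))
v0, tumble_rate, dt, T = 1.0, 0.5, 0.05, 20000

def blocked(p):
    """True if point p lies inside any pillar (minimum-image convention)."""
    d = (pillars - (p % L) + L/2) % L - L/2
    return (np.sum(d**2, axis=1) < R**2).any()

def diffusivity(use_pillars, n_walkers=10):
    """Crude MSD-based estimate of the long-time diffusivity."""
    D_est = []
    for _ in range(n_walkers):
        pos = rng.uniform(0, L, size=2)
        while use_pillars and blocked(pos):
            pos = rng.uniform(0, L, size=2)
        start, theta = pos.copy(), rng.uniform(0, 2*np.pi)
        for _ in range(T):
            if rng.random() < tumble_rate * dt:
                theta = rng.uniform(0, 2*np.pi)          # tumble
            step = v0 * dt * np.array([np.cos(theta), np.sin(theta)])
            if not (use_pillars and blocked(pos + step)):
                pos = pos + step                  # move unless a pillar blocks
        D_est.append(np.sum((pos - start)**2) / (4 * T * dt))
    return np.mean(D_est)

print("free-space D ~", round(diffusivity(False), 3))
print("in pillars D ~", round(diffusivity(True), 3))   # hindered
```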
Biophysics of Structural Plasticity in Postsynaptic Spines
The ability of the brain to encode and store information depends on the plastic nature of the individual synapses. The increase and decrease in synaptic strength, mediated through the structural plasticity of the spine, are important for learning, memory, and cognitive function. Dendritic spines are small structures protruding from the dendrite that contain the synapse. They come in a variety of shapes (stubby, thin, or mushroom-shaped) and a wide range of sizes. These spines are the regions where the postsynaptic biochemical machinery responds to the neurotransmitters. Spines are dynamic structures, changing in size, shape, and number during development and aging. While spines and synapses have inspired neuromorphic engineering, the biophysical events underlying synaptic and structural plasticity of single spines remain poorly understood. Our current focus is on understanding the biophysical events underlying structural plasticity. I will discuss recent efforts from my group: first, a systems biology approach to construct a mathematical model of biochemical signaling and actin-mediated transient spine expansion in response to calcium influx caused by NMDA receptor activation, along with a series of spatial models to study the role of spine geometry and organelle location within the spine for calcium and cyclic AMP signaling. Second, I will discuss how the mechanics of membrane-cytoskeleton interactions can give insight into the regulation of spine shape. I will then describe new efforts to use reconstructions from electron microscopy to inform computational domains. I will conclude with how geometry and mechanics play an important role in our understanding of fundamental biological phenomena, and with some general ideas on bio-inspired engineering.
The geometry of abstraction in hippocampus and pre-frontal cortex
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. Here we characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
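The operational definition of abstraction used here, decoder generalization across held-out conditions, can be sketched in a few lines. The demo below uses synthetic data and a plain least-squares decoder of my own, not the study's analysis: when two binary variables are encoded along consistent population directions, a decoder for one variable trained in some conditions transfers to unseen ones.

```python
import numpy as np

rng = np.random.default_rng(12)
n_neurons, n_trials = 100, 200
# Two binary variables define four conditions; an 'abstract' geometry
# encodes each variable along its own consistent population direction.
axis_a, axis_b = rng.normal(size=(2, n_neurons))

def trials(a, b):
    base = 3.0 * (a * axis_a + b * axis_b)
    return base + rng.normal(size=(n_trials, n_neurons))

# Train a decoder for variable A only in conditions with b = +1 ...
X_train = np.vstack([trials(+1, +1), trials(-1, +1)])
y_train = np.repeat([1.0, -1.0], n_trials)
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ... and test it in the unseen conditions with b = -1.
X_test = np.vstack([trials(+1, -1), trials(-1, -1)])
y_test = np.repeat([1.0, -1.0], n_trials)
ccgp = (np.sign(X_test @ w) == y_test).mean()
print(f"cross-condition generalization performance: {ccgp:.2f}")
```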
Design Principles of Living Matter
In this talk, I will describe my lab’s recent efforts to understand the design principles of the active, soft materials that drive cell morphogenesis. In particular, we are interested in how collections of myosin II motors and actin polymers generate, relax, sense and adapt to mechanical force. I will discuss how motor-filament interactions lead to either distributed extensile or contractile stresses as the mechanics of the system changes from fluid to solid. Using optical control of motors, we are now exploring how spatially structured stress can be used to drive local flows and motion. If time, I will also describe how feedbacks between local geometry and activity can be harnessed to drive morphogenetic changes in model systems.
Untangling the web of behaviours used to produce spider orb webs
Many innate behaviours are the result of multiple sensorimotor programs that are dynamically coordinated to produce higher-order behaviours such as courtship or architecture construction. Extended phenotypes such as architecture are especially useful for ethological study because the structure itself is a physical record of behavioural intent. A particularly elegant and easily quantifiable structure is the spider orb-web. The geometric symmetry and regularity of these webs have long generated interest in their behavioural origin. However, quantitative analyses of this behaviour have been sparse due to the difficulty of recording web-making in real-time. To address this, we have developed a novel assay enabling real-time, high-resolution tracking of limb movements and web structure produced by the hackled orb-weaver Uloborus diversus. With a small brain of approximately 100,000 neurons, U. diversus offers a tractable model organism for the study of complex behaviours. Using deep learning frameworks for limb tracking, and unsupervised behavioural clustering methods, we have developed an atlas of stereotyped movement motifs and are investigating the behavioural state transitions of which the geometry of the web is an emergent property. In addition to tracking limb movements, we have developed algorithms to track the web’s dynamic graph structure. We aim to model the relationship between the spider’s sensory experience on the web and its motor decisions, thereby identifying the sensory and internal states contributing to this sensorimotor transformation. Parallel efforts in our group are establishing 2-photon in vivo calcium imaging protocols in this spider, eventually facilitating a search for neural correlates underlying the internal and sensory state variables identified by our behavioural models. In addition, we have assembled a genome, and are developing genetic perturbation methods to investigate the genetic underpinnings of orb-weaving behaviour. Together, we aim to understand how complex innate behaviours are coordinated by underlying neuronal and genetic mechanisms.
Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies
Information that is shared across brains is encoded in idiosyncratic fine-scale functional topographies. Hyperalignment jointly models shared information and idiosyncratic topographies. Pattern vectors for neural responses and connectivities are projected into a common, high-dimensional information space, rather than being aligned in a canonical anatomical space. Hyperalignment calculates individual transformation matrices that preserve the geometry of pairwise dissimilarities between pattern vectors. Individual cortical topographies are modeled as mixtures of overlapping, individual-specific topographic basis functions, rather than as contiguous functional areas. The fundamental property of brain function that is preserved across brains is information content, rather than the functional properties of local features that support that content.
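The core projection step can be illustrated with an orthogonal Procrustes transformation, which preserves the geometry of pairwise dissimilarities exactly. This is a simplified stand-in for the full hyperalignment procedure, using SciPy on synthetic "brains":

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.spatial.distance import pdist

rng = np.random.default_rng(13)
n_stimuli, n_vertices = 50, 300

# Brain 1's response patterns, and brain 2 as an idiosyncratically
# rotated version of the same shared information space (plus noise).
B1 = rng.normal(size=(n_stimuli, n_vertices))
R_true, _ = np.linalg.qr(rng.normal(size=(n_vertices, n_vertices)))
B2 = B1 @ R_true + 0.1 * rng.normal(size=(n_stimuli, n_vertices))

# Solve for the orthogonal transform aligning brain 2 to brain 1.
R, _ = orthogonal_procrustes(B2, B1)
B2_aligned = B2 @ R

print("inter-subject pattern correlation before:",
      round(np.corrcoef(B1.ravel(), B2.ravel())[0, 1], 3))
print("after alignment:",
      round(np.corrcoef(B1.ravel(), B2_aligned.ravel())[0, 1], 3))
print("pairwise-dissimilarity geometry preserved:",
      np.allclose(pdist(B2), pdist(B2_aligned)))
```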
High-dimensional geometry of visual cortex
Interpreting high-dimensional datasets requires new computational and analytical methods. We developed such methods to extract and analyze neural activity from 20,000 neurons recorded simultaneously in awake, behaving mice. The neural activity was not low-dimensional as commonly thought, but instead was high-dimensional and obeyed a power-law scaling across its eigenvalues. We developed a theory that proposes that neural responses to external stimuli maximize information capacity while maintaining a smooth neural code. We then observed power-law eigenvalue scaling in many real-world datasets, and therefore developed a nonlinear manifold embedding algorithm called Rastermap that can capture such high-dimensional structure.
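The power-law signature is straightforward to test on any data matrix (a generic recipe of my own, not the Rastermap code): compute the PCA eigenvalue spectrum and fit the slope on log-log axes.

```python
import numpy as np

def eigenspectrum_exponent(X, n_fit=50):
    """PCA eigenvalue spectrum of X (samples x neurons) and the power-law
    exponent alpha from a log-log fit over the first n_fit ranks."""
    Xc = X - X.mean(axis=0)
    lam = np.linalg.eigvalsh(np.cov(Xc.T))[::-1]   # descending eigenvalues
    ranks = np.arange(1, n_fit + 1)
    alpha = -np.polyfit(np.log(ranks), np.log(lam[:n_fit]), 1)[0]
    return lam, alpha

# Synthetic population whose variances decay as a 1/n power law.
rng = np.random.default_rng(14)
n_neurons = 300
target = np.arange(1, n_neurons + 1) ** -1.0
X = rng.normal(size=(5000, n_neurons)) * np.sqrt(target)
lam, alpha = eigenspectrum_exponent(X)
print(f"fitted power-law exponent alpha ~ {alpha:.2f}")   # near 1.0
```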
Geometry of Neural Computation Unifies Working Memory and Planning
Cognitive tasks typically require the integration of working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied within neuroscience as independent modular processes in the brain. In this talk I will present an alternative view, that neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during performance of delay tasks. Human behaviour in this task is consistent with the contingency representation, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and showed how the contingency representation explains salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel and falsifiable predictions for single-unit and population neural recordings.
The geometry of abstraction in artificial and biological neural networks
The curse of dimensionality plagues models of reinforcement learning and decision-making. The process of abstraction solves this by constructing abstract variables describing features shared by different specific instances, reducing dimensionality and enabling generalization in novel situations. We characterized neural representations in monkeys performing a task where a hidden variable described the temporal statistics of stimulus-response-outcome mappings. Abstraction was defined operationally using the generalization performance of neural decoders across task conditions not used for training. This type of generalization requires a particular geometric format of neural representations. Neural ensembles in dorsolateral pre-frontal cortex, anterior cingulate cortex and hippocampus, and in simulated neural networks, simultaneously represented multiple hidden and explicit variables in a format reflecting abstraction. Task events engaging cognitive operations modulated this format. These findings elucidate how the brain and artificial systems represent abstract variables, variables critical for generalization that in turn confers cognitive flexibility.
Exploiting color space geometry for visual stimulus design across animals
COSYNE 2022
The geometry of cortical representations of touch in rodents
COSYNE 2022
The geometry of map-like representations under dynamic cognitive control
COSYNE 2022
Irrational choice via curvilinear value geometry in ventromedial prefrontal cortex
COSYNE 2022
The neural code controls the geometry of probabilistic inference in early olfactory processing
COSYNE 2022
Neural mechanisms for collision avoidance exploiting positional geometry
COSYNE 2022
Neuronal implementation of the representational geometry in prefrontal working memory
COSYNE 2022
The representational geometry of social memory in the hippocampus
COSYNE 2022
The shared geometry of biological and recurrent neural network dynamics
COSYNE 2023
The geometry and role of sequential activity in olfactory processing
COSYNE 2023
Hippocampal CA2 modulates its geometry to solve the memory-generalization tradeoff for social memory
COSYNE 2023
Learning in neural networks with brain-inspired geometry
COSYNE 2023
Neural Population Geometry across model scale: A tool for cross-species functional comparison of visual brain regions
COSYNE 2023
Stable geometry is inevitable in drifting neural representations
COSYNE 2023
Task switching differentially perturbs neural geometry in the human frontal and temporal lobes
COSYNE 2023
Disrupted Egocentric Vector Coding of Environmental Geometry in Alzheimer’s Disease Mouse Model
COSYNE 2025
The geometry and role of sequential activity in sensory processing and perceptual generalization
COSYNE 2025
Large-scale geometry of cortical dynamics underlying evidence accumulation and short-term memory
COSYNE 2025
Retrosplenial Parvalbumin Interneurons Gate the Egocentric Vector Coding of Environmental Geometry
COSYNE 2025
Interaction of actin dynamics and spine geometry acts as a synaptic tag
FENS Forum 2024
Predictive learning shapes the representational geometry of the human brain
FENS Forum 2024
Revealing the geometry of neuronal population dynamics and scaling of neuronal dimensionality using cortex-wide volumetric recording of neuroactivity at cellular resolution
FENS Forum 2024
The Fractal Geometry of Alzheimer’s Disease: Toward Better Cognitive Assessment
Neuromatch 5
Population geometry enables fast sampling in spiking neural networks
Neuromatch 5