Network Motifs
The Secret Bayesian Life of Ring Attractor Networks
Efficient navigation requires animals to track their position, velocity, and heading direction (HD). Some animals’ behavior suggests that they also track uncertainties about these navigational variables, and make strategic use of these uncertainties, in line with a Bayesian computation. Ring attractor networks have been proposed to estimate and track these navigational variables, for instance in the HD system of the fruit fly Drosophila. However, such networks are not designed to incorporate a notion of uncertainty, and therefore seem unsuited to implementing dynamic Bayesian inference. Here, we close this gap by showing that specifically tuned ring attractor networks can track both an HD estimate and its associated uncertainty, thereby approximating a circular Kalman filter. We identify the network motifs required to integrate angular velocity observations, e.g., from self-initiated turns, and absolute HD observations, e.g., visual landmark inputs, according to their respective reliabilities, and show that these network motifs are present in the connectome of the Drosophila HD system. Specifically, our network encodes uncertainty in the amplitude of a localized bump of neural activity, thereby generalizing standard ring attractor models. In contrast to such standard attractors, however, proper Bayesian inference requires the network dynamics to operate in a regime away from the attractor state. More generally, we show that near-Bayesian integration is inherent in generic ring attractor networks, and that their amplitude dynamics can account for close-to-optimal reliability weighting of external evidence for a wide range of network parameters. This only holds, however, if their connection strengths allow the network to deviate sufficiently from the attractor state. Overall, our work offers a novel interpretation of ring attractor networks as dynamic Bayesian integrators. We further provide a principled theoretical foundation for the suggestion that the Drosophila HD system may implement Bayesian HD tracking via ring attractor dynamics.
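As a rough, hedged illustration of the reliability weighting described above (not the authors' equations), the sketch below runs one predict/update cycle of a circular Kalman-filter-style estimator. The heading posterior is summarized by a mean angle mu and a certainty kappa, the quantity the network would carry in its bump amplitude; the function names, the certainty-decay form, and all parameter values are illustrative assumptions.

```python
import numpy as np

def predict(mu, kappa, v_obs, dt, sigma_v=0.1):
    """Prediction step: integrate an angular-velocity observation.

    The heading estimate mu advances by the observed turn, while the certainty
    kappa decays because angular diffusion accumulates between observations
    (an illustrative decay form, not the exact filter equation).
    """
    mu = (mu + v_obs * dt) % (2 * np.pi)
    kappa = kappa / (1.0 + kappa * sigma_v ** 2 * dt)
    return mu, kappa

def update(mu, kappa, z, kappa_z):
    """Correction step: fuse an absolute HD observation z (e.g., a landmark cue).

    Estimate and observation are combined as planar vectors whose lengths encode
    their reliabilities; the resultant vector gives the new mean and certainty.
    """
    x = kappa * np.cos(mu) + kappa_z * np.cos(z)
    y = kappa * np.sin(mu) + kappa_z * np.sin(z)
    return np.arctan2(y, x) % (2 * np.pi), np.hypot(x, y)

# One filtering cycle: a self-initiated turn followed by a weak landmark cue.
mu, kappa = 0.0, 5.0
mu, kappa = predict(mu, kappa, v_obs=0.5, dt=0.1)
mu, kappa = update(mu, kappa, z=0.3, kappa_z=1.0)
print(f"heading estimate {mu:.3f} rad, certainty {kappa:.2f}")
```

In this picture, a reliable landmark (large kappa_z) pulls the estimate strongly and raises the certainty, whereas an unreliable one barely moves it; this is the reliability weighting that the abstract attributes to the bump-amplitude dynamics.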
Structure, Function, and Learning in Distributed Neuronal Networks
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.
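As a toy caricature of the manifold picture (not the analytic theory mentioned above), the sketch below builds two synthetic 'object manifolds' as point clouds in a model response space and measures how well a simple perceptron readout discriminates them as the manifolds grow relative to the separation of their centers. The dimensionality, radii, and readout rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def object_manifold(center, radius, n_points):
    """Sample a noisy point cloud ('manifold') around a class center."""
    return center + radius * rng.standard_normal((n_points, center.size))

def readout_accuracy(radius, dim=50, n_points=200, separation=4.0, epochs=50):
    """Held-out accuracy of a perceptron readout for two manifolds of a given radius."""
    c1 = rng.standard_normal(dim)
    c2 = c1 + separation * rng.standard_normal(dim) / np.sqrt(dim)

    def dataset():
        X = np.vstack([object_manifold(c1, radius, n_points),
                       object_manifold(c2, radius, n_points)])
        X = np.hstack([X, np.ones((2 * n_points, 1))])        # bias feature
        y = np.hstack([np.ones(n_points), -np.ones(n_points)])
        return X, y

    X, y = dataset()
    w = np.zeros(dim + 1)
    for _ in range(epochs):                                    # perceptron training
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:
                w += yi * xi

    X_test, y_test = dataset()
    return np.mean(np.sign(X_test @ w) == y_test)

# Larger manifolds (relative to the separation of their centers) are harder to discriminate.
for r in (0.5, 1.0, 2.0):
    print(f"manifold radius {r}: held-out accuracy {readout_accuracy(r):.2f}")
```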
Building a synthetic cell: Understanding clock design and function
Clock networks containing the same central architectures may vary drastically in their potential to oscillate, raising the question of what controls robustness, one of the essential functions of an oscillator. We computationally generated an atlas of oscillators and found that, while core topologies are critical for oscillations, local structures substantially modulate the degree of robustness. Strikingly, two local structures, incoherent and coherent inputs, can modify a core topology to promote and attenuate its robustness, respectively, and they do so additively. These findings underscore the importance of local modifications to the performance of the whole network and may explain why auxiliary structures not required for oscillations are evolutionarily conserved. We also extend this computational framework to search for hidden network motifs underlying other clock functions, such as tunability, which relates to the capability of a clock to adjust its timing to external cues. Experimentally, we developed an artificial cell system in water-in-oil microemulsions, within which we reconstitute mitotic cell cycles that can perform self-sustained oscillations for 30 to 40 cycles over multiple days. The oscillation profiles, such as period, amplitude, and shape, can be quantitatively varied with the concentrations of clock regulators, energy levels, droplet sizes, and circuit design. Such innate flexibility makes the system well suited to studying clock functions such as tunability and stochasticity at the single-cell level. Combined with a pressure-driven multi-channel tuning setup and long-term time-lapse fluorescence microscopy, this system enables high-throughput exploration of a multi-dimensional continuous parameter space and single-cell analysis of clock dynamics and functions. We integrate this experimental platform with mathematical modeling to elucidate the topology-function relationship of biological clocks. With FRET and optogenetics, we also investigate spatiotemporal cell-cycle dynamics in both homogeneous and heterogeneous microenvironments by reconstructing subcellular compartments.
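The 'atlas of oscillators' procedure can be caricatured in a few lines: fix one core topology, draw random parameter sets, and score robustness as the fraction of sets that yield sustained oscillations. The sketch below does this for a generic three-node repression ring; the equations, parameter ranges, and oscillation criterion are illustrative assumptions rather than the models used in the work described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ring(beta, n, t_max=100.0, dt=0.01):
    """Euler-integrate a 3-node repression ring: dx_i/dt = beta / (1 + x_{i-1}^n) - x_i."""
    x0, x1, x2 = rng.uniform(0.5, 1.5, size=3)        # random initial condition
    traj = []
    for _ in range(int(t_max / dt)):
        x0, x1, x2 = (x0 + dt * (beta / (1.0 + x2 ** n) - x0),
                      x1 + dt * (beta / (1.0 + x0 ** n) - x1),
                      x2 + dt * (beta / (1.0 + x1 ** n) - x2))
        traj.append(x0)
    return np.array(traj)

def oscillates(traj, rel_amp=0.1):
    """Call a trajectory oscillatory if its late-time swing remains large."""
    tail = traj[len(traj) // 2:]
    return (tail.max() - tail.min()) > rel_amp * tail.mean()

def robustness(n_samples=100):
    """Fraction of random parameter sets that give sustained oscillations."""
    hits = 0
    for _ in range(n_samples):
        beta = 10 ** rng.uniform(0, 3)                 # log-uniform synthesis rate
        n = rng.uniform(1.0, 4.0)                      # Hill coefficient
        hits += oscillates(simulate_ring(beta, n))
    return hits / n_samples

print(f"estimated robustness of the 3-node ring: {robustness():.2f}")
```

Adding a candidate local structure (e.g., an extra input node) to the ring and rerunning the same scan would give a crude version of the topology comparison the abstract describes.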