Flatiron
Eero Simoncelli, Ph.D.
The Center for Neural Science at New York University (NYU), jointly with the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation, invites applications for an open-rank joint position, with a preference for junior or mid-career candidates. We seek exceptional candidates who use computational frameworks to develop concepts, models, and tools for understanding brain function. Areas of interest include sensory representation and perception, memory, decision-making, adaptation and learning, and motor control. A Ph.D. in a relevant field, such as neuroscience, engineering, physics, or applied mathematics, is required. Review of applications will begin 28 March 2021.

Further information:
* Joint position: https://apply.interfolio.com/83845
* NYU Center for Neural Science: https://www.cns.nyu.edu/
* Flatiron Institute Center for Computational Neuroscience: https://www.simonsfoundation.org/flatiron/center-for-computational-neuroscience/
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and extend efficient-coding and predictive-information models of neurons, presenting a novel perspective in which neurons not only predict their future inputs but also actively influence them through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. Using a novel data-driven control framework, we show that biological neurons could plausibly act as effective feedback controllers. This perspective coherently explains a range of experimental findings that previously seemed unrelated, with implications for how neuronal circuits are modeled and for the design of alternative, biologically inspired artificial neural networks.
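The abstract does not spell out the data-driven control framework. As a purely illustrative sketch, and not the authors' method, the toy below has a single "neuron" identify an unknown scalar environment from its own input/output history by recursive least squares, then apply certainty-equivalence feedback to hold its input at a set point. The plant parameters, set point, and learning scheme are all assumptions made for illustration.

```python
# Toy sketch (not the authors' framework): a single "neuron" acting as a
# data-driven feedback controller of an unknown scalar environment
#   x[t+1] = a * x[t] + b * u[t] + noise,
# with a, b unknown. The neuron estimates (a, b) online from its own
# observation/output history, then applies certainty-equivalence feedback
# u[t] = (x_ref - a_hat * x[t]) / b_hat to steer x toward x_ref.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.95, 0.5      # unknown environment dynamics (assumed values)
x_ref = 1.0                      # desired level of the neuron's future input
theta = np.zeros(2)              # estimates [a_hat, b_hat]
P = np.eye(2) * 100.0            # recursive-least-squares covariance

x, history = 0.0, []
for t in range(200):
    # Explore at first, then control once the estimates are usable.
    if t < 20 or abs(theta[1]) < 1e-3:
        u = rng.normal()                          # probing output
    else:
        u = (x_ref - theta[0] * x) / theta[1]     # feedback output
    x_next = a_true * x + b_true * u + 0.01 * rng.normal()

    # Recursive least-squares update of [a_hat, b_hat] from (x, u, x_next).
    phi = np.array([x, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (x_next - phi @ theta)
    P = P - np.outer(K, phi @ P)

    history.append(x_next)
    x = x_next

print("estimated (a, b):", np.round(theta, 3))
print("mean input over last 50 steps:", np.round(np.mean(history[-50:]), 3))
```

The basic logic is what matters here: the controller is built directly from observed data rather than from a known model of the environment, which is the sense in which a neuron could act as a feedback controller without solving a system-identification problem explicitly.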
Signal in the Noise: models of inter-trial and inter-subject neural variability
The ability to record large neural populations—hundreds to thousands of cells simultaneously—is a defining feature of modern systems neuroscience. Aside from improved experimental efficiency, what do these technologies fundamentally buy us? I'll argue that they provide an exciting opportunity to move beyond studying the "average" neural response. That is, by providing dense neural circuit measurements in individual subjects and moments in time, these recordings enable us to track changes across repeated behavioral trials and across experimental subjects. These two forms of variability are still poorly understood, despite their obvious importance to understanding the fidelity and flexibility of neural computations. Scientific progress on these points has been impeded by the fact that individual neurons are very noisy and unreliable. My group is investigating a number of customized statistical models to overcome this challenge. I will mention several of these models but focus particularly on a new framework for quantifying across-subject similarity in stochastic trial-by-trial neural responses. By applying this method to noisy representations in deep artificial networks and in mouse visual cortex, we reveal that the geometry of neural noise correlations is a meaningful feature of variation, which is neglected by current methods (e.g. representational similarity analysis).
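As a minimal illustration of the closing point (a toy simulation, not the speaker's framework): two simulated "subjects" with identical trial-averaged responses but different noise-correlation structure look the same to a mean-based representational similarity analysis, yet differ clearly once trial-by-trial covariance is compared.

```python
# Toy illustration (not the speaker's method): two simulated "subjects" share
# identical trial-averaged responses to 5 stimuli but differ in the geometry
# of their trial-to-trial noise correlations. A mean-based RSA comparison sees
# little difference; comparing noise covariances does.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_stim, n_trials = 20, 5, 200
means = rng.normal(size=(n_stim, n_neurons))          # shared tuning (assumed)

def simulate(noise_cov):
    """Trial-by-trial responses: shared means + subject-specific noise."""
    L = np.linalg.cholesky(noise_cov)
    noise = rng.normal(size=(n_stim, n_trials, n_neurons)) @ L.T
    return means[:, None, :] + noise                   # (stim, trial, neuron)

# Subject A: independent noise; subject B: strongly correlated noise.
cov_a = 0.5 * np.eye(n_neurons)
w = rng.normal(size=(n_neurons, 1))
cov_b = 0.5 * np.eye(n_neurons) + w @ w.T              # low-rank shared noise
resp_a, resp_b = simulate(cov_a), simulate(cov_b)

def rdm(responses):
    """Mean-based representational dissimilarity matrix (stim x stim)."""
    m = responses.mean(axis=1)
    return np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)

def noise_cov(responses):
    """Trial-to-trial covariance pooled across stimuli."""
    centered = responses - responses.mean(axis=1, keepdims=True)
    return np.cov(centered.reshape(-1, n_neurons).T)

rdm_gap = np.linalg.norm(rdm(resp_a) - rdm(resp_b))
cov_gap = np.linalg.norm(noise_cov(resp_a) - noise_cov(resp_b))
print(f"RDM difference (mean-based RSA): {rdm_gap:.2f}")   # near zero
print(f"noise-covariance difference:     {cov_gap:.2f}")   # much larger
```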
Multi-level theory of neural representations in the era of large-scale neural recordings: Task-efficiency, representation geometry, and single neuron properties
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of representations in neural circuits. In this talk, we will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from the structure in neural populations and from biologically plausible neural networks. First, we will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes a perceptron’s capacity for linearly classifying object categories based on the underlying neural manifolds’ structural properties. Next, we will describe how such methods can, in fact, open the ‘black box’ of distributed neuronal circuits in a range of experimental neural datasets. In particular, our method overcomes the limitations of traditional dimensionality reduction techniques, as it operates directly on the high-dimensional representations rather than relying on low-dimensionality assumptions for visualization. Furthermore, this method allows for simultaneous multi-level analysis, measuring geometric properties of neural population data while estimating the amount of task information embedded in the same population. These geometric frameworks are general and can be used across brain areas and task modalities, as demonstrated in our work and that of others, spanning visual cortex, parietal cortex, and hippocampus, and calcium imaging, electrophysiology, and fMRI datasets. Finally, we will discuss our recent efforts to fully extend this multi-level description of neural populations, by (1) investigating how single-neuron properties shape the representation geometry in early sensory areas, and by (2) understanding how task-efficient neural manifolds emerge in biologically constrained neural networks. By extending our mathematical toolkit for analyzing representations underlying complex neuronal networks, we hope to contribute to the long-term challenge of understanding the neuronal basis of tasks and behaviors.
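As a numerical counterpart to the capacity statement (an illustrative toy under assumed parameters, not the analytic theory itself), one can estimate how many point-cloud "object manifolds" a linear readout can separate by drawing random binary labelings and checking linear separability directly:

```python
# Numerical counterpart (an illustrative toy, not the analytic theory itself):
# estimate how many point-cloud "object manifolds" a linear readout in N
# dimensions can separate, by testing random binary labelings for linear
# separability via an LP feasibility check.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
N = 50          # ambient (neural) dimension -- assumed toy value
m = 10          # points sampled per manifold
radius = 0.2    # manifold extent relative to its center

def is_separable(X, y):
    """True if some hyperplane satisfies y_i * (w @ x_i + b) >= 1 for all i."""
    n, d = X.shape
    A_ub = -(y[:, None] * np.hstack([X, np.ones((n, 1))]))
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.success

def fraction_separable(P, n_dichotomies=25):
    """Fraction of random labelings of P manifolds that are linearly separable."""
    hits = 0
    for _ in range(n_dichotomies):
        centers = rng.normal(size=(P, N))
        points = centers[:, None, :] + radius * rng.normal(size=(P, m, N))
        labels = rng.choice([-1.0, 1.0], size=P)
        hits += is_separable(points.reshape(P * m, N), np.repeat(labels, m))
    return hits / n_dichotomies

for P in (25, 50, 75, 100, 150):
    print(f"P = {P:4d} manifolds:  separable fraction = {fraction_separable(P):.2f}")
# The load P/N at which this fraction falls from ~1 to ~0 is a numerical
# manifold capacity; the analytic theory predicts it from manifold geometry.
```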
Membrane mechanics meet minimal manifolds
Changes in the geometry and topology of self-assembled membranes underlie diverse processes across cellular biology and engineering. Similar to lipid bilayers, monolayer colloidal membranes studied by the Sharma (IISc Bangalore) and Dogic (UCSB) Labs have in-plane fluid-like dynamics and out-of-plane bending elasticity, but their open edges and micron length scale provide a tractable system to study the equilibrium energetics and dynamic pathways of membrane assembly and reconfiguration. First, we discuss how doping colloidal membranes with short miscible rods transforms disk-shaped membranes into saddle-shaped minimal surfaces with complex edge structures. Theoretical modeling demonstrates that their formation is driven by an increasingly positive Gaussian modulus, which in turn is controlled by the fraction of short rods. Further coalescence of saddle-shaped surfaces leads to exotic, topologically distinct structures, including shapes similar to catenoids, trinoids, four-noids, and higher-order structures. We then mathematically explore the mechanics of these catenoid-like structures subject to an external axial force and elucidate their intimate connection to two problems whose solutions date back to Euler: the shape of an area-minimizing soap film and the buckling of a slender rod under compression. A perturbation theory argument directly relates the tensions of membranes to the stability properties of minimal surfaces. We also investigate the effects of including a Gaussian curvature modulus, which, for small enough membranes, causes the axial force to diverge as the ring separation approaches its maximal value.
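For readers unfamiliar with the Euler connection mentioned above, the following is standard textbook background (not a result from the talk): the axisymmetric area functional, its catenoid solution, and the maximal ring separation beyond which no minimal surface exists.

```latex
% Standard background (textbook material, not results from the talk).
% An axisymmetric film $r(z)$ spanning two coaxial rings at $z = \pm h$ minimizes
\[
  A[r] \;=\; 2\pi \int_{-h}^{h} r \sqrt{1 + (r')^{2}} \, dz .
\]
% The Euler--Lagrange equation admits the first integral
\[
  \frac{r}{\sqrt{1 + (r')^{2}}} = c
  \qquad\Longrightarrow\qquad
  r(z) = c \cosh\!\left(\frac{z - z_{0}}{c}\right),
\]
% i.e.\ a catenoid. For rings of radius $R$, the boundary condition
% $R = c \cosh(h/c)$ has solutions only for $h/R \lesssim 0.663$; beyond this
% maximal separation no catenoid exists, which is the existence limit against
% which the axial-force analysis above is set.
```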
Structure, Function, and Learning in Distributed Neuronal Networks
A central goal in neuroscience is to understand how orchestrated computations in the brain arise from the properties of single neurons and networks of such neurons. Answering this question requires theoretical advances that shine light into the ‘black box’ of neuronal networks. In this talk, I will demonstrate theoretical approaches that help describe how cognitive and behavioral task implementations emerge from structure in neural populations and from biologically plausible learning rules. First, I will introduce an analytic theory that connects geometric structures that arise from neural responses (i.e., neural manifolds) to the neural population’s efficiency in implementing a task. In particular, this theory describes how easy or hard it is to discriminate between object categories based on the underlying neural manifolds’ structural properties. Next, I will describe how such methods can, in fact, open the ‘black box’ of neuronal networks, by showing how we can understand a) the role of network motifs in task implementation in neural networks and b) the role of neural noise in adversarial robustness in vision and audition. Finally, I will discuss my recent efforts to develop biologically plausible learning rules for neuronal networks, inspired by recent experimental findings in synaptic plasticity. By extending our mathematical toolkit for analyzing representations and learning rules underlying complex neuronal networks, I hope to contribute toward the long-term challenge of understanding the neuronal basis of behaviors.
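The abstract does not specify the learning rules in question. As a generic, self-contained example of a local rule in the biologically plausible family (not the speaker's specific rules), the sketch below implements Oja's rule: each synapse updates from only its presynaptic input, the postsynaptic activity, and its own weight, and the weight vector nonetheless converges to the leading principal component of the inputs.

```python
# Generic example of a local, biologically plausible learning rule (Oja's rule),
# not the specific rules discussed in the talk. Each update uses only the
# presynaptic input x, the postsynaptic activity y, and the synapse's own
# weight, yet the weight vector converges to the top principal component.
import numpy as np

rng = np.random.default_rng(3)
n_inputs, n_steps, lr = 10, 5000, 0.01

# Inputs with one dominant direction of variance (assumed toy statistics).
principal_dir = rng.normal(size=n_inputs)
principal_dir /= np.linalg.norm(principal_dir)

def sample_input():
    return 3.0 * rng.normal() * principal_dir + 0.3 * rng.normal(size=n_inputs)

w = 0.1 * rng.normal(size=n_inputs)
for _ in range(n_steps):
    x = sample_input()
    y = w @ x                       # postsynaptic activity (linear neuron)
    w += lr * y * (x - y * w)       # Oja's rule: Hebbian term with local decay

alignment = abs(w @ principal_dir) / np.linalg.norm(w)
print(f"|cos(angle)| between learned weights and top PC direction: {alignment:.3f}")
```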