Seminar · Neuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 13, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.
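The core idea, a network trained purely to reconstruct images of surfaces with no labels, can be sketched in miniature. The toy below is illustrative only, not the model from the talk: a 1-D "renderer" generates signals from three latent factors (stand-ins for shape, reflectance and lighting), and a linear autoencoder with a three-unit bottleneck learns to reconstruct them by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "renderer": each 32-sample signal mixes three latent factors
# (stand-ins for shape, reflectance and lighting). Purely illustrative.
def render(latents):
    shape, reflect, light = latents.T
    x = np.linspace(0.0, 1.0, 32)
    return (np.sin(2 * np.pi * shape[:, None] * x)   # geometry-like ripple
            + reflect[:, None] * x**2                # reflectance-like gradient
            + light[:, None])                        # overall illumination level

latents = rng.uniform(-1, 1, size=(500, 3))
images = render(latents)

# Linear autoencoder with a 3-unit bottleneck, trained with plain
# gradient descent on reconstruction error -- no labels anywhere.
enc = rng.normal(0.0, 0.1, size=(32, 3))
dec = rng.normal(0.0, 0.1, size=(3, 32))
lr = 0.01

def loss(W_enc, W_dec):
    recon = images @ W_enc @ W_dec
    return np.mean((recon - images) ** 2)

initial = loss(enc, dec)
for _ in range(500):
    code = images @ enc
    err = code @ dec - images
    grad_dec = code.T @ err / len(images)
    grad_enc = images.T @ (err @ dec.T) / len(images)
    enc -= lr * grad_enc
    dec -= lr * grad_dec
final = loss(enc, dec)   # reconstruction improves without any supervision
```

In the talk's setting the network is deep and the inputs are rendered surface images; the sketch only illustrates the principle that reconstruction alone pushes a bottleneck to capture the generative factors behind the images.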

Seminar · Neuroscience

Learning to see Stuff

Kate Storrs
Justus Liebig University Giessen
Oct 27, 2021

Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world, like the glossiness of a surface, that cannot be measured by any other sense? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network, trained on an artificial environment of surfaces with different shapes, materials and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes patterns of errors in its perception of material that follow, on an image-by-image basis, the patterns of errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.
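One way to ask whether a network has "come to encode those factors" is to test how well each generative factor can be read out linearly from its internal code. A minimal sketch, with simulated codes standing in for a trained network's representations (all quantities here are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose we know the ground-truth generative factors of each image
# and the internal codes a trained network assigns to those images.
# Here the "codes" are simulated as a noisy mixture of the factors.
factors = rng.uniform(-1, 1, size=(1000, 3))     # shape, material, lighting
mixing = rng.normal(size=(3, 8))
codes = factors @ mixing + 0.1 * rng.normal(size=(1000, 8))

def r_squared(code, target):
    """Variance in `target` explained by the best linear readout of `code`."""
    w, *_ = np.linalg.lstsq(code, target, rcond=None)
    resid = target - code @ w
    return 1.0 - resid.var() / target.var()

# High R^2 for every factor means the information survives in the code;
# comparing readouts across individual code units probes disentanglement.
scores = [r_squared(codes, factors[:, i]) for i in range(3)]
```

A follow-up analysis in the same spirit would restrict the readout to single code units, asking whether each factor is carried by a dedicated subset rather than smeared across the whole representation.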

Seminar · Neuroscience

Theory of gating in recurrent neural networks

Kamesh Krishnamurthy
Princeton University
Sep 16, 2020

Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) for processing sequential data and in neuroscience for understanding the emergent properties of networks of real neurons. Prior theoretical work on RNNs has focused on models with additive interactions. However, real neurons can have gating (i.e. multiplicative) interactions, and gating is also a central feature of the best-performing RNNs in machine learning. Here, we develop a dynamical mean-field theory (DMFT) to study the consequences of gating in RNNs. We use random matrix theory to show how gating robustly produces marginal stability and line attractors, important mechanisms for biologically relevant computations requiring long memory. The long-time behavior of the gated network is studied using its Lyapunov spectrum, and the DMFT is used to derive a novel analytical expression for the maximum Lyapunov exponent, demonstrating its close relation to the relaxation time of the dynamics. Gating is also shown to give rise to a novel, discontinuous transition to chaos, where the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity), contrary to a seminal result for additive RNNs. Critical surfaces and regions of marginal stability in the parameter space are indicated in phase diagrams, providing a map for principled parameter choices by ML practitioners. Finally, we develop a field theory for the gradients that arise in training by incorporating the adjoint sensitivity framework from control theory into the DMFT. This paves the way for the use of powerful field-theoretic techniques to study training and gradients in large RNNs.
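The maximum Lyapunov exponent discussed in the abstract can also be estimated numerically, by following two nearby trajectories and renormalising their separation at every step (the Benettin method). The sketch below uses a simplified rate network with a sigmoidal gate multiplying the update rate; this gating form is an assumed stand-in, not the talk's exact equations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                   # network size
g = 2.0                                   # coupling gain (g > 1: chaos in the additive net)
J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))     # recurrent weights
Jz = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # gate weights (assumed form)

def step(x, gated, dt=0.1):
    """One Euler step of dx/dt = z(x) * (-x + J tanh(x)); z = 1 if ungated."""
    drive = J @ np.tanh(x)
    if gated:
        z = 1.0 / (1.0 + np.exp(-(Jz @ np.tanh(x))))  # multiplicative gate in (0, 1)
        return x + dt * z * (-x + drive)
    return x + dt * (-x + drive)

def max_lyapunov(gated, steps=3000, eps=1e-8, dt=0.1):
    """Benettin estimate: track a perturbed trajectory, renormalising each step."""
    x = rng.normal(size=N)
    for _ in range(500):                  # discard the transient
        x = step(x, gated, dt)
    v = rng.normal(size=N)
    y = x + eps * v / np.linalg.norm(v)   # perturbation of size eps
    acc = 0.0
    for _ in range(steps):
        x = step(x, gated, dt)
        y = step(y, gated, dt)
        d = np.linalg.norm(y - x)
        acc += np.log(d / eps)            # log expansion this step
        y = x + (eps / d) * (y - x)       # rescale the separation back to eps
    return acc / (steps * dt)

lam_additive = max_lyapunov(gated=False)  # positive: chaotic for g > 1
lam_gated = max_lyapunov(gated=True)      # sign depends on gate parameters
```

The additive case recovers the classic result that strong random coupling (g > 1) yields chaos; the DMFT in the talk characterises analytically how the gate reshapes this exponent and the transition.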

Seminar · Neuroscience · Recording

Watching single molecules in action: How this can be used in neurodegeneration

David Klenerman
University of Cambridge
Apr 30, 2020

This talk aims to show how new physical methods can advance biological and biomedical research. A major advance in physical chemistry in the last two decades has been the development of quantitative methods to directly observe individual molecules in solution, attached to surfaces, in the membrane of live cells or more recently inside live cells. These single-molecule fluorescence studies have now reached a stage where they can provide new insights into important biological problems. After presenting the principles of these methods, I will give some examples from our current research to probe the molecular basis of neurodegeneration. Here we have used single-molecule fluorescence to detect and analyse the low concentrations of soluble protein aggregates thought to be responsible for Alzheimer’s disease and determine the mechanisms by which they damage neurons. Lastly, I will describe how fundamental science aimed at watching single molecules incorporating nucleotides into DNA gave rise to a new rapid method to sequence DNA that is now widely used.
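For a flavour of how such measurements are analysed: an aggregate diffusing through a confocal spot produces a photon "burst" far above the background count rate. A toy burst-detection sketch (all rates, counts and thresholds are illustrative, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated photon-count time trace from a confocal spot: Poisson
# background plus occasional bright bursts when an aggregate
# transits the focus. All parameters here are illustrative.
n_bins = 5000
bg_rate = 2.0                                   # mean background photons per bin
trace = rng.poisson(bg_rate, n_bins).astype(float)
true_burst_bins = rng.choice(n_bins, size=20, replace=False)
trace[true_burst_bins] += rng.poisson(50, size=20)   # bright transit events

# Simple detection: flag bins well above background, e.g. mean plus
# five standard deviations (for a Poisson rate, sd = sqrt(mean)).
threshold = bg_rate + 5 * np.sqrt(bg_rate)
detected = np.flatnonzero(trace > threshold)
```

Counting detected bursts as a function of sample dilution is one way such methods quantify very low aggregate concentrations.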

ePoster · Neuroscience

Learning static and motion cues to material by predicting moving surfaces

Kate Storrs, Roland Fleming

COSYNE 2022
