CNNs
Kerstin Ritter
The Department of Machine Learning for Clinical Neuroscience is currently recruiting PhD candidates and postdocs. We develop advanced machine learning and deep learning models to analyze diverse clinical data, including neuroimaging, psychometric, clinical, smartphone, and omics datasets. While we focus on methodological challenges (explainability, robustness, multimodal data integration, causality, etc.), the main goal is to enhance early diagnosis, predict disease progression, and personalize treatment for neurological and psychiatric diseases in diverse clinical settings. We offer an exciting and supportive environment with access to state-of-the-art compute facilities, mentoring, and career advice from experienced faculty. Hertie AI closely collaborates with the world-class AI ecosystem in Tübingen (e.g. Cyber Valley, the Cluster of Excellence “Machine Learning in Science”, and the Tübingen AI Center).
NMC4 Short Talk: Directly interfacing brain and deep networks exposes non-hierarchical visual processing
A recent approach to understanding the mammalian visual system is to show correspondence between the sequential stages of processing in the ventral stream and the layers of a deep convolutional neural network (DCNN), providing evidence that visual information is processed hierarchically, with successive stages containing ever higher-level information. However, correspondence is usually defined as shared variance between a brain region and a model layer. We propose that task-relevant variance is a stricter test: if a DCNN layer corresponds to a brain region, then substituting the model’s activity with brain activity should successfully drive the model’s object recognition decision. Using this approach on three datasets (human fMRI and macaque neuron firing rates), we found that, in contrast to the hierarchical view, all ventral stream regions corresponded best to later model layers. That is, all regions contain high-level information about object category. We hypothesised that this is due to recurrent connections propagating high-level visual information from later regions back to early regions, in contrast to the exclusively feed-forward connectivity of DCNNs. Using task-relevant correspondence with a late DCNN layer as a tracer, we applied Granger causal modelling to show that late-DCNN correspondence in IT drives correspondence in V4. Our analysis suggests, effectively, that no ventral stream region can be appropriately characterised as ‘early’ beyond 70 ms after stimulus presentation, challenging hierarchical models. More broadly, we ask what it means for a model component and a brain region to correspond: beyond quantifying shared variance, we must consider their functional role in the computation. We also demonstrate that using a DCNN to decode high-level conceptual information from the ventral stream produces a general mapping from brain to model activation space, one which generalises to novel classes held out from the training data.
This suggests future possibilities for brain-machine interfaces that carry high-level conceptual information, beyond current designs that interface with the sensorimotor periphery.
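The substitution test described above can be illustrated with a toy sketch: a hypothetical two-stage linear "model", simulated voxel responses, and a ridge-regression mapping from brain space into the model's layer space. All names, sizes, and values here are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained network: one hidden "layer" and a linear readout.
W1 = rng.normal(size=(20, 8))   # pixels -> layer activations
W2 = rng.normal(size=(8, 3))    # layer activations -> 3 object classes

def model_decision(layer_act):
    """Readout stage of the model: class decision from layer activity."""
    return int(np.argmax(layer_act @ W2))

# Simulated stimuli and "brain" responses: voxels carry the layer's
# information through an unknown linear mixing plus a little noise.
stimuli = rng.normal(size=(50, 20))
layer_acts = stimuli @ W1
mixing = rng.normal(size=(8, 30))
brain = layer_acts @ mixing + 0.01 * rng.normal(size=(50, 30))

# Learn a linear mapping brain -> layer space (ridge regression, closed form).
lam = 1e-3
A = np.linalg.solve(brain.T @ brain + lam * np.eye(30), brain.T @ layer_acts)

# Task-relevant test: substitute mapped brain activity for the model layer
# and ask whether the model's decision survives the swap.
preserved = sum(
    model_decision(brain[i] @ A) == model_decision(layer_acts[i])
    for i in range(50)
)
print(f"decisions preserved: {preserved}/50")
```

The stricter criterion is visible here: a region "corresponds" to a layer not when variance is shared, but when its activity can stand in for the layer and still drive the recognition decision.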
Measuring relevant features of the social and physical environment with imagery
The efficacy of images for creating quantitative measures of urban perception has been explored in psychology, social science, urban planning, and architecture over the last 50 years. The ability to scale these measurements has become possible only in the last decade, due to increased urban surveillance in the form of street-view and satellite imagery, and the accessibility of such data. This talk will present a series of projects that use imagery and CNNs to predict, measure, and interpret the social and physical environments of our cities.
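As a rough illustration of the imagery-to-measurement idea (not any specific project from the talk), the sketch below extracts convolutional features from toy "street scenes" and fits a linear readout to a simulated perception score; every kernel, image, and score here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv_features(img, kernels):
    """Valid 2-D cross-correlation with each kernel, then ReLU and global mean pooling."""
    h, w = img.shape
    k = kernels.shape[1]
    feats = []
    for K in kernels:
        out = np.array([
            [(img[i:i + k, j:j + k] * K).sum() for j in range(w - k + 1)]
            for i in range(h - k + 1)
        ])
        feats.append(np.maximum(out, 0).mean())  # ReLU + global average pool
    return np.array(feats)

# A hand-set vertical-edge kernel plus random kernels form the feature bank.
edge = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]], dtype=float)
kernels = np.concatenate([edge[None], rng.normal(size=(3, 3, 3))])

# Toy "street scenes": the simulated perception score is driven by edge
# energy (a stand-in for some visual property of the environment) plus noise.
imgs = rng.normal(size=(40, 12, 12))
scores = np.array([conv_features(im, edge[None])[0] for im in imgs])
scores = scores + 0.02 * rng.normal(size=40)

# Linear readout on CNN-style features, with a bias column.
X = np.array([conv_features(im, kernels) for im in imgs])
X = np.hstack([X, np.ones((40, 1))])
w, *_ = np.linalg.lstsq(X, scores, rcond=None)
r = np.corrcoef(X @ w, scores)[0, 1]
print(f"feature-to-score correlation: {r:.2f}")
```

The real projects replace the toy kernels with a trained CNN and the synthetic score with crowd-sourced perception ratings, but the pipeline shape (imagery → features → predicted measure) is the same.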
Crowding and the Architecture of the Visual System
Classically, vision is seen as a cascade of local, feedforward computations. This framework has been tremendously successful, inspiring a wide range of ground-breaking findings in neuroscience and computer vision. Recently, feedforward Convolutional Neural Networks (ffCNNs), inspired by this classic framework, have revolutionized computer vision and been adopted as tools in neuroscience. However, despite these successes, there is much more to vision. I will present our work using visual crowding and related psychophysical effects as probes into visual processes that go beyond the classic framework. In crowding, perception of a target deteriorates in clutter. We focus on global aspects of crowding, in which perception of a small target is strongly modulated by the global configuration of elements across the visual field. We show that models based on the classic framework, including ffCNNs, cannot explain these effects for principled reasons and identify recurrent grouping and segmentation as a key missing ingredient. Then, we show that capsule networks, a recent kind of deep learning architecture combining the power of ffCNNs with recurrent grouping and segmentation, naturally explain these effects. We provide psychophysical evidence that humans indeed use a similar recurrent grouping and segmentation strategy in global crowding effects. In crowding, visual elements interfere across space. To study how elements interfere over time, we use the Sequential Metacontrast psychophysical paradigm, in which perception of visual elements depends on elements presented hundreds of milliseconds later. We psychophysically characterize the temporal structure of this interference and propose a simple computational model. Our results support the idea that perception is a discrete process. Together, the results presented here provide stepping-stones towards a fuller understanding of the visual system by suggesting architectural changes needed for more human-like neural computations.
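The recurrent grouping-and-segmentation mechanism of capsule networks can be made concrete with a minimal numpy implementation of dynamic routing by agreement (Sabour et al., 2017), the iterative procedure these architectures use; the toy votes below are illustrative, not data from the talk.

```python
import numpy as np

def squash(v, axis=-1):
    """Capsule nonlinearity: shrink short vectors toward 0, preserve direction."""
    n2 = (v ** 2).sum(axis=axis, keepdims=True)
    return (n2 / (1.0 + n2)) * v / np.sqrt(n2 + 1e-9)

def route(predictions, n_iters=3):
    """Dynamic routing by agreement, one layer.

    predictions: (n_in, n_out, dim) votes from input capsules for each
    output capsule. Returns output capsule vectors of shape (n_out, dim).
    """
    n_in, n_out, _ = predictions.shape
    b = np.zeros((n_in, n_out))                               # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coeffs
        s = (c[..., None] * predictions).sum(axis=0)          # weighted votes
        v = squash(s)
        b = b + (predictions * v[None]).sum(axis=-1)          # agreement update
    return v

# Two groups of input capsules vote coherently for different output capsules;
# routing segments them, concentrating each group's coupling on "its" output.
rng = np.random.default_rng(2)
preds = rng.normal(scale=0.05, size=(6, 2, 4))
preds[:3, 0] += np.array([1.0, 0.0, 0.0, 0.0])  # group A agrees on output 0
preds[3:, 1] += np.array([0.0, 1.0, 0.0, 0.0])  # group B agrees on output 1
out = route(preds)
norms = np.linalg.norm(out, axis=-1)
print(np.round(norms, 2))
```

The agreement loop is the recurrent ingredient the talk argues ffCNNs lack: elements that vote consistently are iteratively grouped into the same higher-level entity, segmenting them from the rest of the display.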
Domain Specificity in the Human Brain: What, Whether, and Why?
The last quarter century has provided extensive evidence that some regions of the human cortex are selectively engaged in processing a single specific domain of information, from faces, places, and bodies to language, music, and other people’s thoughts. This work dovetails with earlier theories in cognitive science highlighting domain specificity in human cognition, development, and evolution. But many questions remain unanswered about even the clearest cases of domain specificity in the brain, the selective engagement of the FFA, PPA, and EBA in the perception of faces, places, and bodies, respectively. First, these claims lack precision, saying little about what is computed and how, and relying on human judgements to decide what counts as a face, place, or body. Second, they provide no account of the reliably varying responses of these regions across different “preferred” images, or across different “nonpreferred” images for each category. Third, the category selectivity of each region is vulnerable to refutation if any of the vast set of as-yet-untested nonpreferred images turns out to produce a stronger response than preferred images for that region. Fourth, and most fundamentally, they provide no account of why, from a computational point of view, brains should exhibit this striking degree of functional specificity in the first place, and why we should have the particular visual specializations we do, for faces, places, and bodies, but not (apparently) for food or snakes. The advent of convolutional neural networks (CNNs) to model visual processing in the ventral pathway has opened up many opportunities to address these long-standing questions in new ways. I will describe ongoing efforts in our lab to harness CNNs to do just that.
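One concrete way to harness CNNs for questions of category selectivity is to score individual model units with the same kind of selectivity index applied to cortical regions. The sketch below computes a standard (pref − nonpref)/(pref + nonpref) index on simulated unit responses; the units and response distributions are invented for illustration.

```python
import numpy as np

def selectivity_index(pref, nonpref):
    """Contrast index in [-1, 1]: +1 means responses only to the preferred category."""
    mp, mn = pref.mean(), nonpref.mean()
    return (mp - mn) / (mp + mn + 1e-9)

rng = np.random.default_rng(3)

# Simulated (non-negative) responses of two model units to 100 face
# images and 100 object images each.
face_unit_faces = rng.gamma(shape=9.0, scale=1.0, size=100)  # strong to faces
face_unit_objs = rng.gamma(shape=2.0, scale=1.0, size=100)   # weak to objects
generic_faces = rng.gamma(shape=4.0, scale=1.0, size=100)    # no preference
generic_objs = rng.gamma(shape=4.0, scale=1.0, size=100)

si_face = selectivity_index(face_unit_faces, face_unit_objs)
si_generic = selectivity_index(generic_faces, generic_objs)
print(f"face unit SI: {si_face:.2f}, generic unit SI: {si_generic:.2f}")
```

Run over all units and arbitrary image sets, the same index lets one ask the questions raised above of a model: which units are "FFA-like", how graded their selectivity is across preferred and nonpreferred images, and whether untested categories would overturn the label.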
Comparing CNNs and the brain: sensitivity to images altered in the frequency domain
Neuromatch 5
Brain-like visual surround suppression in generic CNNs: successes and limitations
Neuromatch 5