Neuronal Tuning
NMC4 Short Talk: Hypothesis-neutral response-optimized models of higher-order visual cortex reveal strong semantic selectivity
Modeling neural responses to naturalistic stimuli has been instrumental in advancing our understanding of the visual system. Dominant computational modeling efforts in this direction have been deeply rooted in preconceived hypotheses. In contrast, hypothesis-neutral methodologies with minimal a priori commitments, which bring neuroscience data directly to bear on the model development process, are likely to be far more flexible and effective for modeling and understanding tuning properties throughout the visual system. In this study, we develop such a hypothesis-neutral approach and characterize response selectivity in the human visual cortex exhaustively and systematically via response-optimized deep neural network models. First, we leverage the unprecedented scale and quality of the recently released Natural Scenes Dataset to constrain parametrized neural models of higher-order visual areas, achieving novel predictive precision and, in some cases, significantly outperforming state-of-the-art task-optimized models. Next, we ask what kinds of functional properties emerge spontaneously in these response-optimized models. We examine the trained networks through structural analysis (feature visualizations) as well as functional analysis (feature verbalizations) by running `virtual' fMRI experiments on large-scale probe datasets. Strikingly, although the models receive no category-level supervision and are optimized from scratch solely for brain response prediction, units in the optimized networks act as detectors for semantic concepts such as `faces' or `words', providing some of the strongest evidence for categorical selectivity in these visual areas. The observed selectivity in model neurons raises a further question: are the category-selective units simply functioning as detectors for their preferred category, or are they a by-product of a non-category-specific visual processing mechanism?
To investigate this, we create selective deprivations in the visual diet of these response-optimized networks and study semantic selectivity in the resulting `deprived' networks, thereby also shedding light on the role of specific visual experiences in shaping neuronal tuning. Together, this new class of data-driven models and the accompanying model interpretability techniques illustrate that DNN models of visual cortex need not be conceived of as opaque models with limited explanatory power, but rather as powerful, unifying tools for probing the nature of representations and computations in the brain.
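The core modeling recipe can be caricatured in a few lines of code: parametrize a mapping from stimuli to voxel responses, fit it directly to the measured data, and score predictive precision on held-out stimuli. The sketch below is a deliberately minimal stand-in — a linear ridge model on random features instead of a deep network on NSD images; all dimensions, noise levels, and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "fMRI" data: 200 stimuli as 64-dim feature vectors, 10 voxels.
# A hypothetical stand-in for NSD images and measured voxel responses.
X = rng.normal(size=(200, 64))
true_W = rng.normal(size=(64, 10))          # ground-truth encoding (unknown in practice)
Y = X @ true_W + 0.1 * rng.normal(size=(200, 10))

# Response-optimized fit: ridge regression, closed form.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(64), X.T @ Y)

# Predictive precision: per-voxel correlation on held-out stimuli.
X_test = rng.normal(size=(50, 64))
Y_test = X_test @ true_W + 0.1 * rng.normal(size=(50, 10))
Y_hat = X_test @ W
r = [np.corrcoef(Y_test[:, v], Y_hat[:, v])[0, 1] for v in range(10)]
print(np.round(np.mean(r), 2))
```

In the actual study the linear map is replaced by a deep network trained end-to-end, but the fit-then-validate logic is the same.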
How dendrites help solve biological and machine learning problems
Dendrites are thin processes that extend from the cell body of neurons, the brain's main computing units. The role of dendrites in complex brain functions has been investigated for several decades, yet their direct involvement in key behaviors, such as sensory perception, has only recently been established. In my presentation I will discuss how computational modelling has helped us illuminate dendritic function. I will present the main findings of a number of projects in my lab dealing with dendritic nonlinearities in excitatory and inhibitory neurons and their consequences for neuronal tuning and memory formation, the role of dendrites in solving nonlinear problems in human neurons, and recent efforts to advance machine learning algorithms by adopting dendritic features.
Neural coding in the auditory cortex - Emergent Scientists Seminar Series
Dr Jennifer Lawlor
Title: Tracking changes in complex auditory scenes along the cortical pathway
Complex acoustic environments, such as a busy street, are characterised by their ever-changing dynamics. Despite this complexity, listeners can readily tease apart relevant changes from irrelevant variations. Doing so requires continuously tracking the appropriate sensory evidence while discarding noisy acoustic variations. Despite the apparent simplicity of this perceptual phenomenon, the neural basis of the extraction of relevant information from complex continuous streams for goal-directed behavior is not well understood. As a minimalistic model of change detection in complex auditory environments, we designed broad-range tone clouds whose first-order statistics change at a random time. Subjects (humans or ferrets) were trained to detect these changes. They faced the dual task of estimating the baseline statistics and detecting a potential change in those statistics at any moment. To characterize the extraction and encoding of relevant sensory information along the cortical hierarchy, we first recorded the brain electrical activity of human subjects engaged in this task using electroencephalography. Human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of the baseline statistics. Change-locked, decision-related EEG responses were found at a centro-parietal scalp location, with a slope that depended on change size, consistent with sensory evidence accumulation. To further this investigation, we performed a series of electrophysiological recordings in the primary auditory cortex (A1), secondary auditory cortex (PEG), and frontal cortex (FC) of fully trained, behaving ferrets. A1 neurons exhibited strong onset responses and change-related discharges specific to their tuning. The PEG population showed reduced onset-related responses but more categorical change-related modulations.
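The paradigm can be sketched, in heavily simplified form, as a stream of tone frequencies whose mean shifts at some time, watched by an observer that tracks baseline statistics and flags large deviations. The simulation below is an illustrative stand-in only — the numbers, the fixed change time, and the running z-score detector are invented, not the authors' actual stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tone cloud: tone frequencies drawn i.i.d.; the first-order statistics
# (here, the mean) shift at a change time (fixed here for reproducibility,
# random in the actual experiments). All values are illustrative.
n, change_t = 400, 250
pre = rng.normal(loc=1000.0, scale=100.0, size=change_t)       # baseline (Hz)
post = rng.normal(loc=1300.0, scale=100.0, size=n - change_t)  # shifted mean
cloud = np.concatenate([pre, post])

def detect_change(x, warmup=50, win=20, thresh=5.0):
    """Estimate baseline statistics online; flag a change when the mean of
    a recent window deviates strongly from the running baseline."""
    for t in range(warmup + win, len(x)):
        mu, sd = x[:t - win].mean(), x[:t - win].std()
        z = abs(x[t - win:t].mean() - mu) / (sd / np.sqrt(win))
        if z > thresh:
            return t
    return None

t_hat = detect_change(cloud)
print(t_hat)  # detection lags the true change by a few window samples
```

The detector's reaction time shrinks as the change grows, loosely mirroring the change-size-dependent evidence accumulation described above.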
Finally, a subset of FC neurons (dlPFC/premotor) presented a generalized response to all change-related events, but only during behavior. Using a generalized linear model (GLM), we show that the same FC subpopulation encodes both sensory and decision signals, suggesting that FC neurons could perform the conversion of sensory evidence into a perceptual decision. Altogether, these area-specific responses suggest a behavior-dependent mechanism for the extraction and generalization of task-relevant events.
Aleksandar Ivanov
Title: How does the auditory system adapt to different environments: A song of echoes and adaptation
Geometry of Neural Computation Unifies Working Memory and Planning
Cognitive tasks typically require working memory, contextual processing, and planning to be carried out in close coordination. However, these computations are typically studied in neuroscience as independent modular processes in the brain. In this talk I will present an alternative view: neural representations of mappings between expected stimuli and contingent goal actions can unify working memory and planning computations. We term these stored maps contingency representations. We developed a "conditional delayed logic" task capable of disambiguating the types of representations used during the performance of delay tasks. Human behaviour in this task is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized artificial recurrent neural network models, we investigated the representational geometry and dynamical circuit mechanisms supporting contingency-based computation, and show how contingency representations explain salient observations of neuronal tuning properties in prefrontal cortex. Finally, our theory generates novel, falsifiable predictions for single-unit and population neural recordings.
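The distinction at the heart of the talk can be illustrated with a toy example: rather than holding the context cue itself in memory through the delay, a contingency representation stores the stimulus-to-action map that the cue implies, so the eventual response reduces to a lookup. The task structure and every name below are invented for illustration; this is not the authors' actual "conditional delayed logic" task.

```python
# Hypothetical conditional task: a context cue specifies which
# probe-stimulus -> action rule applies after a delay.
CONTEXT_RULES = {
    "red":  {"high_tone": "go",   "low_tone": "nogo"},
    "blue": {"high_tone": "nogo", "low_tone": "go"},
}

def encode_contingency(context):
    """Contingency account: at cue time, store the full probe->action map.
    Nothing else (in particular, not the cue itself) needs to survive
    the delay."""
    return dict(CONTEXT_RULES[context])

def act(plan, probe):
    """With the map already stored, planning collapses to a lookup."""
    return plan[probe]

plan = encode_contingency("red")      # maintained across the delay
print(act(plan, "high_tone"), act(plan, "low_tone"))  # prints: go nogo
```

A sensory working-memory account would instead maintain the cue "red" through the delay and combine it with the rule only when the probe arrives; the two accounts predict different delay-period representations, which is what the task is designed to disambiguate.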