Principal Components Analysis
Experience-dependent changes of sensory representations in the olfactory cortex
Sensory representations are typically thought of as neuronal activity patterns that encode physical attributes of the outside world. However, increasing evidence shows that as animals learn the association between a sensory stimulus and its behavioral relevance, stimulus representations in sensory cortical areas can change. In this seminar I will present recent experiments from our lab showing that activity in the olfactory piriform cortex (PC) of mice encodes not only odor information, but also non-olfactory variables associated with the behavioral task. We developed an associative olfactory learning task in which animals learn to associate a particular context with an odor and a reward, allowing us to record the activity of multiple neurons as the animal runs in a virtual reality corridor. By analyzing the population activity dynamics using Principal Components Analysis, we find distinct population trajectories evolving through time that discriminate between trial types. Using Generalized Linear Models, we further dissected the contributions of different sensory and non-sensory variables to the modulation of PC activity. Interestingly, the experiments show that variables related to both sensory and non-sensory aspects of the task (e.g., odor, context, reward, licking, sniffing rate, and running speed) differentially modulate PC activity, suggesting that the PC adapts odor processing depending on experience and behavior.
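To make the population-trajectory analysis concrete, here is a minimal sketch of PCA applied to pooled population activity from two trial types. All data are simulated, and every parameter (neuron count, time bins, latent dynamics) is a hypothetical stand-in for the recordings described above, not the lab's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 50 neurons, 100 time bins, two trial
# types; the latent dynamics and mixing weights are simulated.
n_neurons, n_time = 50, 100
t = np.linspace(0.0, 1.0, n_time)
latent_a = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
latent_b = np.vstack([np.sin(2 * np.pi * t + 1.0), t])
mixing = rng.normal(size=(n_neurons, 2))
rates_a = mixing @ latent_a + 0.1 * rng.normal(size=(n_neurons, n_time))
rates_b = mixing @ latent_b + 0.1 * rng.normal(size=(n_neurons, n_time))

# Pool time points from both trial types (samples x neurons) and
# center each neuron's activity before PCA.
X = np.hstack([rates_a, rates_b]).T
X -= X.mean(axis=0)

# PCA via SVD; rows of Vt are the principal axes in neuron space.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
traj = X @ Vt[:2].T  # project onto the top two PCs

# Each trial type now traces a low-dimensional trajectory through
# time; separation between the two discriminates trial type.
traj_a, traj_b = traj[:n_time], traj[n_time:]
print(traj_a.shape, traj_b.shape)  # (100, 2) each
```

Similarly, the GLM analysis can be illustrated with a Poisson regression of spike counts on task variables. The regressors and coefficients below are invented for illustration only and do not reflect the lab's actual model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical spike counts for one neuron, modulated by invented
# task variables; none of this reflects real recordings.
n_trials = 500
odor = rng.integers(0, 2, n_trials)       # odor identity (A vs. B)
speed = rng.uniform(0.0, 30.0, n_trials)  # running speed (cm/s)
lick = rng.integers(0, 2, n_trials)       # licking in this time bin
rate = np.exp(0.5 + 0.8 * odor + 0.03 * speed - 0.4 * lick)
counts = rng.poisson(rate)

# Poisson GLM: log firing rate modeled as a linear combination of
# the task regressors; fitted coefficients quantify each variable's
# contribution to the neuron's activity.
design = sm.add_constant(np.column_stack([odor, speed, lick]))
fit = sm.GLM(counts, design, family=sm.families.Poisson()).fit()
print(fit.params)  # intercept and one coefficient per variable
```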
Using noise to probe recurrent neural network structure and prune synapses
Many networks in the brain are sparsely connected, and the brain eliminates synapses during development and learning. How could the brain decide which synapses to prune? In a recurrent network, determining the importance of a synapse between two neurons is a difficult computational problem that depends on the roles both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems and is often considered an irritant to be overcome. In the first part of this talk, I will suggest that noise could play a functional role in synaptic pruning, allowing the brain to probe network structure and determine which synapses are redundant. I will introduce a simple, local, unsupervised plasticity rule that either strengthens or prunes synapses using only the synaptic weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, this rule provably preserves the spectrum of the original weight matrix, and hence network dynamics, even as the fraction of pruned synapses asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation. Time permitting, I will then turn to the problem of extracting structure from neural population datasets using dimensionality reduction methods. I will argue that nonlinear structures naturally arise in neural data and show how these nonlinearities cause linear dimensionality reduction methods, such as Principal Components Analysis, to fail dramatically at identifying low-dimensional structure.
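The abstract does not spell out the plasticity rule, so the following is only an illustrative stand-in built from the quantities the abstract names: it scores each synapse by the product of its weight and the noise-driven covariance of the neurons it connects, prunes the lowest-scoring half, and checks how well the spectrum survives. The talk's actual rule, which provably preserves the spectrum for certain network classes, will differ in its details.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear recurrent network x[t+1] = W @ x[t] + noise.
# The scoring and rescaling below are illustrative stand-ins only.
n = 40
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))

# Drive the network with white noise and record its activity.
T = 20000
x = np.zeros(n)
samples = np.empty((T, n))
for step in range(T):
    x = W @ x + rng.normal(scale=0.1, size=n)
    samples[step] = x

# Noise-driven covariance; entry C[i, j] is locally available to
# the synapse connecting neurons j and i.
C = np.cov(samples.T)

# Local score for each synapse: its own weight times the covariance
# of the two neurons it joins; prune the weakest-scoring half.
score = np.abs(W) * np.abs(C)
mask = score > np.median(score)
W_masked = np.where(mask, W, 0.0)

# Crude global compensation so total absolute weight is conserved;
# the talk's rule instead strengthens each surviving synapse locally.
W_pruned = W_masked * (np.abs(W).sum() / np.abs(W_masked).sum())

# Compare leading eigenvalue magnitudes before and after pruning;
# this toy version only approximately preserves the spectrum.
top = lambda M: np.sort(np.abs(np.linalg.eigvals(M)))[::-1][:5]
print(top(W))
print(top(W_pruned))
```

The final point, that linear methods misjudge the dimensionality of nonlinear structure, can be seen in a toy example: a one-dimensional ring embedded in a higher-dimensional space already requires two principal components, and sharper curvature spreads variance over many more. The embedding below is a hypothetical stand-in for a curved neural manifold.

```python
import numpy as np

rng = np.random.default_rng(2)

# A one-dimensional latent variable (angle on a ring) embedded
# linearly in 10 dimensions, plus observation noise.
theta = rng.uniform(0.0, 2.0 * np.pi, size=2000)
ring = np.column_stack([np.cos(theta), np.sin(theta)])
X = ring @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(2000, 10))

# PCA via SVD on the centered data.
X -= X.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = s**2 / np.sum(s**2)

# Although the intrinsic dimension is 1, PCA needs two components
# to capture the ring; stronger nonlinearities spread variance over
# many more components, masking the low-dimensional structure.
print(var_explained[:4])
```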