Solution Space
Neural circuit parameter variability, robustness, and homeostasis
Neurons and neural circuits can produce stereotyped and reliable output activity on the basis of highly variable cellular, synaptic, and circuit properties. This is crucial for proper nervous system function throughout an animal’s life in the face of growth, perturbations, and molecular turnover. But how can reliable output arise from neurons and synapses whose parameters vary between individuals in a population, and within an individual over time? I will review how a combination of experimental and computational methods can be used to examine how neuron and network function depends on underlying parameters such as neuronal membrane conductances and synaptic strengths. Within the high-dimensional parameter space of a neural system, the subset of parameter combinations that produce biologically functional neuron or circuit activity is captured by the notion of a ‘solution space’. I will describe solution space structures determined from electrophysiology data, from ion channel expression levels across populations of neurons and animals, and from computational parameter space explorations. A key finding is the experimental and computational evidence for parameter correlations that give structure to solution spaces. Computational modeling suggests that such parameter correlations can help constrain neuron and circuit properties to functional regimes, while experimental results indicate that neural circuits may have evolved to implement some of these beneficial correlations at the cellular level. Finally, I will review modeling work and experiments that seek to illuminate how neural systems can homeostatically navigate their parameter spaces, either remaining stably within their solution space to reliably produce functional output, or returning to it after perturbations that temporarily disrupt neuron or network function.
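To make the parameter space exploration concrete, below is a minimal Python sketch of the sample-simulate-classify workflow. The two-conductance toy rate function, the parameter ranges, and the 40-60 Hz target band are hypothetical stand-ins chosen for illustration, not the conductance-based models from the work reviewed; the point is only the logic of randomly sampling parameters, keeping the combinations that yield functional output, and then inspecting the structure of that solution space.

```python
import numpy as np

# Toy stand-in for a neuron model: output rate depends on a depolarizing
# conductance g_dep and a hyperpolarizing conductance g_hyp. The functional
# form and all numbers here are illustrative assumptions.
def output_rate(g_dep, g_hyp):
    return 100.0 * g_dep / (g_dep + 2.0 * g_hyp + 0.1)

rng = np.random.default_rng(0)
n = 100_000
g_dep = rng.uniform(0.0, 10.0, n)  # arbitrary conductance units
g_hyp = rng.uniform(0.0, 10.0, n)

# Classify each sampled parameter combination as functional or not,
# using a hypothetical 40-60 Hz target band.
rate = output_rate(g_dep, g_hyp)
functional = (rate > 40.0) & (rate < 60.0)
print(f"fraction of sampled space that is functional: {functional.mean():.3f}")

# Inside the solution space the two conductances co-vary: increasing both
# together leaves the output unchanged, so the functional subset shows a
# positive pairwise correlation.
r = np.corrcoef(g_dep[functional], g_hyp[functional])[0, 1]
print(f"correlation of g_dep and g_hyp within the solution space: {r:.2f}")
```

The positive correlation that emerges among the accepted parameter sets is the toy analogue of the experimentally observed co-regulation of ion channel expression: the solution space is not a featureless blob but a structured region in which parameters compensate for one another.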
A geometric framework to predict structure from function in neural networks
The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function. However, quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of rectified-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. We then use this analytical characterization to rigorously analyze the solution space geometry and derive certainty conditions guaranteeing a non-zero synapse between neurons.
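As a complement to the analytical characterization, here is a minimal numerical sketch in Python of the underlying linear-algebra step, under the simplifying assumption that every specified response is strictly positive, so each neuron sits on the linear branch of the rectification at the fixed point. The network sizes, the random patterns, and the minimum-norm solution picked by `lstsq` are illustrative assumptions; the framework itself characterizes the entire solution set analytically and also handles silent neurons via inequality constraints.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, P = 4, 6, 3  # neurons, input synapses, response patterns (P <= K)

# Hypothetical specified data: P input patterns (columns of X) and the
# desired strictly positive steady-state responses (columns of R).
X = rng.uniform(0.5, 1.5, (K, P))
R = rng.uniform(0.5, 1.5, (N, P))

# With all neurons active, the rectification is linear at the fixed point:
#     R = W @ R + F @ X.
# Each neuron's weights (one row of [W | F]) must satisfy P linear
# equations in N + K unknowns; since P <= K < N + K the system is
# underdetermined, and lstsq returns the minimum-norm solution, i.e.
# one point in the space of connectivities that realize the responses.
A = np.vstack([R, X]).T                        # shape (P, N + K)
WF = np.linalg.lstsq(A, R.T, rcond=None)[0].T  # shape (N, N + K)
W, F = WF[:, :N], WF[:, N:]

# Verify that the specified responses are fixed points of the
# rectified-linear steady-state equation.
R_check = np.maximum(W @ R + F @ X, 0.0)
print("max fixed-point error:", np.abs(R_check - R).max())
```

Because the system is underdetermined, infinitely many connectivity matrices reproduce the same responses; the geometric analysis operates on this full solution set, and the certainty conditions identify the synapses that must be non-zero in every member of it.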