Biological plausibility
Nonlinear computations in spiking neural networks through multiplicative synapses
The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as such computations require additional nonlinearities (e.g., dendritic or synaptic) whose weights are obtained through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term the multiplicative spike coding network (mSCN). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how to precisely implement higher-order polynomial dynamics with coupled networks that use only pairwise multiplicative synapses, and provide the expected numbers of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs, such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs, and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
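The recipe sketched in this abstract can be made concrete: in the SCN framework the slow recurrent drive is D^T(F(x_hat) + lambda * x_hat) with x_hat = D r the decoded estimate, so when F is a polynomial the drive expands into pairwise products of filtered spike trains, i.e. multiplicative synapses. Below is a minimal simulation sketch of that idea (not the authors' code; the network size, decoder scale, and the example quadratic dynamics are illustrative assumptions).

```python
# Minimal sketch of a spike coding network implementing a polynomial
# dynamical system dx/dt = F(x) + c(t); all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

K, N = 2, 40               # latent dimensions, number of neurons
lam = 10.0                 # decoder / membrane leak (1/s)
dt = 1e-4                  # Euler timestep (s)

# Random decoding weights, one column per neuron, fixed norm
D = rng.standard_normal((K, N))
D = 0.1 * D / np.linalg.norm(D, axis=0, keepdims=True)

def F(x):
    """Polynomial target dynamics: an oscillator plus quadratic (pairwise) terms."""
    A = np.array([[0.0, 6.0], [-6.0, 0.0]])
    quad = np.array([-0.3 * x[0] * x[1], 0.2 * x[0] ** 2])
    return A @ x + quad

def c(t):
    """External command: a brief pulse that kicks the system away from rest."""
    return np.array([8.0, 0.0]) if t < 0.1 else np.zeros(2)

thresh = 0.5 * np.sum(D ** 2, axis=0)    # spiking thresholds ||D_i||^2 / 2
V = np.zeros(N)                          # membrane voltages
r = np.zeros(N)                          # filtered spike trains
x_ref = np.zeros(K)                      # reference trajectory (plain Euler)

for step in range(int(2.0 / dt)):
    t = step * dt
    x_hat = D @ r                        # decoded network estimate
    # Slow recurrent drive D^T (F(x_hat) + lam * x_hat + c): because F is
    # polynomial, F(D r) expands into pairwise products r_j * r_k, i.e. the
    # multiplicative synapses of the mSCN.
    V += dt * (-lam * V + D.T @ (F(x_hat) + lam * x_hat + c(t)))
    i = int(np.argmax(V - thresh))
    if V[i] > thresh[i]:                 # at most one spike per timestep
        V -= D.T @ D[:, i]               # fast inhibition (-D^T D on spikes)
        r[i] += 1.0
    r -= dt * lam * r
    x_ref += dt * (F(x_ref) + c(t))      # reference for comparison

print("decoded:", D @ r, "  reference:", x_ref)
```

The decoded estimate should roughly track the reference Euler trajectory; accuracy improves with more neurons and a finer timestep.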
Credit Assignment in Neural Networks through Deep Feedback Control
The success of deep learning sparked interest in whether the brain learns by using similar techniques for assigning credit to each synaptic weight for its contribution to the network output. However, the majority of current attempts at biologically plausible learning methods are either non-local in time, require highly specific connectivity motifs, or have no clear link to any known mathematical optimization method. Here, we introduce Deep Feedback Control (DFC), a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment. The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns. To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local, voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing. By combining dynamical systems theory with mathematical optimization theory, we provide a strong theoretical foundation for DFC, which we corroborate with detailed results on toy experiments and standard computer-vision benchmarks.
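To make the control loop and the locality of the learning rule concrete, here is a minimal sketch of the core DFC idea (an illustrative toy, not the authors' implementation; the layer sizes, tanh nonlinearity, random feedback weights Q, and leaky integral-controller parameters are all assumptions): a controller nudges every layer until the output matches the target, and each weight update only compares a layer's controlled activity with its own feedforward prediction from the layer below.

```python
# Toy sketch of the Deep Feedback Control loop on a single training example.
import numpy as np

rng = np.random.default_rng(1)
phi = np.tanh                                   # layer nonlinearity (assumed)

sizes = [4, 16, 16, 2]
L = len(sizes) - 1
W = [rng.standard_normal((sizes[i + 1], sizes[i])) * 0.3 for i in range(L)]   # forward weights
Q = [rng.standard_normal((sizes[i + 1], sizes[-1])) * 0.3 for i in range(L)]  # feedback weights

x = rng.standard_normal(sizes[0])
y_target = np.array([0.8, -0.8])

# Initialise layer voltages at the feedforward solution
v, r_prev = [], x
for Wi in W:
    v.append(Wi @ r_prev)
    r_prev = phi(v[-1])

u = np.zeros(sizes[-1])                         # control signal
dt, tau_v, tau_u, alpha = 0.05, 1.0, 2.0, 0.1

# Simulate network + controller dynamics towards steady state
for _ in range(2000):
    r = [x] + [phi(vi) for vi in v]
    e = y_target - r[-1]                        # output error drives the controller
    u += dt / tau_u * (e - alpha * u)           # leaky integral control
    for i in range(L):
        v[i] += dt / tau_v * (-v[i] + W[i] @ r[i] + Q[i] @ u)   # controller nudges every layer

# Local, voltage-dependent plasticity: controlled activity vs feedforward prediction
eta = 0.1
r = [x] + [phi(vi) for vi in v]
for i in range(L):
    W[i] += eta * np.outer(phi(v[i]) - phi(W[i] @ r[i]), r[i])

print("feedforward output after update:", phi(W[2] @ phi(W[1] @ phi(W[0] @ x))),
      " controlled output:", phi(v[-1]))
```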
Hebbian learning, its inference, and brain oscillation
Despite the recent success of deep learning in artificial intelligence, the lack of biological plausibility and of labeled data in natural learning still poses a challenge to understanding biological learning. At the other extreme lies Hebbian learning, the simplest local and unsupervised rule, yet one considered computationally less efficient. In this talk, I will introduce a novel method to infer the form of Hebbian learning from in vivo data. Applying the method to data obtained from the monkey inferior temporal cortex during a recognition task indicates how Hebbian learning changes the dynamic properties of the circuits and may promote brain oscillations. Notably, recent electrophysiological data from rodent V1 showed that the effect of visual experience on direction selectivity was similar to that observed in the monkey data, providing strong validation of the asymmetric changes in feedforward and recurrent synaptic strengths inferred from the monkey data. This may suggest a general learning principle underlying the same computation, such as familiarity detection, across different features represented in different brain regions.
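The abstract does not spell out the inferred rule, so as background only, here is the textbook Hebbian update (pre times post, plus decay) in a toy rate circuit with separate feedforward and recurrent weights. All sizes, parameters, and the tanh nonlinearity are illustrative assumptions; the asymmetry between the two weight classes is what the inference method is designed to recover from data, not something this sketch demonstrates.

```python
# Textbook Hebbian plasticity in a toy circuit with feedforward and recurrent weights.
import numpy as np

rng = np.random.default_rng(2)

n_in, n_rec = 20, 10
W_ff = 0.1 * rng.standard_normal((n_rec, n_in))    # feedforward weights
W_rec = np.zeros((n_rec, n_rec))                   # recurrent weights
eta, decay = 0.01, 0.001

stimulus = rng.standard_normal(n_in)               # one repeatedly presented stimulus

for presentation in range(200):
    r = np.tanh(W_ff @ stimulus)                   # feedforward response
    r = np.tanh(W_ff @ stimulus + W_rec @ r)       # one recurrent iteration
    # Plain Hebbian updates (pre * post) with weight decay; the in vivo analysis
    # asks how the feedforward and recurrent components change with experience.
    W_ff += eta * np.outer(r, stimulus) - decay * W_ff
    W_rec += eta * np.outer(r, r) - decay * W_rec
    np.fill_diagonal(W_rec, 0.0)                   # no self-connections

print("response norm after familiarisation:", np.linalg.norm(r))
```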
Using Nengo and the Neural Engineering Framework to Represent Time and Space
The Neural Engineering Framework (and the associated software tool Nengo) provides a general method for converting algorithms into neural networks with an adjustable level of biological plausibility. I will give an introduction to this approach, and then focus on recent developments that have yielded new insights into how brains represent time and space. This will start with the underlying mathematical formulation of ideal methods for representing continuous time and continuous space, then show how implementing these in neural networks can improve machine-learning tasks, and finally show how the resulting systems compare to temporal and spatial representations in biological brains.
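As a concrete illustration of the NEF recipe of converting an algorithm into a spiking network, here is a minimal Nengo sketch (parameter choices are illustrative) that builds an integrator, one of the basic building blocks used when representing continuous time: the desired dynamics dx/dt = u are mapped onto a recurrently connected ensemble through the synaptic time constant tau.

```python
# Minimal Nengo example: a spiking integrator built with the standard NEF mapping.
import nengo

tau = 0.1  # synaptic time constant used to map the dynamics onto the network

with nengo.Network() as model:
    stim = nengo.Node(lambda t: 1.0 if t < 0.2 else 0.0)   # brief input pulse
    ens = nengo.Ensemble(n_neurons=200, dimensions=1)       # spiking population
    # NEF dynamics principle: dx/dt = u becomes an input connection scaled by tau
    # plus an identity recurrent connection, both through the tau synapse.
    nengo.Connection(stim, ens, transform=tau, synapse=tau)
    nengo.Connection(ens, ens, synapse=tau)                  # recurrent integration
    probe = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# The decoded value should hold near 0.2 (the integral of the pulse) after the input ends.
print(sim.data[probe][-1])
```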