Neural architectures
Simona Olmi
The Institute for Complex Systems at the National Research Council in Florence (Italy) invites applications for a one-year Postdoctoral Scholar in the fields of computational neuroscience, complex networks and nonlinear dynamics. The successful applicant is expected to work closely with a multidisciplinary research team led by Dr. Simona Olmi on problems related to neuroscience. Specific topics of interest include, but are not limited to, the investigation of biologically realistic large-scale brain activity, the emergence of coupling between neural oscillations in neural architectures, the derivation of neural mass models in the presence of short-term synaptic plasticity and/or spike-frequency adaptation, as well as applications to brain structural connectivity matrices. Successful candidates are expected to primarily conduct computational and data-driven research, taking advantage of our international network of experimental collaborators and/or clinical partners.
Neural architectures: what are they good for anyway?
The brain has a highly complex structure in terms of cell types and wiring between different regions. What is it for, if anything? I'll start this talk by asking what an answer to this question might even look like, given that we can't run an alternative universe where our brains are structured differently. (Preview: we can do this with models!) I'll then talk about some of our work in two areas: (1) does the modular structure of the brain contribute to specialisation of function? (2) how do different cell types and architectures contribute to multimodal sensory processing?
On temporal coding in spiking neural networks with alpha synaptic function
The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training, and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very quickly but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neuron spikes. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically-inspired neural architectures.
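To make the temporal coding scheme concrete, here is a minimal numerical sketch of how a neuron's first spike time could be read off from alpha-function postsynaptic potentials. This is an illustration only: the kernel normalisation, threshold, and the brute-force threshold search are assumptions, whereas the abstract's actual model computes spike times (and their derivatives for backpropagation) in closed form.

```python
import numpy as np

def alpha_psp(t, tau=1.0):
    """Alpha synaptic kernel: rises, then decays, peaking at t = tau.
    Normalised (assumption) so the peak value is 1."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] / tau) * np.exp(1.0 - t[pos] / tau)
    return out

def first_spike_time(input_times, weights, threshold=1.0, tau=1.0,
                     t_max=10.0, dt=1e-3):
    """Sum weighted alpha PSPs from the input spikes and return the first
    time the membrane potential crosses threshold, or None if it never does."""
    t = np.arange(0.0, t_max, dt)
    v = np.zeros_like(t)
    for ti, wi in zip(input_times, weights):
        v += wi * alpha_psp(t - ti, tau)
    crossed = np.nonzero(v >= threshold)[0]
    return t[crossed[0]] if crossed.size else None
```

In this reading of the scheme, a classification decision would be the output neuron with the smallest `first_spike_time`: stronger or earlier inputs drive the potential to threshold sooner, so information lives entirely in relative spike timing rather than in firing rates.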
Where are the neural architectures? The curse of structural flatness in neural network modelling
Neuromatch 5