Supervised Training

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with supervised training across World Wide.
5 seminars · Updated about 3 years ago
Seminar · Neuroscience · Recording

Nonlinear computations in spiking neural networks through multiplicative synapses

M. Nardin
IST Austria
Nov 8, 2022

The brain efficiently performs nonlinear computations through its intricate networks of spiking neurons, but how this is done remains elusive. While recurrent spiking networks implementing linear computations can be directly derived and easily understood (e.g., in the spike coding network (SCN) framework), the connectivity required for nonlinear computations can be harder to interpret, as these computations require additional nonlinearities (e.g., dendritic or synaptic) weighted through supervised training. Here we extend the SCN framework to directly implement any polynomial dynamical system. This results in networks requiring multiplicative synapses, which we term multiplicative spike coding networks (mSCNs). We demonstrate how the required connectivity for several nonlinear dynamical systems can be directly derived and implemented in mSCNs, without training. We also show how higher-order polynomials can be implemented precisely by coupled networks that use only pairwise multiplicative synapses, and we provide the expected number of connections for each synapse type. Overall, our work provides an alternative method for implementing nonlinear computations in spiking neural networks, while keeping all the attractive features of standard SCNs, such as robustness, irregular and sparse firing, and interpretable connectivity. Finally, we discuss the biological plausibility of mSCNs and how the high accuracy and robustness of the approach may be of interest for neuromorphic computing.
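
As a rough illustration of the idea (not the authors' code), the sketch below simulates a tiny mSCN whose readout implements the 1-D polynomial system dx/dt = x - x²; the linear and pairwise multiplicative connectivity is derived directly from the decoding weights rather than trained. All parameter values follow the general SCN recipe and are assumptions chosen for illustration.

```python
import numpy as np

# Minimal mSCN sketch for the 1-D polynomial system dx/dt = x - x^2
# (logistic growth, stable fixed point at x = 1). Hypothetical parameters;
# a simplified instance of the spike coding network recipe.

rng = np.random.default_rng(0)
N, dt, T, lam = 20, 1e-4, 8.0, 10.0   # neurons, step, duration, leak rate
D = rng.uniform(-0.1, 0.1, N)         # decoding weights (1-D readout)
thresh = D**2 / 2                     # per-neuron spike thresholds
Of = np.outer(D, D)                   # fast connections D_i D_j (reset)
# Slow connections carry the polynomial drive through the readout x_hat = D.r:
# a linear term (a + lam) D_i D_j with a = 1, and a pairwise multiplicative
# term b D_i D_j D_k with b = -1 (the coefficient of x^2).
W_lin = (1.0 + lam) * np.outer(D, D)
W_mul = -np.einsum('i,j,k->ijk', D, D, D)

V = np.zeros(N)
r = 0.05 * D / (D @ D)                # seed the readout near x(0) = 0.05
x = 0.05                              # ground-truth trajectory for comparison
for _ in range(int(T / dt)):
    dV = -lam * V + W_lin @ r + np.einsum('ijk,j,k->i', W_mul, r, r)
    V += dt * dV
    spike = np.zeros(N)
    i = np.argmax(V - thresh)         # at most one spike per time step
    if V[i] > thresh[i]:
        spike[i] = 1.0
        V -= Of[:, i]                 # fast recurrent reset
    r += -lam * r * dt + spike        # filtered spike trains
    x += dt * (x - x**2)
print(f"true x = {x:.3f}, network readout = {D @ r:.3f}")
```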

Seminar · Neuroscience · Recording

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta AI (FAIR)
Oct 18, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable.
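
As a loose sketch of the joint embedding predictive idea (not the H-JEPA architecture itself), the snippet below scores a context-target pair by predicting the target's embedding from the context's embedding. Linear maps stand in for the deep encoders and predictor, the hierarchical stacking is omitted, and all dimensions and values are illustrative assumptions.

```python
import numpy as np

# Minimal JEPA-style energy: prediction happens in embedding space,
# not in input space. Illustrative sketch with untrained linear maps.

rng = np.random.default_rng(0)
d_in, d_emb = 32, 8
Wx = rng.normal(size=(d_emb, d_in)) / np.sqrt(d_in)    # context encoder
Wy = rng.normal(size=(d_emb, d_in)) / np.sqrt(d_in)    # target encoder
Wp = rng.normal(size=(d_emb, d_emb)) / np.sqrt(d_emb)  # predictor

def energy(x, y):
    """Low energy = the embedding of y is predictable from that of x."""
    sx, sy = Wx @ x, Wy @ y
    return float(np.sum((Wp @ sx - sy) ** 2))

x = rng.normal(size=d_in)              # context (e.g., current percept)
y = x + 0.05 * rng.normal(size=d_in)   # compatible continuation
y_bad = rng.normal(size=d_in)          # incompatible continuation
# With untrained weights the two energies are comparable; training shapes
# the landscape so compatible pairs score low, while regularizing the
# embeddings to stay informative and avoid collapse.
print("compatible pair:  ", energy(x, y))
print("incompatible pair:", energy(x, y_bad))
```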

Seminar · Neuroscience

From Machine Learning to Autonomous Intelligence

Yann LeCun
Meta AI (FAIR)
Oct 9, 2022

How could machines learn as efficiently as humans and animals? How could machines learn to reason and plan? How could machines learn representations of percepts and action plans at multiple levels of abstraction, enabling them to reason, predict, and plan at multiple time horizons? I will propose a possible path towards autonomous intelligent agents, based on a new modular cognitive architecture and a somewhat new self-supervised training paradigm. The centerpiece of the proposed architecture is a configurable predictive world model that allows the agent to plan. Behavior and learning are driven by a set of differentiable intrinsic cost functions. The world model uses a new type of energy-based model architecture called H-JEPA (Hierarchical Joint Embedding Predictive Architecture). H-JEPA learns hierarchical abstract representations of the world that are simultaneously maximally informative and maximally predictable. The corresponding working paper is available here: https://openreview.net/forum?id=BZ5a1r-kVsf

Seminar · Neuroscience · Recording

Multitask performance in humans and deep neural networks

Christopher Summerfield
University of Oxford
Nov 24, 2020

Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning) which trade off learning speed against robustness, and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked rather than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and offers a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
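
A minimal numeric sketch of the lazy-versus-rich contrast described above: the same two-layer tanh network is trained from a large-scale and a small-scale initialisation on a toy regression task, and we measure how far the hidden weights travel. The task, architecture, and hyperparameters are hypothetical, chosen only to expose the qualitative effect (large init → weights barely move, "lazy"; small init → weights reorganise, "rich").

```python
import numpy as np

# Lazy vs rich learning as a function of initial weight scale.
# Toy setup with manual backprop; illustrative, not from the talk.

rng = np.random.default_rng(1)
n_in, n_hid, n_out, n_samp = 10, 100, 1, 200
X = rng.normal(size=(n_samp, n_in))
w_true = rng.normal(size=n_in)
Y = (X @ w_true)[:, None]                  # linear teacher task

def train(scale, lr=1e-2, steps=5000):
    W1 = scale * rng.normal(size=(n_hid, n_in)) / np.sqrt(n_in)
    W2 = scale * rng.normal(size=(n_out, n_hid)) / np.sqrt(n_hid)
    W1_0 = W1.copy()
    for _ in range(steps):
        H = np.tanh(X @ W1.T)              # hidden activity (n_samp, n_hid)
        P = H @ W2.T                       # predictions
        dP = 2 * (P - Y) / n_samp          # dLoss/dP for mean squared error
        dW2 = dP.T @ H
        dH = dP @ W2 * (1 - H**2)          # backprop through tanh
        dW1 = dH.T @ X
        W1 -= lr * dW1
        W2 -= lr * dW2
    # Relative distance travelled by the hidden weights
    return np.linalg.norm(W1 - W1_0) / np.linalg.norm(W1_0)

print("relative hidden-weight change, large init (lazy):", train(scale=3.0))
print("relative hidden-weight change, small init (rich):", train(scale=0.1))
```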

Seminar · Neuroscience · Recording

On temporal coding in spiking neural networks with alpha synaptic function

Iulia M. Comsa
Google Research Zürich, Switzerland
Aug 30, 2020

The timing of individual neuronal spikes is essential for biological brains to make fast responses to sensory stimuli. However, conventional artificial neural networks lack the intrinsic temporal coding ability present in biological networks. We propose a spiking neural network model that encodes information in the relative timing of individual neuron spikes. In classification tasks, the output of the network is indicated by the first neuron to spike in the output layer. This temporal coding scheme allows the supervised training of the network with backpropagation, using locally exact derivatives of the postsynaptic spike times with respect to presynaptic spike times. The network operates using a biologically plausible alpha synaptic transfer function. Additionally, we use trainable synchronisation pulses that provide bias, add flexibility during training, and exploit the decay part of the alpha function. We show that such networks can be trained successfully on noisy Boolean logic tasks and on the MNIST dataset encoded in time. The results show that the spiking neural network outperforms comparable spiking models on MNIST and achieves similar quality to fully connected conventional networks with the same architecture. We also find that the spiking network spontaneously discovers two operating regimes, mirroring the accuracy-speed trade-off observed in human decision-making: a slow regime, where a decision is taken after all hidden neurons have spiked and the accuracy is very high, and a fast regime, where a decision is taken very fast but the accuracy is lower. These results demonstrate the computational power of spiking networks with biological characteristics that encode information in the timing of individual neurons. By studying temporal coding in spiking networks, we aim to create building blocks towards energy-efficient and more complex biologically inspired neural architectures.
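
As an illustrative sketch of the coding scheme (with invented weights, spike times, and threshold), the snippet below builds a membrane potential from weighted alpha-function postsynaptic potentials and reads out the first threshold crossing; in classification, the predicted class is whichever output neuron crosses first.

```python
import numpy as np

# First-spike temporal coding with alpha synaptic kernels.
# All weights, input times, and the threshold are illustrative assumptions.

tau, theta = 1.0, 0.5                 # synaptic time constant, threshold

def alpha_psp(t, t_spike):
    """Alpha kernel: rises from t_spike, peaks at t_spike + tau, decays."""
    dt = t - t_spike
    return np.where(dt > 0, dt * np.exp(-dt / tau), 0.0)

def first_spike_time(weights, in_times, t_grid):
    """First time the weighted sum of PSPs crosses threshold (None if never)."""
    V = sum(w * alpha_psp(t_grid, ti) for w, ti in zip(weights, in_times))
    crossed = np.nonzero(V >= theta)[0]
    return t_grid[crossed[0]] if crossed.size else None

t_grid = np.linspace(0.0, 10.0, 10001)
# Two output neurons receive the same input spikes with different weights;
# the predicted class is the neuron that reaches threshold first.
in_times = np.array([0.2, 0.5, 1.0])
t_a = first_spike_time([0.9, 0.8, 0.7], in_times, t_grid)
t_b = first_spike_time([0.6, 0.5, 0.8], in_times, t_grid)
print("neuron A fires at", t_a, "| neuron B fires at", t_b)
winner = "A" if (t_b is None or (t_a is not None and t_a < t_b)) else "B"
print("prediction:", winner)
```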