Topic · Neuro

continual learning

10 ePosters · 4 Seminars

Latest

Seminar · Neuroscience · Recording

Edge Computing using Spiking Neural Networks

Shirin Dora
Loughborough University
Nov 5, 2021

Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but there has been minimal effort to develop alternative, more efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback passes do not fully exploit the asynchronous, event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on developing approaches for SNNs that target specific problems such as low response latency and continual learning.

References
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
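
The energy efficiency mentioned above comes from the event-based style of computation: neurons communicate only through sparse, binary spikes. The snippet below is a minimal sketch (not code from the talk, and with arbitrarily chosen parameter values) of a leaky integrate-and-fire neuron, the basic unit most SNNs build on.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. All parameter values
# (tau, threshold, input drive) are illustrative assumptions, not from the talk.
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron over time; returns the binary spike train."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    decay = np.exp(-dt / tau)          # membrane leak per time step
    for t, i_t in enumerate(input_current):
        v = decay * v + i_t            # leaky integration of the input
        if v >= v_thresh:              # threshold crossing -> spike event
            spikes[t] = 1.0
            v = v_reset                # reset after the spike
    return spikes

# Example: a noisy constant drive produces a sparse spike train, so downstream
# units only need to do work on the few time steps where a spike occurs.
rng = np.random.default_rng(0)
current = 0.08 + 0.02 * rng.standard_normal(200)
print(int(lif_neuron(current).sum()), "spikes in 200 time steps")
```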

Seminar · Neuroscience · Recording

Multitask performance in humans and deep neural networks

Christopher Summerfield
University of Oxford
Nov 25, 2020

Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition ("lazy" and "rich" learning) which trade off learning speed against robustness, and which depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked rather than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
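
The abstract's contrast between blocked and interleaved curricula can be seen in a few lines of code. Below is a toy sketch of my own (not the authors' task or code) using a context-cued two-task setup loosely in the spirit of the context-dependent decision task described: a vanilla network trained in a blocked order tends to forget the first task, while interleaving preserves both.

```python
# Toy illustration of catastrophic forgetting under blocked training in a
# vanilla network. The task construction, architecture, and hyperparameters
# are all assumptions for illustration, not the authors' setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(context_id, n=512):
    """Context-cued task: under context 0 the label follows feature 0,
    under context 1 it follows feature 1. The context one-hot is appended."""
    x = torch.randn(n, 2)
    ctx = torch.zeros(n, 2)
    ctx[:, context_id] = 1.0
    y = (x[:, context_id] > 0).long()
    return torch.cat([x, ctx], dim=1), y

task_a, task_b = make_task(0), make_task(1)

def accuracy(model, task):
    x, y = task
    return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, tasks, steps=400, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for step in range(steps):
        x, y = tasks[step % len(tasks)]      # cycle through the given tasks
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def fresh_model():
    return nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

blocked = fresh_model()
train(blocked, [task_a])      # blocked curriculum: task A only ...
train(blocked, [task_b])      # ... then task B only
print("blocked:     A =", accuracy(blocked, task_a), " B =", accuracy(blocked, task_b))

interleaved = fresh_model()
train(interleaved, [task_a, task_b], steps=800)   # alternate tasks every step
print("interleaved: A =", accuracy(interleaved, task_a), " B =", accuracy(interleaved, task_b))
```

In a typical run the blocked model ends near chance on task A while the interleaved model stays accurate on both, which is the baseline behaviour the talk contrasts with human learners.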

Seminar · Neuroscience · Recording

Synthesizing Machine Intelligence in Neuromorphic Computers with Differentiable Programming

Emre Neftci
University of California Irvine
Aug 31, 2020

The potential of machine learning and deep learning to advance artificial intelligence is driving a quest to build dedicated computers, such as neuromorphic hardware that emulates the biological processes of the brain. While the hardware technologies already exist, their application to real-world tasks is hindered by the lack of suitable programming methods. Advances at the interface of neural computation and machine learning have shown that key aspects of deep learning models and tools can be transferred to biologically plausible neural circuits. Building on these advances, I will show that differentiable programming can address many challenges of programming spiking neural networks for solving real-world tasks, and help devise novel continual and local learning algorithms. In turn, these new algorithms pave the way towards systematically synthesizing machine intelligence in neuromorphic hardware without detailed knowledge of the hardware circuits.
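
A common ingredient when applying differentiable programming to spiking networks is a surrogate gradient: the hard spike threshold keeps its discrete forward behaviour, but the backward pass substitutes a smooth derivative so standard autodiff can train the network. The sketch below is a minimal illustration under that assumption (it is not code from the talk, and the fast-sigmoid surrogate and its steepness constant are arbitrary choices).

```python
# Minimal surrogate-gradient sketch in PyTorch: hard threshold forward,
# smooth "stand-in" derivative backward. Illustrative only.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()        # hard threshold spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative; steepness 10.0 is an arbitrary choice.
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply

# One thresholding step through which gradients can now flow end to end.
v = torch.randn(8, requires_grad=True)
out = spike(v - 0.5)            # spikes where v exceeds the 0.5 threshold
out.sum().backward()            # backward uses the surrogate, not a zero-almost-everywhere derivative
print(v.grad)
```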

ePoster · Neuroscience

Continual learning using dendritic modulations on view-invariant feedforward weights

Viet Anh Khoa Tran, Emre Neftci, Willem Wybo

Bernstein Conference 2024

ePoster · Neuroscience

Evaluating Memory Behavior in Continual Learning using the Posterior in a Binary Bayesian Network

Akshay Bedhotiya, Emre Neftci

Bernstein Conference 2024

ePoster · Neuroscience

A Study of a biologically plausible combination of Sparsity, Weight Imprinting and Forward Inhibition in Continual Learning

Golzar Atefi, Justus Westerhof, Felix Gers, Erik Rodner

Bernstein Conference 2024

ePoster · Neuroscience

Dissecting the Factors of Metaplasticity with Meta-Continual Learning

Hin Wai Lui, Emre Neftci

COSYNE 2022

ePoster · Neuroscience

Hippocampal networks support continual learning and generalisation

Samia Mohinta, Dabal Pedamonti, Martin Dimitrov, Hugo Malagon-Vina, Stephane Ciocchi, Rui Ponte Costa

COSYNE 2022

ePoster · Neuroscience

Compositional inference in the continual learning mouse playground

Aneesh Bal, Andrea Santi, Cecelia Shuai, Samantha Soto, Joshua Vogelstein, Patricia Janak, Kishore V. Kuchibhotla

COSYNE 2025

ePoster · Neuroscience

Metrics of Task Relations Predict Continual Learning Performance

Haozhe Shan, Qianyi Li, Haim Sompolinsky

COSYNE 2025

ePoster · Neuroscience

A neural network model of continual learning through closed-loop interaction with the environment

Alexander Rivkind, Daniel Wolpert, Guillaume Hennequin, Mate Lengyel

COSYNE 2025

ePoster · Neuroscience

Probing the dynamics of neural representations that support generalization under continual learning

Daniel Kimmel, Kimberly Stachenfeld, Nikolaus Kriegeskorte, Stefano Fusi, C Daniel Salzman, Daphna Shohamy

COSYNE 2025

continual learning coverage

14 items

ePosters: 10
Seminars: 4
Domain spotlight

Explore how continual learning research is advancing inside Neuro.

Visit domain