Continual Learning
Constantine Dovrolis
The Cyprus Institute invites applications for a Post-Doctoral Fellow to pursue research in Machine Learning. The successful candidate will be actively engaged in cutting-edge research on core problems in ML and AI, such as developing efficient and interpretable deep nets, continual learning, neuro-inspired ML, self-supervised learning, and other emerging topics. The candidate should have a deep understanding of machine learning fundamentals (e.g., linear algebra, probability theory, optimization) as well as broad knowledge of the state of the art in AI and machine learning. Additionally, the candidate should have extensive experience with ML programming frameworks (e.g., PyTorch). The candidate will be working primarily with two PIs: Prof. Constantine Dovrolis and Prof. Mihalis Nicolaou. The appointment is for a period of 2 years, with the option of renewal subject to performance and the availability of funds.
Friedemann Zenke
The position involves conducting research in computational neuroscience and bio-inspired machine intelligence, writing research articles and presenting them at international conferences, publishing in neuroscience journals and machine learning venues such as ICML, NeurIPS, ICLR, etc., and interacting and collaborating with experimental neuroscience groups or neuromorphic hardware developers nationally and internationally.
Thomas Krak
The Uncertainty in Artificial Intelligence (UAI) group is looking for a highly motivated and skilled PhD candidate to work in the area of probabilistic machine learning. The position is fully funded for a term of four years. The research direction will be determined together with the successful candidate and in line with the NWO Perspectief Project Personalised Care in Oncology (www.personalisedcareinoncology.nl). The research topics may include, but are not restricted to: Probabilistic graphical models (Markov, Bayesian, credal networks); Causality: theory and application; Cautious AI, including imprecise probabilities; Robust stochastic processes; Tractable models and decision-making; Online/continual learning with evolving data.
Meta-learning synaptic plasticity and memory addressing for continual familiarity detection
Over the course of a lifetime, we process a continual stream of information. Extracted from this stream, memories must be efficiently encoded and stored in an addressable manner for retrieval. To explore potential mechanisms, we consider a familiarity detection task where a subject reports whether an image has been previously encountered. We design a feedforward network endowed with synaptic plasticity and an addressing matrix, meta-learned to optimize familiarity detection over long intervals. We find that anti-Hebbian plasticity leads to better performance than Hebbian plasticity and replicates experimental results such as repetition suppression. A combinatorial addressing function emerges, selecting a unique neuron as an index into the synaptic memory matrix for storage or retrieval. Unlike previous models, this network operates continuously and generalizes to intervals it has not been trained on. Our work suggests a biologically plausible mechanism for continual learning, and demonstrates an effective application of machine learning for neuroscience discovery.
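As a rough illustration of the kind of mechanism described above, the NumPy sketch below pairs a hard addressing function (winner-take-all over a projection of the input) with an anti-Hebbian write into a synaptic memory matrix, so that a repeated item produces a suppressed (negative) readout. The addressing weights here are random and all names and constants are illustrative; in the paper both the plasticity rule and the addressing matrix are meta-learned.

```python
import numpy as np

rng = np.random.default_rng(0)

D, N = 64, 128                    # input dimension, number of memory slots
W_addr = rng.normal(0.0, 1.0 / np.sqrt(D), size=(N, D))  # meta-learned in the paper; random here
M = np.zeros((N, D))              # synaptic memory matrix
eta = 0.5                         # plasticity rate

def address(x):
    """Select a single index neuron for the input (hard winner-take-all)."""
    return int(np.argmax(W_addr @ x))

def familiarity(x):
    """Readout: overlap between the addressed memory row and the current input."""
    return float(M[address(x)] @ x)

def store(x):
    """Anti-Hebbian write: the addressed row moves against the input, so a later
    presentation of the same item gives a suppressed (negative) readout,
    loosely analogous to repetition suppression."""
    i = address(x)
    M[i] -= eta * x

# Toy usage: unseen items read out near zero, stored items read out strongly negative.
x1, x2 = rng.normal(size=D), rng.normal(size=D)
print(familiarity(x1))   # ~0   -> novel
store(x1)
print(familiarity(x1))   # << 0 -> familiar
print(familiarity(x2))   # ~0 unless x2 happens to share x1's slot
```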
Edge Computing using Spiking Neural Networks
Deep learning has made tremendous progress in recent years, but its high computational and memory requirements pose challenges for using deep learning on edge devices. There has been some progress in lowering the memory requirements of deep neural networks (for instance, the use of half-precision), but minimal effort in developing alternative, efficient computational paradigms. Inspired by the brain, Spiking Neural Networks (SNNs) provide an energy-efficient alternative to conventional rate-based neural networks. However, SNN architectures that employ the traditional feedforward and feedback passes do not fully exploit the asynchronous, event-based processing paradigm of SNNs. In the first part of my talk, I will present my work on predictive coding, which offers a fundamentally different approach to developing neural networks that are particularly suitable for event-based processing. In the second part of my talk, I will present our work on developing SNN approaches that target specific problems such as low response latency and continual learning.
References:
Dora, S., Bohte, S. M., & Pennartz, C. (2021). Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy. Frontiers in Computational Neuroscience, 65.
Saranirad, V., McGinnity, T. M., Dora, S., & Coyle, D. (2021, July). DoB-SNN: A New Neuron Assembly-Inspired Spiking Neural Network for Pattern Classification. In 2021 International Joint Conference on Neural Networks (IJCNN) (pp. 1-6). IEEE.
Machingal, P., Thousif, M., Dora, S., Sundaram, S., & Meng, Q. (2021). A Cross Entropy Loss for Spiking Neural Networks. Expert Systems with Applications (under review).
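As background for the event-based processing the talk builds on, here is a minimal leaky integrate-and-fire (LIF) neuron in NumPy. This is a textbook model rather than the specific networks in the references, and all constants are illustrative.

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward rest,
    integrates its input, and emits a spike (a discrete event) only on threshold crossing."""
    v = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        v += dt / tau * (-v + current)   # leaky integration
        if v >= v_thresh:                # threshold crossing -> event
            spike_times.append(t)
            v = v_reset                  # reset after the spike
    return spike_times

# Constant drive: the neuron fires sparsely; downstream computation only happens at spike times.
current = np.full(200, 1.2)
print(lif_simulate(current))
```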
Multitask performance in humans and deep neural networks
Humans and other primates exhibit rich and versatile behaviour, switching nimbly between tasks as the environmental context requires. I will discuss the neural coding patterns that make this possible in humans and deep networks. First, using deep network simulations, I will characterise two distinct solutions to task acquisition (“lazy” and “rich” learning), which trade off learning speed for robustness and depend on the initial weight scale and network sparsity. I will chart the predictions of these two schemes for a context-dependent decision-making task, showing that the rich solution is to project task representations onto orthogonal planes in a low-dimensional embedding space. Using behavioural testing and functional neuroimaging in humans, we observe BOLD signals in human prefrontal cortex whose dimensionality and neural geometry are consistent with the rich learning regime. Next, I will discuss the problem of continual learning, showing that behaviourally, humans (unlike vanilla neural networks) learn more effectively when conditions are blocked rather than interleaved. I will show how this counterintuitive pattern of behaviour can be recreated in neural networks by assuming that information is normalised and temporally clustered (via Hebbian learning) alongside supervised training. Together, this work offers a picture of how humans learn to partition knowledge in the service of structured behaviour, and a roadmap for building neural networks that adopt similar principles in the service of multitask learning. This is work with Andrew Saxe, Timo Flesch, David Nagy, and others.
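The blocked-versus-interleaved contrast can be made concrete with a toy experiment: the PyTorch sketch below trains a small network on two cued tasks either one block at a time or fully mixed, and a vanilla network typically retains the first task better in the interleaved condition, the opposite of the human pattern described above. The tasks, architecture, and hyperparameters are illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(task_id, n=2000):
    """Two toy tasks on shared inputs: task 0 labels by the sign of x[0], task 1 by x[1].
    A one-hot task cue is appended to the input (a common continual-learning toy setup)."""
    x = torch.randn(n, 2)
    y = (x[:, task_id] > 0).long()
    cue = torch.zeros(n, 2)
    cue[:, task_id] = 1.0
    return torch.cat([x, cue], dim=1), y

def mlp():
    return nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))

def train(model, datasets, epochs=300, lr=0.1):
    """Full-batch gradient descent over the given (inputs, labels) datasets."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in datasets:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def accuracy(model, dataset):
    x, y = dataset
    with torch.no_grad():
        return (model(x).argmax(1) == y).float().mean().item()

task_a, task_b = make_task(0), make_task(1)

# Blocked curriculum: all of task A first, then all of task B.
blocked = mlp()
train(blocked, [task_a])
train(blocked, [task_b])

# Interleaved curriculum: both tasks mixed throughout training.
mixed_x = torch.cat([task_a[0], task_b[0]])
mixed_y = torch.cat([task_a[1], task_b[1]])
perm = torch.randperm(len(mixed_y))
interleaved = mlp()
train(interleaved, [(mixed_x[perm], mixed_y[perm])], epochs=600)

# A vanilla network typically retains task A better under interleaving (it forgets under blocking).
print("blocked     -> task A acc:", accuracy(blocked, task_a), "task B acc:", accuracy(blocked, task_b))
print("interleaved -> task A acc:", accuracy(interleaved, task_a), "task B acc:", accuracy(interleaved, task_b))
```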
Synthesizing Machine Intelligence in Neuromorphic Computers with Differentiable Programming
The potential of machine learning and deep learning to advance artificial intelligence is driving a quest to build dedicated computers, such as neuromorphic hardware that emulates the biological processes of the brain. While the hardware technologies already exist, their application to real-world tasks is hindered by the lack of suitable programming methods. Advances at the interface of neural computation and machine learning have shown that key aspects of deep learning models and tools can be transferred to biologically plausible neural circuits. Building on these advances, I will show that differentiable programming can address many challenges of programming spiking neural networks to solve real-world tasks, and can help devise novel continual and local learning algorithms. In turn, these new algorithms pave the way towards systematically synthesizing machine intelligence in neuromorphic hardware without detailed knowledge of the hardware circuits.
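A minimal sketch of the surrogate-gradient idea that makes spiking networks compatible with differentiable programming: the forward pass uses a hard spike threshold, while the backward pass substitutes a smooth derivative so the unrolled network can be trained with standard autodiff. The fast-sigmoid surrogate, soft reset, and toy task below are common illustrative choices, not necessarily those used in the talk.

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth surrogate derivative
    (fast sigmoid) in the backward pass so gradients can flow through spikes."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + 10.0 * v.abs()) ** 2

spike = SurrogateSpike.apply

class LIFLayer(nn.Module):
    """Minimal leaky integrate-and-fire layer unrolled in time, trainable end to end."""
    def __init__(self, n_in, n_out, beta=0.9):
        super().__init__()
        self.w = nn.Linear(n_in, n_out, bias=False)
        self.beta = beta

    def forward(self, x):                       # x: (batch, time, n_in)
        v = torch.zeros(x.shape[0], self.w.out_features)
        spikes = []
        for t in range(x.shape[1]):
            v = self.beta * v + self.w(x[:, t])  # leaky integration of input current
            s = spike(v - 1.0)                   # threshold at 1.0 -> binary spike
            v = v - s                            # soft reset by subtraction
            spikes.append(s)
        return torch.stack(spikes, dim=1)        # (batch, time, n_out)

# Toy usage: classify by the readout layer's spike counts, trained with backprop through time.
torch.manual_seed(0)
net = nn.Sequential(LIFLayer(20, 50), LIFLayer(50, 2))
x = (torch.rand(32, 100, 20) < 0.1).float()      # random Poisson-like input spike trains
y = torch.randint(0, 2, (32,))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5):
    logits = net(x).sum(dim=1)                   # spike counts as class scores
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("loss:", loss.item())
```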
Continual learning using dendritic modulations on view-invariant feedforward weights
Bernstein Conference 2024
Evaluating Memory Behavior in Continual Learning using the Posterior in a Binary Bayesian Network
Bernstein Conference 2024
A Study of a Biologically Plausible Combination of Sparsity, Weight Imprinting and Forward Inhibition in Continual Learning
Bernstein Conference 2024
Dissecting the Factors of Metaplasticity with Meta-Continual Learning
COSYNE 2022
Hippocampal networks support continual learning and generalisation
COSYNE 2022
Compositional inference in the continual learning mouse playground
COSYNE 2025
Metrics of Task Relations Predict Continual Learning Performance
COSYNE 2025
A neural network model of continual learning through closed-loop interaction with the environment
COSYNE 2025
Probing the dynamics of neural representations that support generalization under continual learning
COSYNE 2025