Predictive Coding Light

Seminar (Neuroscience)

Predictive Coding Light

Prof. Dr. Jochen Triesch
FIAS (Frankfurt Institute for Advanced Studies)
Feb 11, 2026

Current machine learning systems consume vastly more energy than biological brains. Neuromorphic systems aim to close this gap by mimicking the brain's information coding via discrete voltage spikes. However, it remains unclear how networks of spiking neurons, whether artificial or natural, can learn energy-efficient information-processing strategies. Here we propose Predictive Coding Light (PCL), a recurrent hierarchical spiking neural network for unsupervised representation learning. In contrast to previous predictive coding approaches, PCL does not transmit prediction errors to higher processing stages. Instead, it suppresses the most predictable spikes and transmits a compressed representation of the input. Using only biologically plausible spike-timing-based learning rules, PCL reproduces a wealth of findings on information processing in visual cortex and achieves strong performance in downstream classification tasks. Overall, PCL offers a new approach to predictive coding and its implementation in natural and artificial spiking neural networks.

Seminar (Neuroscience)

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 12, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which transmit only prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable trade-off between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
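The suppression mechanism described in the abstracts can be illustrated with a toy simulation. The sketch below is an illustrative assumption throughout, not the authors' PCL implementation: the input statistics, learning rates, thresholds, and the plasticity rule are all made up for the demo. A single layer learns lateral inhibitory weights that cancel its most predictable input spikes, so that mainly the surprising spikes are transmitted onward.

```python
import numpy as np

# Hedged toy sketch (NOT the authors' PCL model): a layer learns lateral
# inhibition that predicts and suppresses its most predictable input spikes,
# transmitting a sparser, "surprise-only" spike code.
rng = np.random.default_rng(0)
n, n_steps, lr = 20, 500, 0.01
W_inh = np.zeros((n, n))          # hypothetical learned lateral inhibition
prev = np.zeros(n)
transmitted = []

for t in range(n_steps):
    # Structured input: neuron i tends to fire one step after neuron i-1
    # (the predictable part), plus sparse spontaneous spikes (unpredictable).
    follows = (np.roll(prev, 1) > 0) & (rng.random(n) < 0.8)
    x = (follows | (rng.random(n) < 0.05)).astype(float)

    pred = W_inh @ prev                      # inhibitory prediction of x
    out = ((x - pred) > 0.5).astype(float)   # suppress predicted spikes
    transmitted.append(out.sum())

    # Local spike-timing-style rule (illustrative): potentiate inhibition
    # from j to i when prev[j] predicts x[i]; depress it when it fails.
    W_inh += lr * np.outer(x, prev) - 0.5 * lr * np.outer(1.0 - x, prev)
    W_inh = np.clip(W_inh, 0.0, 1.0)
    prev = x

early = float(np.mean(transmitted[:50]))
late = float(np.mean(transmitted[-50:]))
print(f"mean transmitted spikes per step: early={early:.2f}, late={late:.2f}")
```

As the inhibitory weights learn the sequential structure, the number of transmitted spikes drops toward the spontaneous (unpredictable) rate, which is the qualitative energy-saving effect the abstract describes.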
