
Unsupervised Learning


Discover seminars, jobs, and research tagged with unsupervised learning from around the world.
13 curated items · 7 Seminars · 4 Positions · 2 ePosters
Position

Max Garagnani

Department of Computing, Goldsmiths, University of London
Goldsmiths, University of London, Lewisham Way, New Cross, London SE14 6NW, UK
Dec 5, 2025

The project involves implementing a brain-realistic neurocomputational model able to exhibit the spontaneous emergence of cognitive function from a uniform neural substrate, as a result of unsupervised, biologically realistic learning. Specifically, it will focus on modelling the emergence of unexpected (i.e., non-stimulus-driven) action decisions using neo-Hebbian reinforcement learning. The final deliverable will be an artificial brain-like cognitive architecture able to learn to act as humans do when driven by intrinsic motivation and spontaneous, exploratory behaviour.

Position

Prof. Joschka Boedecker

Neurorobotics Lab, University of Freiburg, ELLIS Unit Freiburg, Department of Computer Science, BrainLinks-BrainTools Center, Collaborative Research Institute Intelligent Oncology (CRIION)
University of Freiburg, Germany
Dec 5, 2025

Full-time PhD positions on planning and learning for automated driving at the Neurorobotics Lab, University of Freiburg, Germany. The positions involve working in a team with excellent peers, within a larger project with an industry partner.

Position

Ioan Marius Bilasco

CRIStAL laboratory (University of Lille, CNRS), MIS Laboratory (Amiens, France), IRCICA (CNRS)
Lille, France
Dec 5, 2025

The FOX team from the CRIStAL laboratory (UMR CNRS), Lille, France, and the PR team from the MIS Laboratory, Amiens, France, are looking to recruit a joint PhD student for a project titled 'EventSpike - Asynchronous computer vision from event cameras'. The project aims to develop new models of spiking neural networks (SNNs) capable of directly processing visual information in the form of spike trains, for applications in autonomous driving. The thesis will focus on weakly supervised learning methods based on spiking learning mechanisms to exploit the flow of impulses generated by an event camera.

Position

Ioan Marius Bilasco

CRIStAL laboratory, University of Lille, CNRS
Lille, France
Dec 5, 2025

The FOX team of the CRIStAL laboratory (UMR CNRS), Lille, France, and the PR team of the MIS Laboratory, Amiens, France, are looking to recruit a post-doc starting as soon as possible and a joint PhD student starting in October 2025 in the field of asynchronous computer vision from event cameras. The main objective is to develop new models of spiking neural networks (SNN) capable of directly processing visual information in the form of spike trains. The proposed models must be validated experimentally on dynamic vision databases, following standard protocols and best practices. The PhD candidate will be funded for 3 years (grant application pending) and is expected to defend his/her thesis and graduate by the end of the contract. The monthly gross salary is around 2000€, including benefits. The post-doc will be hired for 18 months starting from March 2025 (this is a fully-funded position). The monthly gross salary is around 2500-3000€, including benefits.

Seminar · Neuroscience

From Spiking Predictive Coding to Learning Abstract Object Representation

Prof. Jochen Triesch
Frankfurt Institute for Advanced Studies
Jun 11, 2025

In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which only transmit prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
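For readers unfamiliar with the predictive-coding idea that PCL is contrasted with, the sketch below illustrates the conventional scheme in a minimal, non-spiking form: lateral connections learn to predict each unit from the others, and only the residual (the unpredictable part) is passed upstream. All names and dimensions are illustrative; this is not the PCL architecture itself.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples of 20-dimensional activity with redundant (predictable) structure.
X = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 20))

def fit_lateral_predictor(X):
    """Least-squares prediction of each unit from the other units,
    standing in for learned lateral connectivity."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        coef, *_ = np.linalg.lstsq(X[:, others], X[:, i], rcond=None)
        W[i, others] = coef
    return W

W = fit_lateral_predictor(X)
predicted = X @ W.T        # each unit predicted from the rest of the population
residual = X - predicted   # only the unpredictable part is transmitted upstream

print("input variance:", X.var(), "residual variance:", residual.var())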

Seminar · Neuroscience

Learning to see stuff

Roland W. Fleming
Giessen University
Mar 12, 2023

Humans are very good at visually recognizing materials and inferring their properties. Without touching surfaces, we can usually tell what they would feel like, and we enjoy vivid visual intuitions about how they typically behave. This is impressive because the retinal image that the visual system receives as input is the result of complex interactions between many physical processes. Somehow the brain has to disentangle these different factors. I will present some recent work in which we show that an unsupervised neural network trained on images of surfaces spontaneously learns to disentangle reflectance, lighting and shape. However, the disentanglement is not perfect, and we find that as a result the network not only predicts the broad successes of human gloss perception, but also the specific pattern of errors that humans exhibit on an image-by-image basis. I will argue this has important implications for thinking about appearance and vision more broadly.

Seminar · Neuroscience · Recording

Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity

Thomas Limbacher
TU Graz
Nov 8, 2022

Memory is a key component of biological neural systems that enables the retention of information over a huge range of temporal scales, ranging from hundreds of milliseconds up to years. While Hebbian plasticity is believed to play a pivotal role in biological memory, it has so far been analyzed mostly in the context of pattern completion and unsupervised learning. Here, we propose that Hebbian plasticity is fundamental for computations in biological neural systems. We introduce a novel spiking neural network (SNN) architecture that is enriched by Hebbian synaptic plasticity. We experimentally show that our memory-equipped SNN model outperforms state-of-the-art deep learning mechanisms in a sequential pattern-memorization task and demonstrates superior out-of-distribution generalization capabilities compared to these models. We further show that our model can be successfully applied to one-shot learning and classification of handwritten characters, improving over the state-of-the-art SNN model. We also demonstrate the capability of our model to learn associations for audio-to-image synthesis from spoken and handwritten digits. Our SNN model further presents a novel solution to a variety of cognitive question-answering tasks from a standard benchmark, achieving comparable performance to both memory-augmented ANN and SNN-based state-of-the-art solutions to this problem. Finally, we demonstrate that our model is able to learn from rewards on an episodic reinforcement learning task and attain a near-optimal strategy on a memory-based card game. Hence, our results show that Hebbian enrichment renders spiking neural networks surprisingly versatile in terms of their computational as well as learning capabilities. Since local Hebbian plasticity can easily be implemented in neuromorphic hardware, this also suggests that powerful cognitive neuromorphic systems can be built based on this principle.
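As a minimal illustration of the Hebbian plasticity this abstract builds on (and not of the paper's spiking architecture), the sketch below stores key-value associations with an outer-product Hebbian rule and recalls a value from its key; the pattern sizes and counts are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
dim = 200
keys = rng.choice([-1.0, 1.0], size=(5, dim))     # 5 random key patterns
values = rng.choice([-1.0, 1.0], size=(5, dim))   # 5 associated value patterns

# Hebbian (outer-product) plasticity: co-active pre- and postsynaptic units strengthen the synapse.
W = np.zeros((dim, dim))
for k, v in zip(keys, values):
    W += np.outer(v, k) / dim

recalled = np.sign(W @ keys[2])                   # present a stored key to recall its value
print("recall accuracy:", np.mean(recalled == values[2]))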

Seminar · Neuroscience · Recording

Mouse visual cortex as a limited resource system that self-learns an ecologically-general representation

Aran Nayebi
MIT
Nov 1, 2022

Studies of the mouse visual system have revealed a variety of visual brain areas in a roughly hierarchical arrangement, together with a multitude of behavioral capacities, ranging from stimulus-reward associations to goal-directed navigation and object-centric discriminations. However, an overall understanding of the organization of mouse visual cortex, and how this organization supports visual behaviors, is still lacking. Here, we take a computational approach to help address these questions, providing a high-fidelity quantitative model of mouse visual cortex. By analyzing factors contributing to model fidelity, we identified key principles underlying the organization of mouse visual cortex. Structurally, we find that comparatively low resolution and shallow structure were both important for model fidelity. Functionally, we find that models trained with task-agnostic, unsupervised objective functions based on the concept of contrastive embeddings were substantially better than models trained with supervised objectives. Finally, the unsupervised objective builds a general-purpose visual representation that enables the system to achieve better transfer on out-of-distribution visual scene understanding and reward-based navigation tasks. Our results suggest that mouse visual cortex is a low-resolution, shallow network that makes best use of the mouse's limited resources to create a lightweight, general-purpose visual system, in contrast to the deep, high-resolution, and more task-specific visual system of primates.
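The contrastive-embedding objectives mentioned here follow a common pattern: embeddings of two augmented views of the same input should agree while differing across inputs. Below is a minimal, generic sketch of such a loss, not the specific objective used in this study; the array sizes and temperature value are illustrative.

import numpy as np

def contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two augmented views of the same inputs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature               # similarity of every view-1 item to every view-2 item
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # matched pairs (the diagonal) should be most similar

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 32))
noise = 0.05 * rng.normal(size=(8, 32))
print(contrastive_loss(z, z + noise))              # small loss: views of the same item agree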

Seminar · Neuroscience

Finding needles in the neural haystack: unsupervised analyses of noisy data

Marine Schimel & Kris Jensen
University of Cambridge, Department of Engineering
Nov 30, 2021

In modern neuroscience, we often want to extract information from recordings of many neurons in the brain. Unfortunately, the activity of individual neurons is very noisy, making it difficult to relate to cognition and behavior. Thankfully, we can use the correlations across time and neurons to denoise the data we record. In particular, using recent advances in machine learning, we can build models which harness this structure in the data to extract more interpretable signals. In this talk, we present two such methods as well as examples of how they can help us gain further insights into the neural underpinnings of behavior.
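As a toy illustration of the general idea that shared low-dimensional structure across neurons can denoise single-neuron activity, the sketch below recovers a cleaner signal by projecting simulated recordings onto their top principal components. The talk's actual methods are more sophisticated latent-variable models; all numbers here are made up.

import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_latents = 500, 50, 3

latents = rng.normal(size=(T, n_latents))                 # shared signals across the population
loading = rng.normal(size=(n_latents, n_neurons))
clean = latents @ loading
noisy = clean + 4.0 * rng.normal(size=(T, n_neurons))     # heavy single-neuron noise

# Denoise by projecting onto the top principal components and reconstructing.
mean = noisy.mean(axis=0)
U, S, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
denoised = U[:, :n_latents] * S[:n_latents] @ Vt[:n_latents] + mean

print("correlation with ground truth:",
      "raw %.2f," % np.corrcoef(noisy.ravel(), clean.ravel())[0, 1],
      "denoised %.2f" % np.corrcoef(denoised.ravel(), clean.ravel())[0, 1])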

Seminar · Neuroscience · Recording

STDP and the transfer of rhythmic signals in the brain

Maoz Shamir
Ben Gurion University
Mar 9, 2021

Rhythmic activity in the brain has been reported in relation to a wide range of cognitive processes, and changes in rhythmic activity have been related to pathological states. These observations raise the question of the origin of these rhythms: can the mechanisms that generate these rhythms and allow the rhythmic signal to propagate be acquired via a process of learning? In my talk I will focus on spike-timing-dependent plasticity (STDP) and examine under what conditions this unsupervised learning rule can facilitate the propagation of rhythmic activity downstream in the central nervous system. Next, I will apply the theory of STDP to the whisker system and demonstrate how STDP can shape the distribution of preferred firing phases in a downstream population. Interestingly, in both cases the STDP dynamics do not relax to a fixed-point solution; rather, the synaptic weights remain dynamic. Nevertheless, STDP allows the system to retain its functionality in the face of continuous remodeling of the entire synaptic population.
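For context, a pair-based STDP rule strengthens a synapse when presynaptic spikes tend to precede postsynaptic spikes and weakens it otherwise, which is how it can select connections that propagate a rhythm's phase. The sketch below is a generic illustration of such a rule; the amplitudes and time constants are arbitrary and not taken from the talk.

import numpy as np

def stdp_dw(pre_spike_times, post_spike_times, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Total weight change from all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for t_pre in pre_spike_times:
        for t_post in post_spike_times:
            dt = t_post - t_pre
            if dt > 0:      # pre before post: potentiation
                dw += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:    # post before pre: depression
                dw -= a_minus * np.exp(dt / tau_minus)
    return dw

# A presynaptic neuron that consistently fires ~5 ms before its postsynaptic partner
# (e.g. at a leading phase of a rhythm) is strengthened; the reverse ordering is weakened.
pre = np.arange(0.0, 200.0, 25.0)       # rhythmic firing at 40 Hz
print("pre leads post:", stdp_dw(pre, pre + 5.0))
print("post leads pre:", stdp_dw(pre, pre - 5.0))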

Seminar · Neuroscience

Machine reasoning in histopathologic image analysis

Phedias Diamandis
University of Toronto
Jul 8, 2020

Deep learning is an emerging computational approach inspired by the human brain's neural connectivity that has transformed machine-based image analysis. Using histopathology as a model of an expert-level pattern-recognition exercise, we explore the ability of humans to teach machines to learn and mimic image recognition and decision making. Moreover, these models also allow exploration of the ability of computers to independently learn salient histological patterns and complex ontological relationships that parallel biological and expert knowledge, without the need for explicit direction or supervision. Deciphering the overlap between human and unsupervised machine reasoning may aid in eliminating biases and improving automation and accountability for artificial intelligence-assisted vision tasks and decision-making.

ePoster

Robust unsupervised learning of spike patterns with optimal transport theory

Antoine Grimaldi, Matthieu Gilson, Laurent Perrinet, Andrea Alamia, Boris Sotomayor-Gomez, Martin Vinck

COSYNE 2025

ePoster

Serotonergic activity in the dorsal raphe nucleus through the lens of unsupervised learning

Felix Hubert, Solene Sautory, Stefan Hajduk, Leopoldo Petreanu, Alexandre Pouget, Zach Mainen

COSYNE 2025