
Representation Learning

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with representation learning across World Wide.
15 curated items · 6 Seminars · 6 ePosters · 3 Positions

Position · Computer Science

N/A

University of Innsbruck
University of Innsbruck, Austria
Dec 5, 2025

The position is embedded in an attractive environment of existing activities in artificial intelligence, such as machine learning for robotics and computer vision, natural language processing, recommender systems, schedulers, virtual and augmented reality, and digital forensics. The candidate should engage in research and teaching in the general area of artificial intelligence. Examples of possible foci include machine learning for pattern recognition, prediction, and decision making; data-driven, adaptive, learning, and self-optimizing systems; explainable and transparent AI; representation learning; generative models; neuro-symbolic AI; causality; distributed/decentralized learning; environmentally friendly, sustainable, data-efficient, and privacy-preserving AI; neuromorphic computing and hardware aspects; and knowledge representation, reasoning, and ontologies. Collaboration with research groups at the Department of Computer Science, the Research Areas, and in particular the Digital Science Center of the University, as well as with business, industry, and international research institutions, is expected. The candidate should reinforce or complement existing strengths of the Department of Computer Science.

Position

Ekta Vats

Department of Information Technology, Division of Systems and Control, Beijer Laboratory for Artificial Intelligence Research
Uppsala University, Sweden
Dec 5, 2025

We announce a fully funded two-year postdoctoral researcher position in Multimodal Deep Learning at Uppsala University, Sweden. Multimodal vision-language models integrate computer vision and natural language processing techniques to process and generate information that combines visual and textual modalities, enabling a deeper understanding of the content of images and videos. Vision-language models show promising potential, but several important research challenges remain: effectively integrating the two modalities and aligning visual and text embeddings into a cohesive embedding space continue to pose significant difficulties. In this project, the successful candidate will conduct fundamental research and methods development towards designing efficient multimodal models and exploring their applications in computer vision. We are looking for a candidate with a deep learning background and an interest in vision-language modeling. The application areas will be decided in a dialogue between the candidate and the supervisor, taking into account the candidate's interests and research proposal. The position also offers teaching opportunities of up to 20%, in English or Swedish. The selected candidate will work in the Department of Information Technology, Division of Systems and Control, in Ekta Vats’ group and the Beijer Laboratory for Artificial Intelligence Research. The project offers a rich collaborative environment (spanning theoretical ML research together with partners in the SciML group), and participation in leading CV/ML conferences (ICML, NeurIPS, CVPR, ICCV, etc.) is expected.
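The embedding-alignment challenge mentioned in this announcement is commonly approached with a symmetric contrastive objective over paired image and text embeddings. The NumPy sketch below illustrates that generic idea (a CLIP-style loss); it is not the method to be developed in the project, and the batch size, embedding dimension, temperature, and the random stand-ins for encoder outputs are illustrative assumptions.

```python
# Generic sketch of contrastive image-text embedding alignment (CLIP-style).
# Encoders are stubbed with random features; all sizes are assumptions.
import numpy as np

def l2_normalize(z, axis=-1, eps=1e-8):
    return z / (np.linalg.norm(z, axis=axis, keepdims=True) + eps)

def symmetric_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Pull matched image/text pairs together, push mismatched pairs apart."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature          # scaled cosine similarities
    labels = np.arange(len(logits))             # pair i matches pair i

    def xent(lg):
        # Cross-entropy of the correct pairing under a row-wise softmax.
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(lg)), labels].mean()

    # Symmetric: image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

# Toy batch: 32 image/text embedding pairs from hypothetical encoders.
rng = np.random.default_rng(0)
img_emb = rng.standard_normal((32, 256))
txt_emb = img_emb + 0.1 * rng.standard_normal((32, 256))   # roughly aligned pairs
print("loss:", symmetric_contrastive_loss(img_emb, txt_emb))
```

Minimizing this loss drives matched pairs toward the same region of the shared embedding space, which is one standard way to phrase the alignment problem the advertisement refers to.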

Seminar · Neuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 13, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
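For readers unfamiliar with the setup, the sketch below illustrates the kind of model the abstract refers to: deep linear pathways whose contributions are switched on and off by input-dependent gates, trained with plain gradient descent. It is a minimal toy, not the talk's code; the dimensions, gating rule, teacher data, and learning rate are illustrative assumptions, and no exact reduction is computed here.

```python
# Toy gated deep linear network: each pathway is linear, but input-dependent
# gating makes the overall input-output map nonlinear.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, n = 8, 16, 4, 200        # illustrative sizes

X = rng.standard_normal((n, d_in))
Y = X @ rng.standard_normal((d_in, d_out))      # toy teacher targets

# Two pathways; each pathway on its own is a deep *linear* chain W2 @ W1.
W1 = [0.1 * rng.standard_normal((d_in, d_hidden)) for _ in range(2)]
W2 = [0.1 * rng.standard_normal((d_hidden, d_out)) for _ in range(2)]

def gates(x):
    # Toy gating rule: pathway 0 handles inputs with positive first feature,
    # pathway 1 the rest.
    g0 = (x[:, :1] > 0).astype(float)
    return [g0, 1.0 - g0]

lr = 0.05
g = gates(X)
for step in range(501):
    H = [X @ W1[p] for p in range(2)]                      # pathway activities
    Y_hat = sum(g[p] * (H[p] @ W2[p]) for p in range(2))   # gated sum of pathways
    err = Y_hat - Y
    if step % 100 == 0:
        print(f"step {step:3d}  loss {0.5 * np.mean(np.sum(err**2, axis=1)):.4f}")
    for p in range(2):                                      # gradient descent
        dW2 = H[p].T @ (g[p] * err) / n
        dW1 = X.T @ ((g[p] * err) @ W2[p].T) / n
        W1[p] -= lr * dW1
        W2[p] -= lr * dW2
```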

Seminar · Neuroscience · Recording

Deriving local synaptic learning rules for efficient representations in networks of spiking neurons

Viola Priesemann
Max Planck Institute for Dynamics and Self-Organization
Nov 1, 2021

How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
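As a rough intuition for the balance-gated plasticity described above, here is a toy rate-based (non-spiking) sketch: excitatory feedforward drive is compared against inhibitory feedback carrying the population's own reconstruction of the stimulus, and the residual mismatch gates the local Hebbian update. The rule below is an illustrative stand-in (in this linear setting it reduces to Oja's subspace rule), not the voltage-dependent spiking rule derived in the talk; the dimensions and learning rate are assumptions.

```python
# Toy linear sketch: excitatory weights stop changing once inhibitory
# feedback balances the excitatory input, i.e. once the stimulus is
# efficiently represented by the population.
import numpy as np

rng = np.random.default_rng(1)
d, k, n_steps = 20, 5, 5000            # input dimension, neurons, steps

# Correlated, high-dimensional-looking inputs from a k-dimensional subspace.
A = rng.standard_normal((d, k)) / np.sqrt(d)

W = 0.1 * rng.standard_normal((k, d))  # excitatory feedforward weights
eta = 0.05                             # learning rate (assumed)

for _ in range(n_steps):
    x = A @ rng.standard_normal(k)     # stimulus
    r = W @ x                          # neuronal responses (linear toy)
    x_inh = W.T @ r                    # inhibitory feedback: reconstruction of x
    residual = x - x_inh               # E-I mismatch per input channel
    # Local update: presynaptic mismatch times postsynaptic rate; updates
    # vanish once inhibition balances excitation on the input distribution.
    W += eta * np.outer(r, residual)

# After learning, inhibitory feedback cancels most of the excitatory drive.
x = A @ rng.standard_normal(k)
print("relative residual:", np.linalg.norm(x - W.T @ (W @ x)) / np.linalg.norm(x))
```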

Seminar · Neuroscience

Graph Representation Learning and the Hippocampal-Entorhinal Circuit

Kim Stachenfeld
DeepMind
Feb 16, 2021

Seminar · Neuroscience · Recording

One Instructional Sequence Fits All? A Conceptual Analysis of the Applicability of Concreteness Fading

Dr Tommi Kokkonen / Prof Lennart Schalk
University of Helsinki / University of Education Schwyz
Feb 10, 2021

According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.

Seminar · Neuroscience · Recording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 3, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.

Seminar · Neuroscience

Unsupervised deep learning identifies semantic disentanglement in single inferotemporal neurons

Irina Higgins
Google DeepMind
Jul 14, 2020

Irina is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applied machine learning in the finance sector, and worked on speech recognition at Google Research. https://arxiv.org/pdf/2006.14304.pdf

ePoster

Unsupervised representation learning of neuron morphologies

COSYNE 2022

ePoster

Dendritic modulation for multitask representation learning in deep feedforward networks

Willem Wybo, Viet Anh Khoa Tran, Matthias Tsai, Bernd Illing, Jakob Jordan, Walter Senn, Abigail Morrison

COSYNE 2023

ePoster

Individualized representation learning of resting-state fMRI

Kuan Han, Minkyu Choi, Xiaokai Wang, Zhongming Liu

COSYNE 2023

ePoster

Deciphering the wiring rules of a cortical column using representation learning

Oren Richter, Elad Schneidman

COSYNE 2025

ePoster

The role of mixed selectivity and representation learning for compositional generalization

Samuel Lippl, Kimberly Stachenfeld

COSYNE 2025