
Transforming Task Representations

Seminar • Recording Available • Neuroscience


Dr Andrew Lampinen, DeepMind

Schedule

Thursday, May 13, 2021, 5:00 PM (Europe/London)

Host: Analogical Minds

Seminar location

Not provided; no geocoded details are available for this event.

Watch the seminar

Recording provided by the organiser.

Event Information

Format: Recorded Seminar
Recording: Available
Host: Analogical Minds
Duration: 60 minutes


Abstract

Humans can adapt to a novel task on their first try. By contrast, artificial intelligence systems often require immense amounts of data to adapt. In this talk, I will discuss my recent work (https://www.pnas.org/content/117/52/32970) on creating deep learning systems that can adapt on their first try by exploiting relationships between tasks. Specifically, the approach is based on transforming a representation for a known task to produce a representation for the novel task, by inferring and then using a higher order function that captures a relationship between the tasks. This approach can be interpreted as a type of analogical reasoning. I will show that task transformation can allow systems to adapt to novel tasks on their first try in domains ranging from card games, to mathematical objects, to image classification and reinforcement learning. I will discuss the analogical interpretation of this approach, an analogy between levels of abstraction within the model architecture that I refer to as homoiconicity, and what this work might suggest about using deep-learning models to infer analogies more generally.
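The core idea in the abstract — infer a higher-order mapping from example pairs of task representations, then apply it to a new task's representation to adapt zero-shot — can be illustrated with a minimal sketch. This is an illustration only, not the paper's architecture: it assumes tasks are already embedded as fixed vectors and approximates the learned meta-mapping with a linear least-squares fit; the names known_src, known_tgt, relation, and transform are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # dimensionality of the task-representation vectors (illustrative)

# Hypothetical embeddings for several known tasks, together with embeddings
# for their "transformed" variants (e.g. try-to-win -> try-to-lose).
known_src = rng.normal(size=(4, d))                   # known tasks
relation = np.linalg.qr(rng.normal(size=(d, d)))[0]   # stand-in for the true relationship
known_tgt = known_src @ relation                      # their transformed variants

# Infer the higher-order mapping from the example (source, target) pairs.
# Here it is fit as a least-squares linear map; the actual work conditions a
# learned network on the example pairs instead.
transform, *_ = np.linalg.lstsq(known_src, known_tgt, rcond=None)

# Apply the inferred mapping to a brand-new task representation to obtain a
# zero-shot representation for the novel (transformed) task.
novel_src = rng.normal(size=(1, d))
novel_tgt = novel_src @ transform

print("max reconstruction error on known pairs:",
      np.abs(known_src @ transform - known_tgt).max())
```

In the actual work, both the task embeddings and the mapping are produced by learned networks; the linear version above only conveys the infer-then-apply pattern described in the abstract.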

Topics

analogical reasoning, analogy, card games, deep learning, generalization, higher order function, homoiconicity, image classification, mathematical objects, neural networks, reinforcement learning, representation, task representation

About the Speaker

Dr Andrew Lampinen, DeepMind

Contact & Resources

Personal website: lampinen.github.io
Twitter/X: @AndrewLampinen (twitter.com/AndrewLampinen)

Related Seminars

Rethinking Attention: Dynamic Prioritization (Seminar, neuro)
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory p…
Jan 6, 2025 • George Washington University

The Cognitive Roots of the Problem of Free Will (Seminar, neuro)
Jan 7, 2025 • Bielefeld & Amsterdam

Memory Colloquium Lecture (Seminar, neuro)
Jan 8, 2025 • Keio University, Tokyo