Seminar · Neuroscience · ✓ Recording Available

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis

California Institute of Technology

Schedule

Wednesday, September 7, 2022
6:00 PM Europe/Berlin

Watch the seminar

Recording provided by the organiser.

Event Information

Domain: Neuroscience
Original Event: View source
Host: WWNeuRise
Duration: 35 minutes

Abstract

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emerging paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another; after learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams.

We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples. First, in the context of animal navigation, predictive learning can associate internal self-motion information, which is always available to the animal, with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and in which the network remaps to integrate with different gains.

Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating the neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS). Solving the generic problem of pattern-to-pattern association naturally gives rise to emergent cognitive phenomena such as blocking, overshadowing, saliency effects, extinction, and interstimulus interval effects. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.
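
To make the abstract's core mechanism concrete, below is a minimal sketch of local predictive plasticity under illustrative assumptions: a linear prediction of one input stream (e.g., visual landmarks) from another (e.g., self-motion), trained with an error that is local to each postsynaptic neuron, plus the same rule gated by a global reward prediction error for CS-US association. The names, the linear setup, and the toy data are hypothetical and do not reproduce the speaker's actual compartmentalized-neuron model.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_predictive_update(w, x_internal, y_external, lr=0.05):
        """One local plasticity step: the external stream (e.g., visual
        landmarks) acts as a teaching signal for the prediction driven by
        the internal stream (e.g., self-motion). The error term is local
        to each postsynaptic neuron."""
        prediction = w @ x_internal              # prediction from the internal stream
        error = y_external - prediction          # per-neuron mismatch between streams
        w += lr * np.outer(error, x_internal)    # delta-rule-like local weight update
        return w, error

    def rpe_gated_update(w, cs, us, rpe, lr=0.05):
        """The same local rule gated by a global reward prediction error
        (rpe): the CS-to-US pattern association is only updated when the
        outcome is surprising (rpe != 0)."""
        error = us - w @ cs
        w += lr * rpe * np.outer(error, cs)
        return w

    # Toy demo: learn a fixed linear mapping between the two streams.
    n_in, n_out = 20, 10
    true_map = rng.normal(size=(n_out, n_in))    # hypothetical ground-truth mapping
    w = np.zeros((n_out, n_in))
    for _ in range(2000):
        x = rng.normal(size=n_in)                # internal (self-motion) cue
        y = true_map @ x                         # external (visual) cue
        w, err = local_predictive_update(w, x, y)
    print("remaining prediction error:", float(np.linalg.norm(err)))

After training, the internal stream alone reproduces the external pattern, which is the sense in which such a mapping can support prediction "in the dark".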

Topics

animal navigation · connectivity · Hebb-like learning · neural activity patterns · optogenetic stimulation · predictive learning · quasi-continuous attractor · reward prediction errors · self-supervised learning

About the Speaker

Pantelis Vafeidis

California Institute of Technology

Contact & Resources

Twitter/X: @vafidisp (twitter.com/vafidisp)

Related Seminars

Pancreatic Opioids Regulate Ingestive and Metabolic Phenotypes
neuro · Jan 12, 2025 · Washington University in St. Louis

Exploration and Exploitation in Human Joint Decisions
neuro · Jan 12, 2025 · Munich

The Role of GPCR Family Mrgprs in Itch, Pain, and Innate Immunity
neuro · Jan 12, 2025 · Johns Hopkins University