
Seminar · Past Event · Psychology

Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations

Dr. Brad Wyble

Penn State University

Schedule

Wednesday, July 7, 2021, 3:00 AM (Canada/Atlantic)

Host: Distributed WM Series

Event Information

Domain: Psychology
Host: Distributed WM Series
Duration: 60 minutes

Abstract

Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items' attributes to tokenized representations. Simulation results revealed that shape information for stimuli the model was trained on can be encoded and retrieved efficiently from latents at higher levels of the visual hierarchy. In contrast, novel patterns entirely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated the model by testing a series of predictions against behavioral results from WM tasks. The model provides a compelling demonstration of how visual knowledge supports the formation of compact visual representations for efficient memory encoding.
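The abstract describes a binding pool that superimposes several items' latents in one shared population of neurons and reads each item back out via its token. The short NumPy sketch below is an illustrative assumption rather than the MLR implementation: the pool size, the 15% token gating, the random projection, and the least-squares readout are stand-in choices, and the latents are random vectors where MLR would use VAE latents drawn from some level of the visual hierarchy.

# Minimal sketch (assumption, not the speaker's code): store several latent
# vectors in one shared "binding pool" and retrieve each by its token.
import numpy as np

rng = np.random.default_rng(0)

POOL_SIZE = 2000    # shared pool of binding neurons (illustrative size)
LATENT_DIM = 20     # dimensionality of one latent representation
N_ITEMS = 3         # items held in working memory

# Fixed random projection from latent space into the pool.
L = rng.standard_normal((POOL_SIZE, LATENT_DIM)) / np.sqrt(LATENT_DIM)

# Each token gates a random ~15% subset of pool neurons (its binding pattern).
tokens = rng.random((N_ITEMS, POOL_SIZE)) < 0.15

# Stand-in latents; in MLR these would come from the VAE encoder.
latents = rng.standard_normal((N_ITEMS, LATENT_DIM))

# Encoding: superimpose all items into a single pool-activity vector.
pool = np.zeros(POOL_SIZE)
for g, z in zip(tokens, latents):
    pool += g * (L @ z)          # token-gated projection of the latent

def retrieve(pool_activity, token):
    """Least-squares readout of the latent bound to `token`."""
    Lt = token[:, None] * L                      # rows of L gated by the token
    z_hat, *_ = np.linalg.lstsq(Lt, token * pool_activity, rcond=None)
    return z_hat

for i in range(N_ITEMS):
    z_hat = retrieve(pool, tokens[i])
    r = np.corrcoef(z_hat, latents[i])[0, 1]
    print(f"item {i}: stored vs. retrieved latent correlation = {r:.2f}")

Running the sketch shows each latent recovered with high (but imperfect) fidelity: interference between items grows as more tokens share pool neurons, which is the kind of capacity limit a binding-pool account naturally produces.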

Topics

MNIST, behavioural results, deep learning, latent representations, memory encoding, neural networks, stimulus encoding, tokenized representations, variational autoencoder, visual hierarchy, visual knowledge, working memory

About the Speaker

Dr. Brad Wyble

Penn State University

Contact & Resources

Personal Website

Google Scholar: scholar.google.com/citations

Twitter/X: @bradpwyble (twitter.com/bradpwyble)
