

Seminar • Past Event • Neuroscience

The centrality of population-level factors to network computation is demonstrated by a versatile approach for training spiking networks

Dr. Brian DePasquale, Princeton

Schedule

Wednesday, May 3, 2023, 5:00 PM (Europe/Berlin)

Host: SNUFA

Seminar location

Not provided

Access Seminar

Meeting Password

$Em4HF

Use this password when joining the live session

Event Information

Format: Past Seminar
Recording: Not available
Host: SNUFA
Duration: 30 minutes


Abstract

Neural activity is often described in terms of population-level factors extracted from the responses of many neurons. Factors provide a lower-dimensional description with the aim of shedding light on network computations. Yet, mechanistically, computations are performed not by continuously valued factors but by interactions among neurons that spike discretely and variably. Models provide a means of bridging these levels of description. We developed a general method for training model networks of spiking neurons by leveraging factors extracted from either data or firing-rate-based networks. In addition to providing a useful model-building framework, this formalism illustrates how reliable and continuously valued factors can arise from seemingly stochastic spiking. Our framework establishes procedures for embedding this property in network models with different levels of realism. The relationship between spikes and factors in such networks provides a foundation for interpreting (and subtly redefining) commonly used quantities such as firing rates.
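To make the notion of "population-level factors" concrete, here is a minimal, self-contained sketch of the general idea the abstract starts from: extracting a low-dimensional, continuously valued description from discrete, variable spiking via PCA. The simulated data, shapes, and the use of PCA are illustrative assumptions for this sketch only, not the speaker's actual training method.

```python
import numpy as np

# Illustrative sketch (assumption: simulated data, PCA as the factor
# extractor; not the method presented in the seminar).
rng = np.random.default_rng(0)

n_neurons, n_timebins, n_factors = 100, 500, 3

# Low-dimensional latent factors driving the whole population.
t = np.linspace(0, 2 * np.pi, n_timebins)
latent = np.stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)])  # (3, T)

# Each neuron's firing rate is a random mixture of the factors;
# observed spikes are discrete and variable (Poisson).
mixing = rng.normal(size=(n_neurons, n_factors))
rates = np.exp(0.5 * mixing @ latent)   # (N, T), positive rates
spikes = rng.poisson(rates)             # seemingly stochastic spiking

# PCA (via SVD) on mean-centered spike counts recovers a reliable,
# continuously valued low-dimensional description of the population.
centered = spikes - spikes.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
factors = vt[:n_factors]                # (3, T) estimated factor traces

# Fraction of variance captured by the leading factors.
explained = (s[:n_factors] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by {n_factors} factors: {explained:.2f}")
```

The point of the toy example is the bridge the abstract describes: individual spike trains are noisy, but the leading principal components recover smooth population-level trajectories that reflect the underlying latents.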

Topics

FORCE learning, artificial neural networks, dimensionality reduction, dynamical systems, factor models, firing rates, model networks, model-building framework, motor system, network computation, network training, neural activity, neuron interactions, population dynamics, population-level factors, recurrent neural networks, spiking networks, stochastic spiking

About the Speaker

Dr. Brian DePasquale, Princeton

Contact & Resources

Personal Website: www.princeton.edu/~briandd/

Twitter/X: @briandepasquale (twitter.com/briandepasquale)

Related Seminars

Seminar • 64% match • Relevant

Rethinking Attention: Dynamic Prioritization (neuro)
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory p…
Jan 6, 2025 • George Washington University

Seminar • 64% match • Relevant

The Cognitive Roots of the Problem of Free Will (neuro)
Jan 7, 2025 • Bielefeld & Amsterdam

Seminar • 64% match • Relevant

The neural basis of exploration and decision-making in individuals and groups (neuro)
Jan 8, 2025 • Max Planck Institute of Animal Behaviour, Konstanz