Topic: Neuroscience
Content Overview
11 total items
7 ePosters
4 Seminars

Latest

Seminar · Neuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 12, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
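A minimal sketch of the fast-versus-slow learning intuition behind this proposal, assuming a simple delta-rule learner run at two different learning rates; the rates, noise level, and update rule are illustrative choices, not the speaker's actual model:

# Toy illustration (not the speaker's model): two delta-rule learners tracking
# the same hidden regularity, one with a fast "hippocampus-like" rate and one
# with a slow "neocortex-like" rate.
import numpy as np

rng = np.random.default_rng(0)

def run(learning_rate, n_trials=200):
    """Running estimate of a hidden regularity (a mean of 1.0) from noisy samples."""
    estimate, trace = 0.0, []
    for _ in range(n_trials):
        observation = 1.0 + 0.3 * rng.standard_normal()
        estimate += learning_rate * (observation - estimate)
        trace.append(estimate)
    return np.array(trace)

fast = run(learning_rate=0.5)   # rapid extraction of the regularity within a few trials
slow = run(learning_rate=0.01)  # gradual integration over many experiences
print(f"after 10 trials:  fast={fast[9]:.2f}  slow={slow[9]:.2f}")
print(f"after 200 trials: fast={fast[-1]:.2f}  slow={slow[-1]:.2f}")

The fast learner locks onto the regularity within roughly ten noisy samples while the slow learner is still far from it, which is the timescale contrast the abstract highlights.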

Seminar · Neuroscience · Recording

Applying Structural Alignment Theory to Early Verb Learning

Jane Childers
Trinity University
Feb 2, 2023

Learning verbs is difficult yet critical to acquiring one's native language. Children appear to benefit from seeing multiple events and comparing them to each other, and structural alignment (SA) theory provides a good theoretical framework for investigating how preschool children may be comparing events as they learn new verbs. The talk will include six studies of early verb learning that use eye-tracking as well as other behavioral (pointing) procedures and that test key predictions of SA theory: that seeing similar examples before more varied ones helps observers learn how to compare (progressive alignment), and that very low alignability of an event with other events is one cue that the event should be ignored. Whether or how statistical learning may also be at work will be considered.

Seminar · Neuroscience · Recording

Learning the structure and investigating the geometry of complex networks

Robert Peach and Alexis Arnaudon
Imperial College
Sep 25, 2021

Networks are widely used as mathematical models of complex systems across many scientific disciplines, and in particular within neuroscience. In this talk, we introduce two aspects of our collaborative research: (1) machine learning and networks, and (2) graph dimensionality.

Machine learning and networks. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. We have developed hcga, a framework for highly comparative analysis of graph data sets that computes several thousand graph features from any given network. Taking inspiration from hctsa, hcga offers a suite of statistical learning and data analysis tools for the automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that hcga outperforms other methodologies (including deep learning) on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features, which we exemplify on a dataset of neuronal morphology images.

Graph dimensionality. Dimension is a fundamental property of objects and of the space in which they are embedded. Yet ideal notions of dimension, as in Euclidean spaces, do not always translate to physical spaces, which can be constrained by boundaries and distorted by inhomogeneities, or to intrinsically discrete systems such as networks. Deviating from approaches based on fractals, we present here a new framework to define intrinsic notions of dimension on networks: the relative, local and global dimension. We showcase our method on various physical systems.
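As a rough illustration of the feature-based pipeline described above (and not the hcga API itself), the sketch below computes a handful of interpretable graph features with networkx and feeds them to an off-the-shelf classifier; the feature set, graph families, and classifier are illustrative assumptions.

# Minimal sketch of feature-based graph classification: compute an interpretable
# feature vector per graph, then train a standard classifier on those vectors.
# hcga computes thousands of such features; only a few are used here.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(G):
    """Return a small vector of topological features for one graph."""
    degrees = [d for _, d in G.degree()]
    return np.array([
        G.number_of_edges(),
        np.mean(degrees),
        np.std(degrees),
        nx.density(G),
        nx.average_clustering(G),
        nx.transitivity(G),
    ])

# Hypothetical benchmark: distinguish two random-graph families.
graphs, labels = [], []
for i in range(50):
    graphs.append(nx.erdos_renyi_graph(60, 0.1, seed=i))
    labels.append(0)
    graphs.append(nx.barabasi_albert_graph(60, 3, seed=i))
    labels.append(1)

X = np.vstack([graph_features(G) for G in graphs])
y = np.array(labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

Because each feature is a named graph property, the trained model stays interpretable: feature importances point directly to which network characteristics drive the classification.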

Seminar · Neuroscience

Sounds Familiar? Statistical Learning of Acoustic Environments

David McAlpine
Macquarie University, Sydney, Australia
Feb 22, 2021

ePoster · Neuroscience

Statistical learning yields generalization and naturalistic behaviors in transitive inference

Samuel Lippl, Larry Abbott, Kenneth Kay, Greg Jensen, Vincent Ferrera

COSYNE 2023

ePoster · Neuroscience

Pupil dynamics and hippocampal representations reveal fast statistical learning in mice

Adedamola Onih, Abdullah Aziz, Athena Akrami

COSYNE 2025

ePoster · Neuroscience

Alpha activity change during implicit visual statistical learning

Szabolcs Sáringer, András Benyhe, Ágnes Fehér, Péter Kaposvári

ePoster · Neuroscience

Hippocampus is necessary for implicit statistical learning: Insights from mouse and human pupillometry

Adedamola Onih, Athena Akrami

FENS Forum 2024

ePoster · Neuroscience

Network mechanisms for statistical learning and place field formation in the hippocampus

Margaret Lane, Merkourios Simos, James Priestley

FENS Forum 2024

ePoster · Neuroscience

Statistical learning in auditory cortex and hippocampus

Xing Xiao, Livia de Hoz

FENS Forum 2024

ePoster · Neuroscience

Statistical learning in acute and chronic pain

Jakub Onysk

Neuromatch 5

Statistical learning coverage

11 items

ePosters: 7
Seminars: 4

Share your knowledge

Know something about statistical learning? Help the community by contributing seminars, talks, or research.

Contribute content
Domain spotlight

Explore how statistical learning research is advancing inside Neuroscience.

Visit domain
