Topic · Neuro

representation learning

6 Seminars · 6 ePosters

Latest

Seminar · Neuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 14, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
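
The sketch below is only an illustration of the gating idea described in the abstract; the layer sizes, weight initialization, and gating patterns are arbitrary assumptions, and it is not code from the talk or the accompanying paper.

```python
# Minimal sketch of a gated deep linear network (illustrative assumptions only).
# Layers are linear, but context-dependent binary gates switch hidden pathways
# on and off, so the overall input-output map is nonlinear in the input.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

W1 = rng.normal(scale=0.1, size=(d_hidden, d_in))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(d_out, d_hidden))  # hidden -> output weights

def forward(x, gate):
    """Apply the linear layers with a multiplicative 0/1 gate on the hidden units."""
    h = gate * (W1 @ x)        # gating selects which pathways carry information
    return W2 @ h

x = rng.normal(size=d_in)
gate_a = (rng.random(d_hidden) < 0.5).astype(float)  # one task context
gate_b = 1.0 - gate_a                                # a different context

# The same input is routed through different pathways in the two contexts,
# so the mapping differs even though every individual layer is linear.
print(forward(x, gate_a))
print(forward(x, gate_b))
```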

Seminar · Neuroscience · Recording

Deriving local synaptic learning rules for efficient representations in networks of spiking neurons

Viola Priesemann
Max Planck Institute for Dynamics and Self-Organization
Nov 2, 2021

How can neural networks learn to efficiently represent complex and high-dimensional inputs via local plasticity mechanisms? Classical models of representation learning assume that input weights are learned via pairwise Hebbian-like plasticity. Here, we show that pairwise Hebbian-like plasticity only works under specific requirements on neural dynamics and input statistics. To overcome these limitations, we derive from first principles a learning scheme based on voltage-dependent synaptic plasticity rules. Here, inhibition learns to locally balance excitatory input in individual dendritic compartments, and thereby can modulate excitatory synaptic plasticity to learn efficient representations. We demonstrate in simulations that this learning scheme works robustly even for complex, high-dimensional and correlated inputs. It also works in the presence of inhibitory transmission delays, where Hebbian-like plasticity typically fails. Our results draw a direct connection between dendritic excitatory-inhibitory balance and voltage-dependent synaptic plasticity as observed in vivo, and suggest that both are crucial for representation learning.
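
As a toy illustration of the mechanism summarized above (the update rules, learning rates, and input statistics below are assumptions for the sketch, not the learning scheme derived in the talk): inhibition learns to cancel excitatory drive in a dendritic compartment, and the residual dendritic potential then gates a Hebbian-like excitatory update.

```python
# Toy sketch: dendritic E-I balance gating voltage-dependent excitatory plasticity
# (hypothetical rates and input statistics; not the rules derived in the talk).
import numpy as np

rng = np.random.default_rng(1)
n_exc, n_inh, n_steps = 20, 5, 5000
w_exc = rng.uniform(0.0, 1.0, n_exc)   # excitatory synaptic weights
w_inh = rng.uniform(0.0, 0.1, n_inh)   # inhibitory synaptic weights
eta_exc, eta_inh = 1e-4, 1e-2          # learning rates (assumed values)

for _ in range(n_steps):
    x_exc = (rng.random(n_exc) < 0.2).astype(float)  # presynaptic excitatory spikes
    x_inh = (rng.random(n_inh) < 0.2).astype(float)  # presynaptic inhibitory spikes

    # Local dendritic potential: excitation minus inhibition in one compartment.
    v_dend = w_exc @ x_exc - w_inh @ x_inh

    # Inhibition learns to cancel the local excitatory drive (balance rule).
    w_inh += eta_inh * v_dend * x_inh

    # Excitatory plasticity is voltage-dependent: the residual (unbalanced)
    # dendritic potential gates the Hebbian-like weight change.
    w_exc += eta_exc * v_dend * x_exc

    np.clip(w_inh, 0.0, None, out=w_inh)
    np.clip(w_exc, 0.0, None, out=w_exc)

# After learning, the expected dendritic potential should be approximately balanced.
print(0.2 * w_exc.sum() - 0.2 * w_inh.sum())
```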

Seminar · Neuroscience

Graph Representation Learning and the Hippocampal-Entorhinal Circuit

Kim Stachenfeld
DeepMind
Feb 17, 2021

Seminar · Neuroscience · Recording

One Instructional Sequence Fits all? A Conceptual Analysis of the Applicability of Concreteness Fading

Dr Tommi Kokkonen / Prof Lennart Schalk
University of Helsinki / University of Education Schwyz
Feb 11, 2021

According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.

Seminar · Neuroscience · Recording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 4, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
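
A deliberately tiny sketch of the vinaigrette example (a hypothetical encoding, not the framework presented in the talk): if knowledge is stored as a domain-agnostic relation such as a ratio, the same representation can be reused unchanged in a new domain.

```python
# Toy illustration of cross-domain transfer through a shared relational
# representation (hypothetical encoding; not the model from the talk).

def ratio(a: float, b: float) -> float:
    """Domain-agnostic relational feature: the ratio between two magnitudes."""
    return a / b

# "Mathematics" domain: the learned relation is roughly 3 : 1.
target = ratio(6.0, 2.0)

# "Cooking" domain: the same relation applies to oil and vinegar volumes (ml),
# so knowledge transfers because the representation encodes the relation,
# not the domain-specific items.
oil, vinegar = 90.0, 30.0
print(abs(ratio(oil, vinegar) - target) < 0.5)  # True
```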

Seminar · Neuroscience

Unsupervised deep learning identifies semantic disentanglement in single inferotemporal neurons

Irina Higgins
Google DeepMind
Jul 15, 2020

Irina is a research scientist at DeepMind, where she works in the Frontiers team. Her work aims to bring together insights from the fields of neuroscience and physics to advance general artificial intelligence through improved representation learning. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an undergraduate student in Experimental Psychology at Westminster University, followed by a DPhil at the Oxford Centre for Computational Neuroscience and Artificial Intelligence, where she focused on understanding the computational principles underlying speech processing in the auditory brain. During her DPhil, Irina also worked on developing poker AI, applying machine learning in the finance sector, and speech recognition at Google Research. https://arxiv.org/pdf/2006.14304.pdf

ePoster · Neuroscience

Unsupervised representation learning of neuron morphologies

Marissa A. Weis, Timo Lüddecke, Laura Pede, Alexander Ecker

COSYNE 2022

ePoster · Neuroscience

Dendritic modulation for multitask representation learning in deep feedforward networks

Willem Wybo, Viet Anh Khoa Tran, Matthias Tsai, Bernd Illing, Jakob Jordan, Walter Senn, Abigail Morrison

COSYNE 2023

ePoster · Neuroscience

Individualized representation learning of resting-state fMRI

Kuan Han, Minkyu Choi, Xiaokai Wang, Zhongming Liu

COSYNE 2023

ePoster · Neuroscience

Deciphering the wiring rules of a cortical column using representation learning

Oren Richter, Elad Schneidman

COSYNE 2025

ePoster · Neuroscience

The role of mixed selectivity and representation learning for compositional generalization

Samuel Lippl, Kimberly Stachenfeld

COSYNE 2025

representation learning coverage

12 items

Seminars: 6
ePosters: 6
Domain spotlight

Explore how representation learning research is advancing inside Neuro.

Visit domain