Transfer Learning

Topic spotlight · World Wide

Discover seminars, jobs, and research tagged with transfer learning across World Wide.
13 curated items: 5 Positions · 4 Seminars · 4 ePosters
Position · Computer Science

Prof. Dr.-Ing. Marcus Magnor

Technische Universität Braunschweig
Technische Universität Braunschweig, Germany
Dec 5, 2025

The position is a W3 Full Professorship for Artificial Intelligence in Interactive Systems at Technische Universität Braunschweig. The role involves expanding the research area of data-driven methods for interactive and intelligent systems at TU Braunschweig and strengthening the focal points 'Data Science' and 'Reliability' of the Department of Computer Science. The position holder is expected to have a strong background in Computer Science with a focus on Artificial Intelligence/Machine Learning, specifically in Dependable AI and Explainable AI. The role also involves teaching topic-related courses in Artificial Intelligence and Machine Learning to complement the Bachelor's and Master's degree programs of the Department of Computer Science.

Position

Jun.-Prof. Dr.-Ing. Rania Rayyes

Karlsruhe Institute of Technology (KIT), Institut für Fördertechnik und Logistiksysteme (IFL), InnovationsCampus Mobilität der Zukunft (ICM)
Karlsruhe Institute of Technology (KIT), Gebäude 50.38, Gotthard-Franz-Straße 8, 76131 Karlsruhe
Dec 5, 2025

The main focus of this position is to develop novel AI systems and methods for robot applications: dexterous robot grasping, human-robot learning, and transfer learning for efficient online learning. The role offers close cooperation with other institutes, universities, and numerous industrial partners; a self-determined environment for developing one's own research topics, with active support for the doctoral research project; flexible working hours; and work in a young, interdisciplinary research team.

Seminar · Neuroscience

The Neural Race Reduction: Dynamics of nonlinear representation learning in deep architectures

Andrew Saxe
UCL
Apr 13, 2023

What is the relationship between task, network architecture, and population activity in nonlinear deep networks? I will describe the Gated Deep Linear Network framework, which schematizes how pathways of information flow impact learning dynamics within an architecture. Because of the gating, these networks can compute nonlinear functions of their input. We derive an exact reduction and, for certain cases, exact solutions to the dynamics of learning. The reduction takes the form of a neural race with an implicit bias towards shared representations, which then govern the model’s ability to systematically generalize, multi-task, and transfer. We show how appropriate network architectures can help factorize and abstract knowledge. Together, these results begin to shed light on the links between architecture, learning dynamics and network performance.
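The core ingredient of the framework described above can be sketched in a few lines: each pathway is a product of linear maps, and binary gates select which pathways contribute, so the overall input-output map is nonlinear even though every pathway is linear. This is an illustrative sketch only, not Saxe's implementation; the two-pathway setup, dimensions, and context-based gating rule are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

# Two linear pathways from input to output (weights are random here;
# in the framework they would be learned under gradient descent).
W1a, W2a = rng.normal(size=(d_hidden, d_in)), rng.normal(size=(d_out, d_hidden))
W1b, W2b = rng.normal(size=(d_hidden, d_in)), rng.normal(size=(d_out, d_hidden))

def gated_forward(x, context):
    # Hypothetical gating rule: the context switches pathways on or off.
    # Gating makes the network's overall function nonlinear in (x, context).
    g_a = 1.0 if context == "A" else 0.0
    g_b = 1.0 if context == "B" else 0.0
    return g_a * (W2a @ (W1a @ x)) + g_b * (W2b @ (W1b @ x))

x = rng.normal(size=d_in)
y_a = gated_forward(x, "A")  # output routed through pathway A
y_b = gated_forward(x, "B")  # same input, different pathway, different output
```

Because each active pathway is a product of linear maps, the learning dynamics of each pathway can be analyzed with deep-linear-network tools, which is what makes the exact reduction in the talk tractable.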

Seminar · Neuroscience

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll
Stanford University
Aug 24, 2022

Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate underlying this structure remains elusive in biological and artificial neural networks. Here we identified an algorithmic neural substrate for compositional computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses of networks revealed learned computational strategies that mirrored the modular subtask structure of the task-set used for training. Dynamical motifs such as attractors, decision boundaries and rotations were reused across different task computations. For example, tasks that required memory of a continuous circular variable repurposed the same ring attractor. We show that dynamical motifs are implemented by clusters of units and are reused across different contexts, allowing for flexibility and generalization of previously learned computation. Lesioning these clusters resulted in modular effects on network performance: a lesion that destroyed one dynamical motif only minimally perturbed the structure of other dynamical motifs. Finally, modular dynamical motifs could be reconfigured for fast transfer learning. After slow initial learning of dynamical motifs, a subsequent faster stage of learning reconfigured motifs to perform novel tasks. This work contributes to a more fundamental understanding of compositional computation underlying flexible general intelligence in neural systems. 
We present a conceptual framework that establishes dynamical motifs as a fundamental unit of computation, intermediate between the neuron and the network. As more whole brain imaging studies record neural activity from multiple specialized systems simultaneously, the framework of dynamical motifs will guide questions about specialization and generalization across brain regions.
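The fast-transfer idea in the abstract, reusing slowly learned dynamics while only reconfiguring how they are read out, can be illustrated with a deliberately simplified analogy (not Driscoll's model): freeze a recurrent "core" whose dynamics stand in for learned motifs, and fit only a new linear readout for a novel task, echo-state-network style. The network size, task, and least-squares readout below are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

# Frozen recurrent core: its dynamics play the role of reusable motifs.
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W_in = rng.normal(size=n)

def run_core(inputs):
    # Drive the fixed recurrent network and record its state trajectory.
    h = np.zeros(n)
    states = []
    for u in inputs:
        h = np.tanh(W @ h + W_in * u)
        states.append(h.copy())
    return np.array(states)

inputs = rng.normal(size=200)
H = run_core(inputs)                      # (time, units) state matrix

# "Novel task": report the previous input (a one-step memory task).
# Only the readout is trained, via least squares; the core is untouched.
target = np.concatenate([[0.0], inputs[:-1]])
w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
pred = H @ w_out
```

The point of the analogy is the division of labor: the expensive part (the recurrent dynamics) is learned once and reused, while adapting to a new task reduces to a cheap linear fit.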

Seminar · Neuroscience

In pursuit of a universal, biomimetic iBCI decoder: Exploring the manifold representations of action in the motor cortex

Lee Miller
Northwestern University
May 19, 2022

My group pioneered the development of a novel intracortical brain computer interface (iBCI) that decodes muscle activity (EMG) from signals recorded in the motor cortex of animals. We use these synthetic EMG signals to control Functional Electrical Stimulation (FES), which causes the muscles to contract and thereby restores rudimentary voluntary control of the paralyzed limb. In the past few years, there has been much interest in the fact that information from the millions of neurons active during movement can be reduced to a small number of “latent” signals in a low-dimensional manifold computed from the multiple neuron recordings. These signals can be used to provide a stable prediction of the animal’s behavior over many months-long periods, and they may also provide the means to implement methods of transfer learning across individuals, an application that could be of particular importance for paralyzed human users. We have begun to examine the representation within this latent space, of a broad range of behaviors, including well-learned, stereotyped movements in the lab, and more natural movements in the animal’s home cage, meant to better represent a person’s daily activities. We intend to develop an FES-based iBCI that will restore voluntary movement across a broad range of motor tasks without need for intermittent recalibration. However, the nonlinearities and context dependence within this low-dimensional manifold present significant challenges.
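The manifold idea above, reducing many neurons to a few latent signals and decoding behavior from those latents, follows a standard recipe that can be sketched with synthetic data (this is the generic pipeline, not the lab's actual decoder; the dimensions, noise level, and simulated "EMG" are assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_neurons, n_latent = 500, 100, 5

# Simulated recordings: activity of many neurons driven by a few shared
# latent signals plus noise, so the data lie near a low-dimensional manifold.
latents_true = rng.normal(size=(T, n_latent))
mixing = rng.normal(size=(n_latent, n_neurons))
activity = latents_true @ mixing + 0.1 * rng.normal(size=(T, n_neurons))

# Synthetic "EMG" that depends only on the latent signals.
emg = latents_true @ rng.normal(size=(n_latent, 1))

# PCA via SVD: project activity onto its top principal components
# to recover low-dimensional manifold coordinates.
centered = activity - activity.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
latents = centered @ Vt[:n_latent].T      # (T, n_latent)

# Linear decoder from latent signals to EMG, fit by least squares.
w, *_ = np.linalg.lstsq(latents, emg, rcond=None)
emg_hat = latents @ w
```

In this linear toy setting the decoder works well by construction; the abstract's point is precisely that real data break these assumptions, since nonlinearity and context dependence within the manifold make a single fixed linear decoder insufficient.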

Seminar · Neuroscience

Scaffolding up from Social Interactions: A proposal of how social interactions might shape learning across development

Sarah Gerson
Cardiff University
Dec 8, 2021

Social learning and analogical reasoning both provide exponential opportunities for learning. These skills have largely been studied independently, but my future research asks how combining skills across previously independent domains could add up to more than the sum of their parts. Analogical reasoning allows individuals to transfer learning between contexts and opens up infinite opportunities for innovation and knowledge creation. Its origins and development, so far, have largely been studied in purely cognitive domains. Constraining analogical development to non-social domains may mistakenly lead researchers to overlook its early roots and limit ideas about its potential scope. Building a bridge between social learning and analogy could facilitate identification of the origins of analogical reasoning and broaden its far-reaching potential. In this talk, I propose that the early emergence of social learning, its saliency, and its meaningful context for young children provides a springboard for learning. In addition to providing a strong foundation for early analogical reasoning, the social domain provides an avenue for scaling up analogies in order to learn to learn from others via increasingly complex and broad routes.

ePoster

Alternating inference and learning: a thalamocortical model for continual and transfer learning

Ali Hummos & Guangyu Robert Yang

COSYNE 2023

ePoster

Using transfer learning to identify a neural system's algorithm

John Morrison, Benjamin Peters

COSYNE 2025

ePoster

Cortical circuits for goal-directed cross-modal transfer learning

Maëlle Guyoton, Giulio Matteucci, Charlie Foucher, Sami El-Boustani

FENS Forum 2024

ePoster

Transfer Learning from Real to Imagined Motor Actions in ECoG Data

Ozgur Ege Aydogan

Neuromatch 5