Schedule
Thursday, February 4, 2021, 4:00 PM (Europe/London)
Recording provided by the organiser.
Domain
Original Event
Host
Analogical Minds
Duration
60 minutes
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, machine learning approaches still fall far short of human-level performance in their capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly with such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
Dr Leonidas Alex Doumas
The University of Edinburgh
Contact & Resources
neuro
I’m interested in structure-function relationships in neural circuits and behavior, with a focus on motor and somatosensory areas of the mouse’s cortex involved in controlling forelimb movements. …