Dr Leonidas Alex Doumas
The University of Edinburgh
Schedule
Thursday, February 4, 2021
4:00 PM Europe/London
Recording provided by the organiser.
Format
Recorded Seminar
Recording
Available
Host
Analogical Minds
Duration
60 minutes
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, machine learning approaches still fall far short of human-level performance in one key capacity: transferring knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly with such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, together with a framework that we have developed for learning representations that support such transfer. The model is compared with current machine learning approaches.
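To make the vinaigrette example concrete: the point is that a relation such as "ratio" is domain-general, so a learner who holds it as an abstract representation can reuse it unchanged outside mathematics. The minimal Python sketch below is purely illustrative; it is not the speaker's framework, and the function name and quantities are invented for the example.

    def ratio(a: float, b: float) -> float:
        """A domain-general relation: the ratio between two quantities."""
        return a / b

    # Mathematics: the slope of a line through the origin is rise over run.
    slope = ratio(6.0, 2.0)  # 3.0

    # Cooking: a classic vinaigrette is roughly 3 parts oil to 1 part vinegar,
    # i.e. the same ratio relation applied to edible fat and acid.
    vinaigrette = ratio(3.0, 1.0)  # 3.0

    # The relational structure matches across the two domains; this is the
    # kind of transfer the abstract argues humans perform effortlessly.
    assert slope == vinaigrette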
Contact & Resources
neuro
Decades of research on understanding the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates in order to guide prioritized sensory p
neuro
neuro