

Topic spotlight

knowledge representation

Discover seminars, jobs, and research tagged with knowledge representation across World Wide.
4 curated items · 2 Positions · 2 Seminars
Position · Artificial Intelligence

Dr. Robert Legenstein

Graz University of Technology
Austria
Dec 5, 2025

For the recently established Cluster of Excellence CoE Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria.

In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI. As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use. These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems.

Prioritizing fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work in order to provide theory and models for future AI and their deployment in applications.

Seminar · Neuroscience · Recording

Cross Domain Generalisation in Humans and Machines

Leonidas Alex Doumas
The University of Edinburgh
Feb 3, 2021

Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.

Seminar · Neuroscience · Recording

Evaluating different facets of category status for promoting spontaneous transfer

Sean Snoddy
Binghamton University
Nov 16, 2020

Existing accounts of analogical transfer highlight the importance of comparison-based schema abstraction in aiding retrieval of relevant prior knowledge from memory. In this talk, we discuss an alternative view, the category status hypothesis—which states that if knowledge of a target principle is represented as a relational category, it is easier to activate through categorization (as opposed to cue-based reminding)—and briefly review supporting evidence. We then further investigate this hypothesis by designing study tasks that promote different facets of category-level representations and assessing their impact on spontaneous analogical transfer. A Baseline group compared two analogous cases; the remaining groups experienced comparison plus another task intended to affect the category status of the knowledge representation. The Intension group read an abstract statement of the principle with a supporting task of generating a new case. The Extension group read two more positive cases with the task of judging whether each exemplified the target principle. The Mapping group read a contrast case with the task of revising it into a positive example of the target principle (thereby providing practice moving in both directions between type and token, i.e., evaluating a given case relative to knowledge and using knowledge to generate a revised case). The results showed that both the Intension and Extension tasks improved transfer over Baseline, with the Intension task additionally improving both the accessibility of prior knowledge and the ability to apply relational concepts. Implications for theories of analogical transfer are discussed.