Explainability
Dr. Robert Legenstein
For the recently established Cluster of Excellence (CoE) Bilateral Artificial Intelligence (BILAI), funded by the Austrian Science Fund (FWF), we are looking for more than 50 PhD students and 10 Post-Doc researchers (m/f/d) to join our team at one of the six leading research institutions across Austria. In BILAI, major Austrian players in Artificial Intelligence (AI) are teaming up to work towards Broad AI. As opposed to Narrow AI, which is characterized by task-specific skills, Broad AI seeks to address a wide array of problems rather than being limited to a single task or domain. To develop its foundations, BILAI employs a Bilateral AI approach, effectively combining sub-symbolic AI (neural networks and machine learning) with symbolic AI (logic, knowledge representation, and reasoning) in various ways. Harnessing the full potential of both symbolic and sub-symbolic approaches can open new avenues for AI, enhancing its ability to solve novel problems, adapt to diverse environments, improve reasoning skills, and increase efficiency in computation and data use. These key features enable a broad range of applications for Broad AI, from drug development and medicine to planning and scheduling, autonomous traffic management, and recommendation systems. Prioritizing fairness, transparency, and explainability, the development of Broad AI is crucial for addressing ethical concerns and ensuring a positive impact on society. The research team is committed to cross-disciplinary work in order to provide theory and models for future AI and their deployment in applications.
Rik Sarkar
We are looking for PhD students at the University of Edinburgh for research focused on: machine learning and optimization algorithms; generative AI and artificial data; privacy, fairness, and explainability; topological and geometric data analysis; and other related areas.
Tina Eliassi-Rad
The RADLAB at Northeastern University’s Network Science Institute has two postdoctoral positions available. We are looking for exceptional candidates who are interested in the following programs: 1. Trustworthy Network Science: As the use of machine learning in network science grows, so do the issues of stability, robustness, explainability, transparency, and fairness, to name a few. We address issues of trustworthy ML in network science. 2. Just Machine Learning: Machine learning systems are not islands. They are part of broader complex systems. To understand and mitigate the risks and harms of using machine learning, we remove our optimization blinders and study the broader complex systems in which machine learning systems operate.
Justus Piater
The Intelligent and Interactive Systems lab uses machine learning to enhance the flexibility, robustness, generalization and explainability of robots and vision systems, focusing on methods for learning about structure, function, and other concepts that describe the world in actionable ways. Three University-Assistant Positions involve minor teaching duties with negotiable research topics within the lab's scope. One Project Position involves the integration of robotic perception and execution mechanisms for task-oriented object manipulation in everyday environments, with a focus on affordance-driven object part segmentation and object manipulation using reinforcement learning.
N/A
The PhD research focuses on the fairness, explainability, and robustness of machine learning systems within the framework of causal counterfactual analysis using formalisms from probabilistic graphical models, probabilistic circuits, and structural causal models.
Kerstin Ritter
The Department of Machine Learning for Clinical Neuroscience is currently recruiting PhD candidates and Postdocs. We develop advanced machine and deep learning models to analyze diverse clinical data, including neuroimaging, psychometric, clinical, smartphone, and omics datasets. While focusing on methodological challenges (explainability, robustness, multimodal data integration, causality, etc.), the main goal is to enhance early diagnosis, predict disease progression, and personalize treatment for neurological and psychiatric diseases in diverse clinical settings. We offer an exciting and supportive environment with access to state-of-the-art compute facilities, along with mentoring and career advice from experienced faculty. Hertie AI closely collaborates with the world-class AI ecosystem in Tübingen (e.g. Cyber Valley, Cluster of Excellence “Machine Learning in Science”, Tübingen AI Center).
Cameron Buckner
The Department of Philosophy in the College of Liberal Arts and Sciences at the University of Florida invites applications for a Post-doctoral Associate to work on research projects in the philosophy and ethics of artificial intelligence led by Dr. Cameron Buckner, Professor of Philosophy and the Donald F. Cronin Chair in the Humanities, beginning August 16, 2025. We are especially interested in individuals with both a philosophical background and an understanding of recent machine learning technologies to work on topics related to explainability, interpretability, and/or the use of machine learning methods to model human cognition, as well as related ethical and epistemic issues. The department has an established strength in the philosophy of AI, and the associate will have the opportunity to interact and potentially collaborate with other department members working in this area, including David Grant, Duncan Purves, and Amber Ross, as well as numerous AI researchers in other disciplines. The University of Florida has for the last several years been engaged in an ambitious artificial intelligence initiative for research and teaching, including interdisciplinary research. The initiative includes access to HiPerGator, one of the most powerful high-performance computers at a US public university, and NaviGator AI, an API providing easy access to many state-of-the-art Large Language Models and multimodal generative AI systems.
Steve Schneider
The School of Computer Science and Electronic Engineering is seeking to recruit a full-time Lecturer in Natural Language Processing to grow our AI research. The School is home to two established research centres with expertise in AI and Machine Learning: the Computer Science Research Centre and the Centre for Vision, Speech and Signal Processing (CVSSP). This post is aligned to the Nature Inspired Computing and Engineering group within Computer Science. This role encourages applicants from the areas of natural language processing, including language modelling, language generation (machine translation/summarisation), explainability and reasoning in NLP, and/or aligned multimodal challenges for NLP (vision-language, audio-language, and so on), and we are particularly interested in candidates who enhance our current strengths and bring complementary areas of AI expertise. Surrey has an established international reputation in AI research, ranked 1st in the UK for computer vision and top 10 for AI, computer vision, machine learning, and natural language processing (CSRankings.org), and 7th in the UK for REF2021 outputs in Computer Science research. Computer Science and CVSSP are at the core of the Surrey Institute for People-Centred AI (PAI), established in 2021 as a pan-University initiative which brings together leading AI research with cross-discipline expertise across health, social, behavioural, and engineering sciences, and business, law, and the creative arts to shape future AI to benefit people and society. PAI leads a portfolio of £100m in grant awards, including major research activities in creative industries and healthcare, and two doctoral training programmes with funding for over 100 PhD researchers: the UKRI AI Centre for Doctoral Training in AI for Digital Media Inclusion, and the Leverhulme Trust Doctoral Training Network in AI-Enabled Digital Accessibility.
A Game Theoretical Framework for Quantifying Causes in Neural Networks
Which nodes in a brain network causally influence one another, and how do such interactions utilize the underlying structural connectivity? One of the fundamental goals of neuroscience is to pinpoint such causal relations. Conventionally, these relationships are established by manipulating a node while tracking changes in another node. A causal role is then assigned to the first node if this intervention leads to a significant change in the state of the tracked node. In this presentation, I use a series of intuitive thought experiments to demonstrate the methodological shortcomings of the current ‘causation via manipulation’ framework. Specifically, a node might causally influence another, but by how much, and through which mechanistic interactions? Establishing a causal relationship, however reliable, therefore does not by itself provide a proper causal understanding of the system, because there is often a wide range of causal influences that need to be adequately decomposed. To do so, I introduce a game-theoretical framework called Multi-perturbation Shapley value Analysis (MSA). I then present our work in which we applied MSA to an Echo State Network (ESN), quantified how much its nodes influence each other, and compared these measures with the underlying synaptic strengths. We found that: 1. Even though the network itself was sparse, every node could causally influence other nodes; in this case, a mere elucidation of causal relationships did not provide any useful information. 2. Full knowledge of the structural connectome did not provide a complete causal picture of the system either, since nodes frequently influenced each other indirectly, that is, via intermediate nodes. Our results show that merely elucidating causal contributions in complex networks such as the brain is not sufficient to draw mechanistic conclusions. Moreover, quantifying causal interactions requires a systematic and extensive manipulation framework. The framework put forward here benefits from employing neural network models and, in turn, provides explainability for them.
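To make the idea concrete, below is a minimal sketch of the permutation-sampling Shapley estimator that underlies MSA, applied to a toy echo-state-style reservoir. The network setup, the function names (run_network, shapley_contributions), and the choice of the target node's mean activity as the quantity being decomposed are assumptions made for illustration, not the presenter's implementation.

```python
# Illustrative sketch of Multi-perturbation Shapley value Analysis (MSA)
# on a toy echo-state-style network (all names and parameters are assumptions).
import numpy as np

rng = np.random.default_rng(0)
N = 8                                                        # reservoir size (toy example)
W = rng.normal(0, 0.3, (N, N)) * (rng.random((N, N)) < 0.3)  # sparse recurrent weights
w_in = rng.normal(0, 0.5, N)                                 # input weights
u = rng.normal(0, 1.0, 100)                                  # scalar input sequence
target = 0                                                   # node whose activity we attribute to the others

def run_network(lesioned):
    """Run the reservoir with the given set of nodes silenced; return the
    mean activity of the target node (the quantity being decomposed)."""
    x = np.zeros(N)
    mask = np.ones(N)
    mask[list(lesioned)] = 0.0
    trace = []
    for t in range(len(u)):
        x = np.tanh(W @ (x * mask) + w_in * u[t]) * mask
        trace.append(x[target])
    return float(np.mean(trace))

def shapley_contributions(players, n_perms=200):
    """Monte Carlo Shapley estimate: average marginal effect of restoring
    each node, over random orderings of the other nodes."""
    phi = {p: 0.0 for p in players}
    for _ in range(n_perms):
        order = rng.permutation(players)
        lesioned = set(players)            # start with all candidate nodes silenced
        prev = run_network(lesioned)
        for p in order:
            lesioned.remove(p)             # restore node p
            curr = run_network(lesioned)
            phi[p] += curr - prev          # marginal contribution of p in this ordering
            prev = curr
    return {p: v / n_perms for p, v in phi.items()}

players = [i for i in range(N) if i != target]
print(shapley_contributions(players))
```

In an actual MSA study, the value function would typically be a trained readout's task performance rather than the raw activity of a single node, and the perturbation scheme would follow the experiment's lesioning protocol; the permutation-sampling structure of the attribution, however, stays the same.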
Seeing things clearly: Image understanding through hard-attention and reasoning with structured knowledge
In this talk, Jonathan aims to frame the current challenges of explainability and understanding in ML-driven approaches to image processing, and to show how explicit inference techniques might address them.