AI Colloquium with Prof. Barbara Hammer on Explainable AI

Challenges and Perspectives of Explainable AI

Guest Speaker: Prof. Barbara Hammer, Bielefeld University

Explainable AI (XAI) is a key component of trustworthy AI—especially in safety-critical domains. Yet it often remains unclear what information XAI methods actually provide and how reliable their explanations are.

In her talk, Prof. Barbara Hammer will offer insights into current research on explaining complex machine learning models. Using an example from critical infrastructure, she will illustrate how XAI technologies can be applied in practice. She will then discuss central challenges such as uniqueness, plausibility, and interpretability of explanations.

A particular focus will be on feature-based methods like SHAP, which are grounded in cooperative game theory. Prof. Hammer will explain how interactions between features can lead to varying explanation outcomes and how these effects can be systematically captured—especially in large language models and multimodal AI systems.
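As background for the talk's game-theoretic angle: SHAP attributes a model's output to its input features via Shapley values from cooperative game theory. The minimal sketch below (plain Python, no external libraries; the toy model and coalition value function are illustrative assumptions, not from the talk) computes exact Shapley values for a tiny model with an interaction term, showing how the interaction's contribution is split between the participating features.

```python
from itertools import combinations
from math import factorial

def shapley_values(v, n):
    """Exact Shapley values for a value function v over n players.

    v maps a frozenset of player indices to a number; phi[i] is the
    weighted average of i's marginal contribution over all coalitions.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(S | {i}) - v(S))
    return phi

# Toy model with a feature interaction: f(x) = x0 + 2*x1 + x0*x1.
# A coalition's value is the model output with absent features set to 0
# (one common, simplifying choice of baseline).
x = [1.0, 1.0]

def v(S):
    x0 = x[0] if 0 in S else 0.0
    x1 = x[1] if 1 in S else 0.0
    return x0 + 2 * x1 + x0 * x1

print(shapley_values(v, 2))  # [1.5, 2.5]: the interaction term is split equally
```

The attributions sum to the model output (4.0), and the interaction term `x0*x1` is divided evenly between both features; with other baselines or value functions the split, and thus the explanation, changes, which is one source of the varying outcomes mentioned above.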

About the Speaker

Prof. Dr. Barbara Hammer leads the Machine Learning Group at CITEC, Bielefeld University. Her research focuses on trustworthy AI, lifelong learning, and hybrid AI approaches. She is involved in several international research initiatives and scientific committees, including the ERC Synergy Grant WaterFutures and Academia Europaea.

Details

Date

December 11, 2025

10:15 - 11:45

Location

Dortmund

JvF25/3-303 - Conference Room (Lamarr/RC Trust Dortmund)

44227 Dortmund

Topics

Trustworthy Artificial Intelligence, Science

Tags

Event