What do we want from Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research

Authors: M. Langer, D. Oster, T. Speith, H. Hermanns, L. Kästner, E. Schmidt, A. Sesing, K. Baum
Journal: Artificial Intelligence
Year: 2021

Citation information

M. Langer, D. Oster, T. Speith, H. Hermanns, L. Kästner, E. Schmidt, A. Sesing, K. Baum:
What do we want from Explainable Artificial Intelligence (XAI)? – A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research.
Artificial Intelligence, 296 (2021), 103473.
https://doi.org/10.1016/j.artint.2021.103473

Abstract

Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders’ desiderata) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders’ desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders’ desiderata. This model can serve researchers from the variety of disciplines involved in XAI as a common ground. It highlights where there is interdisciplinary potential in the evaluation and development of explainability approaches.