Put it to the Test: Getting Serious about Explanation in Explainable Artificial Intelligence

Artificial Intelligence (AI) has become a topic of major interest to philosophers of science. Among the issues commonly discussed is AI's opacity. To remedy opacity, scientists have provided methods commonly subsumed under the label 'eXplainable Artificial Intelligence' (XAI) that aim to make AI and its outputs 'interpretable' and 'explainable'. However, there is little interaction between developments in XAI and philosophical debates on scientific explanation. We here improve on this situation and argue for a descriptive and a normative thesis: (i) When suitably embedded into scientific research processes, XAI methods' outputs can facilitate genuine scientific understanding. (ii) In order for XAI outputs to fulfill this function, they should be made testable. We support our theses by building on recent and long-standing ideas from philosophy of science, by comparing them to a recent framework from the XAI community, and by showcasing their applicability to case studies from the life sciences.

Citation information

Boge, Florian J.; Mosig, Axel: Put it to the Test: Getting Serious about Explanation in Explainable Artificial Intelligence, Workshop on Ethics of AI (Un-)Explainability, 2025, https://philsci-archive.pitt.edu/25303/