Check Mate: A Sanity Check for Trustworthy AI

Methods of Explainable AI (XAI) aim to illuminate the decision-making process of complex machine learning models by generating explanations. However, for most real-world data there is no "ground truth" explanation, which makes evaluating the correctness of XAI methods and model decisions difficult; often, visual assessment or anecdotal evidence is the only available form of evaluation. In this work we propose to use the game of chess as a source of "near ground-truth" (NGT) explanations, against which XAI methods can be compared using various metrics, serving as a "sanity check". We demonstrate this process in an experiment with a deep convolutional neural network, to which we apply a range of commonly used XAI methods. As our main contribution, we publish our dataset of 30 million chess positions along with their NGT explanations for free use in XAI research.
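The abstract does not specify which comparison metrics the paper uses, but the core idea of scoring an XAI attribution against an NGT explanation can be illustrated with a minimal sketch. The function below, with hypothetical names `saliency` and `ngt_mask` and NumPy/SciPy as stand-in tooling, computes two plausible agreement scores between a per-square saliency map and a binary NGT relevance mask; it is an assumption-laden illustration, not the authors' actual evaluation code.

```python
import numpy as np
from scipy.stats import spearmanr

def sanity_check(saliency: np.ndarray, ngt_mask: np.ndarray) -> dict:
    """Score a saliency map against a binary NGT relevance mask.

    saliency : per-square attribution scores, e.g. shape (8, 8) for a chess board
    ngt_mask : 1 where the NGT explanation marks a square as relevant, else 0
    """
    s, m = saliency.ravel(), ngt_mask.ravel()

    # Rank agreement: do highly attributed squares coincide with relevant ones?
    rho, _ = spearmanr(s, m)

    # Top-k overlap: fraction of the k most salient squares that are NGT-relevant,
    # where k is the number of relevant squares in the mask
    k = int(m.sum())
    topk = np.argsort(s)[-k:]
    precision_at_k = m[topk].mean() if k > 0 else float("nan")

    return {"spearman": rho, "precision_at_k": precision_at_k}

# Hypothetical usage with random stand-in data
rng = np.random.default_rng(0)
saliency = rng.random((8, 8))
ngt_mask = (rng.random((8, 8)) > 0.8).astype(float)
print(sanity_check(saliency, ngt_mask))
```

A perfectly faithful XAI method would concentrate its attribution mass on the NGT-relevant squares, driving both scores toward 1; scores near 0 would indicate that the explanation fails the sanity check.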

  • Published in: Lernen, Wissen, Daten, Analysen
  • Type: Inproceedings
  • Authors: Mücke, Sascha; Pfahler, Lukas
  • Year: 2022

Citation information

Mücke, Sascha; Pfahler, Lukas: Check Mate: A Sanity Check for Trustworthy AI. In: Lernen, Wissen, Daten, Analysen, 2022.

Associated Lamarr Researchers


Sascha Mücke
