SancScreen: Towards a real-world dataset for evaluating explainability methods

Authors: Jakobs, Matthias; Kotthaus, Helena; Röder, Ines; Baritz, Maximilian
Booktitle: Lernen. Wissen. Daten. Analysen.
Year: 2022

Citation information

Jakobs, Matthias; Kotthaus, Helena; Röder, Ines; Baritz, Maximilian: SancScreen: Towards a real-world dataset for evaluating explainability methods. In: Lernen. Wissen. Daten. Analysen., 2022. https://www.semanticscholar.org/paper/SancScreen%3A-Towards-a-Real-world-Dataset-for-Jakobs-Kotthaus/465c6e896e5e2169b47ec756308e5aa4bb59c46d

Quantitatively evaluating explainability methods is a notoriously hard endeavor. One reason is the lack of real-world benchmark datasets that contain local feature-importance annotations by domain experts. We present SancScreen, a dataset from the domain of financial sanction screening. It allows for both evaluating explainability methods and uncovering errors made during model training. We showcase two possible ways to use the dataset for evaluating and debugging a Random Forest and a Neural Network model. For evaluation, we compare a total of 8 configurations of state-of-the-art explainability methods to the expert annotations.
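
The evaluation idea in the abstract, scoring explainability methods by their agreement with expert feature-importance annotations, could look roughly like the sketch below. Everything in it is an illustrative assumption rather than the authors' protocol: the synthetic stand-in for the SancScreen data, the hypothetical expert_importance array, the choice of SHAP as one explainability configuration, and Spearman rank correlation as the agreement metric.

import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
import shap  # pip install shap

rng = np.random.default_rng(0)

# Synthetic stand-in for SancScreen features and labels (hypothetical shapes).
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# Hypothetical per-instance expert annotations: here, features 0 and 3 matter.
expert_importance = np.abs(X) * np.array([1.0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local attributions from one explainability configuration (SHAP here).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
if isinstance(shap_values, list):          # older shap: one array per class
    shap_values = shap_values[1]
elif np.ndim(shap_values) == 3:            # newer shap: (samples, features, classes)
    shap_values = shap_values[:, :, 1]

# Agreement score: per-instance Spearman rank correlation between the
# magnitude of the attributions and the expert scores, averaged over instances.
scores = [spearmanr(np.abs(s), e)[0] for s, e in zip(shap_values, expert_importance)]
print(f"mean rank agreement: {np.nanmean(scores):.3f}")

A higher mean rank correlation would indicate closer agreement between a method's local attributions and the expert annotations; the debugging use case mentioned in the abstract could reuse the same machinery to flag instances where model and experts strongly disagree.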