SancScreen: Towards a real-world dataset for evaluating explainability methods
Quantitatively evaluating explainability methods is a notoriously hard endeavor. One reason for this is the lack of real-world benchmark datasets that contain local feature importance annotations done by domain experts. We present SancScreen, a dataset from the domain of financial sanction screening. It allows for both evaluating explainability methods and uncovering errors made during model training. We showcase two possible ways to use the dataset for evaluating and debugging a Random Forest and a Neural Network model. For evaluation, we compare a total of 8 configurations of state-of-the-art explainability methods to the expert annotations.
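To make the evaluation idea concrete, the following is a minimal sketch of comparing local feature attributions from a Random Forest to expert annotations. The synthetic data (X, y), the expert-annotation stand-in E, the TreeSHAP explainer, and the Spearman rank-correlation agreement score are illustrative assumptions, not the paper's actual configurations or metric.

import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
import shap

# Illustrative stand-ins for the SancScreen data: feature matrix X, labels y,
# and per-instance expert feature-importance annotations E (same shape as X).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
E = np.abs(X)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Local attributions from one explainability configuration (here: TreeSHAP).
sv = shap.TreeExplainer(model).shap_values(X)
if isinstance(sv, list):   # older shap versions return a list of per-class arrays
    sv = sv[1]
elif sv.ndim == 3:         # newer versions return (n_samples, n_features, n_classes)
    sv = sv[..., 1]

# Agreement with the expert annotations, e.g. mean Spearman rank correlation
# over all instances.
scores = [spearmanr(sv[i], E[i]).correlation for i in range(len(X))]
print("mean rank correlation:", np.nanmean(scores))

The same comparison could be repeated for other explainer configurations (e.g. different SHAP variants or a Neural Network with gradient-based attributions) to rank them by agreement with the expert annotations.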
- Published in: Lernen. Wissen. Daten. Analysen.
- Type: Inproceedings
- Authors: Jakobs, Matthias; Kotthaus, Helena; Röder, Ines; Baritz, Maximilian
- Year: 2022
Citation information
Jakobs, Matthias; Kotthaus, Helena; Röder, Ines; Baritz, Maximilian: SancScreen: Towards a real-world dataset for evaluating explainability methods, Lernen. Wissen. Daten. Analysen., 2022, https://www.semanticscholar.org/paper/SancScreen%3A-Towards-a-Real-world-Dataset-for-Jakobs-Kotthaus/465c6e896e5e2169b47ec756308e5aa4bb59c46d
@Inproceedings{Jakobs.etal.2022a,
author={Jakobs, Matthias and Kotthaus, Helena and Röder, Ines and Baritz, Maximilian},
title={SancScreen: Towards a real-world dataset for evaluating explainability methods},
booktitle={Lernen. Wissen. Daten. Analysen.},
url={https://www.semanticscholar.org/paper/SancScreen%3A-Towards-a-Real-world-Dataset-for-Jakobs-Kotthaus/465c6e896e5e2169b47ec756308e5aa4bb59c46d},
year={2022},
abstract={Quantitatively evaluating explainability methods is a notoriously hard endeavor. One reason for this is the lack of real-world benchmark datasets that contain local feature importance annotations done by domain experts. We present SancScreen, a dataset from the domain of financial sanction screening. It allows for both evaluating explainability methods and uncovering errors made during model training. We showcase two possible ways to use the dataset for evaluating and debugging a Random Forest and a Neural Network model. For evaluation, we compare a total of 8 configurations of state-of-the-art explainability methods to the expert annotations.}}