On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values
Feature bagging models have proven practically useful in various contexts, among them outlier detection, where they build ensembles to reliably assign outlier scores to data samples. However, the interpretability of the resulting outlier detection methods remains largely unachieved. Among the standard approaches for interpreting black-box models are Shapley values, which clarify the role of each individual input. However, Shapley values suffer from high computational runtimes that restrict their use to fairly low-dimensional applications. We propose bagged Shapley values, a method to achieve interpretability of feature bagging ensembles, especially for outlier detection. The method not only assigns local importance scores to each feature of the initial space, increasing interpretability, but also solves the computational issue: the bagged Shapley values can be computed exactly in polynomial time.
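The paper's exact construction is given in the full text; as an illustrative sketch only (not the authors' algorithm), the following toy code shows why Shapley values over a feature-bagged ensemble can become tractable: if the ensemble score is the mean of member scores and each member depends only on its small feature bag, the Shapley attribution decomposes additively over members, so exact enumeration is only ever needed within each small bag. All function and variable names here (`exact_shapley`, `bagged_shapley`, the toy value functions) are hypothetical.

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, features):
    """Exact Shapley values by enumerating every coalition of `features`.

    Exponential in len(features), so only feasible for the small
    feature subsets ("bags") that each ensemble member sees."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value_fn(set(coalition) | {f}) - value_fn(set(coalition))
                phi[f] += weight * marginal
    return phi

def bagged_shapley(members, all_features):
    """Attribute an ensemble score (mean of member scores) to features.

    `members` is a list of (feature_bag, value_fn) pairs. Because the
    ensemble score is a mean and each member depends only on its bag,
    Shapley attributions decompose additively: average each feature's
    per-bag attribution (zero for members whose bag excludes it). For
    bags of bounded size, the total cost is polynomial in the number
    of members rather than exponential in the full dimensionality."""
    phi = {f: 0.0 for f in all_features}
    for bag, value_fn in members:
        for f, v in exact_shapley(value_fn, bag).items():
            phi[f] += v / len(members)
    return phi

# Toy ensemble: two members with additive per-feature scores, so each
# feature's Shapley value within a bag equals its own contribution.
w1 = {0: 1.0, 1: 2.0}
w2 = {1: 4.0, 2: 6.0}
members = [
    ([0, 1], lambda S: sum(w1[f] for f in S)),
    ([1, 2], lambda S: sum(w2[f] for f in S)),
]
print(bagged_shapley(members, [0, 1, 2]))  # feature 1 gets (2 + 4) / 2 = 3.0
```

The additive toy value functions make the expected attributions easy to check by hand; real outlier detectors would of course yield non-additive value functions, but the per-bag decomposition argument is unchanged.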
- Published in: Advances in Knowledge Discovery and Data Mining. PAKDD 2024
- Type: Inproceedings
- Authors: Klüttermann, Simon; Balestra, Chiara; Müller, Emmanuel
- Year: 2024
- Source: https://link.springer.com/chapter/10.1007/978-981-97-2259-4_4
Citation information
Klüttermann, Simon; Balestra, Chiara; Müller, Emmanuel: On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values, Advances in Knowledge Discovery and Data Mining. PAKDD 2024, 2024, https://link.springer.com/chapter/10.1007/978-981-97-2259-4_4
@Inproceedings{Kluettermann.etal.2024d,
author={Klüttermann, Simon and Balestra, Chiara and Müller, Emmanuel},
title={On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values},
booktitle={Advances in Knowledge Discovery and Data Mining. PAKDD 2024},
url={https://link.springer.com/chapter/10.1007/978-981-97-2259-4_4},
year={2024},
abstract={Feature bagging models have revealed their practical usability in various contexts, among them in outlier detection, where they build ensembles to reliably assign outlier scores to data samples. However, the interpretability of so-obtained outlier detection methods is far from achieved. Among the standard black-box models interpretability approaches, we find Shapley values that clarify the roles...}}