On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values

Feature bagging models have proven their practical usefulness in various contexts, among them outlier detection, where they build ensembles that reliably assign outlier scores to data samples. However, the resulting outlier detection methods remain largely uninterpretable. Among the standard approaches for interpreting black-box models, Shapley values clarify the role of each individual input. However, Shapley values incur high computational runtimes, which restricts their use to rather low-dimensional applications. We propose bagged Shapley values, a method for interpreting feature bagging ensembles, especially for outlier detection. The method not only assigns local importance scores to each feature of the original space, increasing interpretability, but also resolves the computational issue: the bagged Shapley values can be computed exactly in polynomial time.
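To illustrate the general setting, the following is a minimal Python sketch of explaining a feature-bagging outlier ensemble at the level of the original features. It is not the paper's exact bagged Shapley computation: the choice of base detector (scikit-learn's LocalOutlierFactor), the random feature subsets, and the assumption that each detector's score is split equally among the features it was trained on are all illustrative simplifications.

```python
# Illustrative sketch (assumptions noted above), not the paper's exact method:
# a feature-bagging outlier ensemble whose per-detector scores are attributed
# back to the original features.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))          # toy data: 200 samples, 8 features
X[:5] += 6                             # shift the first 5 samples to act as outliers

n_detectors, subset_size = 20, 3
ensemble_score = np.zeros(X.shape[0])
feature_importance = np.zeros_like(X)

for _ in range(n_detectors):
    subset = rng.choice(X.shape[1], size=subset_size, replace=False)
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(X[:, subset])
    score = -lof.negative_outlier_factor_      # higher = more outlying
    ensemble_score += score / n_detectors
    # assumption: spread this detector's score equally over the features it saw
    feature_importance[:, subset] += score[:, None] / (subset_size * n_detectors)

print("top outliers:", np.argsort(ensemble_score)[-5:])
print("feature attribution of sample 0:", feature_importance[0].round(3))
```

By construction, each sample's feature attributions sum to its ensemble score, mirroring the additivity of Shapley values, and the cost grows linearly with the number of base detectors rather than exponentially with the number of features, which is the kind of efficiency the polynomial-time result refers to.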

Citation information

Klüttermann, Simon; Balestra, Chiara; Müller, Emmanuel: On the Efficient Explanation of Outlier Detection Ensembles Through Shapley Values. In: Advances in Knowledge Discovery and Data Mining (PAKDD 2024), 2024. https://link.springer.com/chapter/10.1007/978-981-97-2259-4_4

Associated Lamarr Researchers


Prof. Dr. Emmanuel Müller

Principal Investigator, Trustworthy AI