The Generalizability of Explanations
Due to the absence of ground truth, the objective evaluation of explainability methods is an essential research direction. To date, the vast majority of evaluations fall into three categories: human evaluation, sensitivity testing, and sanity checks. This work proposes a novel evaluation methodology from the perspective of generalizability. We employ an encoding-decoding module to learn the distributions of the generated explanations and observe both their learnability and the plausibility of the learned distributional features. We first illustrate the evaluation idea of the proposed approach on LIME, and then quantitatively evaluate several popular explainability methods. We also find that smoothing the explanations with SmoothGrad significantly enhances their generalizability.
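As background for the last sentence of the abstract, the following is a minimal sketch of the SmoothGrad idea (Smilkov et al.): an attribution is averaged over several Gaussian-noised copies of the input, which smooths out noisy gradient explanations. The toy model, function names, and parameter values here are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def smoothgrad(grad_fn, x, n_samples=50, noise_scale=0.1, seed=0):
    """Average an attribution over noisy copies of the input (SmoothGrad).

    grad_fn: callable returning the raw attribution (e.g. gradient) for an input.
    noise_scale: noise standard deviation as a fraction of the input's value range.
    """
    rng = np.random.default_rng(seed)
    sigma = noise_scale * (x.max() - x.min())
    grads = [grad_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Toy differentiable model f(x) = sum(x**2), whose exact gradient is 2*x.
grad_fn = lambda x: 2.0 * x
x = np.array([1.0, -2.0, 3.0])
attribution = smoothgrad(grad_fn, x)
```

For a linear-gradient toy model the smoothed attribution stays close to the raw gradient; the averaging only changes the result for models whose gradients fluctuate sharply around the input.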
- Published in: International Joint Conference on Neural Networks (IJCNN)
- Type: Inproceedings
- Year: 2023
- Source: https://ieeexplore.ieee.org/document/10191972
Citation information: Tan, Hanxiao: The Generalizability of Explanations. International Joint Conference on Neural Networks (IJCNN), June 2023, pp. 1--8. https://ieeexplore.ieee.org/document/10191972
@Inproceedings{Tan.2023b,
author={Tan, Hanxiao},
title={The Generalizability of Explanations},
booktitle={International Joint Conference on Neural Networks (IJCNN)},
pages={1--8},
month={June},
url={https://ieeexplore.ieee.org/document/10191972},
year={2023},
abstract={Due to the absence of ground truth, objective evaluation of explainability methods is an essential research direction. So far, the vast majority of evaluations can be summarized into three categories, namely human evaluation, sensitivity testing, and sanity check. This work proposes a novel evaluation methodology from the perspective of generalizability. We employ an encoding-decoding module to...}}