Evaluating Sensitivity Consistency of Explanations

While the performance of deep neural networks advances rapidly, their reliability is receiving increasing attention. Explainability methods are among the most relevant tools for enhancing reliability, mainly by highlighting the input features that are important for a prediction. Although numerous explainability methods have been proposed, their assessment remains challenging due to the absence of ground truth. Several existing studies propose evaluation methods that target a single aspect, e.g., fidelity or robustness; since each typically addresses only one property of explanations, additional assessment perspectives contribute to a more complete evaluation framework. This work proposes an evaluation method based on a novel perspective called sensitivity consistency: the intuition behind it is that features and parameters which strongly impact the predictions should also strongly impact the explanations, and vice versa. Extensive experiments on different datasets and models evaluate popular explainability methods and provide qualitative and quantitative results. Our approach complements existing evaluation frameworks and aims to facilitate the establishment of a widely accepted explanation evaluation methodology.
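The listing does not include reference code, but the core intuition can be illustrated with a minimal, hypothetical sketch: measure how strongly each input feature influences the model's prediction, and check whether an explanation method's attributions agree with that influence. The function names (`prediction_sensitivity`, `sensitivity_consistency`), the single-feature occlusion scheme, and the use of Spearman rank correlation are assumptions for illustration only and are not taken from the paper.

```python
import numpy as np
from scipy.stats import spearmanr


def prediction_sensitivity(model, x, baseline=0.0):
    """Per-feature prediction sensitivity: absolute change in the model
    output when a single feature is replaced by a baseline value.
    (Illustrative occlusion scheme; `model` maps a 1-D vector to a scalar.)"""
    base_score = model(x)
    sens = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        x_pert = x.copy()
        x_pert[i] = baseline  # occlude one feature at a time
        sens[i] = abs(model(x_pert) - base_score)
    return sens


def sensitivity_consistency(model, x, attribution):
    """Rank correlation between per-feature prediction sensitivity and the
    magnitude of the attribution assigned by an explanation method.
    Higher values mean the features that strongly influence the prediction
    are also the ones the explanation marks as important."""
    sens = prediction_sensitivity(model, x)
    rho, _ = spearmanr(sens, np.abs(attribution))
    return rho


if __name__ == "__main__":
    # Toy usage: a linear "model" whose true feature influence is its weights.
    rng = np.random.default_rng(0)
    w = rng.normal(size=10)
    model = lambda v: float(w @ v)
    x = rng.normal(size=10)
    good_attr = w * x                 # gradient*input, exact for a linear model
    noisy_attr = rng.normal(size=10)  # an uninformative explanation
    print(sensitivity_consistency(model, x, good_attr))   # close to 1
    print(sensitivity_consistency(model, x, noisy_attr))  # near 0
```

In this toy setting, a faithful attribution (gradient times input for a linear model) yields a consistency score near 1, while random attributions score near 0, which is the kind of separation such a consistency measure is meant to expose.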

  • Published in:
    Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
  • Type:
    Inproceedings
  • Authors:
    Tan, Hanxiao
  • Year:
    2025

Citation information

Tan, Hanxiao: Evaluating Sensitivity Consistency of Explanations. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025.

Associated Lamarr researchers


Hanxiao Tan
