Maximum Entropy Baseline for Integrated Gradients

Integrated Gradients (IG) is one of the most popular explainability methods, yet the choice of baseline remains ambiguous, which can seriously impair the credibility of its explanations. This study proposes a new uniform baseline, the Maximum Entropy Baseline, which is consistent with the "uninformative" property that IG requires of baselines. In addition, we propose an improved ablation-based evaluation approach incorporating the new baseline, in which information conservativeness is maintained. We further explain the linear-transformation invariance of IG baselines from an information-theoretic perspective. Finally, we assess the reliability of the explanations generated by different explainability methods and different IG baselines through extensive evaluation experiments.
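To make the role of the baseline concrete, the sketch below approximates IG for a toy differentiable model with a Riemann sum, using a baseline drawn from a uniform distribution (the maximum-entropy distribution on a bounded input range). This is an illustrative sketch only; the model, the sampling recipe, and all function names are assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable model: a logistic score F(x) = sigmoid(w . x).
# (Hypothetical stand-in for a trained network.)
w = rng.normal(size=4)

def model(x):
    return 1.0 / (1.0 + np.exp(-x @ w))

def model_grad(x):
    # Analytic gradient dF/dx of the logistic model.
    s = model(x)
    return s * (1.0 - s) * w

def integrated_gradients(x, baseline, steps=64):
    # IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a(x - x')) da,
    # approximated here by a midpoint Riemann sum over `steps` points.
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.stack(
        [model_grad(baseline + a * (x - baseline)) for a in alphas]
    )
    return (x - baseline) * grads.mean(axis=0)

x = rng.uniform(0.0, 1.0, size=4)

# For inputs bounded in [0, 1], the maximum-entropy distribution is
# uniform, so the baseline is drawn from U(0, 1) (an assumption for
# illustration, not necessarily the paper's construction).
baseline = rng.uniform(0.0, 1.0, size=4)

attr = integrated_gradients(x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
print(np.allclose(attr.sum(), model(x) - model(baseline), atol=1e-3))
```

The final check illustrates IG's completeness axiom, which holds for any baseline; the baseline choice changes how the difference F(x) − F(x′) is distributed across features, which is exactly why an "uninformative" baseline matters.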

  • Published in:
    International Joint Conference on Neural Networks
  • Type:
    Inproceedings
  • Authors:
    Tan, Hanxiao
  • Year:
    2023

Citation information

Tan, Hanxiao: Maximum Entropy Baseline for Integrated Gradients. International Joint Conference on Neural Networks, 2023. https://ieeexplore.ieee.org/document/10191554

Associated Lamarr Researchers

Hanxiao Tan – Lamarr Institute for Machine Learning (ML) and Artificial Intelligence (AI)