Auto Encoding Explanatory Examples with Stochastic Paths

In this paper we ask which factors determine a classifier’s decision-making process, and we uncover such factors by studying the latent codes produced by auto-encoding frameworks. To explain a classifier’s behaviour, we propose a method that provides a series of examples highlighting semantic differences between the classifier’s decisions. These examples are generated through interpolations in latent space. We introduce and formalize the notion of a semantic stochastic path, a suitable stochastic process defined in feature (data) space via latent code interpolations. We then introduce the concept of semantic Lagrangians as a way to incorporate the desired classifier behaviour, and we find that the solution of the associated variational problem highlights differences in the classifier’s decisions. Importantly, within our framework the classifier is treated as a black box; only its evaluations are required.
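The sketch below illustrates the core mechanism described in the abstract: examples are generated by interpolating between latent codes and decoding each point back to data space, while the classifier is queried only as a black box. This is not the authors' code; `encode`, `decode`, and `classify` are placeholder callables for a trained auto-encoder and the classifier under inspection, and the simple linear interpolation stands in for the paper's variationally optimized semantic stochastic paths.

```python
import numpy as np

def latent_interpolation_path(x_start, x_end, encode, decode, classify, n_steps=8):
    """Decode a straight-line path between the latent codes of two inputs and
    record the black-box classifier's output at every decoded point.

    Illustrative only: the paper replaces this linear interpolation with paths
    obtained from a semantic Lagrangian / variational formulation.
    """
    z_start, z_end = encode(x_start), encode(x_end)
    examples, predictions = [], []
    for t in np.linspace(0.0, 1.0, n_steps):
        z_t = (1.0 - t) * z_start + t * z_end   # interpolate in latent space
        x_t = decode(z_t)                        # map back to data (feature) space
        examples.append(x_t)
        predictions.append(classify(x_t))        # classifier used only as a black box
    return np.stack(examples), np.stack(predictions)

# Toy usage with stand-in models: an identity auto-encoder and a linear "classifier".
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    encode = decode = lambda x: x
    classify = lambda x: 1.0 / (1.0 + np.exp(-x @ w))   # sigmoid score
    xs, ps = latent_interpolation_path(rng.normal(size=4), rng.normal(size=4),
                                       encode, decode, classify)
    print(ps)  # classifier scores along the interpolation path
```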

  • Published in:
2020 25th International Conference on Pattern Recognition (ICPR)
  • Type:
    Inproceedings
  • Authors:
    C. Ojeda, R. Sanchez, K. Cvejoski, J. Schuecker, D. Biesner, C. Bauckhage, B. Georgiev
  • Year:
    2021

Citation information

C. Ojeda, R. Sanchez, K. Cvejoski, J. Schuecker, D. Biesner, C. Bauckhage, B. Georgiev: Auto Encoding Explanatory Examples with Stochastic Paths, 2020 25th International Conference on Pattern Recognition (ICPR), 2021, https://doi.org/10.1109/ICPR48806.2021.9413267