Re-interpreting Rules Interpretability

Trustworthy machine learning requires a high level of interpretability, yet many models are inherently black boxes. Training interpretable models instead, or using them to mimic the black-box model, seems like a viable solution. In practice, however, these interpretable models are still unintelligible due to their size and complexity. In this paper, we present an approach to explaining the logic of large interpretable models that can be represented as sets of logical rules by a simple, and thus intelligible, descriptive model. The coarseness of this descriptive model and its fidelity to the original model can be controlled, so that a user can understand the original model at varying levels of depth. We showcase and discuss this approach on three real-world problems from healthcare, material science, and finance.
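
The idea can be illustrated with a minimal, hypothetical sketch (not the paper's actual algorithm): treat an unpruned decision tree as a large rule-based model, fit a shallow surrogate tree to its predictions as a stand-in for the simple descriptive model, use the depth limit as the coarseness knob, and report agreement with the original model as fidelity. The sketch assumes scikit-learn; the dataset, the depth-2 setting, and all variable names are illustrative only.

```python
# Hypothetical illustration, assuming scikit-learn; not the method from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# "Large" interpretable model: an unpruned tree whose rule set is hard to read.
full_model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Coarser descriptive model: a shallow tree trained to mimic the full model.
# max_depth controls coarseness; agreement with the full model measures fidelity.
coarse_model = DecisionTreeClassifier(max_depth=2, random_state=0)
coarse_model.fit(X, full_model.predict(X))

fidelity = (coarse_model.predict(X) == full_model.predict(X)).mean()
print(f"rules (leaves) in full model:   {full_model.get_n_leaves()}")
print(f"rules (leaves) in coarse model: {coarse_model.get_n_leaves()}")
print(f"fidelity of coarse to full:     {fidelity:.3f}")
print(export_text(coarse_model))  # short, human-readable rule description
```

Raising `max_depth` yields a finer description with higher fidelity, which mirrors the trade-off between coarseness and fidelity described in the abstract.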

  • Published in:
    International Journal of Data Science and Analytics
  • Type:
    Article
  • Authors:
    Adilova, Linara; Kamp, Michael; Andrienko, Gennady; Andrienko, Natalia
  • Year:
    2023

Citation information

Adilova, Linara; Kamp, Michael; Andrienko, Gennady; Andrienko, Natalia: Re-interpreting Rules Interpretability. International Journal of Data Science and Analytics, 2023. https://link.springer.com/article/10.1007/s41060-023-00398-5

Associated Lamarr Researchers

Linara Adilova

Author

Prof. Dr. Gennady Andrienko

Principal Investigator, Human-centered AI Systems

Prof. Dr. Natalia Andrienko

Area Chair, Human-centered AI Systems