Harnessing Prior Knowledge for Explainable Machine Learning: An Overview
The application of complex machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data and require additional information about the context. We argue that harnessing prior knowledge improves the accessibility of explanations. We present an overview of integrating prior knowledge into machine learning systems in order to improve explainability. We introduce a categorization of current research into three main categories: integrating knowledge into the machine learning pipeline, integrating knowledge into the explainability method, and deriving knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions.
- Published in: IEEE Conference on Secure and Trustworthy Machine Learning
- Type: Inproceedings
- Authors: Beckh, Katharina; Müller, Sebastian; Jakobs, Matthias; Toborek, Vanessa; Tan, Hanxiao; Fischer, Raphael; Welke, Pascal; Houben, Sebastian; von Rueden, Laura
- Year: 2023
Citation information
Beckh, Katharina; Müller, Sebastian; Jakobs, Matthias; Toborek, Vanessa; Tan, Hanxiao; Fischer, Raphael; Welke, Pascal; Houben, Sebastian; von Rueden, Laura: Harnessing Prior Knowledge for Explainable Machine Learning: An Overview, IEEE Conference on Secure and Trustworthy Machine Learning, 2023, https://ieeexplore.ieee.org/document/10136139
@Inproceedings{Beckh.etal.2023a,
author={Beckh, Katharina and Müller, Sebastian and Jakobs, Matthias and Toborek, Vanessa and Tan, Hanxiao and Fischer, Raphael and Welke, Pascal and Houben, Sebastian and von Rueden, Laura},
title={Harnessing Prior Knowledge for Explainable Machine Learning: An Overview},
booktitle={IEEE Conference on Secure and Trustworthy Machine Learning},
url={https://ieeexplore.ieee.org/document/10136139},
year={2023},
abstract={The application of complex machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data, requiring additional information about the context. We argue that harnessing prior knowledge improves the accessibility of explanations. We hereby present an overview of integrating prior knowledge into machine...}}