Integrating human knowledge for explainable AI
This paper presents a methodology for integrating human expert knowledge into machine learning (ML) workflows to improve both model interpretability and the quality of explanations produced by explainable AI (XAI) techniques. We strive to enhance standard ML and XAI pipelines without modifying underlying algorithms, focusing instead on embedding domain knowledge at two stages: (1) during model development through expert-guided data structuring and feature engineering, and (2) during explanation generation via domain-aware synthetic neighbourhoods. Visual analytics is used to support experts in transforming raw data into semantically richer representations. We validate the methodology in two case studies: predicting COVID-19 incidence and classifying vessel movement patterns. The studies demonstrated improved alignment of models with expert reasoning and better quality of synthetic neighbourhoods. We also explore using large language models (LLMs) to assist experts in developing domain-compliant data generators. Our findings highlight both the benefits and limitations of existing XAI methods and point to a research direction for addressing these gaps.
- Published in: Machine Learning
- Type: Article
- Authors: Eleonora Cappuccio, Bahavathy Kathirgamanathan, Salvatore Rinzivillo, Gennady Andrienko, Natalia Andrienko
- Year: 2025
- Source: https://doi.org/10.1007/s10994-025-06879-x
Citation information:
Cappuccio, E., Kathirgamanathan, B., Rinzivillo, S., Andrienko, G., & Andrienko, N. (2025). Integrating human knowledge for explainable AI. Machine Learning, 114(11), 250. https://doi.org/10.1007/s10994-025-06879-x
@Article{Cappuccio.etal.2025a,
author={Cappuccio, Eleonora and Kathirgamanathan, Bahavathy and Rinzivillo, Salvatore and Andrienko, Gennady and Andrienko, Natalia},
title={Integrating human knowledge for explainable {AI}},
journal={Machine Learning},
volume={114},
number={11},
pages={250},
month={October},
url={https://doi.org/10.1007/s10994-025-06879-x},
year={2025},
abstract={This paper presents a methodology for integrating human expert knowledge into machine learning ({ML}) workflows to improve both model interpretability and the quality of explanations produced by explainable {AI} ({XAI}) techniques. We strive to enhance standard {ML} and {XAI} pipelines without modifying underlying algorithms, focusing instead on embedding domain knowledge at two stages: (1)...}}