Calculation of exact Shapley values for support vector machines with Tanimoto kernel enables model interpretation
The support vector machine (SVM) algorithm is popular in chemistry and drug discovery, but SVM models have black-box character. Their predictions can be interpreted through feature weighting or the model-agnostic Shapley additive explanations (SHAP) formalism, which locally approximates Shapley values (SVs) originating from game theory. We introduce an algorithm termed SV-expressed Tanimoto similarity (SVETA) for the exact calculation of SVs to explain SVM models employing the Tanimoto kernel, the gold standard for the assessment of molecular similarity. For a model system, the exact calculation of SVs is demonstrated. In an SVM-based compound classification task from drug discovery, only a limited correlation between exact SVs and SHAP values is observed, which argues against using the approximate values to rationalize predictions. For exemplary test compounds, atom-based mapping of prioritized features delineates coherent substructures that closely resemble those obtained by analyzing independently derived random forest models, thus providing consistent explanations.
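To illustrate the setting the article addresses, the minimal sketch below trains a scikit-learn SVM classifier on binary molecular fingerprints with a precomputed Tanimoto kernel. It is not an implementation of SVETA or of exact SV calculation; the `tanimoto_kernel` helper, the random toy fingerprints, and all parameter choices are illustrative assumptions.

```python
# Minimal sketch (not SVETA): an SVM with a precomputed Tanimoto kernel
# on binary molecular fingerprints (e.g., folded Morgan/ECFP bit vectors).
import numpy as np
from sklearn.svm import SVC

def tanimoto_kernel(X, Y):
    """Tanimoto similarity between rows of two binary fingerprint matrices.
    Assumes every fingerprint has at least one on-bit."""
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    intersection = X @ Y.T                  # shared on-bits per compound pair
    norm_x = X.sum(axis=1)[:, None]         # on-bits per row of X
    norm_y = Y.sum(axis=1)[None, :]         # on-bits per row of Y
    return intersection / (norm_x + norm_y - intersection)

# Toy binary fingerprints (rows = compounds, columns = features)
X_train = np.random.default_rng(0).integers(0, 2, size=(20, 64))
y_train = np.random.default_rng(1).integers(0, 2, size=20)
X_test = np.random.default_rng(2).integers(0, 2, size=(5, 64))

# SVC with a precomputed kernel: fit on the train/train kernel matrix,
# predict with the test/train kernel matrix.
model = SVC(kernel="precomputed", C=1.0)
model.fit(tanimoto_kernel(X_train, X_train), y_train)
print(model.predict(tanimoto_kernel(X_test, X_train)))
```

Precomputing the kernel keeps the similarity measure explicit, so the same `tanimoto_kernel` function scores test compounds against the training set; the exact SVs computed by SVETA, and their comparison with approximate SHAP values, are described in the article and not reproduced here.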
- Published in: iScience
- Type: Article
- Authors: Feldmann, Christian; Bajorath, Jürgen
- Year: 2022
Citation information
Feldmann, Christian; Bajorath, Jürgen: Calculation of exact Shapley values for support vector machines with Tanimoto kernel enables model interpretation, iScience, 2022, 25, 105023, https://linkinghub.elsevier.com/retrieve/pii/S2589004222012950
@Article{Feldmann.Bajorath.2022a,
author={Feldmann, Christian and Bajorath, Jürgen},
title={Calculation of exact Shapley values for support vector machines with Tanimoto kernel enables model interpretation},
journal={iScience},
volume={25},
pages={105023},
url={https://linkinghub.elsevier.com/retrieve/pii/S2589004222012950},
year={2022},
abstract={The support vector machine (SVM) algorithm is popular in chemistry and drug discovery. SVM models have black box character. Their predictions can be interpreted through feature weighting or the model-agnostic Shapley additive explanations (SHAP) formalism that locally approximates Shapley values (SVs) originating from game theory. We introduce an algorithm termed SV-expressed Tanimoto similarity...}}