Understanding uncertainty in deep learning builds confidence
In a contribution to Volume 1 of AILSCI, Lazic & Williams address the issue of uncertainty in machine learning (ML), which has so far received little consideration in interdisciplinary research and drug discovery. In standard ML for compound classification or regression, only a single output value is produced for a test instance, with no additional information concerning the confidence of the prediction or the level of uncertainty associated with it.
Assessing the confidence or uncertainty of predictions adds another layer of information to ML that becomes particularly important for judging its results in interdisciplinary settings. Moreover, if ML supports clinical decisions such as the prioritization of treatment strategies, one of the central applications of AI in medicine, uncertainty assessment becomes essential. Hence, going forward, quantifying the uncertainty of predictions is an important topic for ML and especially deep learning (DL). Together with approaches for rationalizing ML/DL decisions, i.e., interpretable or explainable AI (XAI), uncertainty information also aids in model interpretation, decreases the black box character of ML/DL, and increases its acceptance in interdisciplinary research settings.
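The contrast described above, a single output value versus a prediction accompanied by an uncertainty estimate, can be illustrated with a minimal sketch. This is not the method discussed by Lazic & Williams; it is a generic ensemble-based proxy, where the spread of per-tree predictions in a random forest supplements the usual point prediction. All data here are mock values invented for illustration.

```python
# Minimal sketch (assumed setup, not the article's method): using the spread
# of per-tree predictions in a random forest as a simple uncertainty proxy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                         # mock compound descriptors
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)   # mock activity values

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

x_test = rng.normal(size=(1, 5))
per_tree = np.array([tree.predict(x_test)[0] for tree in model.estimators_])

point_prediction = per_tree.mean()   # the standard single output value
uncertainty = per_tree.std()         # the extra layer: spread across the ensemble
print(f"prediction = {point_prediction:.2f} +/- {uncertainty:.2f}")
```

A large standard deviation flags a test instance on which the ensemble disagrees, which is exactly the kind of additional information the editorial argues standard single-output ML lacks.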
- Published in: Artificial Intelligence in the Life Sciences
- Type: Article
- Authors: Bajorath, Jürgen
- Year: 2022
- Source: https://www.sciencedirect.com/science/article/pii/S2667318522000046?via=ihub
Citation information
Bajorath, Jürgen: Understanding uncertainty in deep learning builds confidence, Artificial Intelligence in the Life Sciences, 2022, 2, 100033, https://www.sciencedirect.com/science/article/pii/S2667318522000046?via=ihub
@Article{Bajorath.2022b,
author={Bajorath, Jürgen},
title={Understanding uncertainty in deep learning builds confidence},
journal={Artificial Intelligence in the Life Sciences},
volume={2},
pages={100033},
url={https://www.sciencedirect.com/science/article/pii/S2667318522000046?via=ihub},
year={2022},
abstract={In a contribution to Volume 1 of AILSCI, Lazic \& Williams address the issue of uncertainty in machine learning (ML) that has so far only been little considered in interdisciplinary research and drug discovery. In standard ML for compound classification or regression, only a single output value is produced for a test instance, with no additional information concerning the confidence of the...}}