Potential inconsistencies or artifacts in deriving and interpreting deep learning models and key criteria for scientifically sound applications in the life sciences

In evaluating recent reports of – or commentaries on – deep learning (DL) applications in drug design and life science research using language models (LMs), graph neural networks (GNNs), or other methods, partly controversial or problematic views, assumptions, or conclusions have been noted. Some of these aspects are discussed, including possible pitfalls and caveats of such DL studies that might not be sufficiently considered and that likely lead to misunderstandings and/or compromise model relevance and impact. In addition, key criteria for meaningful applications of LMs are highlighted at different levels. It is hoped that the discussion will be useful for potential authors in planning, conducting, and analyzing their studies.

  • Published in:
    Artificial Intelligence in the Life Sciences
  • Type:
    Article
  • Authors:
    Bajorath, Jürgen
  • Year:
    2024

Citation information

Bajorath, Jürgen: Potential inconsistencies or artifacts in deriving and interpreting deep learning models and key criteria for scientifically sound applications in the life sciences, Artificial Intelligence in the Life Sciences, 2024, 5, 100093, https://www.sciencedirect.com/science/article/pii/S2667318523000375

Associated Lamarr researchers


Prof. Dr. Jürgen Bajorath

Area Chair Life Sciences, Lamarr Institute for Machine Learning (ML) and Artificial Intelligence (AI)