Explainability Methods from Machine Learning Detect Important Drugs' Atoms in Drug-Target Interactions


Abstract

Predicting drug-target interactions (DTI) with graph neural networks (GNNs) is hindered by their lack of interpretability. To address this, we benchmark four explainable artificial intelligence (XAI) attribution methods on GNN models trained for kinase and G-protein-coupled receptor (GPCR) targets. We assess the methods' consistency through atom-level intersection over union (IoU) and validate their biological relevance by mapping attributed atoms to three-dimensional (3D) protein-ligand structures. While consistency across methods was modest, consensus attributions were highly enriched for atoms directly contacting the binding pocket: up to 76% lay within 2 Å in the kinase-inhibitor complexes. Notably, these attributed atoms were frequently found contacting experimentally important regulatory residues, such as those in the DFG motif. This indicates that XAI methods, despite their disagreements, can identify chemically meaningful ligand features, providing a foundation for developing more interpretable GNNs in drug discovery.
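The atom-level IoU mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each attribution method has already been reduced to a set of top-ranked atom indices, and the method names in the comments are illustrative only.

```python
def atom_iou(atoms_a, atoms_b):
    """Intersection-over-union between two sets of attributed atom indices."""
    a, b = set(atoms_a), set(atoms_b)
    if not a and not b:
        return 1.0  # both methods attribute nothing: treat as full agreement
    return len(a & b) / len(a | b)

# Hypothetical top-5 attributed atoms from two attribution methods
method_1 = [0, 3, 5, 7, 12]   # e.g. atoms highlighted by one XAI method
method_2 = [3, 5, 7, 9, 14]   # e.g. atoms highlighted by another
print(atom_iou(method_1, method_2))  # 3 shared atoms / 7 in the union ≈ 0.43
```

An IoU of 1.0 means two methods highlight exactly the same atoms; values well below 1.0, as reported here, reflect the modest cross-method consistency the abstract describes.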
