Synergizing multimodal data and fingerprint space exploration for mechanism of action prediction


Abstract

MOTIVATION: Effective computational methods for predicting the mechanism of action (MoA) of compounds are essential in drug discovery. Current MoA prediction models mainly utilize the structural information of compounds. However, high-throughput screening technologies have generated cell perturbation data well suited to MoA prediction, a resource frequently disregarded by most current approaches. Moreover, exploring the commonalities and specificities among different fingerprint representations remains challenging.

RESULTS: In this paper, we propose IFMoAP, a model that integrates cell perturbation images and fingerprint data for MoA prediction. First, we modify ResNet to extract features from five-channel cell perturbation images and establish a granularity-level attention mechanism to combine coarse- and fine-grained features. To learn both common and specific fingerprint features, we introduce an FP-CS module that projects four fingerprint embeddings into distinct spaces and incorporates two loss functions for effective learning. Finally, we construct two independent classifiers, one based on image features and one on fingerprint features, and weight their two prediction scores to produce the final prediction. Experimental results demonstrate that our model achieves its highest accuracy of 0.941 when using multimodal data. Comparisons with other methods and further analyses highlight the superiority of the proposed model and the complementary characteristics of the multimodal data.

AVAILABILITY AND IMPLEMENTATION: The source code is available at https://github.com/s1mplehu/IFMoAP. The raw Cell Painting image data can be accessed from Figshare (https://doi.org/10.17044/scilifelab.21378906).
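The abstract states that two independent classifiers, one over image features and one over fingerprint features, each produce a prediction score, and that the final MoA prediction weights the two scores. A minimal sketch of such late fusion is shown below; the scalar weight `alpha` and the function names are illustrative assumptions, not the paper's actual learned weighting scheme.

```python
# Hypothetical late-fusion sketch: combine per-class scores from an
# image-based classifier and a fingerprint-based classifier with a
# single scalar weight. IFMoAP's real weighting may be learned and
# more elaborate; this only illustrates the idea of weighting two
# prediction scores.

def fuse_scores(image_scores, fp_scores, alpha=0.5):
    """Weighted combination of per-MoA-class probability scores.

    image_scores, fp_scores: equal-length lists of class probabilities.
    alpha: weight on the image-based classifier (assumed fixed scalar).
    """
    assert len(image_scores) == len(fp_scores)
    return [alpha * p_img + (1.0 - alpha) * p_fp
            for p_img, p_fp in zip(image_scores, fp_scores)]

def predict_moa(image_scores, fp_scores, alpha=0.5):
    """Return the index of the MoA class with the highest fused score."""
    fused = fuse_scores(image_scores, fp_scores, alpha)
    return max(range(len(fused)), key=fused.__getitem__)

# Example with three MoA classes: the image model favors class 0,
# the fingerprint model favors class 2; with alpha=0.6 the image
# evidence dominates and class 0 is predicted.
print(predict_moa([0.7, 0.2, 0.1], [0.1, 0.2, 0.7], alpha=0.6))
```

With `alpha=0.6`, the fused scores are `[0.46, 0.2, 0.34]`, so class 0 wins; lowering `alpha` below 0.5 would instead let the fingerprint classifier's preference for class 2 prevail, which is how such a weight trades off the two modalities.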
