Multimodal pre-training models of molecular representation for drug discovery


Abstract

With the great success of large language models in natural language processing, self-supervised pre-training models have emerged as an important technique in drug discovery. In particular, multimodal pre-training models have opened a new avenue for drug discovery, and the experience and ideas from previous work can serve as important reference points for further research. This review therefore summarizes the foundations of multimodal pre-training models and their progress in drug discovery. We emphasize how well particular modalities fit specific network architectures and pre-training tasks, and we summarize the differences and connections among modalities and among pre-training models. Importantly, we identify two growing trends that may guide future research. First, Transformers and graph neural networks are often integrated as encoders and combined with multiple pre-training tasks to learn cross-scale molecular representations, thereby improving the accuracy of drug discovery. Second, molecular captions, as concise biomedical text, provide a bridge for collaboration between drug discovery and large language models. Finally, we discuss the challenges facing multimodal pre-training models in drug discovery and explore future opportunities.
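The first trend above, pairing a graph encoder for molecular structure with a Transformer-style text encoder for captions and aligning them through a contrastive pre-training task, can be sketched minimally as follows. This is an illustrative toy, not any specific model from the literature: the mean-pooling "encoders", random projection weights, dimensions, and the InfoNCE-style loss are all stand-in assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension (illustrative choice)

def encode_graph(atom_feats, W):
    # Mean-pool atom features, then project: a stand-in for a GNN encoder
    return atom_feats.mean(axis=0) @ W

def encode_text(token_embs, W):
    # Mean-pool token embeddings, then project: a stand-in for a Transformer
    return token_embs.mean(axis=0) @ W

def info_nce(z_g, z_t, tau=0.1):
    # Contrastive loss: matched (graph, caption) pairs sit on the diagonal
    z_g = z_g / np.linalg.norm(z_g, axis=1, keepdims=True)
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    logits = z_g @ z_t.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 4 molecules, each with atom features and caption token embeddings
W_g = rng.normal(size=(16, d))
W_t = rng.normal(size=(32, d))
graphs = [rng.normal(size=(rng.integers(5, 12), 16)) for _ in range(4)]
texts = [rng.normal(size=(rng.integers(6, 20), 32)) for _ in range(4)]

z_g = np.stack([encode_graph(g, W_g) for g in graphs])
z_t = np.stack([encode_text(t, W_t) for t in texts])
loss = info_nce(z_g, z_t)
print(float(loss))
```

In a real system the two pooling functions would be replaced by trained GNN and Transformer encoders, and the loss would be minimized over a large corpus of molecule-caption pairs so that both modalities land in one shared embedding space.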
