Abstract
With the remarkable success of large language models in natural language processing, self-supervised pre-training has emerged as an important technique in drug discovery. In particular, multimodal pre-training models have opened a new avenue for drug discovery. The experience and ideas accumulated in previous works provide valuable reference points for further research. This review therefore summarizes the foundations of multimodal pre-training models and their progress in drug discovery. We emphasize the compatibility between different modalities and network architectures or pre-training tasks, and we summarize the differences and connections among modalities and among pre-training models. Importantly, we identify two emerging trends that may guide future research. First, Transformers and graph neural networks are often integrated as encoders and combined with multiple pre-training tasks to learn cross-scale molecular representations, thereby improving the accuracy of drug discovery. Second, molecular captions, as concise biomedical text, provide a bridge for collaboration between drug discovery and large language models. Finally, we discuss the challenges faced by multimodal pre-training models in drug discovery and explore future opportunities.