Enhancing LncRNA-miRNA interaction prediction with multimodal contrastive representation learning

Abstract

Interactions between long non-coding RNAs (lncRNAs) and microRNAs (miRNAs) play an important role in the development of complex human diseases by collaboratively regulating gene transcription and expression. Identifying lncRNA-miRNA interactions (LMIs) is therefore essential for diagnosing and treating such diseases. Because identifying LMIs through wet-lab experiments is time-consuming and labor-intensive, several computational methods have been developed to infer LMIs. However, these approaches excel at exploiting single-modal information but struggle to integrate multimodal data from lncRNAs and miRNAs, which is essential for uncovering complex patterns in LMIs, and this ultimately limits their performance. This article therefore proposes a novel multimodal contrastive representation learning model (MCRLMI) for LMI prediction. The model fully integrates multi-source similarity information and sequence encodings of lncRNAs and miRNAs. It leverages a graph convolutional network (GCN) and a Transformer to capture local neighborhood structural features and long-distance dependencies, respectively, enabling the collaborative modeling of structural and semantic information. Subsequently, to effectively integrate multimodal characteristics with the encoded information, a multichannel attention mechanism and contrastive learning are introduced to fuse the extracted features. Finally, a Kolmogorov-Arnold Network (KAN) is trained on the optimized embeddings to predict LMIs. Extensive experiments show that the proposed MCRLMI consistently outperforms existing methods, and case studies further validate its potential to identify novel LMIs in practical applications.
