TriCLFF: a multi-modal feature fusion framework using contrastive learning for spatial domain identification


Abstract

Spatial transcriptomics (ST) encompasses rich multi-modal information related to cell state and organization. Precisely identifying spatial domains with consistent gene expression patterns and histological features is a critical task in ST analysis, which requires comprehensive integration of multi-modal information. Here, we propose TriCLFF, a contrastive learning-based multi-modal feature fusion framework, to effectively integrate spatial associations, gene expression levels, and histological features in a unified manner. Leveraging an advanced feature fusion mechanism, our proposed TriCLFF framework outperforms existing state-of-the-art methods in terms of accuracy and robustness across four datasets (mouse brain anterior, mouse olfactory bulb, human dorsolateral prefrontal cortex, and human breast cancer) from different platforms (10x Visium and Stereo-seq) for spatial domain identification. TriCLFF also facilitates the identification of finer-grained structures in breast cancer tissues and detects previously unknown gene expression patterns in the human dorsolateral prefrontal cortex, providing novel insights for understanding tissue functions. Overall, TriCLFF establishes an effective paradigm for integrating spatial multi-modal data, demonstrating its potential for advancing ST research. The source code of TriCLFF is available online at https://github.com/HBZZ168/TriCLFF.
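The abstract describes contrastive alignment of three modality embeddings (spatial associations, gene expression, histology) but does not detail the loss. As a rough, hypothetical illustration only, the sketch below shows a generic pairwise InfoNCE-style contrastive objective over three per-spot embedding matrices; all variable names, the pairwise averaging scheme, and the InfoNCE formulation are assumptions for illustration, not TriCLFF's actual architecture or loss.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce(z_a, z_b, temperature=0.1):
    # Generic InfoNCE: spot i's embedding in modality A should be most similar
    # to spot i's embedding in modality B, relative to all other spots.
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature            # (n_spots, n_spots) similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal entries are positive pairs

# Hypothetical per-spot embeddings from three modality-specific encoders.
rng = np.random.default_rng(0)
n_spots, dim = 64, 32
z_gene = rng.normal(size=(n_spots, dim))   # gene-expression embedding (assumed)
z_hist = rng.normal(size=(n_spots, dim))   # histological-feature embedding (assumed)
z_spat = rng.normal(size=(n_spots, dim))   # spatial-association embedding (assumed)

# Average the contrastive loss over the three modality pairs (one possible scheme).
loss = (info_nce(z_gene, z_hist)
        + info_nce(z_gene, z_spat)
        + info_nce(z_hist, z_spat)) / 3.0
print(f"pairwise contrastive loss: {loss:.4f}")
```

In a scheme like this, the fused representation used for spatial domain clustering could be a sum or concatenation of the aligned modality embeddings; the actual fusion mechanism is specified in the paper and its released code.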
