stGCL: a versatile cross-modality fusion method based on multi-modal graph contrastive learning for spatial transcriptomics


Abstract

Advances in spatial transcriptomics have enabled high-resolution mapping of tissue architecture at the molecular level, yet integrating its multi-modal data remains challenging. Here, we present stGCL, a framework for accurate and robust integration of gene expression, spatial coordinates, and histological features. stGCL employs a histology-based Vision Transformer to extract morphological features and a multi-modal graph autoencoder with contrastive learning for cross-modal fusion. In addition, we introduce a spatial coordinate correction and registration strategy to support multi-slice integration. We demonstrate that stGCL reliably identifies spatial domains and integrates vertical and horizontal tissue slices, and we highlight its generalizability across platforms and resolutions. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13059-025-03896-w.
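To illustrate the cross-modal contrastive idea, the following is a minimal sketch of an InfoNCE-style loss that pulls together the gene-expression and histology embeddings of the same spot while pushing apart embeddings of different spots. This is an assumption for illustration only: the abstract does not specify stGCL's exact objective, and the function and variable names here (`cross_modal_contrastive_loss`, `z_expr`, `z_hist`) are hypothetical.

```python
import numpy as np

def cross_modal_contrastive_loss(z_expr, z_hist, temperature=0.5):
    """Hypothetical InfoNCE-style loss between two modality embeddings.

    z_expr : (n_spots, d) gene-expression embeddings
    z_hist : (n_spots, d) histology embeddings
    Row i of each matrix refers to the same spot, so the positive
    pairs lie on the diagonal of the similarity matrix.
    """
    # L2-normalize so the dot product equals cosine similarity
    a = z_expr / np.linalg.norm(z_expr, axis=1, keepdims=True)
    b = z_hist / np.linalg.norm(z_hist, axis=1, keepdims=True)

    # (n, n) similarity matrix, scaled by the temperature
    logits = a @ b.T / temperature

    # log-softmax over each row; the positive pair is the diagonal entry
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(a))
    return -log_prob[idx, idx].mean()

# Toy usage: aligned embeddings should score a lower loss than
# embeddings whose spot correspondence has been shuffled.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = cross_modal_contrastive_loss(z, z + 0.01 * rng.normal(size=z.shape))
shuffled = cross_modal_contrastive_loss(z, z[rng.permutation(8)])
```

In this sketch, minimizing the loss drives each spot's two modality views toward agreement, which is one common way a graph contrastive framework can fuse expression and morphology into a shared latent space.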
