Multimodal Data Fusion for Whole-Slide Histopathology Image Classification


Abstract

Whole slide images (WSIs) are critical for cancer diagnosis but pose computational challenges due to their gigapixel resolution. While automated AI tools can accelerate diagnostic workflows, they often rely on precise annotations and require substantial training data. Integrating multimodal data, such as WSIs and corresponding pathology reports, offers a promising solution to improve classification accuracy and reduce diagnostic variability. In this study, we introduce MPath-Net, an end-to-end multimodal framework that combines WSIs and pathology reports for enhanced cancer subtype classification. Using the TCGA dataset (1684 cases: 916 kidney, 768 lung), we applied multiple-instance learning (MIL) for WSI feature extraction and Sentence-BERT for report encoding, followed by joint fine-tuning for tumor classification. MPath-Net achieved 94.65% accuracy, 0.9553 precision, 0.9472 recall, and 0.9473 F1-score, significantly outperforming baseline models (P < 0.05). In addition, attention heatmaps provided interpretable tumor tissue localization, demonstrating the clinical utility of our approach. These findings suggest that MPath-Net can support pathologists by improving diagnostic accuracy, reducing inter-reader variability, and advancing precision medicine through multimodal AI integration.
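The pipeline the abstract outlines, attention-based MIL pooling over WSI patch features, fused with a report embedding for classification, can be sketched minimally as below. All dimensions, weights, and the late-fusion design are illustrative assumptions for exposition, not MPath-Net's actual architecture; real patch features would come from a WSI encoder and the report vector from Sentence-BERT.

```python
import numpy as np

def attention_mil_pool(patch_feats, v, w):
    # Attention-based MIL pooling: score each patch, softmax over
    # patches, return the attention-weighted sum as the slide embedding.
    scores = np.tanh(patch_feats @ v) @ w          # (n_patches,)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                             # attention over patches
    return attn @ patch_feats, attn                # (d_img,), (n_patches,)

rng = np.random.default_rng(0)
d_img, d_txt, n_patches = 8, 6, 5

# Stand-ins for real inputs (assumptions): patch features from a WSI
# encoder, and a Sentence-BERT-style report embedding.
patch_feats = rng.normal(size=(n_patches, d_img))
report_emb = rng.normal(size=d_txt)

v = rng.normal(size=(d_img, 4))                    # attention projection
w = rng.normal(size=4)                             # attention scoring vector

slide_emb, attn = attention_mil_pool(patch_feats, v, w)

# Late fusion: concatenate slide and report embeddings, then a linear
# classifier over the two tumor sites (kidney vs. lung in this sketch).
fused = np.concatenate([slide_emb, report_emb])    # (d_img + d_txt,)
W_cls = rng.normal(size=(2, d_img + d_txt))
logits = W_cls @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                               # class probabilities
```

The per-patch attention weights `attn` are what an attention heatmap visualizes: projecting them back onto patch coordinates highlights the tissue regions the classifier relied on.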
