Integrating Foundation Model Features into Graph Neural Network and Fusing Predictions with Standard Fine-Tuned Models for Histology Image Classification


Abstract

Histopathological image classification using computational methods such as fine-tuned convolutional neural networks (CNNs) has gained significant attention in recent years. Graph neural networks (GNNs) have also emerged as strong alternatives, often employing CNNs or vision transformers (ViTs) as node feature extractors. However, as these models are usually pre-trained on small-scale natural image datasets, their performance in histopathology tasks can be limited. The introduction of foundation models trained on large-scale histopathological data now enables more effective feature extraction for GNNs. In this work, we integrate recently developed foundation models as feature extractors within a lightweight GNN and compare their performance with standard fine-tuned CNN and ViT models. Furthermore, we explore a prediction fusion approach that combines the outputs of the best-performing GNN and fine-tuned model to evaluate the benefits of complementary representations. Results demonstrate that GNNs utilizing foundation model features outperform those trained with CNN or ViT features and achieve performance comparable to standard fine-tuned CNN and ViT models. The highest overall performance is obtained with the proposed prediction fusion strategy. Evaluated on three publicly available datasets, the best fusion achieved F1-scores of 98.04%, 96.51%, and 98.28%, and balanced accuracies of 98.03%, 96.50%, and 97.50% on PanNuke, BACH, and BreakHis, respectively.
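The prediction fusion mentioned above can take several forms; a minimal sketch of one common late-fusion variant, a weighted average of the two models' softmax probabilities, is shown below. The function names, the 0.5 weight, and the toy logits are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_predictions(logits_gnn, logits_ft, weight=0.5):
    """Late fusion: weighted average of class probabilities from a
    GNN and a fine-tuned model (weight is a hypothetical parameter)."""
    p_fused = weight * softmax(logits_gnn) + (1.0 - weight) * softmax(logits_ft)
    return p_fused.argmax(axis=-1)

# Toy example: 4 samples, 3 classes (illustrative logits only).
logits_gnn = np.array([[2.0, 0.1, 0.1],
                       [0.2, 1.5, 0.3],
                       [0.1, 0.2, 1.8],
                       [1.2, 1.1, 0.1]])
logits_ft = np.array([[1.8, 0.2, 0.0],
                      [0.1, 1.9, 0.2],
                      [0.3, 0.1, 1.5],
                      [0.9, 1.4, 0.2]])
preds = fuse_predictions(logits_gnn, logits_ft)
print(preds)  # one predicted class index per sample
```

On the last toy sample the two models disagree (the GNN slightly favors class 0, the fine-tuned model class 1), and the averaged probabilities resolve the tie, which is the complementarity the fusion strategy aims to exploit.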
