ViSwNeXtNet: Deep Patch-Wise Ensemble of Vision Transformers and ConvNeXt for Robust Binary Histopathology Classification



Abstract

Background: Intestinal metaplasia (IM) is a precancerous gastric condition that requires accurate histopathological diagnosis to enable early intervention and cancer prevention. Traditional evaluation of H&E-stained tissue slides can be labor-intensive and prone to interobserver variability. Recent advances in deep learning, particularly transformer-based models, offer promising tools for improving diagnostic accuracy.

Methods: We propose ViSwNeXtNet, a novel patch-wise ensemble framework that integrates three transformer-based architectures (ConvNeXt-Tiny, Swin-Tiny, and ViT-Base) for deep feature extraction. Features from each model (12,288 per model) were concatenated into a 36,864-dimensional vector and refined using iterative neighborhood component analysis (INCA) to select the most discriminative 565 features. A quadratic SVM classifier was trained on these selected features. The model was evaluated on two datasets: (1) a custom-collected dataset consisting of 516 intestinal metaplasia cases and 521 control cases, and (2) the public GasHisSDB dataset, which includes 20,160 normal and 13,124 abnormal H&E-stained image patches of size 160 × 160 pixels.

Results: On the collected dataset, the proposed method achieved 94.41% accuracy, 94.63% sensitivity, and 94.40% F1 score. On the GasHisSDB dataset, it reached 99.20% accuracy, 99.39% sensitivity, and 99.16% F1 score, outperforming the individual backbone models and demonstrating strong generalizability across datasets.

Conclusions: ViSwNeXtNet successfully combines local, regional, and global representations of tissue structure through an ensemble of transformer-based models. The addition of INCA-based feature selection significantly enhances classification performance while reducing dimensionality. These findings suggest the method's potential for integration into clinical pathology workflows. Future work will focus on multiclass classification, multicenter validation, and integration of explainable AI techniques.
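The feature-selection and classification stage described in the Methods can be sketched as follows. This is a minimal, hypothetical illustration only: the paper does not publish its INCA settings, so here features are ranked by the column norms of a learned NCA projection and the subset size is chosen iteratively by cross-validated accuracy of a quadratic SVM. The dimensions are shrunk from 36,864 features so the sketch runs quickly; all parameter values are assumptions.

```python
# Hedged sketch of an INCA-style pipeline: NCA-based feature ranking,
# iterative subset growth, quadratic SVM scoring. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC

# Stand-in for the concatenated ConvNeXt-Tiny / Swin-Tiny / ViT-Base
# deep features (36,864-dim in the paper; 60-dim here for speed).
X, y = make_classification(n_samples=200, n_features=60,
                           n_informative=8, random_state=0)

# 1) Rank features: fit NCA and score each input feature by the
#    norm of its column in the learned projection matrix.
nca = NeighborhoodComponentsAnalysis(n_components=8, random_state=0)
nca.fit(X, y)
ranking = np.argsort(-np.linalg.norm(nca.components_, axis=0))

# 2) Iteratively grow the feature subset; keep the size whose
#    cross-validated quadratic-SVM accuracy is highest.
best_k, best_score = 0, -1.0
for k in range(5, 60, 5):
    cols = ranking[:k]
    clf = SVC(kernel="poly", degree=2)  # "quadratic SVM"
    score = cross_val_score(clf, X[:, cols], y, cv=3).mean()
    if score > best_score:
        best_k, best_score = k, score

print(f"selected {best_k} features, CV accuracy {best_score:.3f}")
```

In the paper this procedure reduces the 36,864-dimensional ensemble vector to 565 features before the final SVM is trained; the ranking criterion and search schedule above are stand-ins for whatever the authors' exact INCA implementation uses.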
