Symmetry-guided explainable deep learning for colon cancer diagnosis: model benchmarking, cross-validation, statistical analysis, and explainability via ablation studies



Abstract

INTRODUCTION: Histopathological tissue exhibits natural radial and bilateral symmetry in glandular structures, which becomes progressively disrupted during malignant transformation. Leveraging this observation, this work presents a VGG16-based deep learning model enriched with symmetry-aware interpretation for early detection of colon adenocarcinoma. Traditional approaches act as "black boxes", which diminishes their clinical adoption and acceptance in real-world scenarios. The present work builds on recent breakthroughs in deep learning for medical imaging and integrates Explainable AI (XAI) techniques such as LIME, SHAP, and Grad-CAM into the model to interpret how cancer-induced symmetry distortions influence model decisions.

METHODS: Experiments were conducted on a balanced dataset of 10,000 histopathological scans, comprising 5,000 Colon Adenocarcinoma tissue samples and 5,000 Benign Colon Tissue samples. The research shows how benign tissues preserve consistent, symmetric glandular patterns, while cancerous samples exhibit pronounced asymmetry, irregular boundaries, and disrupted structural repetition. These differences are further quantified using lightweight 2D symmetry indices, demonstrating a clear separation between normal and malignant tissues.

RESULTS AND DISCUSSION: The study presents a highly precise model for the diagnosis of colon cancer using a VGG16 CNN that achieves a test accuracy of 99.85%. The model exhibited very high precision, recall, and F1-scores for both classes (normal and cancer), as demonstrated by the classification report. Among the XAI techniques evaluated, Grad-CAM offered speed and scalability, making it an appropriate choice for large-scale deployment in healthcare. SHAP, though computationally costly, offered theoretical robustness and deep insight. LIME was useful for local interpretability, particularly for debugging individual predictions.
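The abstract does not define the "lightweight 2D symmetry indices" in detail. One minimal way to realize such indices, assumed here purely for illustration (the function names and correlation-based formulation are this sketch's, not the paper's), is to correlate an image patch with its mirrored and rotated copies:

```python
import numpy as np

def bilateral_symmetry_index(img: np.ndarray) -> float:
    """Pearson-style correlation between a patch and its left-right mirror.

    Values near 1 indicate strong bilateral symmetry (benign-like glandular
    patterns); lower values indicate asymmetry. Hypothetical index, assumed
    for illustration only.
    """
    a = img.astype(float).ravel()
    b = img[:, ::-1].astype(float).ravel()  # horizontal mirror
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def radial_symmetry_index(img: np.ndarray) -> float:
    """Correlation between a patch and its 180-degree rotation."""
    a = img.astype(float).ravel() - img.mean()
    rot = img[::-1, ::-1].astype(float)     # rotate by 180 degrees
    b = rot.ravel() - rot.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0
```

With indices of this kind, the claimed separation between classes would correspond to benign patches scoring close to 1 and malignant patches scoring noticeably lower.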
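Of the three XAI techniques, Grad-CAM is singled out for speed because its core computation is a single weighted sum over the last convolutional layer's feature maps. A framework-free NumPy sketch of that weighting step (the helper name and array shapes are assumptions of this sketch; in practice the activations and gradients would come from the trained VGG16):

```python
import numpy as np

def grad_cam_heatmap(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM weighting step.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heatmap: ReLU of the channel-weighted sum of maps,
    normalised to [0, 1].
    """
    alphas = gradients.mean(axis=(1, 2))         # one importance weight per channel
    cam = np.tensordot(alphas, feature_maps, 1)  # weighted sum of maps -> (H, W)
    cam = np.maximum(cam, 0.0)                   # ReLU: keep positive evidence only
    peak = cam.max()
    return cam / peak if peak > 0 else cam
```

Because this step is one backward pass plus a cheap reduction, Grad-CAM scales far better than SHAP's many model evaluations, which matches the trade-off the abstract reports.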
