Integrated ensemble CNN and explainable AI for COVID-19 diagnosis from CT scan and X-ray images



Abstract

In light of the ongoing battle against COVID-19, while the pandemic may eventually subside, sporadic cases may still emerge, underscoring the need for accurate detection from radiological images. However, the limited explainability of current deep learning models restricts their acceptance by clinicians. To address this issue, our research integrates multiple CNN models with explainable AI techniques, verifying the interpretability of each model before constructing the ensemble. We evaluate advanced CNN models on the largest publicly available X-ray dataset, COVIDx CXR-3 (29,986 images), and the SARS-CoV-2 CT scan dataset from Kaggle (2,482 images), and further employ additional public datasets for cross-dataset evaluation, ensuring a thorough assessment of model performance across varied imaging conditions. By leveraging LIME, SHAP, Grad-CAM, and Grad-CAM++, we provide transparent insight into model decisions. Our ensemble of DenseNet169, ResNet50, and VGG16 demonstrates strong performance: on the X-ray dataset, sensitivity, specificity, accuracy, F1-score, and AUC reach 99.00%, 99.00%, 99.00%, 0.99, and 0.99, respectively; on the CT dataset, the corresponding values are 96.18%, 96.18%, 96.18%, 0.9618, and 0.96. By combining model diversity with explainability, our methodology bridges the gap between precision and interpretability in clinical settings, promising improved disease diagnosis and greater clinician acceptance.
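The abstract does not specify how the three CNNs' outputs are combined. A common strategy for this kind of ensemble is soft voting, in which each model's per-class softmax probabilities are averaged and the class with the highest mean probability is chosen. The minimal sketch below illustrates that combination rule only; the probability arrays are made-up placeholders, not outputs of the paper's actual DenseNet169, ResNet50, or VGG16 models.

```python
import numpy as np

def soft_vote(prob_list):
    """Soft voting: average per-model class probabilities.

    prob_list: list of arrays, each of shape (n_images, n_classes),
    where each row is one model's softmax output for one image.
    Returns the ensemble's averaged probabilities, same shape.
    """
    return np.mean(np.stack(prob_list, axis=0), axis=0)

# Hypothetical softmax outputs for 2 images over 2 classes
# (class 0 = COVID-19 positive, class 1 = negative, for illustration).
p_densenet = np.array([[0.9, 0.1], [0.4, 0.6]])
p_resnet   = np.array([[0.8, 0.2], [0.3, 0.7]])
p_vgg      = np.array([[0.7, 0.3], [0.2, 0.8]])

probs = soft_vote([p_densenet, p_resnet, p_vgg])
preds = probs.argmax(axis=1)  # predicted class per image → [0, 1]
```

Soft voting tends to outperform hard (majority) voting when the member models are well calibrated, since confident predictions carry more weight in the average.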
