Utilizing customized CNN for brain tumor prediction with explainable AI


Abstract

Timely diagnosis of brain tumors using MRI and its potential impact on patient survival are critical issues addressed in this study. Traditional deep learning (DL) models often lack transparency, leading to skepticism among medical experts owing to their "black box" nature. This study addresses this gap by presenting an innovative approach for brain tumor detection. It utilizes a customized Convolutional Neural Network (CNN) model empowered by three advanced explainable artificial intelligence (XAI) techniques: Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM). The study utilized the BR35H dataset, which includes 3060 brain MRI images encompassing both tumorous and non-tumorous cases. The proposed model achieved a remarkable training accuracy of 100% and a validation accuracy of 98.67%. Precision, recall, and F1-score each reached 98.50%, confirming the model's accuracy in tumor detection. Detailed result analysis, including a confusion matrix, comparisons with existing models, and generalizability tests on other datasets, establishes the superiority of the proposed approach and sets a new benchmark for accuracy. By integrating a customized CNN model with XAI techniques, this research enhances trust in AI-driven medical diagnostics and offers a promising pathway for early tumor detection and potentially life-saving interventions.
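Of the three XAI techniques named in the abstract, Grad-CAM is the most compact to illustrate. The sketch below shows only the core Grad-CAM computation (channel weights from spatially averaged gradients of the class score, a weighted sum of feature maps, then ReLU and normalization); it uses synthetic NumPy arrays as placeholders, not the paper's actual CNN or the BR35H data.

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM step.

    feature_maps: (H, W, C) activations of the last convolutional layer.
    gradients:    (H, W, C) gradients of the target class score w.r.t. those maps.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global average pooling of the gradients over the spatial dims.
    weights = gradients.mean(axis=(0, 1))                     # shape (C,)
    # Weighted combination of feature maps; ReLU keeps only positive evidence.
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0.0)
    # Normalize so the heatmap can be overlaid on the input MRI slice.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: an 8x8 spatial grid with 4 channels.
rng = np.random.default_rng(0)
fmaps = rng.random((8, 8, 4))
grads = rng.standard_normal((8, 8, 4))
heatmap = grad_cam(fmaps, grads)
print(heatmap.shape)  # (8, 8)
```

In practice the feature maps and gradients would come from a forward/backward pass through the trained CNN's last convolutional layer, and the resulting heatmap would be upsampled to the MRI's resolution before overlay.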
