Clinician-Centric Explainable Artificial Intelligence Framework for Medical Imaging Diagnostics: A Systematic Review


Abstract

Medical imaging has evolved from conventional X-rays to advanced digital modalities, with artificial intelligence (AI), particularly deep learning, playing an increasingly central role in diagnostic support. This study presents a systematic literature review (SLR) of AI-driven medical imaging research, focusing on classification-based models and explainability approaches in pneumonia detection. Using predefined inclusion criteria and PRISMA-guided screening, 95 studies were synthesized to identify dominant architectures, dataset trends, performance patterns, and persistent challenges. The analysis shows that convolutional neural networks (CNNs) and their variants remain the most frequently adopted models, accounting for the largest proportion of applications across X-ray, computed tomography (CT), and magnetic resonance imaging (MRI). Reported diagnostic performance across the reviewed studies commonly exceeded 90% accuracy and AUC, with models such as DeepMediX, XNet, Wavelet-CNN, and RadCLIP demonstrating strong predictive capability in their respective experimental settings. However, the review identifies significant gaps in explainability, clinical workflow integration, ethical compliance, and trust evaluation. This paper therefore proposes a clinician-centric explainable artificial intelligence (CC-XAI) framework derived from the literature synthesis. The framework integrates multilevel explainability, contextual clinical alignment, and human-in-the-loop feedback mechanisms to bridge the gap between black-box AI systems and real-world clinical practice. Rather than introducing a new predictive model, the framework provides a structured design blueprint for embedding explainability into medical imaging diagnostics. The findings highlight the continued dominance of deep learning in medical imaging while emphasizing the urgent need for clinician-oriented XAI frameworks to support transparency, trust, and responsible AI deployment in healthcare.
