Explainable Artificial Intelligence in Healthcare: Current Landscape, Challenges, and Future Directions


Abstract

BACKGROUND AND AIMS: Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), is transforming healthcare by enabling improved diagnosis, prognosis, and personalized treatments. However, many AI models operate as opaque "black boxes," limiting interpretability, clinician trust, and real-world adoption. Explainable Artificial Intelligence (XAI) has emerged to address these limitations by providing transparent and actionable insights. This systematic review synthesizes the current evidence on XAI in healthcare, mapping AI models to XAI techniques, healthcare domains, and clinical applications.

METHODS: A systematic search was conducted across six databases (Elsevier, Springer, Taylor & Francis, Semantic Scholar, ACM, and IEEE Xplore) for peer-reviewed studies published between 2017 and 2025. After duplicate removal and title/abstract screening, full texts were evaluated against predefined inclusion/exclusion criteria, following PRISMA guidelines. Data extraction covered AI model types, XAI techniques, healthcare domains, study design, validation methods, and ethical/regulatory reporting.

RESULTS: Seventy studies were included, spanning oncology (40%), cardiology (21%), infectious diseases (14%), neurology (11%), and clinical decision support systems (13%). Deep learning models (CNN, RNN, LSTM, and Transformer architectures) were applied most frequently (76%), followed by tree-based models (Random Forest, XGBoost, Decision Trees; 24%). SHAP (54%) and LIME (30%) were the most commonly used XAI techniques, with Grad-CAM (23%) and attention mechanisms (20%) applied mainly to imaging and sequence-based tasks. Only 12 studies explicitly addressed ethical or regulatory considerations. Hybrid interpretable models and human-centered designs are emerging trends, but real-world validation and standardized interpretability metrics remain limited.

CONCLUSION: XAI enhances transparency, clinician trust, and decision-making in healthcare AI applications, yet challenges persist, including inconsistent validation, underdeveloped ethical/regulatory frameworks, and a lack of standardized interpretability measures. Future work should focus on hybrid, clinically validated XAI models, comprehensive ethical compliance, and user-centered, domain-specific implementations to ensure safe and effective integration into clinical practice.
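To make the dominant technique/model pairing reported above concrete, the sketch below applies SHAP (the most frequently reported XAI technique, 54% of studies) to an XGBoost classifier (the tree-based family reported in 24% of studies). This is a minimal illustration, not drawn from any of the included studies: the tabular "clinical" dataset is synthetic, and all feature and parameter choices are hypothetical placeholders.

```python
# Minimal illustrative sketch (not from the reviewed studies): SHAP applied
# to a tree-based model. The data below is synthetic stand-in for tabular
# clinical features (e.g., age, lab values) with a binary outcome.
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Hypothetical cohort: 500 "patients", 8 numeric features, binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a gradient-boosted tree classifier.
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles; each value is
# a feature's additive contribution (in log-odds) to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: mean absolute SHAP value per feature ranks overall importance.
print(np.abs(shap_values).mean(axis=0))
```

TreeExplainer is used here because it computes exact Shapley values efficiently for tree ensembles; for the deep learning models that dominate the reviewed work, gradient- or attention-based methods such as Grad-CAM would be the analogous choice for imaging and sequence tasks.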
