Development and validation of explainable machine learning models for predicting 3-month functional outcomes in acute ischemic stroke: a SHAP-based approach


Abstract

OBJECTIVE: To develop and validate explainable machine learning models for predicting 3-month functional outcomes in acute ischemic stroke (AIS) patients using the SHapley Additive exPlanations (SHAP) framework.

METHODS: This retrospective cohort study included 538 AIS patients admitted within 72 h of symptom onset. Patients were randomly divided into training (70%) and validation (30%) sets. Clinical, laboratory, and imaging data were collected. Least Absolute Shrinkage and Selection Operator (LASSO) regression was used for feature selection. Five machine learning models were developed: support vector machine, k-nearest neighbors, random forest, gradient boosting machine (GBM), and convolutional neural network. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. SHAP analysis was applied to the best-performing model to enhance interpretability.

RESULTS: Among 538 patients (mean age 68.5 ± 12.7 years, 58.0% male), 34.2% had poor 3-month outcomes (modified Rankin Scale [mRS] 3-6). The GBM achieved the best predictive performance in the validation set, with an AUC of 0.91, accuracy of 0.81, sensitivity of 0.95, and specificity of 0.61, significantly outperforming logistic regression (AUC = 0.78). The model demonstrated excellent calibration and superior net benefit in decision curve analysis across threshold probabilities of 0.1-0.7. SHAP analysis identified admission NIHSS score (30.8%), age (14.9%), and ASPECTS ≥7 (13.7%) as the most influential predictors, with neutrophil-to-lymphocyte ratio (10.1%) and platelet distribution width (9.7%) also contributing significantly to outcome prediction.

CONCLUSION: Explainable machine learning models can accurately predict 3-month functional outcomes in AIS patients. The SHAP framework enhances model transparency, addressing interpretability barriers to clinical implementation while maintaining superior predictive performance.
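The modeling pipeline described in METHODS — LASSO-based feature selection followed by training a gradient boosting machine and evaluating AUC on a 30% hold-out set — can be sketched as below. This is a minimal illustration on synthetic data, not the study's actual code: the dataset, hyperparameters, and selection threshold are all assumptions, and real clinical features (NIHSS, age, NLR, etc.) are replaced by generated columns.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoCV
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the clinical/laboratory/imaging features
# (538 patients, binary outcome: poor mRS 3-6 vs. good mRS 0-2).
X, y = make_classification(n_samples=538, n_features=20, n_informative=8,
                           random_state=42)

# 70% training / 30% validation split, as in the study design.
X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# LASSO regression for feature selection: features whose coefficients
# are shrunk to (near) zero are dropped before model training.
lasso = LassoCV(cv=5, random_state=42).fit(X_tr, y_tr)
selector = SelectFromModel(lasso, prefit=True, threshold=1e-5)
X_tr_sel = selector.transform(X_tr)
X_va_sel = selector.transform(X_va)

# Gradient boosting machine (the best-performing model in the study).
gbm = GradientBoostingClassifier(random_state=42).fit(X_tr_sel, y_tr)

# Validation-set AUC, the primary discrimination metric reported.
auc = roc_auc_score(y_va, gbm.predict_proba(X_va_sel)[:, 1])
print(f"validation AUC: {auc:.2f}")

# For interpretability, SHAP values would then be computed on the fitted
# model, e.g. with shap.TreeExplainer(gbm) from the `shap` package
# (not executed here to keep the sketch dependency-light).
```

Note that `LassoCV` fits a continuous regression to the binary outcome, which is adequate for screening features; a penalized logistic regression is a common alternative for strictly binary targets.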
