Explainable artificial intelligence for predicting medical students' performance in comprehensive assessments


Abstract

Comprehensive medical assessments are critical for evaluating clinical proficiency in medical education; however, administering them imposes significant institutional burdens, financial costs, and psychological strain on students. While artificial intelligence (AI) holds transformative potential for predictive analytics, existing models lack the interpretability and reliability required for educational decision-making. To address this gap, a machine learning (ML) framework enhanced with explainable AI (XAI) was developed to predict medical students' performance on comprehensive assessments by integrating academic metrics and non-academic attributes. This retrospective cohort study validated the framework across three universities using two high-stakes assessments: the Comprehensive Medical Pre-Internship Examination (CMPIE; n = 997 students, two-month prediction horizon) and the Clinical Competence Assessment (CCA; n = 777 students, one-year horizon). A stacking meta-model combining ensemble techniques (Random Forest, Adaptive Boosting, XGBoost) demonstrated outstanding discriminative performance, with AUC-ROC values of 0.97 (CMPIE) and 0.99 (CCA) and F1-scores of 0.966 and 0.994, respectively. Within this framework, SHapley Additive exPlanations (SHAP) provided granular insight into model logic, identifying high-impact courses as dominant predictors of success and producing individualized risk profiles. These insights enable educators to prioritize curriculum reforms and intervene early for at-risk students, while delivering personalized feedback that helps learners improve their outcomes.
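The stacking meta-model described in the abstract can be illustrated with a minimal scikit-learn sketch. This is not the authors' implementation: the data here are synthetic stand-ins for the academic and non-academic features, `GradientBoostingClassifier` substitutes for XGBoost so the example needs only scikit-learn, and all hyperparameters are illustrative.

```python
# Hypothetical sketch of a stacking meta-model over Random Forest, AdaBoost,
# and a gradient-boosting stand-in for XGBoost, as described in the abstract.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic placeholder for the cohort's academic/non-academic features.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("ada", AdaBoostClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),  # XGBoost stand-in
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
    cv=5,  # base-model predictions for the meta-learner come from CV folds
)
stack.fit(X_tr, y_tr)

proba = stack.predict_proba(X_te)[:, 1]
print(f"AUC-ROC: {roc_auc_score(y_te, proba):.3f}")
print(f"F1:      {f1_score(y_te, stack.predict(X_te)):.3f}")
```

In the actual framework, a SHAP explainer (e.g. `shap.TreeExplainer` for the tree-based base models) would then attribute each prediction to individual features, yielding the course-level predictors and per-student risk profiles the abstract describes.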