Abstract
Background/Objectives: Non-variceal upper gastrointestinal bleeding (NVUGIB) is associated with considerable morbidity and mortality, particularly in emergency department (ED) settings. While traditional clinical scores such as the Glasgow-Blatchford Score (GBS), AIMS65, and the pre-endoscopic Rockall score are widely used for risk stratification, their accuracy in mortality prediction is limited. This study aimed to evaluate the performance of multiple supervised machine learning (ML) models in predicting 30-day all-cause mortality in patients with NVUGIB and to compare these models with the established risk scores. Methods: A retrospective cohort study was conducted on 1233 adult patients with NVUGIB who presented to the ED of a tertiary center between January 2022 and January 2025. Clinical and laboratory data were extracted from electronic records. Seven supervised ML algorithms (logistic regression, ridge regression, support vector machine, random forest, extreme gradient boosting (XGBoost), naïve Bayes, and artificial neural networks) were each trained with six feature selection techniques, yielding 42 distinct models. Performance was assessed using AUROC, F1-score, sensitivity, specificity, and calibration metrics. The traditional scores (GBS, AIMS65, Rockall) were evaluated in parallel. Results: Among the cohort, 96 patients (7.8%) died within 30 days. The best-performing ML model (XGBoost with univariate feature selection) achieved an AUROC > 0.80 and an F1-score of 0.909, significantly outperforming all traditional scores (highest AUROC: Rockall, 0.743; p < 0.001). The ML models demonstrated higher sensitivity and specificity, with improved calibration. Key predictors consistently included age, comorbidities, hemodynamic parameters, and laboratory markers. The best-performing ML models reached very high apparent AUROC values (up to 0.999 in internal analysis), substantially exceeding the conventional scores; these results should be interpreted as apparent performance estimates, likely optimistic in the absence of external validation. Conclusions: Although the ML models showed markedly higher apparent discrimination than the conventional scores, these findings are based on a single-center retrospective dataset and require external multicenter validation before clinical implementation.