Abstract
BACKGROUND: Iron-deficiency anaemia (IDA) is the most prevalent haematological disorder worldwide, substantially impairing physical capacity and cognitive development. Early detection is crucial, yet conventional diagnostic methods have limitations. Machine learning (ML) can enhance predictive accuracy by analysing complex haematological parameters, including reticulocyte maturation indices. This study aimed to develop and compare several ML algorithms for IDA prediction and to identify the most effective model for potential clinical use.

METHODS: This retrospective cross-sectional study examined anonymized data from 444 adults (20–59 years) at Shahid Mohammadi Hospital, Iran, collected between November 2024 and January 2025. Five supervised ML algorithms, namely random forest (RF), support vector machine (SVM), decision tree (DT), naive Bayes (NB), and CatBoost, were trained using a 70/30 train–test split. Stratified 10-fold cross-validation and the synthetic minority oversampling technique (SMOTE) were applied for hyperparameter optimization and class-imbalance adjustment, respectively. Model performance was evaluated using accuracy, sensitivity, specificity, F1-score, AUC-ROC, and the Matthews correlation coefficient (MCC). SHapley Additive exPlanations (SHAP) values were used to characterize feature importance.

RESULTS: Dual-criteria labelling identified 350 of the 444 participants (79%) as having IDA. In stratified 10-fold cross-validation on the training set (n = 310), CatBoost achieved the highest AUC-ROC (0.98), followed by RF (0.97), NB (0.91), SVM (0.90), and DT (0.88). On the independent test set (n = 134), CatBoost maintained superior performance, with an AUC-ROC of 0.93 (95% confidence interval (CI) 0.89–0.97), a sensitivity of 0.92 (95% CI 0.88–0.96), a specificity of 0.86 (95% CI 0.74–0.94), and an MCC of 0.73 (95% CI 0.60–0.84).
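The evaluation protocol described in METHODS (stratified 70/30 split, stratified 10-fold cross-validation scored by AUC-ROC, held-out test metrics) can be sketched as follows. This is a minimal illustration on synthetic data: `GradientBoostingClassifier` stands in for CatBoost, the SMOTE oversampling step is omitted, and all parameter values are assumptions, not the study's settings.

```python
# Sketch of the study's evaluation protocol on synthetic data.
# GradientBoostingClassifier is a stand-in for CatBoost; SMOTE is omitted.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import matthews_corrcoef, roc_auc_score
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     train_test_split)

# Synthetic stand-in for the cohort: 444 samples, ~79% in the positive class
X, y = make_classification(n_samples=444, n_features=10,
                           weights=[0.21, 0.79], random_state=0)

# 70/30 stratified train-test split, as in the study design
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Stratified 10-fold cross-validation on the training set, scored by AUC-ROC
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
model = GradientBoostingClassifier(random_state=0)
cv_auc = cross_val_score(model, X_tr, y_tr, cv=cv, scoring="roc_auc").mean()

# Final fit and evaluation on the independent test set
model.fit(X_tr, y_tr)
test_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
test_mcc = matthews_corrcoef(y_te, model.predict(X_te))
print(f"CV AUC {cv_auc:.2f} | test AUC {test_auc:.2f} | test MCC {test_mcc:.2f}")
```

Reporting MCC alongside AUC-ROC is a reasonable choice here because, with 79% prevalence, accuracy alone would reward a classifier that simply predicts the majority class.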
SHAP analysis identified haemoglobin (HGB), haematocrit (HCT), and red blood cell count (RBC) as the most influential predictors, underscoring the model's physiological interpretability.

CONCLUSION: ML models, CatBoost in particular, can accurately predict IDA from standard blood tests and reticulocyte indices. Explainable artificial intelligence (AI) yields physiologically interpretable predictions that can support earlier detection, more efficient resource use, and improved clinical decision-making. These results underscore the potential of incorporating reticulocyte maturation markers into ML-based diagnostic tools in haematology.
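The feature-ranking idea behind the SHAP analysis above can be illustrated without the `shap` package itself. The sketch below uses permutation importance as a simpler, model-agnostic stand-in for SHAP (it ranks features by how much the AUC drops when each is shuffled, rather than by Shapley values); the feature names and data are synthetic, not the study's dataset.

```python
# Illustrative feature-importance ranking on synthetic data.
# Permutation importance is shown as a stand-in for SHAP values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical CBC-style feature names (not the study's actual columns)
features = ["HGB", "HCT", "RBC", "MCV", "MCH", "RDW"]
X, y = make_classification(n_samples=444, n_features=len(features),
                           n_informative=3, weights=[0.21, 0.79],
                           random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Permutation importance: drop in AUC-ROC when each feature is shuffled
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                scoring="roc_auc", random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{features[i]:>4}: {result.importances_mean[i]:.3f}")
```

Unlike permutation importance, SHAP additionally attributes each individual prediction to its input features, which is what allows the physiological plausibility check described in the RESULTS.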