Abstract
Chronic obstructive pulmonary disease (COPD) is a major health burden worldwide and in Taiwan: it ranks as the third leading cause of death globally, and its prevalence in Taiwan continues to rise. Readmission within 14 days is a key indicator of disease instability and care efficiency, driven jointly by patient-level physiological vulnerability (such as reduced lung function and multiple comorbidities) and system-level deficiencies in transitional care. To mitigate this growing burden and improve quality of care, there is an urgent need for an AI-based model that predicts 14-day readmission. Such a model could enable early identification of high-risk patients and trigger multidisciplinary interventions, such as pulmonary rehabilitation and remote monitoring, to reduce avoidable early readmissions. However, medical data are commonly characterized by severe class imbalance, which limits the ability of conventional machine learning methods to identify minority-class cases. In this study, we used real-world clinical data from multiple hospitals in Kaohsiung City to construct a prediction framework that integrates data generation and ensemble learning to forecast readmission risk among patients with COPD. CTGAN and kernel density estimation (KDE) were employed to augment the minority class, and the impact of these two generation approaches on model performance was compared across different augmentation ratios. We adopted a stacking architecture composed of six base models as the core framework and systematically compared it against the baseline models XGBoost, AdaBoost, Random Forest, and LightGBM across multiple recall thresholds, feature configurations, and data generation strategies. Overall, the results show that, under high-recall targets, KDE combined with stacking achieves the most stable and superior overall performance relative to the baselines.
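The KDE-based minority augmentation described above can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the feature dimensionality, sample counts, bandwidth, and 4x augmentation ratio are all assumptions chosen for the example, using scikit-learn's `KernelDensity`.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Toy minority class: 5 readmitted patients with 3 numeric features
# (the sizes and distribution are illustrative, not from the study).
X_minority = rng.normal(loc=1.0, scale=0.5, size=(5, 3))

# Fit a Gaussian KDE to the minority class only, then draw synthetic
# rows from the estimated density (here a 4x augmentation ratio).
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X_minority)
X_synthetic = kde.sample(n_samples=20, random_state=0)

# Combine real and synthetic minority rows before training.
X_augmented = np.vstack([X_minority, X_synthetic])
print(X_augmented.shape)  # (25, 3)
```

In practice the bandwidth controls how tightly the synthetic samples cluster around the real minority cases, which is one plausible reason KDE and CTGAN behave differently across augmentation ratios.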
We further performed ablation experiments, sequentially removing each base model to evaluate its contribution. The results indicate that removing KNN has the greatest negative impact on the stacking classifier, with the declines in precision and F1-score most pronounced under high-recall settings, suggesting that KNN is the base model most sensitive to the distributional changes introduced by the KDE-generated data. Retaining all six base models, the full KDE-augmented stacking configuration simultaneously improves precision, F1-score, and specificity, and is therefore adopted as the final recommended model setting in this study.
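The leave-one-out ablation procedure can be sketched with scikit-learn's `StackingClassifier`. This is a simplified stand-in: the study uses six base models on clinical data, whereas here three hypothetical base learners and a synthetic dataset are assumed purely to show the ablation loop.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Toy imbalanced dataset standing in for the clinical data.
X, y = make_classification(n_samples=400, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Illustrative base learners (the study's full set has six models).
base = [
    ("knn", KNeighborsClassifier()),
    ("rf", RandomForestClassifier(random_state=0)),
    ("dt", DecisionTreeClassifier(random_state=0)),
]

scores = {}
# Train the full stack, then re-train with each base learner removed.
for removed in [None] + [name for name, _ in base]:
    kept = [(n, m) for n, m in base if n != removed]
    stack = StackingClassifier(
        estimators=kept,
        final_estimator=LogisticRegression(max_iter=1000),
    )
    stack.fit(X_tr, y_tr)
    scores[removed or "full"] = f1_score(y_te, stack.predict(X_te))

print(scores)  # F1 per configuration; a large drop flags a critical base model
```

Comparing each ablated score against the full stack quantifies the removed model's contribution, which is how the sensitivity of KNN under high-recall settings would surface in such an analysis.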