Abstract
Time-to-event (TTE) machine learning (ML) algorithms are increasingly utilized in prognostic models, but systematic evaluation is lacking to identify their strengths and limitations. We compared TTE ML algorithms, the Oblique Random Survival Forest (ORSF) and the Random Survival Forest (RSF), with statistical models (SMs), namely Cox Proportional Hazards (Cox PH) and Penalized Cox PH, examining their predictive performance and computational time. Eighteen scenarios were generated with varying censoring rates, sample sizes, and predictor effects, under the proportional hazards (PH) assumption. Performance was evaluated using Harrell's C-index and the Integrated Brier Score (IBS), with differences assessed using one-way repeated measures ANOVA. In the linear scenario with additive effects, SMs outperformed RSF in both C-indices and IBS scores, with negligible differences between ORSF variants and SMs. ORSF variants achieved slightly higher C-indices than RSF and comparable IBS scores. Under the non-linear scenario with interaction effects, SMs consistently achieved higher C-indices than RSF, with minimal differences from ORSF. SMs were similar to RSF and ORSF in IBS scores, except at a high censoring rate of 90%. ORSF yielded significantly higher C-indices and lower IBS scores than RSF at censoring rates of 50-70%. Overall, differences between ORSF variants in discrimination and calibration were not significant; however, ORSF-net had the longest training time among all ML models. In conclusion, RSF showed inferior discrimination to SMs and ORSF. Traditional SMs outperform ML models in TTE prediction at higher censoring rates but match ORSF at lower rates.
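To make the discrimination metric above concrete, the following is a minimal pure-NumPy sketch of Harrell's C-index under right censoring (an illustrative re-implementation of the standard definition, not the code used in this study; the function name and example data are hypothetical):

```python
import numpy as np

def harrell_c_index(times, events, risks):
    """Harrell's C-index: the fraction of comparable pairs whose predicted
    risk ordering agrees with the observed event-time ordering.
    A pair (i, j) with times[i] < times[j] is comparable only if subject i
    experienced the event (events[i] == 1); pairs where the earlier
    observation is censored are skipped."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    risks = np.asarray(risks, dtype=float)
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # higher predicted risk failed earlier
                elif risks[i] == risks[j]:
                    concordant += 0.5   # tied predictions get half credit
    return concordant / comparable

# Hypothetical toy data: one censored subject (event == 0) and one
# risk inversion between the last two subjects.
t = [1.0, 2.0, 3.0, 4.0]
e = [1, 0, 1, 1]
r = [4.0, 3.0, 1.0, 2.0]
print(harrell_c_index(t, e, r))  # 3 of 4 comparable pairs concordant -> 0.75
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect discrimination, which is the scale on which the model comparisons above are made.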