Abstract
Delayed treatment effects on time-to-event outcomes are commonly observed in randomized controlled trials of cancer immunotherapies. When the treatment effect has a delayed onset, the conventional test/estimation approach, which uses the log-rank test for between-group comparison and Cox's hazard ratio to quantify the treatment effect, can be suboptimal: the log-rank test may lack power in such scenarios, and the interpretation of the hazard ratio is often ambiguous. Recently, alternative test/estimation approaches have been proposed to address these limitations. One is based on the long-term restricted mean survival time (LT-RMST); another is based on the average hazard with survival weight (AH-SW). This paper integrates these two concepts and introduces a novel long-term average hazard (LT-AH) with survival weight for both hypothesis testing and estimation. Numerical studies highlight specific scenarios in which the proposed LT-AH method achieves higher power than the existing alternatives. The LT-AH for each group can be estimated nonparametrically, and the proposed between-group comparison maintains test/estimation coherency. Because the difference and ratio of LT-AH do not rely on model assumptions about the relationship between the two groups, the LT-AH approach provides a robust framework for estimating the magnitude of between-group differences. Furthermore, the LT-AH allows treatment effects to be quantified in both absolute (difference in LT-AH) and relative (ratio of LT-AH) terms, aligning with guideline recommendations and addressing practical needs. Given its interpretability and improved power in certain settings, the proposed LT-AH approach offers a useful alternative to conventional hazard-based methods, particularly when delayed treatment effects are expected.
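As a rough illustration only (not the paper's implementation), a survival-weighted average hazard restricted to a long-term window [tau1, tau2] can be written as (S(tau1) - S(tau2)) divided by the integral of S(t) over [tau1, tau2], i.e., events accrued in the window over the survival-weighted window length. The window choice, the toy data, and all function names below are illustrative assumptions; the sketch estimates this quantity nonparametrically from a Kaplan-Meier curve.

```python
# Hedged sketch: a nonparametric window-restricted average hazard with
# survival weight, computed from a hand-rolled Kaplan-Meier estimate.
# The window [tau1, tau2] and the data are illustrative assumptions.

def kaplan_meier(times, events):
    """Return step-function points (t, S(t)) of the Kaplan-Meier estimate."""
    s = 1.0
    points = [(0.0, 1.0)]
    for t in sorted(set(times)):
        at_risk = sum(1 for u in times if u >= t)
        deaths = sum(1 for u, e in zip(times, events) if u == t and e == 1)
        if at_risk > 0 and deaths > 0:
            s *= 1.0 - deaths / at_risk
            points.append((t, s))
    return points

def surv_at(points, t):
    """S(t) from the right-continuous Kaplan-Meier step function."""
    s = 1.0
    for u, v in points:
        if u <= t:
            s = v
        else:
            break
    return s

def windowed_survival_area(points, tau1, tau2):
    """Integral of S(t) over [tau1, tau2] (area under the KM step curve)."""
    grid = [tau1] + [u for u, _ in points if tau1 < u < tau2] + [tau2]
    return sum(surv_at(points, a) * (b - a) for a, b in zip(grid, grid[1:]))

def lt_ah(times, events, tau1, tau2):
    """(S(tau1) - S(tau2)) / integral_{tau1}^{tau2} S(t) dt."""
    pts = kaplan_meier(times, events)
    return (surv_at(pts, tau1) - surv_at(pts, tau2)) / windowed_survival_area(pts, tau1, tau2)
```

Applied to each arm separately, the difference `lt_ah(trt...) - lt_ah(ctrl...)` and the ratio `lt_ah(trt...) / lt_ah(ctrl...)` give the absolute and relative summaries referred to in the abstract, without any model assumption linking the two groups.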