Abstract
Background/Objectives: Results from a well-designed trial provide evidence to support the approval of truly effective treatments or the discontinuation of ineffective ones. However, the information available at the time of trial design may be limited, which can lead to underpowered trials. This work aims to evaluate the impact of design assumption misspecifications on the statistical power of randomized trials with survival outcomes. Methods: The impact of design assumption misspecifications on the statistical power of four statistical methods was investigated in a simulation study. The methods were the log-rank test, the MaxCombo test, the test of the difference in survival probability, and the test of the difference in restricted mean survival time (RMST). The deviations considered concern the survival rate in the control arm, the magnitude and pattern of the expected treatment effect, the accrual rate, and the drop-out rate. Results: Deviations in the control arm's survival distribution have no impact on the power of the log-rank and MaxCombo tests, but they affect the trial duration, since trials designed with these tests require the total number of events to be reached before the final analysis can be conducted. A misspecified treatment effect has a similar effect on the statistical power of all four methods. When the proportional hazards assumption is misspecified, the RMST test is more robust under a larger early treatment effect, while the survival probability and MaxCombo tests are more robust under a larger late treatment effect and crossing hazards. Conclusions: The choice of statistical test for designing a trial depends on the goal of the trial, the mechanism of action of the experimental treatment, the survival quantity of clinical interest, and the pattern of the expected treatment effect.
The final design should be based on assumptions that are as accurate as possible, and the potential impacts of deviations from these assumptions on the trial's statistical power should be carefully considered.
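The power loss caused by a misspecified treatment effect can be made concrete with a small numerical sketch. The example below (not taken from the paper; the hazard ratios, alpha, and target power are hypothetical) uses Schoenfeld's event-count formula for a log-rank-based design: the trial is sized assuming one hazard ratio, and the achieved power is then recomputed under weaker true hazard ratios.

```python
# Illustrative sketch, assuming a two-sided log-rank test sized with
# Schoenfeld's formula. All design numbers (HR = 0.7, alpha = 0.05,
# 80% target power, 1:1 allocation) are hypothetical examples.
from math import ceil, log, sqrt
from statistics import NormalDist

N = NormalDist()  # standard normal distribution

def required_events(hr_assumed, alpha=0.05, power=0.80, p=0.5):
    """Schoenfeld's formula: events needed to detect hr_assumed with a
    two-sided log-rank test, where p is the allocation fraction."""
    za, zb = N.inv_cdf(1 - alpha / 2), N.inv_cdf(power)
    return ceil((za + zb) ** 2 / (p * (1 - p) * log(hr_assumed) ** 2))

def achieved_power(events, hr_true, alpha=0.05, p=0.5):
    """Approximate power of the log-rank test at the planned number of
    events when the true hazard ratio is hr_true."""
    za = N.inv_cdf(1 - alpha / 2)
    return N.cdf(sqrt(events * p * (1 - p)) * abs(log(hr_true)) - za)

d = required_events(0.7)       # design assumes HR = 0.7 -> ~247 events
for hr in (0.70, 0.75, 0.80):  # power if the true effect is weaker
    print(f"true HR {hr:.2f}: power {achieved_power(d, hr):.2f}")
```

Under these assumed numbers, a trial sized for HR = 0.7 retains its nominal 80% power only if that effect holds; a true HR of 0.75 or 0.80 drops the power to roughly 62% and 42%, illustrating why the expected treatment effect is the most consequential design assumption.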