Abstract
The lag phase is a temporary, nonreplicative period observed when a microbial population is introduced into a new, nutrient-rich environment. Although the theoretical concept of growth phases is clear, applying methods to estimate lag duration in practice is often challenging. Moreover, existing methods rest on two distinct assumptions: (i) that cells do not divide at all during the lag phase, or (ii) that they divide, but at a suboptimal rate. The choice of method should therefore consider not only technical limitations but also consistency with the biological context. Here, we evaluate the performance of the most common lag estimation methods using empirical and simulated datasets. We consider different biological scenarios and simulate curves with varying parameters (i.e. growth rate, noise level, and measurement frequency) to test their impact on the estimated lag phase duration. Our validation shows that infrequent measurements, low growth rates, longer lag phases, and higher levels of measurement noise all increase both the bias and the variance of lag estimates. Additionally, in the case of noisy data, methods relying on model fitting perform best.
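The workflow summarized above can be illustrated with a minimal sketch (not taken from the paper): a growth curve with a known lag is simulated from the Baranyi-Roberts model, measurement noise is added, and the lag is then estimated with the classical tangent method (extrapolating the steepest tangent of the log-density curve back to the initial density). All parameter values below are illustrative assumptions.

```python
import numpy as np

def baranyi(t, y0, ymax, mu, lam):
    """Baranyi-Roberts model for log population density.

    y0, ymax: initial and maximal log density; mu: max specific
    growth rate; lam: lag duration (same time units as t).
    """
    # adjustment function A(t) encoding the lag phase
    a = t + (1.0 / mu) * np.log(
        np.exp(-mu * t) + np.exp(-mu * lam) - np.exp(-mu * (t + lam))
    )
    # logistic-type saturation toward ymax
    return y0 + mu * a - np.log(1 + (np.exp(mu * a) - 1) / np.exp(ymax - y0))

def tangent_lag(t, logy):
    """Tangent method: intersect the steepest tangent with the
    initial log density and return the corresponding time."""
    slopes = np.gradient(logy, t)
    i = int(np.argmax(slopes))
    return t[i] - (logy[i] - logy[0]) / slopes[i]

rng = np.random.default_rng(0)
t = np.linspace(0, 24, 97)                # 15-min sampling over 24 h
true_lag = 4.0                            # hours (illustrative)
y = baranyi(t, y0=np.log(1e6), ymax=np.log(1e9), mu=0.8, lam=true_lag)
y_noisy = y + rng.normal(0, 0.02, t.size) # additive measurement noise

print(f"estimated lag: {tangent_lag(t, y_noisy):.2f} h")
```

Varying the sampling grid (`t`), the growth rate `mu`, or the noise standard deviation in this sketch reproduces the kind of sensitivity analysis described in the abstract: sparser sampling, slower growth, or noisier readings degrade the lag estimate.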