Abstract
The Benchmark Dose (BMD) approach is commonly used to determine Point-of-Departure (PoD) values for risk assessment and regulatory decision-making; however, choosing a suitable Benchmark Response (BMR) for continuous endpoints remains a challenge. Earlier work established a BMR of 50% for selected in vivo mutagenicity endpoints (i.e., the Transgenic Rodent and Pig-a assays). Error-corrected sequencing (ECS) technologies, such as Duplex Sequencing (DupSeq), Hawk-Seq, PECC-Seq, and PacBio HiFi, have emerged as powerful tools for mutagenicity assessment. This study applied and compared two approaches for defining BMR values for ECS technologies: the Effect Size (ES) theory of Slob (2017) and the one-standard-deviation approach of Zeller et al. (2017). A dose-response database of ECS studies was compiled to determine technology-specific within-group variance (var) values for BMR determination. The influence of experimental factors on var, including species, rodent strain, administration route, application time, tissue type, tissue sampling time, and DNA fragmentation method, was examined; no significant influences were detected. The absence of covariate effects justified using typical, technology-specific var values for BMR determinations. Using these values, technology-specific BMRs were calculated as 27.7% for DupSeq, 16.6% for Hawk-Seq, and 23.3% for PECC-Seq. BMRs derived from negative control values ranged from 22.6 to 28.8% for DupSeq, 5.6 to 13.8% for Hawk-Seq, 28.7 to 31.5% for PECC-Seq, and 9.5 to 22.8% for PacBio HiFi. These findings support the adoption of a 30% BMR for in vivo ECS-based mutagenicity assessment, providing a robust and consistent foundation for future dose-response modeling and human health risk assessment.
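To illustrate how the two BMR definitions summarized above might translate into calculations, the Python sketch below shows one plausible form of each. It is a minimal sketch under stated assumptions: the functional forms, the scaling constant b, the function names, and the example inputs are illustrative assumptions, not the study's actual formulas, parameter choices, or data.

```python
import math

def bmr_es_theory(within_group_var, b=1.0):
    """Sketch of an effect-size-based BMR in the spirit of Slob (2017).

    Assumes the illustrative relation BMR = exp(b * sigma) - 1, where
    sigma is the within-group SD on the natural-log scale and b is an
    assumed scaling constant; both the functional form and the default
    b are assumptions for demonstration, not the paper's values.
    """
    sigma = math.sqrt(within_group_var)
    return (math.exp(b * sigma) - 1.0) * 100.0  # BMR as a percent change

def bmr_one_sd(control_mean, control_sd):
    """Sketch of a 1-SD BMR in the spirit of Zeller et al. (2017):
    one negative-control SD expressed as a percent change relative to
    the negative-control mean (assumed formulation)."""
    return control_sd / control_mean * 100.0

# Illustrative calls with made-up inputs (not data from the study):
print(f"ES-theory BMR: {bmr_es_theory(0.06):.1f}%")  # ~27.8%
print(f"1-SD BMR:      {bmr_one_sd(12.0, 3.0):.1f}%")  # 25.0%
```

Under either formulation, the BMR scales with the within-group variability of the endpoint, which is why the study's finding of no significant covariate effects on var supports using a single, technology-specific value.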