Abstract
Quantifying the uncertainty associated with a QSAR prediction is highly valuable. Conformal regression and Venn-ABERS have emerged as state-of-the-art uncertainty estimation methods for regression and classification QSAR models, respectively. However, their performance is limited when they are applied to compounds sampled from a different distribution from the data used to train the model and/or calibrate their uncertainty estimates. Previous studies have demonstrated this when applying these methods to nonrandom train/test splits, e.g., temporal validation, cluster or scaffold splits. Building on these previous studies, we demonstrate that explicit applicability domain calculations, using only structural similarity, can help determine when these uncertainty estimates are less reliable for molecules encountered after model building. By less reliable, we mean that the uncertainty estimates for out-of-domain predictions are less likely to reflect the empirically observed model residuals (regression) or the probability of observing the predicted class experimentally (classification). After briefly comparing different methods using exemplar data sets, we extensively investigated the implications of computed applicability domain status for uncertainty estimation reliability using a k-nearest neighbors applicability domain approach (nUNC), in combination with Cross-Venn-ABERS Predictors (classification) or Aggregated Conformal Prediction (regression) uncertainty estimation, across a wide range of public data sets. Because they are more representative of real-world applications, we focus on the results obtained on nonrandom test sets: temporal and cluster splits defined in previous modeling studies. We also present results for multiple temporal splits (time-splits) of classification and regression industrial data sets.
In most cases, we found that nUNC could distinguish between molecules for which the uncertainty estimates were, on average, more (inside the domain) versus less (outside the domain) reliable.
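To make the general idea concrete, the following is a minimal, hypothetical sketch of a k-nearest-neighbour applicability domain check based on structural (Tanimoto) similarity. The fingerprint representation (sets of on-bit indices), the value of k, and the similarity threshold are all illustrative assumptions, not the nUNC settings used in the study.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def in_domain(query_fp, train_fps, k=3, threshold=0.35):
    """Flag a query molecule as inside the applicability domain if the mean
    Tanimoto similarity to its k nearest training molecules meets a threshold.
    (k and threshold are illustrative, not the paper's settings.)"""
    sims = sorted((tanimoto(query_fp, fp) for fp in train_fps), reverse=True)
    return sum(sims[:k]) / k >= threshold

# Toy fingerprints: each molecule is a set of on-bit indices.
train = [{1, 2, 3, 4}, {2, 3, 4, 5}, {1, 3, 5, 7}]
print(in_domain({1, 2, 3, 5}, train))    # close analogue -> True
print(in_domain({10, 11, 12}, train))    # dissimilar query -> False
```

Predictions flagged as outside the domain by such a check would be the ones whose conformal or Venn-ABERS uncertainty estimates are expected to be less reliable.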