Abstract
Performance differences between intra-data set and cross-data set tests are common in machine learning (ML) modeling, yet reducing these differences may reduce ML performance. Developing models that excel in intra-data set testing while remaining generalizable to cross-data set testing is therefore a challenging dilemma. We thus aimed to understand and improve the performance and generalizability of ML in intra-data set and cross-data set testing. We evaluated 4200 ML models classifying lung adenocarcinoma deaths using The Cancer Genome Atlas (n = 286) and Oncogenomic-Singapore (n = 167) data sets and 1680 models classifying glioblastoma deaths using The Cancer Genome Atlas (n = 151) and Clinical Proteomic Tumor Analysis Consortium (n = 97) data sets. After examining the performance distributions of these models, we applied a dual analytical framework, combining statistical analyses with a SHapley Additive exPlanations-based meta-analysis, to quantify the importance of modeling factors and trace model success back to design principles. We also developed a framework to identify the most generalizable model. Strikingly, the Jarque-Bera test revealed significant deviations of model performance from normality in both cancer types and both testing contexts. Simple linear models with sparse feature sets consistently dominated the lung adenocarcinoma experiments, whereas nonlinear models dominated the glioblastoma experiments, suggesting that the best modeling strategy depends on the cancer type or disease. Importantly, both robust analysis of variance and Kruskal-Wallis tests consistently identified differentially expressed genes as one of the most influential factors in both cancer types. The proposed multicriteria framework successfully identified the model that achieved the best cross-data set performance together with comparable intra-data set performance. In summary, ML performance distributions deviated significantly from normality, motivating the use of both robust parametric and nonparametric statistical tests. We quantified the factors associated with the cross-data set performance and generalizability of ML models in 2 cancer types and showed how they can be exploited. A multicriteria framework was developed and validated to identify models that are accurate and consistently robust across data sets.
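The abstract names two of the statistical procedures concretely: a Jarque-Bera test for non-normality of model-performance distributions, followed by a nonparametric Kruskal-Wallis comparison across levels of a modeling factor. The sketch below is a minimal, hypothetical illustration of that two-step logic, not the study's code: the AUC values, group labels, and sample sizes are synthetic placeholder assumptions.

```python
# Minimal sketch (synthetic data, not the study's code) of the abstract's
# statistical workflow: test performance distributions for departure from
# normality, then compare performance across factor levels nonparametrically.
import numpy as np
from scipy.stats import jarque_bera, kruskal

rng = np.random.default_rng(0)

# Hypothetical cross-data set AUCs for models grouped by feature-selection
# strategy (e.g., differentially expressed genes vs. an all-gene panel).
auc_deg = rng.beta(8, 3, size=200)   # models built on DEG feature sets
auc_full = rng.beta(6, 4, size=200)  # models built on all-gene feature sets

# Jarque-Bera: small p-values indicate deviation from normality, which
# motivates robust parametric and nonparametric follow-up tests.
for name, auc in [("DEG", auc_deg), ("all-gene", auc_full)]:
    stat, p = jarque_bera(auc)
    print(f"{name}: JB = {stat:.2f}, p = {p:.3g}")

# Kruskal-Wallis: rank-based comparison of performance across factor levels.
h, p = kruskal(auc_deg, auc_full)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.3g}")
```

In the study's setting, the same two steps would presumably be repeated per cancer type and per testing context (intra-data set and cross-data set).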