Toward the Best Generalizable Performance of Machine Learning in Modeling Omic and Clinical Data


Abstract

There are often performance differences between intra-data set and cross-data set tests in machine learning (ML) modeling. However, reducing these differences may reduce ML performance. Developing models that both excel in intra-data set testing and generalize to cross-data set testing is therefore a challenging dilemma. We thus aimed to understand and improve the performance and generalizability of ML in intra-data set and cross-data set testing. We evaluated 4200 ML models of classifying lung adenocarcinoma deaths using The Cancer Genome Atlas (n = 286) and Oncogenomic-Singapore (n = 167) data sets and 1680 models of classifying glioblastoma deaths using The Cancer Genome Atlas (n = 151) and Clinical Proteomic Tumor Analysis Consortium (n = 97) data sets. After examining the performance distributions of these ML models, we applied a dual analytical framework, comprising statistical analyses and a SHapley Additive exPlanations-based meta-analysis, to quantify the importance of design factors and trace model success back to design principles. We also developed a framework to identify the best generalizable model. Strikingly, the Jarque-Bera test revealed significant deviations of model performance from normality in both cancer types and testing contexts. Simple linear models with sparse feature sets consistently dominated in lung adenocarcinoma experiments, whereas nonlinear models dominated in glioblastoma ones, suggesting that the best modeling strategy is cancer type/disease dependent. Importantly, both robust analysis of variance and Kruskal-Wallis tests consistently identified differentially expressed genes as one of the most influential factors in both cancer types. The proposed multicriteria framework successfully identified the model that achieved both the best cross-data set performance and similar intra-data set performance.
In summary, ML performance distributions significantly deviated from normality, which motivates using both robust parametric and nonparametric statistical tests. We quantified the factors associated with the cross-data set performance and generalizability of ML models in two cancer types and showed how these factors might be exploited. A multicriteria framework was developed and validated to identify models that are accurate and consistently robust across data sets.
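The statistical workflow described above (test performance scores for normality, then fall back to a nonparametric group comparison when normality is rejected) can be sketched as follows. This is an illustrative sketch on synthetic data, not the paper's actual pipeline: the scores, group labels, and effect sizes below are invented, whereas the study applied these tests to the performances of its 4200 and 1680 trained models.

```python
# Hedged sketch: normality testing of model performance scores followed by a
# nonparametric comparison across a design factor. All data here are synthetic
# placeholders for the per-model performance scores described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic AUC-like performance scores for a pool of models; a Beta
# distribution is used here purely to produce a skewed, non-normal sample.
scores = rng.beta(a=8, b=2, size=500)

# Jarque-Bera test: a small p-value indicates significant deviation from
# normality, motivating nonparametric (or robust parametric) follow-up tests.
jb_stat, jb_p = stats.jarque_bera(scores)
print(f"Jarque-Bera: statistic={jb_stat:.2f}, p={jb_p:.4g}")

# Kruskal-Wallis test comparing performance across two hypothetical levels of
# a design factor (e.g., DEG-based vs. all-gene feature sets); it makes no
# normality assumption, unlike a classical one-way ANOVA.
group_deg = rng.beta(a=8, b=2, size=200)  # hypothetical DEG-based models
group_all = rng.beta(a=6, b=4, size=200)  # hypothetical all-gene models
kw_stat, kw_p = stats.kruskal(group_deg, group_all)
print(f"Kruskal-Wallis: statistic={kw_stat:.2f}, p={kw_p:.4g}")
```

In the study's setting, a significant Kruskal-Wallis result for a factor such as the feature-selection strategy would flag that factor as influential on cross-data set performance.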
