Deep Learning for Predicting Late-Onset Breast Cancer Metastasis: The Single-Hyperparameter Grid Search (SHGS) Strategy for Meta-Tuning a Deep Feed-Forward Neural Network


Abstract

Background: While machine learning has advanced in medicine, its widespread adoption in clinical applications, especially for predicting breast cancer metastasis, remains limited. We have been developing a deep feed-forward neural network (DFNN) model to predict breast cancer metastasis n years in advance. The challenge lies in efficiently identifying optimal hyperparameter values through grid search under time and resource constraints: continuous hyperparameters such as L1 and L2 admit infinitely many candidate values, and exhaustive search is time-consuming and costly.

Methods: To address these challenges, we developed the Single-Hyperparameter Grid Search (SHGS) strategy, which serves as a preselection step before grid search. Our SHGS experiments with DFNN models for breast cancer metastasis prediction analyzed eight target hyperparameters: epochs, batch size, dropout, L1, L2, learning rate, decay, and momentum.

Results: We created three figures, each depicting the experimental results obtained from the three LSM-I-10+-year datasets; these figures illustrate the relationship between model performance and the target hyperparameter values. The experiments achieved maximum test AUC scores of 0.770, 0.762, and 0.886 on the 10-year, 12-year, and 15-year datasets, respectively. For each hyperparameter, we analyzed whether changing it affects model performance, whether specific patterns emerge, and how to choose its values.

Conclusions: Our findings reveal that the optimal value of a hyperparameter depends not only on the dataset but also, significantly, on the settings of the other hyperparameters. In addition, our experiments suggest a reduced range of values for each target hyperparameter, which may be helpful for "low-budget" grid search. This approach serves as a foundation for the subsequent use of grid search to enhance model performance.
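The SHGS idea described in the abstract, varying one target hyperparameter over a candidate grid while the other seven stay fixed, then keeping only a narrowed value range for the later full grid search, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the base configuration values are plausible defaults chosen for the example, and `evaluate` is a stub standing in for training a DFNN and computing its test AUC on an LSM-I-10+-year dataset.

```python
import random

# Base configuration: the eight hyperparameters from the study, fixed at
# plausible defaults (assumed values, not the paper's actual settings).
# SHGS varies ONE target hyperparameter at a time while the others stay fixed.
BASE = {
    "epochs": 100, "batch_size": 32, "dropout": 0.2,
    "l1": 0.0, "l2": 0.0, "learning_rate": 0.01,
    "decay": 0.0, "momentum": 0.9,
}

def evaluate(config):
    """Stub for training a DFNN and returning its test AUC.

    In the real study this would train on a breast cancer metastasis
    dataset; here we return a deterministic dummy score in [0.5, 0.9)
    so the sketch is runnable on its own."""
    random.seed(str(sorted(config.items())))
    return 0.5 + 0.4 * random.random()

def shgs(target, candidates, base=BASE, tolerance=0.02):
    """Single-Hyperparameter Grid Search over `target`.

    Returns (value, AUC) pairs for every candidate, plus a shortlist of
    values whose AUC is within `tolerance` of the best -- the reduced
    range that a later low-budget grid search would explore."""
    results = []
    for value in candidates:
        config = dict(base, **{target: value})   # override only the target
        results.append((value, evaluate(config)))
    best_auc = max(auc for _, auc in results)
    shortlist = [v for v, auc in results if auc >= best_auc - tolerance]
    return results, shortlist

results, shortlist = shgs("learning_rate", [1e-4, 1e-3, 1e-2, 1e-1])
```

Because the other hyperparameters are held at one base setting, the shortlist reflects that setting; the abstract's conclusion that optimal values depend on the other hyperparameters is exactly why SHGS is framed as a preselection step rather than a replacement for grid search.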
