The impact of K selection in K‑fold cross-validation on bias and variance in supervised learning models



Abstract

K-fold cross-validation is a widely used technique for estimating the generalisation performance of supervised machine learning models. However, the effect of the number of folds (k) on bias-variance behaviour across models and datasets is not fully understood. This study examines how varying k from 3 to 20 relates to estimates of bias and variance across four classification algorithms, evaluated on twelve datasets of varying sizes. These four algorithms are Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), and k-Nearest Neighbours (KNN). We operationalise bias as the difference between the mean cross-validated training accuracy and the held-out test accuracy, and variance as the variability of accuracy across folds. Across all algorithms and datasets considered, variance increased as k grew, indicating that larger k values can yield less stable fold-to-fold estimates in our setting. Bias trends were algorithm- and dataset-dependent: KNN and SVM most frequently showed upward bias with increasing k, whereas DT was comparatively balanced, and LR showed mixed patterns. These findings, while limited to the models, metrics, and datasets studied, suggest that default choices of a fixed k (e.g., 5 or 10) may not be universally optimal. We provide code and data preprocessing scripts to enable full replication and encourage further investigation into adaptive, model- and data-sensitive validation strategies.
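The bias/variance operationalisation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: it uses synthetic two-class Gaussian data, a simple nearest-centroid classifier standing in for the four studied models, and per-fold accuracy as the cross-validated accuracy; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_fit_predict(X_tr, y_tr, X_te):
    """Train a nearest-centroid classifier on (X_tr, y_tr); predict labels for X_te."""
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def kfold_bias_variance(X, y, X_test, y_test, k):
    """Return (bias, variance) for a given k, following the abstract's definitions:
    bias     = mean cross-validated accuracy minus held-out test accuracy,
    variance = fold-to-fold variability (sample std) of accuracy."""
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        val = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        pred = nearest_centroid_fit_predict(X[tr], y[tr], X[val])
        accs.append((pred == y[val]).mean())
    accs = np.array(accs)
    test_acc = (nearest_centroid_fit_predict(X, y, X_test) == y_test).mean()
    return accs.mean() - test_acc, accs.std(ddof=1)

# Synthetic stand-in dataset: two Gaussian blobs per split.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X_test = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_test = np.array([0] * 50 + [1] * 50)

for k in (3, 5, 10, 20):
    b, v = kfold_bias_variance(X, y, X_test, y_test, k)
    print(f"k={k:2d}  bias={b:+.3f}  variance={v:.3f}")
```

Sweeping k in this fashion, and recording the bias and variance estimates at each value, mirrors the study's experimental design at a toy scale.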
