Abstract
K-fold cross-validation is a widely used technique for estimating the generalisation performance of supervised machine learning models. However, the effect of the number of folds (k) on bias-variance behaviour across models and datasets is not fully understood. This study examines how varying k from 3 to 20 relates to bias and variance estimates for four classification algorithms, Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), and k-Nearest Neighbours (KNN), evaluated on twelve datasets of varying sizes. We operationalise bias as the difference between the mean cross-validated training accuracy and the held-out test accuracy, and variance as the variability of accuracy across folds. Across all algorithms and datasets considered, variance increased as k grew, indicating that larger k values can yield less stable fold-to-fold estimates in our setting. Bias trends were algorithm- and dataset-dependent: KNN and SVM most frequently showed upward bias with increasing k, whereas DT was comparatively balanced and LR showed mixed patterns. These findings, while limited to the models, metrics, and datasets studied, suggest that fixed default choices of k (e.g., 5 or 10) may not be universally optimal. We provide code and data preprocessing scripts to enable full replication and encourage further investigation into adaptive, model- and data-sensitive validation strategies.
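The following is a minimal sketch, not the authors' released code, of how the bias and variance measures described above could be computed; it assumes scikit-learn, and the estimator, dataset, fold count, and use of the sample standard deviation as the measure of fold-to-fold variability are all illustrative choices.

```python
# Illustrative sketch of the bias/variance operationalisation described in the
# abstract. The dataset (breast cancer), model (KNN), and k=10 are placeholders;
# the study sweeps k from 3 to 20 over four algorithms and twelve datasets.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = KNeighborsClassifier()
k = 10  # number of folds (assumed value for illustration)

# Cross-validate on the development split, keeping per-fold training scores.
cv = cross_validate(model, X_dev, y_dev, cv=k, return_train_score=True)

# Held-out test accuracy of the model refit on the full development split.
test_acc = model.fit(X_dev, y_dev).score(X_test, y_test)

# Bias: mean cross-validated training accuracy minus held-out test accuracy.
bias = cv["train_score"].mean() - test_acc

# Variance: fold-to-fold variability of validation accuracy (here, the sample
# standard deviation; the paper's exact dispersion measure may differ).
variance = cv["test_score"].std(ddof=1)

print(f"k={k}: bias={bias:.4f}, variance={variance:.4f}")
```

Under this operationalisation, sweeping k simply repeats the `cross_validate` call for each fold count and records the two quantities per algorithm-dataset pair.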