Abstract
Deep learning (DL) models are widely adopted in biomedical imaging, where image segmentation is increasingly recognized as a quantitative tool for extracting clinically meaningful information. However, model performance critically depends on dataset size and training configuration, including model capacity. Traditional sample size estimation methods are inadequate for DL due to its reliance on high-dimensional data and its nonlinear learning behavior. To address this gap, we propose a DL-specific framework to estimate the minimal dataset size required for stable segmentation performance. We validate this framework across two distinct clinical tasks: colorectal polyp segmentation from 2D endoscopic images (Kvasir-SEG) and glioma segmentation from 3D brain MRIs (BraTS 2020). We trained residual U-Nets (a simple yet foundational architecture) across 200 configurations for Kvasir-SEG and 40 configurations for BraTS 2020, varying data subsets (2%-100% for the 2D task and 5%-100% for the 3D task). In both tasks, performance metrics such as the Dice Similarity Coefficient (DSC) consistently improved with increasing data and depth, but gains invariably plateaued beyond approximately 80% data usage. The best configuration for polyp segmentation (6 layers, 100% data) achieved a DSC of 0.86, while the best for brain tumor segmentation reached a DSC of 0.79. Critically, we introduce a surrogate modeling pipeline using Long Short-Term Memory (LSTM) networks to predict these performance curves. A simple uni-directional LSTM model accurately forecast the final DSC with low mean absolute error across both tasks. These findings demonstrate that segmentation performance can be reliably estimated with lightweight models, suggesting that collecting a moderate amount of high-quality data is often sufficient for developing clinically viable DL models.
Our framework provides a practical, empirical method for optimizing resource allocation in medical AI development.