Large-scale convolutional neural network for clinical target and multi-organ segmentation in gynecologic brachytherapy via multi-stage learning


Abstract

BACKGROUND: Accurate segmentation of the high-risk clinical target volume (HRCTV) and organs-at-risk (OARs) is crucial for optimizing gynecologic brachytherapy treatment planning. However, performing this segmentation on computed tomography (CT) images remains particularly challenging due to anatomical variability, limited soft-tissue contrast, and the scarcity of annotated datasets. Compared with other radiotherapy domains, CT-based gynecologic brachytherapy segmentation is notably underrepresented in benchmarking studies.

PURPOSE: This study aims to improve segmentation of the HRCTV and OARs in gynecologic brachytherapy by introducing GynBTNet, a multi-stage learning framework. Through large-scale self-supervised pretraining and progressive fine-tuning, the model is designed to enhance anatomical representation learning and adapt effectively to domain-specific gynecologic structures, addressing the challenges of limited training data and complex anatomical variability.

METHODS: GynBTNet employs a three-stage training strategy: (1) self-supervised pretraining on large-scale CT datasets using sparse submanifold convolutions to capture robust anatomical representations, (2) supervised fine-tuning on a multi-organ segmentation dataset to refine feature extraction, and (3) task-specific fine-tuning on the gynecologic brachytherapy dataset to optimize segmentation performance for clinical application. In the third stage, 116 cases were used for training and 29 cases were reserved for independent testing. The model was evaluated against state-of-the-art methods using the Dice Similarity Coefficient (DSC), 95th-percentile Hausdorff Distance (HD95%), and Average Surface Distance (ASD). Overall statistical significance across models was assessed with the Friedman test; post hoc pairwise comparisons used two-tailed paired permutation tests, with multiple-comparison correction via the Benjamini-Hochberg procedure to control the false discovery rate.
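The three evaluation metrics named above (DSC, HD95%, ASD) are all computable from binary segmentation masks. A minimal NumPy/SciPy sketch follows; the function names and the surface-extraction approach are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def surface_distances(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric voxel-to-surface distances between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    # surface voxels: foreground voxels removed by one erosion step
    a_surf = a ^ ndimage.binary_erosion(a)
    b_surf = b ^ ndimage.binary_erosion(b)
    # distance from every voxel to the nearest surface voxel of the other mask
    dt_to_b = ndimage.distance_transform_edt(~b_surf, sampling=spacing)
    dt_to_a = ndimage.distance_transform_edt(~a_surf, sampling=spacing)
    return np.concatenate([dt_to_b[a_surf], dt_to_a[b_surf]])

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile Hausdorff Distance."""
    return np.percentile(surface_distances(a, b, spacing), 95)

def asd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average Surface Distance."""
    return surface_distances(a, b, spacing).mean()
```

In practice, `spacing` should be set to the CT voxel spacing in millimeters so that HD95% and ASD are reported in physical units.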
Cohen's d effect sizes were calculated to quantify performance differences.

RESULTS: GynBTNet was consistently superior to nnU-Net across all structures and compared favorably with Swin-UNETR overall. The most substantial improvement was in HRCTV segmentation, where GynBTNet achieved a DSC of 0.837 ± 0.068, significantly higher than nnU-Net (p < 0.05) with a large effect size of +1.25, and superior to Swin-UNETR with a moderate-to-large effect size of +0.57. Boundary precision for the HRCTV also improved significantly, with effect sizes of -0.81 (vs. nnU-Net) and -0.52 (vs. Swin-UNETR) for HD95%, and -1.20 and -0.61 for ASD. For bladder segmentation, GynBTNet reached a DSC of 0.940 ± 0.052, significantly outperforming nnU-Net (p < 0.05) with a large effect size of +1.28 and showing a small advantage over Swin-UNETR (effect size +0.26). In rectum segmentation, GynBTNet achieved a DSC of 0.842 ± 0.070, significantly exceeding nnU-Net (p < 0.05) with a large effect size of +1.17 and surpassing Swin-UNETR with an effect size of +0.54. For the uterus, GynBTNet significantly improved boundary accuracy over both nnU-Net and Swin-UNETR (p < 0.05), with ASD effect sizes of -0.99 and -0.64, respectively. Segmentation of the sigmoid colon remained challenging: GynBTNet provided only marginal DSC gains over nnU-Net, with negligible effect sizes.

CONCLUSIONS: The proposed multi-stage learning strategy effectively enhances segmentation accuracy for gynecologic brachytherapy by leveraging large-scale self-supervised pretraining and progressive fine-tuning. By improving HRCTV and OAR delineation, GynBTNet has the potential to enhance treatment-planning precision, minimize radiation exposure to critical structures, and improve patient outcomes.
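The statistical pipeline described in the methods combines Benjamini-Hochberg correction of pairwise p-values with paired Cohen's d effect sizes. A short sketch of both, with illustrative helper names (the paper's actual implementation is not shown):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Step-up Benjamini-Hochberg adjusted p-values (q-values)."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # rank-scaled p-values: p_(i) * m / i for the i-th smallest p
    scaled = p[order] * m / np.arange(1, m + 1)
    # enforce monotonicity from the largest p-value downward
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(m)
    out[order] = np.clip(adjusted, 0.0, 1.0)
    return out

def cohens_d_paired(x, y):
    """Paired Cohen's d: mean per-case difference over its sample SD."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return diff.mean() / diff.std(ddof=1)
```

Here `x` and `y` would be per-case metric values (e.g. DSC) for two models on the same test set; a positive d on DSC favors `x`, while negative values are the goal for distance metrics such as HD95% and ASD.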
