A Combined Loss-driven Framework for Automated Parotid Segmentation in Head-and-Neck Computed Tomography


Abstract

PURPOSE: This study presents a deep learning framework for automatic parotid segmentation using three-dimensional (3D) U-Net and attention-augmented 3D U-Net architectures trained with a novel combined loss function tailored for anatomical accuracy and class imbalance. MATERIALS AND METHODS: A curated dataset of 379 noncontrast head-and-neck computed tomography scans with expert-verified contours was used. Two architectures, a residual 3D U-Net and its attention-enhanced variant, were implemented in TensorFlow. The networks were trained with both categorical cross-entropy and a proposed combined loss integrating a modified Dice Score Coefficient (mDSC) term and focal loss (FL), weighted 0.7 and 0.3, respectively. The models were evaluated using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and categorical accuracy. A custom checkpointing strategy was designed to preserve the model weights corresponding to both peak DSC and minimum validation loss. The code and pretrained models are hosted in a publicly available GitHub repository: https://github.com/1aryantyagi/Segmentation-Paper. RESULTS: The 3D U-Net trained with the combined loss achieved median Dice scores of 0.8835 (left parotid) and 0.8709 (right parotid), with mean IoU values of 0.7672 and 0.7358, indicating strong segmentation accuracy. The attention-enhanced variant produced comparable results, supporting the consistency of the combined loss. Bland-Altman analysis confirmed reduced variability and improved agreement. CONCLUSION: The integration of mDSC and FL within a 3D U-Net architecture significantly improves segmentation performance, robustness, and spatial precision. These findings support the clinical feasibility of the proposed framework for automated, reproducible parotid delineation in radiotherapy planning.
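The abstract specifies the combined loss only as a 0.7/0.3 weighted sum of an mDSC term and focal loss. The exact mDSC formulation and focal-loss hyperparameters are not given, so the sketch below assumes a standard soft Dice term and a multiclass focal term with gamma = 2 (both assumptions). It is written in NumPy for a self-contained illustration; the authors' implementation uses TensorFlow.

```python
import numpy as np

def dice_loss(y_true, y_pred, smooth=1e-6):
    # Soft Dice loss (1 - Dice), averaged over classes on the last axis.
    # NOTE: stand-in for the paper's mDSC term, whose exact form is not stated.
    axes = tuple(range(y_true.ndim - 1))
    intersection = np.sum(y_true * y_pred, axis=axes)
    union = np.sum(y_true, axis=axes) + np.sum(y_pred, axis=axes)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return 1.0 - dice.mean()

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    # Multiclass focal loss: cross-entropy modulated by (1 - p)^gamma,
    # down-weighting easy voxels to counter class imbalance. gamma is assumed.
    p = np.clip(y_pred, eps, 1.0 - eps)
    return np.mean(np.sum(-y_true * (1.0 - p) ** gamma * np.log(p), axis=-1))

def combined_loss(y_true, y_pred, w_dice=0.7, w_focal=0.3):
    # Weighted sum with the 0.7/0.3 weights stated in the abstract.
    return w_dice * dice_loss(y_true, y_pred) + w_focal * focal_loss(y_true, y_pred)
```

A perfect one-hot prediction drives both terms toward zero, while a poorly calibrated prediction is penalized by both the overlap (Dice) and hard-voxel (focal) components.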
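The dual-criterion checkpointing described in the abstract (preserving weights at both peak validation DSC and minimum validation loss) can be sketched framework-agnostically: track the best value of each monitored quantity and snapshot the weights whenever either improves. The class below is illustrative, not the authors' code; in Keras the same effect is typically obtained with two `tf.keras.callbacks.ModelCheckpoint` callbacks, one monitoring the validation DSC with `mode='max'` and one monitoring the validation loss with `mode='min'`, each with `save_best_only=True`.

```python
class DualCheckpoint:
    """Keeps two snapshots: weights at peak validation DSC and at minimum validation loss."""

    def __init__(self):
        self.best_dsc = -float("inf")
        self.best_loss = float("inf")
        self.weights_at_best_dsc = None
        self.weights_at_best_loss = None

    def update(self, val_dsc, val_loss, weights):
        # Called once per epoch; each criterion is tracked independently,
        # so the two saved snapshots may come from different epochs.
        if val_dsc > self.best_dsc:
            self.best_dsc = val_dsc
            self.weights_at_best_dsc = weights
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.weights_at_best_loss = weights
```

Tracking the two criteria independently matters because a Dice-dominated combined loss can keep falling while the DSC plateaus, so the best-DSC and best-loss epochs need not coincide.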
