Automatic stent recognition using perceptual attention U-net for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy



Abstract

PURPOSE: Stents have often been used as internal surrogates to monitor intrafraction tumor motion during pancreatic cancer radiotherapy. Based on the stent contours generated from planning CT images, the current intrafraction motion review (IMR) system on the Varian TrueBeam only provides a tool to verify stent motion visually but lacks quantitative information. The purpose of this study is to develop an automatic stent recognition method for quantitative intrafraction tumor motion monitoring in pancreatic cancer treatment. METHODS: A total of 535 IMR images from 14 pancreatic cancer patients were retrospectively selected for this study, with the manual contour of the stent on each image serving as the ground truth. We developed a deep learning-based approach, the perceptual attention U-net (PAUnet), which integrates two mechanisms that focus on the features of the segmentation target. Objective attention modeling was integrated into the U-net framework to address the optimization difficulties of training a deep network with 2D IMR images and limited training data. A perceptual loss was combined with a binary cross-entropy loss and a Dice loss for supervision. The deep neural network was trained to capture more contextual information and predict binary stent masks. A random-split test was performed, with images from ten patients (71%, 380 images) randomly selected for training and the remaining four patients (29%, 155 images) used for testing. Sevenfold cross-validation of the proposed PAUnet on the 14 patients was performed for further evaluation. RESULTS: Our stent segmentation results were compared with the manually segmented contours. For the random-split test, the trained model achieved a mean (±standard deviation) stent Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), center-of-mass distance (CMD), and volume difference (Voldiff) of 0.96 (±0.01), 1.01 (±0.55) mm, 0.66 (±0.46) mm, and 3.07% (±2.37%), respectively.
Sevenfold cross-validation of the proposed PAUnet yielded mean (±standard deviation) values of 0.96 (±0.02), 0.72 (±0.49) mm, 0.85 (±0.96) mm, and 3.47% (±3.27%) for the DSC, HD95, CMD, and Voldiff, respectively. CONCLUSION: We developed a novel deep learning-based approach to automatically segment the stent from IMR images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for quantitative intrafraction motion monitoring in pancreatic cancer radiotherapy.
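The supervision described in METHODS combines a perceptual loss with binary cross-entropy and Dice losses. A minimal NumPy sketch of such a combined objective is shown below; the `feature_fn` argument (standing in for a pretrained feature extractor), the loss weights, and the formulation itself are illustrative assumptions, since the paper's actual network and weighting are not given in the abstract.

```python
import numpy as np

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy between a predicted probability mask and a binary mask."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss: 1 minus the Dice similarity coefficient."""
    inter = np.sum(pred * target)
    return float(1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def perceptual_loss(pred, target, feature_fn):
    """Mean squared error in a feature space; feature_fn is a placeholder
    for a pretrained feature extractor (an assumption, not the paper's model)."""
    fp, ft = feature_fn(pred), feature_fn(target)
    return float(np.mean((fp - ft) ** 2))

def combined_loss(pred, target, feature_fn, w_bce=1.0, w_dice=1.0, w_perc=0.1):
    """Weighted sum of the three terms; the weights here are illustrative."""
    return (w_bce * bce_loss(pred, target)
            + w_dice * dice_loss(pred, target)
            + w_perc * perceptual_loss(pred, target, feature_fn))

# Tiny usage example on a 2x2 mask (identity used as a stand-in feature_fn).
target = np.array([[1.0, 0.0], [0.0, 1.0]])
pred = np.array([[0.99, 0.01], [0.01, 0.99]])
loss = combined_loss(pred, target, lambda x: x)
```

A near-perfect prediction drives all three terms toward zero, so the combined loss is small; in training, the Dice term counteracts the class imbalance of a thin stent against a large background, while the perceptual term rewards agreement in higher-level image structure.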
