Deep learning model applied to real-time delineation of colorectal polyps


Abstract

BACKGROUND: Deep learning models have shown considerable potential to improve diagnostic accuracy across medical fields. Although YOLACT has demonstrated real-time detection and segmentation on non-medical datasets, its application in medical settings remains underexplored. This study evaluated the performance of a YOLACT-derived Real-time Polyp Delineation Model (RTPoDeMo) for real-time use on prospectively recorded colonoscopy videos.

METHODS: Twelve combinations of architectures, including Mask R-CNN, YOLACT, and YOLACT++, paired with backbones such as ResNet50, ResNet101, and DarkNet53, were tested on 2,188 colonoscopy images at three image resolutions. Dataset preparation involved pre-processing and segmentation annotation, with optimized image augmentation.

RESULTS: RTPoDeMo, using YOLACT-ResNet50, achieved 72.3 mAP and 32.8 FPS for real-time instance segmentation based on COCO annotations. The model performed with a per-image accuracy of 99.59% (95% CI: [99.45-99.71%]), sensitivity of 90.63% (95% CI: [78.95-93.64%]), specificity of 99.95% (95% CI: [99.93-99.97%]), and an F1-score of 0.94 (95% CI: [0.87-0.98]). In validation, of 36 polyps detected by experts, RTPoDeMo missed only one, compared with six missed by senior endoscopists. The model demonstrated good agreement with experts, reflected by a Cohen's kappa coefficient of 0.72 (95% CI: [0.54-1.00], p < 0.0001).

CONCLUSIONS: Our model provides new perspectives on the adaptation of YOLACT to the real-time delineation of colorectal polyps. In the future, it could improve the characterization of polyps to be resected during colonoscopy.
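The per-image metrics reported above (accuracy, sensitivity, specificity, F1) and Cohen's kappa are all derived from confusion-matrix counts. The sketch below illustrates those standard formulas in Python; the counts used are hypothetical examples, not the study's data.

```python
# Illustrative sketch of the standard confusion-matrix metrics named in the
# abstract. The counts below are hypothetical, not taken from the study.

def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and F1 from raw counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall on the positive (polyp) class
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

def cohens_kappa(tp, fp, tn, fn):
    """Agreement between two raters (e.g. model vs. expert) beyond chance."""
    total = tp + fp + tn + fn
    observed = (tp + tn) / total
    # Expected agreement if the two raters labeled independently,
    # keeping each rater's marginal positive/negative rates.
    expected = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / total ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical counts for demonstration only.
acc, sens, spec, f1 = classification_metrics(tp=29, fp=2, tn=4000, fn=3)
kappa = cohens_kappa(tp=29, fp=2, tn=4000, fn=3)
```

Note that kappa corrects observed agreement for the agreement expected by chance, which is why it can be much lower than raw accuracy when one class (here, frames without polyps) dominates.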
