Fully automated IVUS image segmentation with efficient deep-learning-assisted annotation


Abstract

Intravascular ultrasound (IVUS) image segmentation plays a critical role in the diagnosis, treatment planning, and monitoring of coronary artery disease. Although deep learning (DL) methods have achieved state-of-the-art (SOTA) results in various medical image segmentation tasks, delivering clinically acceptable results remains challenging due to the limited availability of large annotated datasets. In this paper, we report an efficient deep learning framework for fully automated IVUS image segmentation that combines active learning with expert interaction on model outputs to dramatically reduce annotation effort, both in selecting images and in querying annotations from human experts. We propose a two-branch network that integrates a spatial and channel-wise probability attention module into the segmentation network to segment lumen and plaque areas while simultaneously predicting potential segmentation errors. With the introduction of segmentation quality assessment (SQA), we can quantify the quality of the achieved segmentation on unannotated images and provide meaningful visual cues for human experts, helping them concentrate on the most relevant image samples, judiciously determine the most 'valuable' images for annotation, and effectively employ adjudicated segmentations as the next batch of training annotations. The model performance is thus incrementally boosted via fine-tuning on the newly annotated data. We evaluated our method on a set of coronary IVUS data comprising 266 subjects and 38,771 cross-sectional frames using 5-fold cross-validation, demonstrating that our approach achieves SOTA segmentation performance using no more than 10% of the training data while significantly reducing annotation effort.
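The SQA-driven selection step described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the per-frame quality scores, and the annotation budget are all hypothetical, and in the actual framework the scores would come from the error-prediction branch of the two-branch network.

```python
# Hypothetical sketch of SQA-guided active-learning selection: rank
# unannotated frames by predicted segmentation quality and pick the
# lowest-quality (most 'valuable') ones for expert annotation.

def select_for_annotation(quality_scores, budget):
    """Return the `budget` frame IDs with the lowest SQA score.

    quality_scores: dict mapping frame ID -> predicted quality in [0, 1]
                    (higher means the segmentation is likely better).
    budget: number of frames the experts can annotate in this round.
    """
    ranked = sorted(quality_scores.items(), key=lambda kv: kv[1])
    return [frame_id for frame_id, _ in ranked[:budget]]


# Example with made-up scores for five unannotated frames.
scores = {"f1": 0.92, "f2": 0.41, "f3": 0.77, "f4": 0.30, "f5": 0.85}
print(select_for_annotation(scores, budget=2))  # ['f4', 'f2']
```

The selected frames would then be reviewed and adjudicated by experts, and the model fine-tuned on the newly annotated batch before the next selection round.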
