Robust lumen segmentation based on temporal residual U-Net using spatiotemporal features in intravascular optical coherence tomography images



Abstract

SIGNIFICANCE: Lumen segmentation in intravascular optical coherence tomography (IVOCT) images is essential for quantifying the severity, location, and length of vascular stenosis. Current methods that rely on manual parameter tuning or single-frame spatial features struggle with image artifacts, limiting their clinical utility. AIM: We aim to develop a temporal residual U-Net (TR-Unet) that leverages spatiotemporal feature fusion for robust IVOCT lumen segmentation, particularly in artifact-corrupted images. APPROACH: We integrate convolutional long short-term memory networks to capture the evolution of vascular morphology across pullback sequences, an enhanced ResUnet for spatial feature extraction, and coordinate attention mechanisms for adaptive spatiotemporal fusion. RESULTS: On a dataset of 2451 clinical images, the proposed TR-Unet model achieves strong performance: Dice coefficient = 98.54%, Jaccard similarity (JS) = 97.17%, and recall = 98.26%. Evaluations on images severely corrupted by blood artifacts show improvements of 3.01% (Dice), 1.3% (ACC), 5.24% (JS), 2.15% (recall), and 2.06% (precision) over competing methods. CONCLUSIONS: TR-Unet establishes a robust and effective spatiotemporal fusion paradigm for IVOCT segmentation, demonstrating significant robustness to artifacts and providing architectural insights for temporal modeling optimization.
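The coordinate attention mechanism named in the APPROACH pools a feature map separately along its height and width, mixes the two directional descriptors through a shared channel-reduction step, and produces two sigmoid gates that reweight the map along each axis. The NumPy sketch below illustrates only this data flow; the weight matrices (`w1`, `wh`, `ww`), the reduction ratio, and the pooling choices are hypothetical stand-ins for learned parameters and are not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, rng, reduction=4):
    """Illustrative coordinate attention on a (C, H, W) feature map.

    Weights are random placeholders for what would be learned layers;
    this shows the shape bookkeeping, not a trained module.
    """
    c, h, w = x.shape
    cr = max(c // reduction, 1)
    # Directional pooling: average along width and along height.
    pool_h = x.mean(axis=2)                        # (C, H)
    pool_w = x.mean(axis=1)                        # (C, W)
    # Shared channel mixing (a stand-in for a 1x1 conv) on the
    # concatenated directional descriptor, followed by ReLU.
    y = np.concatenate([pool_h, pool_w], axis=1)   # (C, H + W)
    w1 = rng.standard_normal((cr, c)) * 0.1
    y = np.maximum(w1 @ y, 0.0)                    # (Cr, H + W)
    y_h, y_w = y[:, :h], y[:, h:]
    # Per-direction channel expansion and sigmoid gating.
    wh = rng.standard_normal((c, cr)) * 0.1
    ww = rng.standard_normal((c, cr)) * 0.1
    a_h = sigmoid(wh @ y_h)[:, :, None]            # (C, H, 1) gate
    a_w = sigmoid(ww @ y_w)[:, None, :]            # (C, 1, W) gate
    return x * a_h * a_w                           # broadcast reweighting

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
out = coordinate_attention(feat, rng)
print(out.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the output is an elementwise attenuation of the input, with each position scaled by a height-specific and a width-specific factor.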
