Background Subtraction Angiography with Deep Learning Using Multi-frame Spatiotemporal Angiographic Input


Abstract

Catheter digital subtraction angiography (DSA) is markedly degraded by voluntary, respiratory, and cardiac motion artifacts that occur during image acquisition. Prior efforts to improve DSA images with machine learning have focused on extracting vessels from individual, isolated 2D angiographic frames. In this work, we introduce improved 2D + t deep learning models that leverage the rich temporal information in angiographic time series. A total of 516 cerebral angiography exams comprising 8784 individual series were collected. We used feature-based computer vision algorithms to separate the database into "motionless" and "motion-degraded" subsets. Motion measured from the "motion-degraded" category was then used to create a realistic, but synthetic, motion-augmented dataset suitable for training 2D U-Net, 3D U-Net, SegResNet, and UNETR models. Quantitative results on a held-out test set demonstrate that the 3D U-Net outperforms competing 2D U-Net architectures, with substantially reduced motion artifacts compared to DSA. Relative to the single-frame 2D U-Net, the 3D U-Net with 16 input frames achieves a lower RMSE (35.77 ± 15.02 vs 23.14 ± 9.56, p < 0.0001; mean ± std dev) and a higher Multi-Scale SSIM (0.86 ± 0.08 vs 0.93 ± 0.05, p < 0.0001). The 3D U-Net also compares favorably with alternative convolutional and transformer-based architectures (U-Net RMSE 23.20 ± 7.55 vs SegResNet 23.99 ± 7.81, p < 0.0001, and UNETR 25.42 ± 7.79, p < 0.0001; mean ± std dev). These results demonstrate that multi-frame temporal information can boost the performance of motion-resistant background-subtraction deep learning algorithms, and we present a neuroangiography domain-specific synthetic affine motion augmentation pipeline that can be used to generate suitable datasets for supervised training of 3D (2D + t) architectures.
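The synthetic affine motion augmentation the abstract describes can be sketched as follows: random per-frame rigid transforms (small rotations and translations) are applied to each frame of a motionless angiographic series to simulate patient motion, while the untouched series serves as the supervision target. This is a minimal illustrative sketch, not the authors' implementation; the function name, parameter ranges (`max_shift`, `max_rot_deg`), and the use of `scipy.ndimage.affine_transform` are assumptions for demonstration.

```python
import numpy as np
from scipy.ndimage import affine_transform

def synthesize_motion_series(frames, max_shift=4.0, max_rot_deg=1.5, rng=None):
    """Apply a random rigid (rotation + translation) transform to each frame
    of a motionless 2D + t angiographic series (shape: [T, H, W]), simulating
    patient motion. Parameter ranges are illustrative assumptions."""
    rng = np.random.default_rng(rng)
    out = np.empty_like(frames)
    h, w = frames.shape[1:]
    center = np.array([h / 2.0, w / 2.0])
    for i, frame in enumerate(frames):
        theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c, -s], [s, c]])
        shift = rng.uniform(-max_shift, max_shift, size=2)
        # affine_transform maps output coords to input coords:
        # x_in = rot @ x_out + offset; rotate about the image center, then shift.
        offset = center - rot @ center + shift
        out[i] = affine_transform(frame, rot, offset=offset,
                                  order=1, mode="nearest")
    return out
```

In a supervised setup, the motion-corrupted background frames would be paired with the clean subtraction derived from the original motionless series, giving the 3D (2D + t) network examples of motion it must learn to suppress.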
