ADMM-TransNet: ADMM-Based Sparse-View CT Reconstruction Method Combining Convolution and Transformer Network


Abstract

BACKGROUND: X-ray computed tomography (CT) provides high-precision anatomical visualization and has become a standard modality in clinical diagnostics. Sparse-view scanning is a widely adopted strategy for reducing radiation exposure, but traditional iterative reconstruction requires hand-designed regularization priors and laborious parameter tuning, while deep learning methods either depend heavily on large datasets or fail to capture global image correlations.

METHODS: This paper therefore proposes a combined model-driven and data-driven method: the ADMM iterative algorithm framework constrains the network, reducing its dependence on training samples, while a hybrid CNN-Transformer module learns both local and global image representations, further improving the accuracy of the reconstructed image.

RESULTS: Quantitative and qualitative comparisons with current state-of-the-art reconstruction algorithms demonstrate the effectiveness of the method for sparse-view reconstruction, achieving a PSNR of 42.036 dB, an SSIM of 0.979, and an MAE of 0.011 at 32 views.

CONCLUSIONS: The proposed algorithm is effective for sparse-view CT reconstruction and, compared with other deep learning algorithms, offers better generalization and higher reconstruction accuracy.
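To make the ADMM-unrolling idea in the abstract concrete, the sketch below runs classical ADMM on a toy linear inverse problem. This is an illustrative assumption, not the paper's implementation: the `soft_threshold` prior step stands in for the learned CNN+Transformer module, the dense matrix `A` stands in for the sparse-view projection operator, and all names (`admm_reconstruct`, `rho`, `lam`) are hypothetical.

```python
import numpy as np

def soft_threshold(v, lam):
    """Toy stand-in for the learned prior: proximal operator of lam*||.||_1.
    In ADMM-TransNet this step would be the CNN+Transformer network."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def admm_reconstruct(A, b, rho=1.0, lam=0.05, n_iters=50):
    """Solve  min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z  via ADMM.

    An unrolled network would fix n_iters as the number of stages and
    learn the z-update; here both are classical and hand-set.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    # Pre-factorized system matrix for the data-consistency (x) update
    M = np.linalg.inv(AtA + rho * np.eye(n))
    for _ in range(n_iters):
        x = M @ (Atb + rho * (z - u))           # data-fidelity step
        z = soft_threshold(x + u, lam / rho)    # prior step (learned net in the paper)
        u = u + x - z                           # dual ascent step
    return x

# Toy sparse-recovery problem: overdetermined "projection" of a sparse phantom
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = admm_reconstruct(A, b)
```

The three-step loop (data fidelity, prior, dual update) is exactly the structure that unrolled networks like the one described here turn into trainable stages, which is why the ADMM framework can constrain the network and reduce its dependence on data samples.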
