An untrained deep learning method for reconstructing dynamic MR images from accelerated model-based data



Abstract

PURPOSE: To implement physics-based regularization as a stopping condition when tuning an untrained deep neural network for reconstructing MR images from accelerated data.

METHODS: The ConvDecoder (CD) neural network was trained with a physics-based regularization term incorporating the spoiled gradient echo (SPGR) equation that describes variable-flip-angle (VFA) data. Fully sampled VFA k-space data were retrospectively accelerated by factors of R = {8, 12, 18, 36} and reconstructed with CD, CD with the proposed regularization (CD + r), locally low-rank (LR) reconstruction, and compressed sensing with L1-wavelet regularization (L1). Final images from CD + r training were evaluated at the argmin of the regularization loss, whereas the CD, LR, and L1 reconstructions were chosen optimally based on ground-truth data. The performance measures used were the normalized RMS error (NRMSE), the concordance correlation coefficient (CCC), and the structural similarity index (SSIM).

RESULTS: The CD + r reconstructions, chosen using the stopping condition, yielded SSIM values similar to those of CD (p = 0.47) and LR (p = 0.95) across R, and significantly higher than the L1 SSIM values (p = 0.04). The CCC values for the CD + r T1 maps across all R and subjects were greater than those of the L1 (p = 0.15) and LR (p = 0.13) T1 maps, respectively. For R ≥ 12 (≤4.2 min scan time), the L1 and LR T1 maps exhibited a loss of spatially refined detail compared to CD + r.

CONCLUSION: The use of an untrained neural network together with a physics-based regularization loss shows promise as a measure for determining the optimal stopping point in training without relying on fully sampled ground-truth data.
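The abstract does not give the exact form of the SPGR regularization term, so the following is only a minimal NumPy sketch of one common way to realize such a physics-consistency loss: a pixel-wise linearized (DESPOT1-style) fit of M0 and T1 to the current multi-flip-angle image estimate, followed by the residual against the SPGR signal model. The names spgr_signal, despot1_fit, and physics_loss, and the TR value, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

TR = 0.01  # repetition time in seconds (illustrative value, not from the paper)

def spgr_signal(m0, t1, alphas):
    """SPGR equation: S(a) = M0 sin(a) (1 - E1) / (1 - cos(a) E1), E1 = exp(-TR/T1).

    m0, t1: per-voxel arrays of shape (n_voxels,); alphas: flip angles in radians.
    Returns an (n_angles, n_voxels) array of model signals.
    """
    e1 = np.exp(-TR / t1)
    return m0 * np.sin(alphas)[:, None] * (1.0 - e1) / (1.0 - np.cos(alphas)[:, None] * e1)

def despot1_fit(images, alphas):
    """Pixel-wise linearized VFA fit: S/sin(a) = E1 * S/tan(a) + M0 (1 - E1).

    A least-squares line per voxel gives slope E1 (hence T1 = -TR / ln E1)
    and intercept M0 (1 - E1). images: (n_angles, n_voxels) magnitudes.
    """
    y = images / np.sin(alphas)[:, None]
    x = images / np.tan(alphas)[:, None]
    xm, ym = x.mean(0), y.mean(0)
    slope = ((x - xm) * (y - ym)).sum(0) / ((x - xm) ** 2).sum(0)
    e1 = np.clip(slope, 1e-6, 1.0 - 1e-6)   # keep E1 in (0, 1) so log is defined
    t1 = -TR / np.log(e1)
    m0 = (ym - e1 * xm) / (1.0 - e1)
    return m0, t1

def physics_loss(images, alphas):
    """Residual between the current image estimate and its best SPGR fit."""
    m0, t1 = despot1_fit(images, alphas)
    return float(np.mean((images - spgr_signal(m0, t1, alphas)) ** 2))

if __name__ == "__main__":
    # Tiny self-check: the loss is ~0 for images that obey the SPGR model.
    alphas = np.deg2rad([2.0, 5.0, 10.0, 15.0])
    rng = np.random.default_rng(0)
    images = spgr_signal(rng.uniform(0.5, 1.5, 100), rng.uniform(0.5, 2.0, 100), alphas)
    print(physics_loss(images, alphas))
```

Under this reading of the method, one would record physics_loss at each ConvDecoder training iteration and keep the network weights from the iteration at its argmin, which is the ground-truth-free stopping condition the abstract describes.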
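For reference, the two scalar metrics reported above can be computed as in this short sketch (nrmse and ccc are illustrative helper names); SSIM is typically computed with an off-the-shelf routine such as skimage.metrics.structural_similarity.

```python
import numpy as np

def nrmse(ref, rec):
    """Normalized RMS error between a reference and a reconstruction."""
    return np.linalg.norm(rec - ref) / np.linalg.norm(ref)

def ccc(ref, rec):
    """Lin's concordance correlation coefficient between two maps (e.g. T1):
    CCC = 2 cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2).
    """
    x, y = ref.ravel(), rec.ravel()
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)
```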
