Physics-informed optimization of saturation-transfer MRI protocols using non-differentiable Bloch models


Abstract

Saturation transfer MR fingerprinting (ST-MRF) is a quantitative molecular MRI method that simultaneously estimates parameters of free water, solute, and semisolid macromolecule protons. The accuracy of this quantification depends strongly on the choice of acquisition parameters; thus, optimization of the data acquisition schedule is crucial for improving acquisition efficiency and quantification accuracy. Herein, we developed a learning-based optimization framework for ST-MRF, incorporating a deep Bloch equation simulator as a surrogate for the forward Bloch equation solver to enable rapid simulations. Notably, the deep Bloch equation simulator overcomes the non-differentiability of the original model by enabling gradient computation during backpropagation within the physics-informed optimization framework, thereby allowing iterative updates of the acquisition schedule to minimize quantification error. In addition, the proposed method estimated an accurate ΔB0 map with the inclusion of a minimal number of scans to address B0 inhomogeneity. B1 inhomogeneity was corrected by providing a relative B1 map as an input to the quantification network. We validated our approach using Bloch-McConnell equation-based digital phantoms and further evaluated the performance of the optimized ST-MRF framework in in vivo experiments. Our results showed that the optimal ST-MRF schedule outperformed other data acquisition schedules in terms of quantification accuracy. In addition, we enhanced the in vivo quantitative maps by correcting motion artifacts and suppressing noise using self-supervised learning techniques. The optimal ST-MRF approach can generate accurate and reliable multi-tissue parameter maps within a clinically acceptable time.
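The core optimization idea described above, replacing the non-differentiable Bloch solver with a differentiable surrogate so that the acquisition schedule can be updated by gradient descent, can be sketched in miniature. Everything below is a toy illustration, not the paper's implementation: the exponential "surrogate", the sensitivity-based loss, and the central-difference gradients (standing in for autodiff through a trained network) are all assumptions chosen only to make the schedule-update loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 8   # number of acquisitions in the schedule (illustrative)
A = 0.05      # toy tissue parameter the surrogate signal depends on

def surrogate(theta):
    """Toy differentiable signal model standing in for a trained deep
    Bloch simulator; theta is the per-scan saturation schedule (a.u.)."""
    return np.exp(-A * theta)

def sensitivity_loss(theta):
    # Penalize schedules whose signals carry little information about
    # the tissue parameter: loss = 1 / sum_i (d signal_i / dA)^2,
    # a CRLB-flavored criterion (illustrative, not the paper's loss).
    dsig = -theta * surrogate(theta)
    return 1.0 / np.sum(dsig ** 2)

def loss_grad(theta, eps=1e-6):
    # Central differences emulate the gradients that backpropagation
    # through a differentiable surrogate would supply.
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (sensitivity_loss(theta + e)
                - sensitivity_loss(theta - e)) / (2 * eps)
    return g

theta = rng.uniform(5.0, 15.0, n_scans)   # random initial schedule
loss_init = sensitivity_loss(theta)
for _ in range(200):                       # iterative schedule update
    theta = np.clip(theta - 2e3 * loss_grad(theta), 0.1, 40.0)  # hardware limits
loss_final = sensitivity_loss(theta)
print(loss_final < loss_init)  # optimized schedule is more informative
```

In the actual framework, the loss would instead be the quantification error of the parameter-estimation network, and the gradients would flow through the trained surrogate automatically; the loop structure, however, is the same.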
