A hybrid population-based and patient-specific framework for 2D-3D deformable registration-driven limited-angle cone-beam CT estimation



Abstract

BACKGROUND: Limited-angle cone-beam CT (LA-CBCT) reduces imaging time and dose but suffers from severe under-sampling artifacts and distortions. 2D-3D deformable registration mitigates this issue by estimating LA-CBCTs through the deformation of a prior, fully sampled CT/CBCT, using deformation vector fields (DVFs) optimized from limited-angle cone-beam projections. Population-trained 2D-3D registration networks enable fast inference but face accuracy challenges, particularly under varying limited-angle scan directions. Patient-specific models, on the other hand, are more adaptable but typically require considerable runtimes to optimize model parameters from scratch for each case.

PURPOSE: To improve the accuracy and efficiency of 2D-3D registration-driven LA-CBCT estimation, a hybrid 2D-3D deformable registration framework was proposed.

METHODS: The hybrid population-based and patient-specific 2D-3D deformable registration framework (HB-2D3DReg) synergized the advantages of both population-based and patient-specific approaches while mitigating their limitations. It integrated the fast inference of population-trained models with the test-time adaptability of patient-specific models through a two-stage approach. First, a population-based 2D-3D registration network, 2D3D-RegNet, was trained on a cohort dataset in an unsupervised manner, with a similarity loss defined between digitally reconstructed radiographs (DRRs) of the estimated LA-CBCTs and the limited-angle 2D projections. Then, a 2D-3D registration network based on implicit neural representation (INR), 2D3D-INR, refined the DVFs solved by the population-based model at test time for each independent testing case. The population-based 2D3D-RegNet accelerated the optimization of the patient-specific 2D3D-INR and reduced the latter's chance of being trapped in a local optimum, while the patient-specific network, in turn, enhanced the accuracy of the population-based model. HB-2D3DReg was evaluated on a dataset of 48 4D-CTs, 26 of which were used to train the population-based model and 22 for testing. Different limited-angle scan scenarios, featuring varying scan directions and angles, were assessed.

RESULTS: HB-2D3DReg attained superior LA-CBCT estimation and registration accuracy. Under an orthogonal-view 90° scan (45° each) with varying scan directions, HB-2D3DReg achieved a mean (± S.D.) image relative error of 7.99 ± 2.16% and a target registration error of 3.70 ± 1.94 mm on the DIR-Lab dataset, compared to 15.40 ± 2.41% and 8.52 ± 3.31 mm (no registration), 9.82 ± 2.12% and 6.38 ± 2.46 mm (2D3D-RegNet only), and 9.71 ± 2.33% and 5.01 ± 2.77 mm (2D3D-INR only). HB-2D3DReg took ∼3 min to optimize at test time, compared to 13 min for the 2D3D-INR method.

CONCLUSION: HB-2D3DReg achieved accurate and robust 2D-3D deformable registration for LA-CBCT estimation, enabling efficient anatomy monitoring to guide radiotherapy. The code will be released at: https://github.com/sanny1226/HB-2D3DReg.
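The unsupervised objective described in METHODS — a similarity loss between DRRs of the warped prior volume and the measured limited-angle projections — can be sketched in a simplified form. The parallel-beam sum projection, the nearest-neighbor warp, and all function names below are illustrative assumptions for a minimal NumPy sketch, not the paper's cone-beam, differentiable implementation.

```python
import numpy as np

def drr_parallel(volume, axis=0):
    """Toy DRR: a parallel-beam line integral, i.e., a sum along one axis.
    (The paper uses cone-beam projection geometry; this is a simplification.)"""
    return volume.sum(axis=axis)

def warp_nearest(volume, dvf):
    """Apply a DVF of shape (3, X, Y, Z) in voxel units using nearest-neighbor
    resampling -- a stand-in for the networks' differentiable warp layer."""
    X, Y, Z = volume.shape
    gx, gy, gz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    sx = np.clip(np.round(gx + dvf[0]).astype(int), 0, X - 1)
    sy = np.clip(np.round(gy + dvf[1]).astype(int), 0, Y - 1)
    sz = np.clip(np.round(gz + dvf[2]).astype(int), 0, Z - 1)
    return volume[sx, sy, sz]

def projection_loss(prior_ct, dvf, measured_projs, axes):
    """Mean-squared error between DRRs of the warped prior volume and the
    measured limited-angle projections, one projection per view axis."""
    warped = warp_nearest(prior_ct, dvf)
    return np.mean([np.mean((drr_parallel(warped, a) - p) ** 2)
                    for a, p in zip(axes, measured_projs)])
```

In the actual framework this loss would be minimized twice: over 2D3D-RegNet's weights during population training, and over 2D3D-INR's coordinate-network parameters at test time, initialized from the population model's DVF.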
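The two RESULTS metrics follow standard definitions. The formulas below are the conventional image relative error (ratio of L2 norms, in percent) and mean target registration error over landmark pairs; they may differ in detail from the paper's exact evaluation code.

```python
import numpy as np

def relative_error(estimated, ground_truth):
    """Image relative error: ||est - gt||_2 / ||gt||_2, in percent."""
    return 100.0 * np.linalg.norm(estimated - ground_truth) / \
        np.linalg.norm(ground_truth)

def target_registration_error(warped_pts, gt_pts):
    """Mean Euclidean distance (mm) between DVF-propagated landmarks and
    their ground-truth positions (e.g., DIR-Lab lung landmarks)."""
    return np.mean(np.linalg.norm(warped_pts - gt_pts, axis=1))
```

For example, a set of landmarks uniformly displaced by the vector (3, 0, 4) mm yields a TRE of exactly 5 mm, and a perfectly estimated volume yields a relative error of 0%.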
