Real-time 3D MR guided radiation therapy through orthogonal MR imaging and manifold learning


Abstract

BACKGROUND: In magnetic resonance imaging (MRI)-guided radiotherapy (MRgRT), rapid 2D imaging is commonly used to track moving targets at high temporal frequency to minimize gating latency. However, anatomical motion is not constrained to 2D, and a portion of the target may be missed during treatment if 3D motion is not evaluated. While some MRgRT systems attempt to capture 3D motion by sequentially tracking motion in 2D orthogonal imaging planes, this approach assesses 3D motion via independent 2D measurements at alternating instances and lacks a simultaneous 3D motion assessment in both imaging planes.

PURPOSE: We hypothesized that a motion model could be derived from prior 2D orthogonal imaging to estimate 3D motion in both planes simultaneously. We present a manifold learning technique to estimate 3D motion from 2D orthogonal imaging.

METHODS: Five healthy volunteers were scanned under an IRB-approved protocol on a 3.0 T Siemens Skyra simulator. Images of the liver dome were acquired during free breathing (FB) at 2.6 mm × 2.6 mm in-plane resolution for approximately 10 min in alternating sagittal and coronal planes at ∼5 frames per second. The motion model was derived using a combined manifold learning and alignment approach based on locally linear embedding (LLE). The model exploited the spatially overlapping MRI signal shared by both imaging planes to group together images with similar signals, enabling motion estimation in both planes simultaneously. The model's motion estimates were compared to ground truth motion derived in each newly acquired image using deformable registration. A simulated target was defined on the dome of the liver and used to evaluate model performance. The Dice similarity coefficient and the distance between the model-tracked and image-tracked contour centroids were evaluated.
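The core idea of grouping frames with similar signal via LLE can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic stand-in data, not the authors' implementation: the frame array, breathing-phase model, and neighbor count are all hypothetical.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Hypothetical stand-in for the cine frames: each row is a flattened 2D
# image whose pixel values vary smoothly with a 1D respiratory state.
rng = np.random.default_rng(0)
phase = np.linspace(0.0, np.pi, 200)                # breathing states
basis_a, basis_b = rng.random(64), rng.random(64)   # fixed "anatomy" patterns
frames = (np.outer(np.sin(phase), basis_a)
          + np.outer(np.cos(phase), basis_b)
          + 0.01 * rng.standard_normal((200, 64)))  # 200 frames x 64 pixels

# Embed the high-dimensional frames into a 1D manifold coordinate that
# tracks the underlying respiratory state; frames with similar coordinates
# have similar anatomy, so motion estimates can be shared between them.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1)
embedding = lle.fit_transform(frames)               # shape (200, 1)
```

In the paper's setting the embedding is additionally aligned across the two orthogonal planes using their shared signal; the sketch above covers only the single-plane embedding step.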
Motion modeling error in the orthogonal plane was estimated by back-propagating the motion to the currently imaged plane and by interpolating the motion between image acquisitions where ground truth motion was available.

RESULTS: The motion observed in the healthy volunteer studies ranged from 12.6 to 38.7 mm. On average, the model demonstrated sub-millimeter precision and a Dice coefficient > 0.95 relative to the ground truth motion observed in the currently imaged plane. Across all volunteer studies, the average Dice coefficient and centroid distance between the model-tracked and ground truth target contours were 0.96 ± 0.03 and 0.26 mm ± 0.27 mm, respectively. The out-of-plane centroid motion error was estimated at 0.85 mm ± 1.07 mm and 1.26 mm ± 1.38 mm using the back-propagation (BP) and interpolation error estimation methods, respectively.

CONCLUSIONS: The healthy volunteer studies indicate promising results for the proposed motion modeling technique. Out-of-plane modeling error was estimated to be higher but still demonstrated sub-voxel motion accuracy.
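The two evaluation metrics above, the Dice similarity coefficient and the centroid distance, are straightforward to compute from binary target masks. A minimal sketch follows; the 2.6 mm pixel size matches the acquisition described in Methods, while the toy masks and function names are hypothetical.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def centroid_distance_mm(mask_a, mask_b, pixel_mm=2.6):
    """Euclidean distance (mm) between mask centroids, given in-plane pixel size."""
    ca = np.argwhere(mask_a).mean(axis=0)
    cb = np.argwhere(mask_b).mean(axis=0)
    return float(np.linalg.norm(ca - cb) * pixel_mm)

# Toy masks: a square target and the same target shifted by one pixel.
a = np.zeros((32, 32), dtype=bool); a[10:20, 10:20] = True
b = np.zeros((32, 32), dtype=bool); b[11:21, 10:20] = True
print(dice_coefficient(a, b))      # prints 0.9
print(centroid_distance_mm(a, b))  # prints 2.6
```

A one-pixel shift of a 10 × 10 target yields a Dice of 0.9 and a centroid offset of exactly one pixel (2.6 mm), which puts the reported 0.96 Dice and 0.26 mm centroid error in perspective as clearly sub-pixel agreement.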
