Faster motion correction of clinical contrast-enhanced ultrasound imaging using deep learning


Abstract

Motion artifacts affect the quantification accuracy of tumor angiogenic network measurements from clinical contrast-enhanced ultrasound (CEUS) images. Reliable motion correction methods can improve image alignment but suffer from long computation times and large memory demands. This research project aims to reduce the time and memory needed for motion correction of clinical images from patients diagnosed with hepatocellular carcinoma (HCC). First, B-mode ultrasound (US) images were acquired with a clinical scanner from 36 patients and processed using a conventional two-stage motion correction strategy. Two-channel input data consisting of static and moving B-mode US images were prepared as training data (N = 200 for each patient). Transformation functions derived from the conventional method for affine and non-rigid motion corrections were used as labels to train a deep learning model (encoder-decoder network). After model training, performance was evaluated using the normalized correlation coefficient (CC) between the reference and moving images. Finally, the time needed to apply motion correction with the traditional method was compared to the prediction time of the deep learning model. On average, CC increased by 20% compared to the data contaminated with motion. Importantly, predicting a single patch took only 0.20 ± 0.004 sec, versus the 3.65 ± 0.25 sec needed to perform motion correction of CEUS images with the conventional method (p = 0.001).
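The normalized correlation coefficient used as the evaluation metric can be sketched as follows. This is a minimal illustration of the standard zero-mean NCC between a reference and a moving image, not the authors' actual evaluation code; the function name and array shapes are assumptions.

```python
import numpy as np

def normalized_cc(reference: np.ndarray, moving: np.ndarray) -> float:
    """Zero-mean normalized correlation coefficient between two images.

    Returns a value in [-1, 1]; identical images give 1.0.
    """
    ref = reference.astype(np.float64).ravel()
    mov = moving.astype(np.float64).ravel()
    # Subtract the mean so the metric is insensitive to brightness offsets.
    ref -= ref.mean()
    mov -= mov.mean()
    denom = np.sqrt(np.sum(ref**2) * np.sum(mov**2))
    if denom == 0.0:
        return 0.0  # one of the images is constant
    return float(np.sum(ref * mov) / denom)

# Example: a misaligned (shifted) copy scores lower than a perfect match.
img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, 5, axis=0)  # stand-in for motion-contaminated frame
cc_aligned = normalized_cc(img, img)      # identical images -> CC = 1
cc_shifted = normalized_cc(img, shifted)  # motion lowers the CC
```

A higher CC after correction thus indicates better alignment of the moving frame with the reference, which is the sense in which the reported 20% average improvement is measured.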
