Abstract
Motion artifacts degrade the quantification accuracy of tumor angiogenic network measurements from clinical contrast-enhanced ultrasound (CEUS) images. Reliable motion correction methods can improve image alignment but suffer from long computation times and large memory demands. This research project aims to reduce the time and memory needed for motion correction of clinical images from patients diagnosed with hepatocellular carcinoma (HCC). First, B-mode ultrasound (US) images were acquired from 36 patients using a clinical scanner and processed with a conventional two-stage motion correction strategy. Two-channel input data consisting of static and moving B-mode US images were prepared as training data (N = 200 per patient). Transformation functions derived from the conventional method for affine and non-rigid motion correction were used as labels to train a deep learning model (an encoder-decoder network). After training, model performance was evaluated using the normalized correlation coefficient (CC) between the reference and moving images. Finally, the time needed to apply motion correction with the conventional method was compared to the prediction time of the deep learning model. On average, CC increased by 20% relative to the motion-contaminated data. Importantly, predicting a single patch took 0.20 ± 0.004 s, compared with the 3.65 ± 0.25 s needed to perform motion correction in CEUS images using the conventional method (p = 0.001).
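The normalized correlation coefficient used here as the evaluation metric can be sketched as follows. This is a minimal illustrative implementation, not the authors' code; the function name `normalized_cc` and the use of NumPy are assumptions.

```python
import numpy as np

def normalized_cc(reference: np.ndarray, moving: np.ndarray) -> float:
    """Normalized correlation coefficient between two same-sized images.

    Both images are mean-centered, then their zero-lag cross-correlation
    is divided by the product of their norms. Values near 1.0 indicate
    good alignment; residual motion lowers the score.
    """
    ref = reference.astype(np.float64) - reference.mean()
    mov = moving.astype(np.float64) - moving.mean()
    denom = np.sqrt((ref ** 2).sum() * (mov ** 2).sum())
    if denom == 0.0:
        return 0.0  # one image is constant; correlation is undefined
    return float((ref * mov).sum() / denom)

# A perfectly aligned copy of an image scores 1.0; an inverted copy scores -1.0.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(normalized_cc(img, img))
print(normalized_cc(img, img.max() - img))
```

Reporting the metric on mean-centered images makes it insensitive to global brightness offsets between the reference and moving frames, which is why it is a common choice for assessing US image alignment.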