An in-silico simulation study to generate computed tomography images from ultrasound data by using deep learning techniques


Abstract

OBJECTIVES: Ultrasound has lower sensitivity for parenchymal lesion detection than contrast-enhanced computed tomography (CT). In this proof-of-concept study, we investigate whether raw ultrasound data can be used to generate CT-like images with deep learning models in order to enhance lesion detection.

METHODS: The k-Wave ultrasound and ASTRA CT simulation toolkits were used to generate 2 datasets (1000 samples each) from simulated phantoms with up to 3 inclusions. The pix2pix conditional generative adversarial network (cGAN) was trained on 800 samples per dataset, reserving the remainder for testing. Outputs were evaluated using the generalized contrast-to-noise ratio (gCNR) and the Structural Similarity Index (SSIM). Two radiologists (1 board-certified, 1 resident) segmented B-mode images alone versus B-mode images with a model-generated CT overlay; both their segmentation performance and their inter-observer agreement were evaluated using the Jaccard index.

RESULTS: Model-generated CT-like images showed significantly improved gCNR (from 0.574 ± 0.210 to 0.873 ± 0.163) and SSIM (from 0.808 ± 0.092 to 0.912 ± 0.062), depending on phantom and inclusion type. The CT-like images sometimes highlighted lesions that were otherwise undetectable in B-mode. Across 100 test samples, the Jaccard index improved significantly when segmenting machine-learning-augmented B-mode rather than B-mode alone (from 0.58 ± 0.18 to 0.69 ± 0.16), depending on dataset and radiologist. Inter-observer agreement also improved significantly (from 0.74 ± 0.18 to 0.85 ± 0.07 for 1 dataset).

CONCLUSIONS: Deep learning models can effectively translate ultrasound data into CT-like images, improving image quality and inter-observer agreement and enhancing lesion detectability, for example by alleviating shadowing artefacts.

ADVANCES IN KNOWLEDGE: Generating CT-like images from raw ultrasound RF data with a cGAN yields a significant improvement in lesion detectability, for example by alleviating acoustic shadowing. With a cGAN architecture, even relatively small datasets suffice to generate CT-like images that improve lesion detectability.
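The abstract's two headline metrics are both simple to state: the gCNR is 1 minus the overlap of the intensity histograms of a lesion region and a background region, and the Jaccard index is the intersection-over-union of two segmentation masks. A minimal NumPy sketch of both (the function names, bin count, and test regions are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def gcnr(region_a, region_b, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    two regions' intensity histograms, computed over shared bin edges."""
    lo = min(region_a.min(), region_b.min())
    hi = max(region_a.max(), region_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    pa, _ = np.histogram(region_a, bins=edges)
    pb, _ = np.histogram(region_b, bins=edges)
    pa = pa / pa.sum()  # normalize counts to probability mass
    pb = pb / pb.sum()
    return 1.0 - np.minimum(pa, pb).sum()

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two binary masks."""
    mask_a = mask_a.astype(bool)
    mask_b = mask_b.astype(bool)
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return np.logical_and(mask_a, mask_b).sum() / union
```

Well-separated intensity distributions give a gCNR near 1 (easily detectable lesion), while identical distributions give 0; likewise, identical masks give a Jaccard index of 1. This makes the reported improvements (e.g., gCNR 0.574 to 0.873) directly interpretable as reduced histogram overlap between lesion and background.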
