Preserving noise texture through training data curation for deep learning denoising of high-resolution cardiac EID-CT


Abstract

BACKGROUND: To utilize high-spatial-resolution reconstructions for cardiac imaging at energy-integrating detector (EID) CT with noise comparable to similar reconstructions at photon-counting detector (PCD) CT, methods to control EID-CT image noise are needed. Supervised convolutional neural networks (CNNs) have shown promise for denoising, but a challenge remains to efficiently create high-quality, unbiased noise estimates without access to dedicated software or proprietary information, such that natural noise texture is retained in CNN-denoised CT images.

PURPOSE: This study aims to develop and test image-based noise estimation methods that can be used to train a CNN model, and to evaluate denoising performance and noise texture preservation for EID-CT coronary CT angiography (cCTA) images reconstructed with high-resolution kernels.

METHODS: U-net CNN models were trained for denoising. To supervise training, noise-only images were estimated directly from high-resolution kernel (Bv59) reconstructed EID-CT (HR EID-CT) patient images using two methods: (1) subtraction of low- and high-strength iterative reconstruction (IR) images, and (2) subtraction of adjacent image slices with the same IR strength. The noise estimates from these methods contain differing noise texture and anatomical information. Networks were trained and validated separately on three data sets: the training data from each of the two noise-estimation methods, and a 50%-50% partition of training data between the two methods. The trained models were applied to two sets of testing data: CT images of a uniform water phantom to measure noise power spectra (NPS), and an independent cohort of seven patient cCTA HR EID-CT exams. The denoised patient images were compared to standard-resolution EID-CT reconstructions (Bv40). As a low-noise reference, patient images acquired on the same day with a PCD-CT and reconstructed using a kernel similar to that of HR EID-CT were used for comparison.
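Both noise-estimation strategies described in the methods are, at their core, image subtractions. The sketch below illustrates the general idea in NumPy; it is not the authors' implementation, and the function names, the 1/sqrt(2) scaling for the adjacent-slice case (which assumes independent noise and near-identical anatomy between slices), and the synthetic demo are illustrative assumptions:

```python
import numpy as np

def noise_from_adjacent_slices(slice_a, slice_b):
    """Estimate a noise-only image from two adjacent reconstructed slices.

    Assumes the anatomy is nearly identical between adjacent slices and the
    noise is independent, so the difference is (mostly) noise; dividing by
    sqrt(2) restores the single-image noise magnitude.
    """
    return (slice_a.astype(np.float64) - slice_b.astype(np.float64)) / np.sqrt(2.0)

def noise_from_ir_strengths(low_ir, high_ir):
    """Estimate a noise image from the same slice reconstructed at low and
    high iterative-reconstruction strength; the difference isolates the
    noise component suppressed by the stronger IR setting."""
    return low_ir.astype(np.float64) - high_ir.astype(np.float64)

# Synthetic demo (not patient data): two "slices" sharing anatomy
# but carrying independent Gaussian noise of sigma = 30 HU.
rng = np.random.default_rng(0)
anatomy = np.zeros((128, 128))
sigma = 30.0
a = anatomy + rng.normal(0.0, sigma, anatomy.shape)
b = anatomy + rng.normal(0.0, sigma, anatomy.shape)
noise = noise_from_adjacent_slices(a, b)
print(round(float(noise.std()), 1))  # recovers a value near sigma
```

Note that when anatomy changes between adjacent slices, the residual anatomical signal leaks into the noise estimate, which is consistent with the paper's observation that the two methods carry differing anatomical information.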
RESULTS: Models trained with each noise-image estimation method denoised the HR EID-CT images by 74%-79%, achieving a noise magnitude comparable to the HR PCD-CT images. The peak, average, and 10%-peak frequencies of the NPS of the input images (6.08, 6.24, and 12.0 cm⁻¹) were better approximated by the model trained on adjacent-slice subtraction (6.56, 5.87, and 11.5 cm⁻¹) than by the model trained on subtraction of low- and high-IR images (4.64, 5.44, and 11.3 cm⁻¹). In cCTA images, the IR-subtraction model retained anatomic structures from input images but produced an undesirable salt-and-pepper noise texture and CT number bias. The model trained on adjacent-slice subtraction images had a more natural texture and no significant bias, but sometimes removed small anatomic structures. The model trained on the mixed training data set preserved both noise texture and anatomy from the model inputs and enabled visualization of small structures seen in PCD-CT images that were previously unresolved by EID-CT.

CONCLUSIONS: The noise texture and anatomical accuracy of CT images denoised with an image-based supervised CNN are greatly influenced by the characteristics and partitioning of the training data. With higher-resolution reconstructions and noise-texture-preserving deep learning denoising, the quality of cCTA images from EID-CT can be enhanced to resolve subtle anatomy similarly to PCD-CT.
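The NPS summary metrics reported in the results (peak, average, and 10%-peak frequency) are commonly computed from a radially averaged noise power spectrum estimated via the ensemble-averaged periodogram of noise-only ROIs. The sketch below is a generic illustration of that standard approach, not the authors' implementation; function names, the normalization, and the bin layout are assumptions:

```python
import numpy as np

def radial_nps(noise_rois, pixel_mm):
    """Radially averaged NPS from zero-mean noise-only ROIs.

    noise_rois: array of shape (n, N, N); pixel_mm: pixel size in mm.
    Returns (freq_cm, nps_1d) with frequency in cycles/cm.
    """
    n, N, _ = noise_rois.shape
    # Ensemble-averaged 2D periodogram, normalized to NPS units (HU^2 mm^2).
    ps = np.mean(np.abs(np.fft.fft2(noise_rois)) ** 2, axis=0)
    ps *= (pixel_mm ** 2) / (N * N)
    ps = np.fft.fftshift(ps)
    # Radial binning in spatial-frequency space.
    f = np.fft.fftshift(np.fft.fftfreq(N, d=pixel_mm))  # cycles/mm
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    nbins = N // 2
    edges = np.linspace(0.0, f.max(), nbins + 1)
    idx = (np.digitize(fr.ravel(), edges) - 1).clip(0, nbins - 1)
    power = np.bincount(idx, weights=ps.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    nps_1d = power / np.maximum(counts, 1)
    freq_cm = 10.0 * 0.5 * (edges[:-1] + edges[1:])  # cycles/mm -> cycles/cm
    return freq_cm, nps_1d

def nps_summary(freq, nps):
    """Peak, average, and 10%-of-peak frequencies of a 1D NPS."""
    peak = float(freq[np.argmax(nps)])
    avg = float(np.sum(freq * nps) / np.sum(nps))
    above = np.nonzero(nps >= 0.1 * nps.max())[0]
    f10 = float(freq[above[-1]])  # highest frequency still at >= 10% of peak
    return peak, avg, f10
```

For white noise of variance sigma² the radially averaged NPS is flat at roughly sigma² × pixel², which provides a quick sanity check of the normalization; a shift of the peak frequency toward zero after denoising indicates the coarser, "blotchy" texture the paper's texture-preserving training is designed to avoid.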
