CNN-Based Cross-Modal Residual Network for Image Synthesis


Abstract

This study addresses the problem that existing cross-modal image synthesis algorithms fail to capture the spatial and structural information of human tissue effectively, so the resulting images suffer from defects such as blurred edges and a low signal-to-noise ratio. The authors propose a cross-modal synthesis method that combines residual modules with generative adversarial networks. The generator incorporates an improved residual inception module and an attention mechanism, which reduces the number of parameters and strengthens its feature-learning capability, while the discriminator adopts a multiscale design to improve discriminative performance. A multilevel structural similarity (SSIM) loss is added to the loss function to better preserve image contrast. The algorithm is compared with mainstream methods on the ADNI dataset: the synthesized PET images achieve a lower MAE and higher SSIM and PSNR. These results indicate that the proposed model preserves the structural and contrast information of the image while improving image quality in both visual and objective terms, so the synthesized images are visually closer to the real ones.
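The multilevel structural similarity loss mentioned above can be sketched roughly as follows. The paper's exact formulation is not given here, so this sketch makes illustrative assumptions: a single-window (global) SSIM per scale, 2x2 average pooling between levels, and a simple mean over three levels.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Global single-window SSIM between two images with values in [0, 1].
    # c1 and c2 are the usual stabilizing constants for dynamic range 1.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def downsample2(x):
    # 2x2 average pooling; assumes even height and width.
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multilevel_ssim_loss(x, y, levels=3):
    # Average SSIM over several scales, then turn similarity into a loss.
    # levels=3 is an assumed choice, not taken from the paper.
    scores = []
    for _ in range(levels):
        scores.append(ssim_global(x, y))
        x, y = downsample2(x), downsample2(y)
    return 1.0 - float(np.mean(scores))
```

In a GAN training loop, a term like this would be weighted and added to the adversarial loss so that the generator is penalized for losing structure and contrast at multiple scales, not just at the finest one.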
