Dual contrastive learning for synthesizing unpaired fundus fluorescein angiography from retinal fundus images


Abstract

BACKGROUND: Fundus fluorescein angiography (FFA) is an imaging method used to assess retinal vascular structures by injecting an exogenous dye. FFA images provide information complementary to that of the widely used color fundus (CF) images. However, the injected dye can cause adverse side effects, and the method is not suitable for all patients.

METHODS: To meet the demand for high-quality FFA images in the diagnosis of retinopathy without side effects to patients, this study proposed an unsupervised image synthesis framework based on dual contrastive learning that synthesizes FFA images from unpaired CF images by inferring the effective mappings, avoiding the blurred pathological features caused by the cycle-consistency constraint in conventional approaches. Adding class activation mapping (CAM) to the adaptive layer-instance normalization (AdaLIN) function makes the generated images more realistic and also improves the discriminative ability of the model. Further, a Coordinate Attention block was used for better feature extraction, and it was compared with other attention mechanisms to demonstrate its effectiveness. The synthesized images were evaluated quantitatively with the Fréchet inception distance (FID), kernel inception distance (KID), and learned perceptual image patch similarity (LPIPS).

RESULTS: Extensive experimental results showed that the proposed approach achieved the best performance, with the lowest overall average FID (50.490), the lowest overall average KID (0.01529), and the lowest overall average LPIPS (0.245) among all compared approaches.

CONCLUSIONS: Compared with several popular image synthesis approaches, our approach not only produced higher-quality FFA images with clearer vascular structures and pathological features, but also achieved the best FID, KID, and LPIPS scores in the quantitative evaluation.
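Of the three metrics above, KID is the simplest to state concretely: it is the squared maximum mean discrepancy (MMD) between Inception features of real and synthesized images, conventionally computed with a degree-3 polynomial kernel. A minimal NumPy sketch is shown below; it assumes the Inception feature vectors have already been extracted (the paper does not describe its metric implementation, so this is an illustrative reconstruction of the standard KID definition, not the authors' code).

```python
import numpy as np

def polynomial_kernel(X, Y):
    # Standard KID kernel: k(x, y) = (x.y / d + 1)^3, d = feature dimension
    d = X.shape[1]
    return (X @ Y.T / d + 1.0) ** 3

def kid(real_feats, fake_feats):
    """Unbiased squared-MMD estimate between two feature sets.

    real_feats, fake_feats: arrays of shape (num_images, feature_dim),
    e.g. 2048-dim Inception-v3 pool features (assumed precomputed here).
    """
    Kxx = polynomial_kernel(real_feats, real_feats)
    Kyy = polynomial_kernel(fake_feats, fake_feats)
    Kxy = polynomial_kernel(real_feats, fake_feats)
    m, n = len(real_feats), len(fake_feats)
    # Exclude self-similarity terms (the diagonal) for the unbiased estimate
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy
```

Matching feature distributions give a score near zero, while a distribution shift inflates it; in practice KID is averaged over several random subsets of the features.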
