Enhanced semantic segmentation in remote sensing images with SAR-optical image fusion (IF) and image translation (IT)


Abstract

In general, high-fidelity remote sensing requires both synthetic aperture radar (SAR) images, which are available day and night in all weather conditions but can be challenging to interpret, and optical images, which are human-interpretable but available only under favorable lighting conditions. Two of the most widely adopted strategies for combining the complementary information about an area of interest revealed in SAR and electro-optical (EO) images are image fusion (IF) and image translation (IT). IF aims to merge two or more multimodal images into a single image, while IT emphasizes translating data representations from images in a source domain to a target domain. Existing methods typically focus on either IF or IT. In this paper, we jointly exploit IF and IT for enhanced semantic segmentation. When the EO image is of high quality, SAR-optical IF is carried out based on the nonsubsampled contourlet transform (NSCT) and the intensity-hue-saturation (IHS) transform. When the EO images suffer from heavy degradation due to fog, smoke, or clouds and SAR images become the last resort, an efficient end-to-end SAR-to-optical IT network based on the diffusion model is adopted. Experimental results show that the proposed DeepLab+IFIT strategy achieves an average accuracy (aAcc) of 94.86% and a mean intersection-over-union (mIoU) of 87.11% on the SpaceNet6 dataset, and an aAcc of 95.96% and a mIoU of 80.49% on the AIR-MD-SAR-Map dataset, outperforming several classic semantic segmentation networks.
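To illustrate the component-substitution idea behind IHS-based SAR-optical fusion mentioned above, the sketch below shows a generic "fast IHS" fusion step: the intensity component of the optical image is replaced by a histogram-matched SAR band. This is a minimal, standard sketch for illustration only, not the paper's actual NSCT+IHS pipeline; the function name `ihs_fusion` and the use of the band mean as intensity are assumptions.

```python
import numpy as np

def ihs_fusion(optical_rgb, sar):
    """Fast IHS-style component substitution: inject the SAR band as the
    new intensity component of the optical image.

    optical_rgb : (H, W, 3) array, values in [0, 255]
    sar         : (H, W) array, single SAR band
    """
    optical = optical_rgb.astype(np.float64)
    sar = sar.astype(np.float64)
    # Intensity of the optical image (mean of R, G, B channels).
    intensity = optical.mean(axis=2)
    # Match SAR mean/std to the intensity band so the substitution
    # does not shift overall brightness or contrast.
    sar_matched = ((sar - sar.mean()) / (sar.std() + 1e-12)
                   * intensity.std() + intensity.mean())
    # Fast IHS fusion: add the intensity difference to every band,
    # i.e. F_k = X_k + (SAR' - I) for each band k.
    fused = optical + (sar_matched - intensity)[..., None]
    return np.clip(fused, 0.0, 255.0)

# Toy usage with random data standing in for co-registered imagery.
rgb = np.random.randint(0, 256, (64, 64, 3)).astype(np.float64)
sar = np.random.randint(0, 256, (64, 64)).astype(np.float64)
fused = ihs_fusion(rgb, sar)
```

Note that if the SAR band already equals the optical intensity, the substitution is a no-op and the optical image is returned unchanged, which is a quick sanity check for the implementation.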
