MFMINet: Multimodal fusion and cross-layer interaction network for semantic segmentation of high-resolution remote sensing images


Abstract

In recent years, semantic segmentation of remote sensing images with deep convolutional neural networks (CNNs) has developed rapidly in fields such as urban planning and land cover analysis. However, reliance on a single imaging modality is often hampered by spectral ambiguity, the absence of elevation cues, and geometric confusion, limiting the discrimination of spectrally similar yet distinct categories such as roads versus roofs. While multisource data fusion has emerged as a promising solution, effectively leveraging the complementary information in multimodal features remains challenging. To address these challenges, we propose a multimodal fusion and multilayer interaction network (MFMINet), a two-way encoder-decoder network. Our model employs a multimodal cross-layer fusion module (MCFM) to integrate high-level semantic information with low-level spatial details, exploiting the complementarity between modalities. We also introduce a self-attention module (SAM) to capture long-range spatial dependencies and refine the fused features, and a feature enhancement module (FEM) that selects Transformer blocks for narrow channels and CNN blocks for wide channels, followed by point-wise convolution for feature integration. Furthermore, we propose a dual spatial awareness module (DSAM) to mitigate the effects of downsampling and to process global multiscale contextual information. Extensive experiments on the ISPRS Vaihingen and Potsdam datasets demonstrate superior performance, with mIoU reaching 89.96% and 88.24%, respectively, validating the effectiveness of our method.
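To make the channel-dependent selection described for the FEM concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a hypothetical module name (FeatureEnhancementSketch), an illustrative channel threshold (narrow_threshold=128), and standard multi-head self-attention and residual convolution as the two branches, followed by the point-wise (1x1) convolution mentioned in the abstract.

```python
import torch
import torch.nn as nn

class FeatureEnhancementSketch(nn.Module):
    """Sketch of the FEM-style selection: an attention branch for
    narrow-channel features, a convolutional branch for wide-channel
    features, then a point-wise convolution to integrate the result.
    The threshold and all names here are illustrative assumptions."""

    def __init__(self, channels: int, narrow_threshold: int = 128, num_heads: int = 4):
        super().__init__()
        # Assumed selection rule: attention when the channel count is small.
        self.use_attention = channels <= narrow_threshold
        if self.use_attention:
            # Transformer-style branch: self-attention over spatial positions.
            self.norm = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        else:
            # CNN branch: a 3x3 convolutional residual block.
            self.conv = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
        # Point-wise (1x1) convolution for final feature integration.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        if self.use_attention:
            tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
            attn_out, _ = self.attn(tokens, tokens, tokens)
            out = attn_out.transpose(1, 2).reshape(b, c, h, w) + x
        else:
            out = self.conv(x) + x
        return self.pointwise(out)


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)                 # narrow-channel feature map
    print(FeatureEnhancementSketch(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```

The design intuition, as the abstract frames it, is that self-attention is affordable and effective when the channel dimension is small, while convolution scales better for wide feature maps; the 1x1 convolution then fuses whichever branch was used into a common representation.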
