Abstract
In recent years, semantic segmentation of remote sensing images with deep convolutional neural networks (CNNs) has advanced rapidly and is widely applied in fields such as urban planning and land cover analysis. However, reliance on a single imaging modality is often hampered by spectral ambiguity, the absence of elevation cues, and geometric confusion, which limits the discrimination of spectrally similar yet distinct categories such as roads and roofs. Although multisource data fusion has emerged as a promising solution, effectively leveraging the complementary information carried by multimodal features remains challenging. To address these challenges, we propose a multimodal fusion and multilayer interaction network (MFMINet), a two-way encoder-decoder network. Our model employs a multimodal cross-layer fusion module (MCFM) that integrates high-level semantic information with low-level spatial details and exploits the complementarity between modalities. We further introduce a self-attention module (SAM) to capture long-range spatial dependencies and refine the fused features. In addition, we design a feature enhancement module (FEM) that adaptively selects between Transformer blocks for narrow channels and CNN blocks for wide channels, followed by point-wise convolution for effective feature integration. Finally, we propose a dual spatial awareness module (DSAM) to mitigate the adverse effects of downsampling and capture global multiscale contextual information. Extensive experiments on the ISPRS Vaihingen and Potsdam datasets demonstrate superior performance, with mIoU reaching 89.96% and 88.24%, respectively, validating the effectiveness of the proposed method.