A dynamic attention mechanism for road extraction from high-resolution remote sensing imagery using feature fusion


Abstract

Accurate road information is critical for intelligent navigation and urban planning. Compared with traditional road detection methods, deep learning-based approaches have demonstrated significant advantages in road extraction from remote sensing imagery. However, challenges such as occlusion by vegetation and buildings, as well as the similarity between roads and surrounding objects, often lead to incomplete road extraction. To address these issues, we propose a novel deep learning model, RISENet, which consists of three main components: a dual-branch fusion encoder, a multi-layer dynamic spatial-channel fusion attention mechanism (MCSA), and a hybrid feature dilation-aware decoder. The dual-branch encoder combines dual convolutions with multi-head deep convolutions to extract fundamental features and capture fine-grained details, while its feature fusion module integrates global and local information to strengthen feature representation. The MCSA captures long-range dependencies within remote sensing images, improving the differentiation between roads and other objects. The dilation-aware decoder dynamically expands the receptive field, preserving global context while reducing the loss of fine details. RISENet was evaluated on three distinct road segmentation benchmarks, achieving accuracies of 90.04%, 92.24%, and 88.18%, respectively, and performing strongly in both visual quality and quantitative metrics. Ablation experiments further confirm the effectiveness of the adopted loss function and fusion strategy. Together, these results indicate that RISENet performs well on road segmentation across diverse datasets and exhibits considerable robustness.
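The abstract credits the dilation-aware decoder with expanding the receptive field while preserving detail. The paper's implementation is not given here, but the underlying effect rests on a standard property of dilated convolutions: stacking stride-1 layers with increasing dilation rates grows the effective receptive field without adding parameters or downsampling. A minimal sketch of that property (the function name and the example rates `[1, 2, 4]` are illustrative choices, not taken from the paper):

```python
def receptive_field(kernel_size: int, dilations: list[int]) -> int:
    """Effective receptive field of a stack of stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d adds (k - 1) * d pixels
    of context on top of the field seen by the previous layers.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Three 3x3 layers with dilations 1, 2, 4 cover a 15-pixel-wide context,
# versus only 7 pixels for three ordinary (dilation-1) 3x3 layers.
print(receptive_field(3, [1, 2, 4]))  # 15
print(receptive_field(3, [1, 1, 1]))  # 7
```

This is why a dilation-aware decoder can aggregate wide context for long, thin structures such as roads without the resolution loss that pooling-based context aggregation would incur.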
