Multi-scale sparse convolution and point convolution adaptive fusion point cloud semantic segmentation method



Abstract

Semantic segmentation of LiDAR point clouds is essential for autonomous driving. However, current methods often suffer from low segmentation accuracy and feature redundancy. To address these issues, this paper proposes a novel approach based on the adaptive fusion of multi-scale sparse convolution and point convolution. First, to counter the redundant feature extraction of existing sparse 3D convolutions, we introduce an asymmetric importance of space locations (IoSL) sparse 3D convolution module. By prioritizing the importance of input feature positions, this module enhances sparse learning of critical feature information and strengthens the extraction of intrinsic features in both the vertical and horizontal directions. Second, to mitigate the significant discrepancies between single-type and single-scale features, we propose a multi-scale feature fusion cross-gating module. It employs gating mechanisms to improve fusion accuracy across receptive fields of different scales, and uses a cross self-attention mechanism to adapt to the distinct propagation characteristics of point and voxel features, further enhancing fusion performance. Experimental comparisons and ablation studies on the SemanticKITTI and nuScenes datasets validate the generality and effectiveness of the proposed approach. Compared with state-of-the-art methods, our approach significantly improves accuracy and robustness.
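The abstract does not specify the exact form of the cross-gating module, but the core idea of gated fusion between point-branch and voxel-branch features can be sketched as follows. This is a minimal illustrative sketch with NumPy, not the paper's implementation: the function name `gated_fusion`, the sigmoid gate over concatenated features, and the per-channel convex combination are all assumptions standing in for the full module (which, per the abstract, also involves cross self-attention and multi-scale receptive fields).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(point_feat, voxel_feat, w_gate, b_gate):
    """Sketch of gated fusion between point and voxel features.

    A gate computed from the concatenated features decides, per point and
    per channel, how much each branch contributes to the fused output.
    Hypothetical form; the paper's cross-gating module is more elaborate.
    """
    concat = np.concatenate([point_feat, voxel_feat], axis=-1)  # (N, 2C)
    gate = sigmoid(concat @ w_gate + b_gate)                    # (N, C), in (0, 1)
    # Convex combination: gate selects between the two feature sources.
    return gate * point_feat + (1.0 - gate) * voxel_feat        # (N, C)

# Toy example: 4 points with 8-dim features from each branch.
rng = np.random.default_rng(0)
N, C = 4, 8
p = rng.normal(size=(N, C))            # point-branch features
v = rng.normal(size=(N, C))            # voxel-branch features
w = rng.normal(size=(2 * C, C)) * 0.1  # gate weights (random for the demo)
b = np.zeros(C)
fused = gated_fusion(p, v, w, b)
assert fused.shape == (N, C)
```

Because the gate lies in (0, 1), each fused value is a channel-wise interpolation between the point and voxel features, which is what lets the network learn, per location, which feature source to trust.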
