FARVNet: A Fast and Accurate Range-View-Based Method for Semantic Segmentation of Point Clouds


Abstract

Environmental perception systems provide foundational geospatial intelligence for precision mapping applications. Light Detection and Ranging (LiDAR) supplies critical 3D point cloud data for such systems, yet efficiently processing unstructured point clouds while extracting semantically meaningful information remains a persistent challenge. This paper presents FARVNet, a novel real-time Range-View (RV)-based semantic segmentation framework that explicitly models the intrinsic correlation between intensity features and spatial coordinates to enhance feature representation in point cloud analysis. Our architecture introduces three key innovations. First, the Geometric Field of View Reconstruction (GFVR) module rectifies spatial distortions and compensates for the structural degradation induced when 3D LiDAR point clouds are spherically projected onto 2D range images. Second, the Intensity Reconstruction (IR) module updates the "Intensity Vanishing State" of zero-intensity points, including those arising from LiDAR acquisition limitations, enhancing the network's learning ability and robustness. Third, the Adaptive Multi-Scale Feature Fusion (AMSFF) module balances high-frequency and low-frequency features, improving the model's expressiveness and generalization ability. Experimental evaluations demonstrate that FARVNet achieves state-of-the-art performance on single-sensor real-time segmentation tasks while maintaining computational efficiency suitable for environmental perception systems. By combining high accuracy with real-time capability, FARVNet is well suited to LiDAR-based real-time applications.
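To make the range-view setting concrete: the spherical projection mentioned in the abstract maps each LiDAR point to a pixel of a 2D range image via its azimuth and elevation angles. The sketch below shows a standard RangeNet++-style projection in NumPy; it is illustrative preprocessing only, not FARVNet's GFVR module, and the resolution and field-of-view values are assumed defaults, not taken from the paper.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 4) LiDAR cloud (x, y, z, intensity) onto an H x W
    range image. Returns a (H, W, 5) array: range, x, y, z, intensity.
    fov_up/fov_down are the sensor's vertical field of view in degrees
    (assumed values, typical for a 64-beam LiDAR)."""
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)          # range per point

    yaw = np.arctan2(y, x)                             # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))         # elevation

    # Normalize angles to pixel coordinates (column from yaw, row from pitch).
    u = 0.5 * (1.0 - yaw / np.pi) * W
    v = (1.0 - (pitch - fov_down_rad) / fov) * H

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Write points in order of decreasing range so that, when several points
    # fall into the same pixel, the nearest one is kept.
    image = np.zeros((H, W, 5), dtype=np.float32)
    order = np.argsort(r)[::-1]
    image[v[order], u[order], 0] = r[order]
    image[v[order], u[order], 1:4] = points[order, :3]
    image[v[order], u[order], 4] = points[order, 3]
    return image
```

Because many pixels receive no point (and some points carry zero intensity), the resulting image contains exactly the kind of empty and intensity-vanished regions that the GFVR and IR modules are designed to compensate for.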
