Abstract
Environmental perception systems provide foundational geospatial intelligence for precision mapping applications, and Light Detection and Ranging (LiDAR) supplies the critical 3D point cloud data on which they depend; yet efficiently processing unstructured point clouds while extracting semantically meaningful information remains a persistent challenge. This paper presents FARVNet, a novel real-time Range-View (RV) semantic segmentation framework that explicitly models the intrinsic correlation between intensity features and spatial coordinates to enhance feature representation in point cloud analysis. Our architecture introduces three key innovations. First, the Geometric Field of View Reconstruction (GFVR) module rectifies the spatial distortions and structural degradation induced when 3D LiDAR point clouds are spherically projected onto 2D range images. Second, the Intensity Reconstruction (IR) module recovers the "Intensity Vanishing State" of zero-intensity points, including those caused by LiDAR acquisition limitations, thereby improving the network's learning capacity and robustness. Third, the Adaptive Multi-Scale Feature Fusion (AMSFF) module balances high-frequency and low-frequency features, augmenting the model's expressiveness and generalization ability. Experimental evaluations demonstrate that FARVNet achieves state-of-the-art performance on single-sensor real-time segmentation tasks while maintaining computational efficiency suitable for environmental perception systems, making it highly promising for LiDAR-based real-time applications.
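As background for the range-view representation named above, the following is a minimal sketch of the standard spherical projection that maps 3D LiDAR points to 2D range-image pixels. The image size and vertical field-of-view values are illustrative assumptions (typical of a 64-beam sensor), not parameters taken from FARVNet itself.

```python
import math

def spherical_project(points, width=2048, height=64,
                      fov_up_deg=3.0, fov_down_deg=-25.0):
    """Map 3D points (x, y, z) to range-image pixels (u, v, range).

    Standard range-view projection: azimuth indexes the column,
    elevation indexes the row. FOV/resolution values are illustrative.
    """
    fov_up = math.radians(fov_up_deg)
    fov_down = math.radians(fov_down_deg)
    fov = fov_up - fov_down  # total vertical field of view
    pixels = []
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)  # range to the sensor
        yaw = math.atan2(y, x)                # azimuth in [-pi, pi]
        pitch = math.asin(z / r)              # elevation angle
        # Normalize angles to pixel coordinates.
        u = 0.5 * (1.0 - yaw / math.pi) * width
        v = (1.0 - (pitch - fov_down) / fov) * height
        # Clamp to valid image bounds.
        u = min(width - 1, max(0, int(u)))
        v = min(height - 1, max(0, int(v)))
        pixels.append((u, v, r))
    return pixels

# A point straight ahead of the sensor lands mid-width at its range.
px = spherical_project([(1.0, 0.0, 0.0)])
```

Because many 3D points can land on the same pixel (and some pixels receive none), this projection discards and distorts structure, which is the degradation the GFVR module is designed to compensate for.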