S2*-ODM: Dual-Stage Improved PointPillar Feature-Based 3D Object Detection Method for Autonomous Driving


Abstract

Three-dimensional (3D) object detection is crucial for autonomous driving, yet current PointPillar feature-based methods suffer from under-segmentation, overlap, and false detections, particularly in occluded scenes. This paper presents S2*-ODM, a dual-stage improved PointPillar feature-based 3D object detection method designed to address these issues. The first innovation is a dual-stage pillar feature encoding (S2-PFE) module that integrates both inter-pillar and intra-pillar relational features. By improving the recognition of local structures and global distributions, it better separates objects in occluded or overlapping scenes, reducing under-segmentation and false positives. The second improvement incorporates an attention mechanism into the backbone network, which refines feature extraction by emphasizing critical features in the pseudo-images and suppressing irrelevant ones, strengthening the network's focus on essential object details. Experimental results on the KITTI dataset show that the proposed method outperforms the baseline, increasing the average precision of 3D detection for cars, pedestrians, and cyclists by 1.04%, 2.17%, and 3.72%, respectively. These contributions make S2*-ODM a meaningful advance in the accuracy and reliability of 3D object detection for autonomous driving.
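To make the two ideas in the abstract concrete, the following is a minimal NumPy sketch of (1) a dual-stage pillar encoding that combines intra-pillar features (local structure) with an inter-pillar summary (global distribution), and (2) a squeeze-and-excitation-style channel attention that reweights the resulting features. This is a hypothetical simplification for illustration, not the authors' S2-PFE implementation; all function names and shapes are assumptions.

```python
import numpy as np

def encode_pillars(pillars):
    """Toy dual-stage pillar feature encoding (hypothetical sketch).

    pillars: array of shape (P, N, C) -- P pillars, N points each, C channels.
    Stage 1 (intra-pillar): max-pool point features within each pillar,
    capturing local structure.
    Stage 2 (inter-pillar): mix each pillar's feature with the global mean
    over all pillars, a crude stand-in for relating pillars to each other.
    """
    intra = pillars.max(axis=1)                 # (P, C) local structure
    inter = intra.mean(axis=0, keepdims=True)   # (1, C) global distribution
    inter = np.broadcast_to(inter, intra.shape) # replicate to every pillar
    return np.concatenate([intra, inter], axis=1)  # (P, 2C)

def channel_attention(features):
    """SE-style reweighting: emphasize informative channels of the
    pseudo-image features and suppress the rest (illustrative only)."""
    gate = 1.0 / (1.0 + np.exp(-features.mean(axis=0)))  # sigmoid per channel
    return features * gate

# Example: 8 pillars, 32 points per pillar, 4 raw channels per point.
pillars = np.random.rand(8, 32, 4)
feats = channel_attention(encode_pillars(pillars))
print(feats.shape)  # (8, 8)
```

In the actual method the inter-pillar stage and the attention weights are learned by the network; here fixed pooling and a sigmoid gate merely illustrate the data flow.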
