Scale-aware dense dynamic SLAM for monocular, stereo and RGB-D cameras


Abstract

Accurate real-time dense mapping with precise scale and localization is crucial for autonomous robot navigation, particularly in dynamic environments. However, existing methods often rely on a single sensor type and lack robustness to dynamic scenes. To address these challenges, this paper proposes SDMFusion, a generalized framework for real-time, scale-aware dense mapping with dynamic robustness. SDMFusion supports monocular, stereo, and RGB-D cameras and is built upon the ORB-SLAM3 system. Three core modules are integrated into SDMFusion. The scale-depth optimization module recovers absolute scale for monocular input and refines the depth maps. The dynamic feature rejection module segments dynamic objects, combining geometric constraints with moving-consistency checks to reject dynamic features. The real-time anti-dynamic reconstruction module generates high-quality dense maps of static regions from the optimized depth, dynamic masks, and camera poses. Extensive experiments on the KITTI, TUM RGB-D, and BONN RGB-D datasets, as well as real-world data, validate the effectiveness of the approach. The results demonstrate that SDMFusion achieves superior overall accuracy and robustness compared to ORB-SLAM3 and other advanced dynamic SLAM methods, and that it effectively eliminates dynamic regions from the dense maps.
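The geometric constraint mentioned for dynamic feature rejection is commonly realized as an epipolar-consistency test: feature matches on static structure should lie close to their epipolar lines under the estimated camera motion, while matches on independently moving objects show large residuals. The following is a minimal illustrative sketch of that test, not the paper's actual implementation; the fundamental matrix `F`, the pixel arrays, and the threshold are all assumptions for the example.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (in pixels) of p2 from the epipolar line F @ p1.

    p1, p2 are homogeneous pixel coordinates [u, v, 1]. For a static
    scene point observed under camera motion encoded by F, this
    residual should be small; a large value suggests the point moved
    independently of the camera (i.e., it is dynamic).
    """
    l = F @ p1                          # epipolar line [a, b, c] in image 2
    return abs(l @ p2) / np.hypot(l[0], l[1])

def static_match_mask(F, pts1, pts2, thresh=1.0):
    """Boolean mask: True where a match is consistent with camera motion.

    pts1, pts2: (N, 2) pixel coordinates of matched features; thresh is
    an assumed residual threshold in pixels.
    """
    return np.array([
        epipolar_distance(F, np.append(p1, 1.0), np.append(p2, 1.0)) < thresh
        for p1, p2 in zip(pts1, pts2)
    ])

if __name__ == "__main__":
    # Toy case: pure horizontal camera translation gives
    # F = [t]_x with t = (1, 0, 0), so epipolar lines are horizontal
    # and static matches keep the same v coordinate.
    F = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])
    pts1 = np.array([[10.0, 20.0], [30.0, 40.0]])
    pts2 = np.array([[12.0, 20.0],   # static: moved only horizontally
                     [30.0, 55.0]])  # dynamic: moved vertically too
    print(static_match_mask(F, pts1, pts2))  # [ True False]
```

In a full pipeline, this per-match test is typically combined with semantic masks and a moving-consistency check over several frames before a feature is finally rejected from tracking.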
