Targetless LiDAR-camera extrinsic calibration via semantic distribution alignment


Abstract

INTRODUCTION: LiDAR-camera fusion systems are widely used in robotic localization and perception, where accurate extrinsic calibration is crucial for multi-sensor fusion. During long-term operation, extrinsic parameters can drift due to vibration and other disturbances. Target-based recalibration is inconvenient in the field, while targetless approaches often suffer from highly non-convex objectives and limited robustness in challenging outdoor scenes.

METHODS: We propose a targetless LiDAR-camera extrinsic calibration method that minimizes a semantic distribution consistency risk on SE(3). We align semantic probability distributions from the two sensing modalities in the image domain and freeze the pixel sampling measure at an anchor pose, so that pixel weighting no longer depends on the current extrinsic estimate and the objective landscape remains stable during optimization. On top of this anchor-fixed measure, we introduce a direction-aware weighting strategy that emphasizes pixels sensitive to yaw perturbations, improving the conditioning of rotation estimation. We further use a globally balanced Jensen-Shannon divergence to mitigate semantic class imbalance and enhance robustness.

RESULTS: Experiments on the KITTI Odometry dataset show that the proposed method reliably converges from substantial initial perturbations and yields stable extrinsic estimates.

DISCUSSION: The results indicate that the method is promising for maintaining long-term LiDAR-camera calibration in real-world robotic systems.
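To make the objective in METHODS concrete, the following is a minimal sketch of a class-balanced Jensen-Shannon loss over per-pixel semantic distributions, with pixel weights held fixed (the anchor-pose measure described above). This is an illustrative reconstruction, not the authors' implementation; the function name, the `class_weights` and `pixel_weights` arrays, and their shapes are all assumptions for the sketch.

```python
import numpy as np

def balanced_js_objective(p_cam, p_lidar, class_weights, pixel_weights, eps=1e-12):
    """Class-balanced Jensen-Shannon consistency loss (illustrative sketch).

    p_cam, p_lidar : (N, C) per-pixel semantic probability distributions
                     from the camera branch and the projected LiDAR branch.
    class_weights  : (C,) global weights counteracting semantic class
                     imbalance (hypothetical balancing scheme).
    pixel_weights  : (N,) sampling weights frozen at the anchor pose, so the
                     measure does not move with the current extrinsic estimate.
    """
    # Mixture distribution used by the Jensen-Shannon divergence.
    m = 0.5 * (p_cam + p_lidar)
    # Class-weighted KL terms of each modality against the mixture.
    kl_cam = np.sum(class_weights * p_cam * np.log((p_cam + eps) / (m + eps)), axis=1)
    kl_lidar = np.sum(class_weights * p_lidar * np.log((p_lidar + eps) / (m + eps)), axis=1)
    js = 0.5 * (kl_cam + kl_lidar)  # per-pixel JS divergence
    # Expectation under the fixed (anchor-pose) pixel measure.
    return np.sum(pixel_weights * js) / np.sum(pixel_weights)
```

In a full calibration loop, `p_lidar` would be re-rendered from the current SE(3) estimate at each iteration, while `pixel_weights` stays fixed, which is what keeps the objective landscape stable during optimization.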
