A Resilient Method for Visual-Inertial Fusion Based on Covariance Tuning


Authors: Li Kailin, Li Jiansheng, Wang Ancheng, Luo Haolong, Li Xueqiang, Yang Zidi
To improve the localization and pose precision of visual-inertial simultaneous localization and mapping (viSLAM) in complex scenarios, the weights of the visual and inertial inputs must be tuned during sensor fusion. To this end, we propose a resilient viSLAM algorithm based on covariance tuning. During back-end optimization, the unit-weight root-mean-square error (RMSE) of the visual reprojection and IMU preintegration residuals is computed in each optimization pass and used to construct a covariance tuning function, which produces a new covariance matrix. This matrix is then used in a further round of nonlinear optimization, effectively improving pose and localization precision without loop-closure detection. In validation experiments on the EuRoC dataset, our algorithm outperformed the OKVIS, R-VIO, and VINS-Mono open-source viSLAM frameworks in pose and localization precision across all difficulty levels.
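The core idea, as described in the abstract, can be sketched as follows: compute the unit-weight RMSE of each sensor's whitened residuals, then rescale that sensor's covariance so an over-noisy modality is down-weighted in the next optimization pass. This is a minimal illustrative sketch, not the paper's actual tuning function; the function names and the simple quadratic rescaling are assumptions for illustration.

```python
import numpy as np

def unit_weight_rmse(residuals, covs):
    """Unit-weight RMSE of a set of residual vectors.

    residuals: list of (d,) arrays (e.g. reprojection or preintegration errors)
    covs:      list of (d, d) a-priori covariance matrices, one per residual
    """
    total = 0.0
    dof = 0
    for r, P in zip(residuals, covs):
        total += r @ np.linalg.solve(P, r)  # Mahalanobis norm r^T P^{-1} r
        dof += r.size
    return np.sqrt(total / dof)

def tune_covariances(vis_res, vis_covs, imu_res, imu_covs):
    """Rescale each modality's covariance by the square of its unit-weight
    RMSE, so the next nonlinear optimization trusts the better-fitting
    sensor more. (Hypothetical tuning rule, for illustration only.)"""
    s_vis = unit_weight_rmse(vis_res, vis_covs)
    s_imu = unit_weight_rmse(imu_res, imu_covs)
    new_vis_covs = [s_vis**2 * P for P in vis_covs]
    new_imu_covs = [s_imu**2 * P for P in imu_covs]
    return new_vis_covs, new_imu_covs
```

In a viSLAM back end, the rescaled covariances would replace the originals in the residual weighting of the next optimization round, which is how the weight between the visual and inertial terms shifts without any loop-closure information.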
