Abstract
To obtain accurate localization in dynamic environments, we propose a dynamic visual-inertial SLAM algorithm that operates in real time. We combine the YOLO-V5 detector with a depth-threshold extraction algorithm to achieve real-time pixel-level segmentation of objects. To handle dynamic targets that are occluded by other objects, we design an object depth extraction method based on K-means clustering. We also design a factor-graph optimization that distinguishes rigid from non-rigid dynamic objects according to object category, so as to better exploit the motion information of dynamic objects, and we use a Kalman filter to match and track objects across frames. To recover as many rigid targets as possible, we further design an adaptive rigid-point-set modeling algorithm that supplements the set of rigid objects. Finally, we evaluate the algorithm on public and self-built datasets, verifying its ability to handle dynamic environments.
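The occlusion-handling idea behind the K-means depth extraction can be sketched as follows: cluster the depth values inside a detected object's box and take the dominant cluster as the object's depth, so that a partially occluding foreground or the background does not bias the estimate. This is a minimal illustrative sketch, not the paper's exact method; the function names, the choice of k, the quantile initialization, and the largest-cluster selection rule are all assumptions.

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    # Simple 1-D k-means over depth samples.
    # Assumption: centers are initialized at evenly spaced quantiles,
    # which is deterministic and spreads them across the depth range.
    centers = np.quantile(values, (np.arange(k) + 0.5) / k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Update each center to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return centers, labels

def object_depth(depth_patch, k=2):
    # Estimate a representative depth for a detected object from the depth
    # values inside its bounding box. Invalid (zero / non-finite) depths
    # are discarded. Assumption: the detected object contributes the
    # largest depth cluster even when partially occluded, so the mean of
    # the largest cluster is returned as the object depth.
    vals = depth_patch[np.isfinite(depth_patch) & (depth_patch > 0)].ravel()
    centers, labels = kmeans_1d(vals, k=k)
    largest = np.bincount(labels, minlength=k).argmax()
    return centers[largest]
```

For example, a box whose pixels are mostly at ~2 m (the object) with a smaller group at ~5 m (background leaking into the box) yields an object depth near 2 m rather than the contaminated box-wide mean.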