Abstract
Simultaneous Localization and Mapping (SLAM) is crucial for the safe navigation of autonomous systems. Its accuracy depends not solely on the robustness of the algorithm employed or on the metrological performance of the sensor, but on the combination of both factors. In this work, we present a comprehensive comparison framework for evaluating LiDAR-SLAM systems, focusing on key performance indicators including absolute trajectory error, uncertainty, number of tracked features, and computational time. Our case study compares two LiDAR-inertial SLAM configurations: one based on a motorized optomechanical scanner (the Ouster OS1) with a 360° field of view, and the other based on a MEMS scanner (the Livox Horizon) with a limited field of view and a non-repetitive scanning pattern. The sensors were mounted on an unmanned ground vehicle (UGV) for the experiments, where data were collected by driving the UGV along a predefined path at different speeds and angles. Despite substantial differences in field of view, detection range, and noise, both systems demonstrated comparable trajectory estimation performance, with average absolute trajectory errors of 0.25 m for the Livox-based system and 0.24 m for the Ouster-based system. These findings underscore the importance of sensor-algorithm co-design and demonstrate that even cost-effective, lower-field-of-view solutions can deliver competitive SLAM performance in real-world conditions.
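For concreteness, the absolute trajectory error (ATE) used as a key performance indicator above is conventionally computed as the RMSE of position errors after rigidly aligning the estimated trajectory to ground truth. The sketch below, using NumPy and a standard Kabsch/Umeyama rotation-only alignment, is an illustrative assumption; the exact alignment and averaging used in the evaluation may differ.

```python
import numpy as np

def ate_rmse(est, gt):
    """ATE as RMSE of position errors after rigid (rotation + translation)
    alignment of the estimated trajectory to ground truth.

    est, gt: (N, 3) arrays of time-associated estimated and ground-truth
    positions. Scale is assumed metric (as with LiDAR-inertial SLAM).
    """
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)

    # Center both trajectories to remove the translational offset.
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g

    # Optimal rotation via SVD of the 3x3 cross-covariance (Kabsch).
    U, _, Vt = np.linalg.svd(E.T @ G)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    # Apply the alignment, then take the RMSE of per-pose position errors.
    aligned = (R @ E.T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```

A trajectory that differs from ground truth only by a rigid transform yields an ATE of zero, so the metric isolates drift and local estimation errors from the arbitrary choice of the map's reference frame.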