Abstract
To address the challenge of feature extraction and reconstruction in weak-texture environments, and to provide data support for environmental perception by mobile robots operating in such environments, a feature extraction and reconstruction solution for weak-texture environments is proposed. The solution enhances environmental features through laser-assisted marking and employs a two-step feature extraction strategy in conjunction with binocular vision. First, a fast localization method (FLM) for feature points, based on an improved SURF algorithm with multiple constraints, is proposed to quickly locate the initial positions of the feature points. Then, a robust correction method (RCM) for feature points, based on light-strip grayscale consistency, is proposed to refine these initial positions and obtain precise feature-point locations. Finally, a sparse 3D (three-dimensional) point cloud is generated through feature matching and reconstruction. At a working distance of 1 m, the spatial modeling achieves an accuracy of ±0.5 mm, a relative error of 2‱ (0.02%), and an effective extraction rate exceeding 97%. While ensuring both efficiency and accuracy, the solution demonstrates strong robustness against interference, effectively supporting robots in tasks such as precise positioning, object grasping, and posture adjustment in dynamic, weak-texture environments.