Abstract
Monocular visual measurement and vision-guided robotics are widely applied in modern automated manufacturing, particularly in aerospace assembly. However, during assembly pose measurement and guidance, multi-source errors (from visual measurement, hand-eye calibration, and robot calibration) propagate and accumulate, degrading final assembly accuracy. To address this issue, this study first proposes an uncertainty analysis method for monocular visual measurement systems in assembly pose measurement, covering both the determination of uncertainty propagation paths and the values of the input uncertainties; on this foundation, the uncertainty of the system is analyzed. Guided by the uncertainty analysis results, the study further proposes a nonlinear mapping estimation method that resolves a series of robot calibration and hand-eye calibration problems directly in a single step. Through experiments and discussion, a high-performance, one-step, end-to-end pose estimation convolutional neural network (OECNN) is constructed, which maps the pose variation of the target object directly to the drive volume variation of the positioner. The uncertainty analysis yields a series of conclusions significant for further enhancing the precision of assembly pose estimation, and the proposed uncertainty analysis methodology may also serve as a reference for the uncertainty analysis of other complex systems. Experimental validation demonstrates that the proposed one-step, end-to-end pose estimation method achieves high accuracy. It can be applied to automated assembly tasks involving vision-guided robots of various typical configurations, and it is particularly suitable for high-precision assembly scenarios such as aircraft assembly.