Abstract
Because the onboard camera has a limited, fixed field of view, guiding beacons gradually drift out of sight as the AUV approaches the docking station, causing unreliable positioning and intermittent data. This paper proposes a visual localization method for underwater autonomous docking based on a cage-type, dual-layer guiding light array, designed to keep beacons visible throughout the approach. A dual-layer light-array localization algorithm accommodates the varying beacon appearance at different docking stages by dynamically distinguishing the front and rear guiding light arrays. After layer-wise separation of the guiding lights, a robust tag-matching framework is constructed for each layer: particle swarm optimization (PSO) provides high-precision initial tag matching, and a filtering strategy based on distance- and angular-ratio consistency eliminates unreliable matches. Under extreme conditions with three missing lights or two spurious beacons, the method achieves matching success rates of 90.3% and 99.6%, respectively; after the filtering strategy, error correction with a backtracking extended Kalman filter (BTEKF) raises the matching success rate to 99.9%. Simulations and underwater experiments demonstrate stable, robust tag matching across all docking phases, with an average detection time of 0.112 s, even when handling dual-layer arrays. The proposed method enables continuous, visually guided docking for autonomous AUV recovery.
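The distance-ratio part of the consistency filter mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a candidate match is given as two equal-length lists of 2-D points (detected light centers and template tag positions, in matched order), and accepts the match only when all pairwise detected-to-template distance ratios agree within a tolerance; the function name, point format, and tolerance value are hypothetical.

```python
import itertools
import math

def distance_ratio_consistent(detected, template, tol=0.15):
    """Hypothetical sketch: under a rigid (scaled) match, the ratio of
    detected to template pairwise distances should be nearly constant.
    Reject the match if any pair's scale deviates from the mean by
    more than `tol` (relative)."""
    scales = []
    for i, j in itertools.combinations(range(len(detected)), 2):
        d_det = math.dist(detected[i], detected[j])
        d_tpl = math.dist(template[i], template[j])
        if d_tpl == 0:  # degenerate template pair: cannot form a ratio
            return False
        scales.append(d_det / d_tpl)
    mean_scale = sum(scales) / len(scales)
    return all(abs(s - mean_scale) / mean_scale <= tol for s in scales)
```

A full filter in the spirit of the paper would combine this with the analogous angular-ratio check before passing surviving matches to the BTEKF correction stage.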