Abstract
Water-surface perception is critical for autonomous surface vehicle (ASV) navigation, where reliable tracking of task-relevant objects is essential for safe and robust operation. Referring multi-object tracking (RMOT) provides a flexible tracking paradigm by allowing users to specify objects of interest through natural language. However, existing RMOT benchmarks are mainly designed for ground or satellite scenes and fail to capture the distinctive visual and semantic characteristics of water-surface environments, including strong reflections, severe illumination variations, weak motion constraints, and a high proportion of small objects. To address this gap, we introduce Refer-ASV, the first RMOT dataset tailored for ASV navigation in complex water-surface scenes. Refer-ASV is constructed from real-world ASV videos and features diverse navigation scenes and fine-grained vessel categories. To facilitate systematic evaluation on Refer-ASV, we further propose RAMOT, an end-to-end baseline framework that strengthens visual-language alignment throughout the tracking pipeline and improves robustness in challenging maritime environments. Experimental results show that RAMOT achieves a HOTA score of 39.97 on Refer-ASV, outperforming existing methods. Additional experiments on Refer-KITTI demonstrate its generalization ability across different scenes.