Abstract
Salient object ranking (SOR) aims to assign a relative importance order to multiple objects in an image, consistent with human visual attention. However, existing methods struggle with ranking ambiguity in complex scenes, particularly when objects are numerous, occluded, or semantically similar, which degrades accuracy for low-saliency objects. To address this, we propose PairwiseSOR-MLMs, a novel framework that leverages multimodal large models (MLMs) and pairwise comparison for salient object ranking. The approach decomposes global ranking into a series of pairwise comparison tasks: it first applies object detection and instance segmentation to identify objects, uses image inpainting to reconstruct occluded regions, and then prompts MLMs to compare object pairs based on visual saliency cues. A final MLM inference step aggregates these pairwise results into a consistent global ranking. Experiments on the ASSR and IRSR benchmarks show that our method achieves state-of-the-art or competitive performance across metrics and remains robust to occlusion and semantic similarity. Moreover, its pairwise comparison paradigm extends naturally to other relative assessment tasks.