Abstract
Unmanned aerial vehicle (UAV) remote-sensing images pose unique challenges for object detection due to uneven object densities, low resolution, and drastic scale variations. Downsampling is an important component of deep networks: it expands the receptive field, reduces computational overhead, and aggregates features. However, the multi-layer downsampling used in object detectors loses texture features to varying degrees at different scales in remote-sensing images, degrading multi-scale object-detection performance. To alleviate this problem, we propose a lightweight texture-reconstructive downsampling module called TRD. TRD models part of the texture features lost during downsampling as residual information. Cascaded downsampling and upsampling operators then provide residual feedback that guides the reconstruction of the desired feature map at each downsampling stage. TRD structurally optimizes the feature-extraction capability of downsampling to provide sufficiently discriminative features for subsequent vision tasks. We replace the downsampling module of existing backbone networks with the TRD module and conduct extensive experiments and ablation studies on a variety of remote-sensing image datasets. Specifically, the proposed TRD module improves AP by 3.1% over the baseline on the NWPU VHR-10 dataset. On the VisDrone-DET dataset, TRD improves AP by 3.2% over the baseline with little additional cost, improving APS, APM, and APL by 3.1%, 8.8%, and 13.9%, respectively. The results show that TRD enriches the feature information retained after downsampling and effectively improves multi-scale object-detection accuracy on UAV remote-sensing images.
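The residual-feedback idea summarized above can be sketched in a few lines of NumPy. Everything concrete here is an illustrative assumption rather than the paper's exact design: strided subsampling stands in for the backbone's downsampling operator, nearest-neighbour upsampling for the upsampling operator, and the scalar `alpha` for the learned transform TRD would apply to the residual before fusing it back.

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling with stride 2 on a single 2-D feature map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbour 2x upsampling."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def trd_downsample(x, alpha=0.5):
    """Illustrative sketch of texture-reconstructive downsampling.

    The texture lost by plain downsampling is modeled as the residual
    between the input and its downsample-then-upsample reconstruction;
    a compressed copy of that residual is fed back into the output.
    `alpha` is a placeholder for the module's learned convolutions.
    """
    y = x[::2, ::2]                         # plain strided downsampling
    residual = x - upsample2(y)             # texture lost by downsampling
    return y + alpha * avg_pool2(residual)  # residual feedback into the output
```

For a constant input the residual vanishes and the sketch reduces to plain downsampling; for textured inputs the feedback term re-injects detail that strided subsampling alone would discard.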