Abstract
Non-cooperative spatial target detection plays a vital role in enabling autonomous on-orbit servicing and maintaining space situational awareness (SSA). However, onboard embedded systems offer limited computational resources, and spaceborne imaging environments are complex: spacecraft images often contain small targets with low local information entropy that are easily obscured by background noise. As a result, many existing object detection frameworks struggle to achieve high accuracy at low computational cost. To address this challenge, we propose YOLO-GRBI, an enhanced detection network designed to balance accuracy and efficiency. A reparameterized ELAN backbone improves feature reuse and facilitates gradient propagation. BiFormer and C2f-iAFF modules strengthen attention to salient targets, reducing false positives and false negatives. GSConv and VoV-GSCSP modules are integrated into the neck to reduce convolution operations and computational redundancy while preserving information entropy. For classification and confidence prediction, YOLO-GRBI employs focal loss to mitigate class imbalance. Experiments on a self-constructed spacecraft dataset show that YOLO-GRBI outperforms the YOLOv8n baseline, improving mAP@0.5 by 4.9% and mAP@0.5:0.95 by 6.0% while also reducing model complexity and inference latency.
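The focal loss mentioned above down-weights easy, well-classified examples so that training concentrates on hard ones, which is what makes it suitable for the class imbalance between small spacecraft targets and background. The sketch below is a minimal illustration of the standard binary focal loss (not the paper's specific training code); the function name and default `alpha`/`gamma` values are illustrative assumptions.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction (illustrative sketch).

    p: predicted probability of the positive class, in (0, 1).
    y: ground-truth label, 0 or 1.
    The (1 - p_t)**gamma factor shrinks the loss of confident,
    correct predictions, focusing gradient signal on hard examples.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confidently correct prediction contributes almost nothing,
# while a confidently wrong one dominates the loss:
easy = focal_loss(0.95, 1)  # small: target already well classified
hard = focal_loss(0.10, 1)  # large: hard positive, heavily weighted
```

With `gamma = 0` and `alpha = 0.5` this reduces (up to a constant factor) to ordinary cross-entropy, which is the baseline behavior the focusing term modifies.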