Abstract
INTRODUCTION: To achieve fast detection of pear fruits in natural orchards and to support path planning for harvesting robots, this study proposes AHG-YOLO, a model for multi-category detection of occluded pear fruits in complex orchard environments.

METHODS: Using the Red Delicious pear as the research object, fruits are classified into three categories by occlusion status: non-occluded fruits (NO), fruits occluded by leaves or branches (OBL), and fruits in close contact with other fruits but not occluded by leaves or branches (FCC). YOLOv11n serves as the base model for a lightweight design. First, the sampling operations in the backbone and neck networks are replaced with ADown downsampling to capture higher-level image features while reducing floating-point operations and computational complexity. Next, shared weight parameters are introduced in the head network, and group convolution is applied to obtain a lightweight detection head. Finally, the bounding-box loss function is replaced with the Generalized Intersection over Union (GIoU) loss, which accelerates convergence and further improves detection performance.

RESULTS: Experimental results show that AHG-YOLO achieves AP values of 93.5% (FCC), 95.3% (NO), and 93.4% (OBL), with an mAP@0.5 of 94.1% across all categories. Compared with the baseline YOLOv11n, precision, recall, mAP@0.5, and mAP@0.5:0.95 improve by 2.5%, 3.6%, 2.3%, and 2.6%, respectively. The model occupies only 5.1 MB, with a 16.9% reduction in parameter count.

DISCUSSION: The improved model is well suited for deployment on embedded devices for pear harvesting, providing technical support for path planning of fruit-picking robotic arms.
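As background for the loss-function change mentioned in METHODS, the GIoU score between two axis-aligned boxes can be sketched as below. This is a minimal illustrative implementation of the standard GIoU formula, not code from the paper; the function name and `(x1, y1, x2, y2)` box convention are assumptions.

```python
def giou(box_a, box_b):
    """Generalized IoU between two boxes given as (x1, y1, x2, y2).

    GIoU = IoU - |C \\ (A ∪ B)| / |C|, where C is the smallest box
    enclosing both A and B. Unlike plain IoU, GIoU stays informative
    (and yields a gradient) even when the boxes do not overlap,
    which is one reason it can speed up bounding-box regression.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    iou = inter / union
    # Smallest enclosing box C.
    c_area = ((max(ax2, bx2) - min(ax1, bx1))
              * (max(ay2, by2) - min(ay1, by1)))
    return iou - (c_area - union) / c_area
```

Identical boxes give a GIoU of 1.0, while disjoint boxes give a negative value whose magnitude grows with the gap, so the training loss `1 - GIoU` keeps pulling non-overlapping predictions toward the target.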
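The parameter savings from the grouped-convolution detection head can be illustrated with simple weight-counting arithmetic. This sketch only reflects the general property of group convolution (each output filter sees `c_in / groups` input channels); the channel sizes and helper name are assumptions, not the paper's configuration.

```python
def conv_weight_params(c_in, c_out, k, groups=1):
    """Number of weight parameters in a 2-D convolution (bias ignored).

    With g groups, each of the c_out filters convolves only
    c_in // g input channels, so the weight count shrinks by a
    factor of g relative to a standard convolution.
    """
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * k * k * c_out

# Hypothetical head layer: 64 -> 64 channels with a 3x3 kernel.
standard = conv_weight_params(64, 64, 3)            # 36,864 weights
grouped = conv_weight_params(64, 64, 3, groups=4)   # 9,216 weights
```

Combined with weight sharing across detection-head branches, this kind of reduction is consistent with the reported 16.9% drop in total parameters and the 5.1 MB model size.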