Abstract
In the complex environment of caged chicken coops, uneven illumination and severe occlusion degrade the accuracy of chicken detection. In this study, we construct an image dataset in the production environment of caged chickens using a head-and-neck co-annotation method and a multi-stage co-enhancement strategy, and we propose Chicken-YOLO, an occlusion-aware caged chicken detection model based on a multi-scale edge information extractor and context fusion, designed for severe occlusion and poor illumination. The model enhances chicken feather texture and comb contour features via the multi-scale edge information extractor (MSEIExtractor), improves information retention during downsampling through integrated context-guided downsampling (CGDown), and strengthens occlusion perception using a detection head with the multi-scale separation and enhancement attention module (DHMSEAM). Experiments demonstrate that Chicken-YOLO achieves the best detection performance among mainstream models, with improvements of 1.7% in mAP50 and 1.6% in mAP50:95 over the baseline YOLO11n. Moreover, the improved model achieves higher mAP50 than the larger YOLO11s while using only 58.8% of its parameters and 42.3% of its computational cost. On two specialized test sets, one for poor illumination and the other for multiple-occlusion cases, Chicken-YOLO improves markedly over the baseline, with mAP50 gains of 3.0% and 1.8%, respectively. These results indicate that the model better captures targets under poor illumination and maintains contour continuity under occlusion, verifying its robustness against complex disturbances.