Abstract
Tomato early blight, caused by Alternaria solani, poses a significant threat to crop yields. Existing detection methods often struggle to accurately identify small or multi-scale lesions, particularly in early stages when symptoms exhibit low contrast and differ only subtly from healthy tissue. Blurred lesion boundaries and varying degrees of severity further complicate accurate detection. To address these challenges, we present YOLOv11-AIU, a lightweight object detection model built on an enhanced YOLOv11 framework and designed specifically for severity grading of tomato early blight. The model integrates a C3k2_iAFF attention fusion module to strengthen feature representation, an ADown multi-branch downsampling structure to preserve fine-scale lesion features, and a Unified-IoU loss function to improve bounding box regression accuracy. A six-level annotated dataset was constructed and expanded to 5,000 images through data augmentation. Experimental results demonstrate that YOLOv11-AIU outperforms models such as YOLOv3-tiny, YOLOv8n, and SSD, achieving an mAP@50 of 94.1%, an mAP@50-95 of 93.4%, and an inference speed of 15.67 FPS. When deployed on the Luban Cat5 platform, the model achieved real-time performance, highlighting its strong potential for practical, field-based disease detection in precision agriculture and intelligent plant health monitoring.