Abstract
Accurate detection of apple leaf diseases remains a critical challenge in precision agriculture, where complex field conditions and subtle symptom variations often degrade model performance. This paper introduces a novel hybrid architecture that combines enhanced spatial attention with edge-aware feature extraction to improve the robustness of disease classification. The proposed model integrates a multi-scale feature fusion module to capture both local lesion patterns and global contextual cues, while a lightweight attention mechanism dynamically prioritizes disease-relevant regions. Experiments on a curated dataset of 12,350 apple leaf images demonstrate the effectiveness of the proposed approach, which achieves 96.7% classification accuracy across six disease categories, a significant improvement over baseline models such as EfficientNet-B4 (94.1%) and ResNet-50 (93.8%). The system particularly excels at detecting early-stage infections, showing 15% higher precision on subtle scab lesions than existing methods. With only 3.2 million parameters, the model remains practical for deployment on edge devices in orchard environments. These advances address key limitations of current vision-based plant disease detection systems while balancing accuracy and computational efficiency for real-world agricultural applications.
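The abstract does not specify the internals of the lightweight attention mechanism; the following is a minimal, purely illustrative sketch of a generic spatial-attention gate (in the style of channel-pooled attention such as CBAM), not the paper's actual module. The function name `spatial_attention` and the `alpha` sharpness parameter are assumptions introduced for illustration.

```python
import numpy as np

def spatial_attention(feature_map: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Illustrative spatial-attention gate (assumed form, not the paper's module).

    feature_map: array of shape (C, H, W).
    Returns features of the same shape, reweighted by a per-pixel gate in (0, 1).
    """
    avg_map = feature_map.mean(axis=0)   # (H, W) channel-average descriptor
    max_map = feature_map.max(axis=0)    # (H, W) channel-max descriptor
    # Sigmoid gate over the combined descriptors; alpha controls sharpness.
    gate = 1.0 / (1.0 + np.exp(-alpha * (avg_map + max_map)))
    return feature_map * gate[None, :, :]  # broadcast the gate over channels

# Toy usage: 8-channel, 4x4 feature map.
x = np.random.default_rng(0).normal(size=(8, 4, 4))
y = spatial_attention(x)
```

Such a gate adds almost no parameters (here, none), which is consistent with the abstract's emphasis on a lightweight, edge-deployable design.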