Abstract
INTRODUCTION: Soybean diseases pose a significant threat to global crop yield and food security, making rapid and accurate identification essential for effective management. Deep learning offers promising solutions for plant disease recognition, but existing models often struggle with in-field soybean disease identification, where intra-class variation is high and inter-class differences are subtle.

METHODS: To address these challenges, we propose a region-specific feature decoupling and adaptive fusion network (RFDAF-Net) for robust and precise soybean disease recognition under real-world field conditions. RFDAF-Net has two key components: a region-specific feature decoupling (RFD) module, whose dual-pathway design explicitly separates shallow, intermediate, and deep features to enhance discriminative patterns and suppress redundant information; and a region-specific feature adaptive fusion (RFAF) module, which dynamically integrates these multi-scale features through learned spatial attention. This hierarchical feature decomposition isolates discriminative disease signatures while suppressing irrelevant variations. The architecture is backbone-agnostic and integrates seamlessly with both convolutional neural networks and Transformers.

RESULTS: We evaluate RFDAF-Net extensively on a comprehensive soybean disease dataset of images captured in diverse field environments. Experimental results show that our method significantly outperforms current state-of-the-art models across multiple architectures, achieving a top accuracy of 99.43% with a Swin-B backbone.

DISCUSSION: The proposed framework offers an interpretable, field-ready solution for precision crop protection, demonstrating strong generalization and practical utility in real-world agricultural applications.
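The adaptive fusion step described in the abstract can be illustrated with a minimal sketch: multi-scale feature maps are blended by per-pixel attention weights, so each spatial location draws on whichever scale is most informative there. All shapes, the scale count, and the channel-weighted scoring step (a stand-in for a learned 1x1 convolution) are illustrative assumptions for exposition, not the paper's actual RFAF implementation.

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(features, score_weights):
    """Fuse multi-scale features with per-pixel spatial attention.

    features: list of (C, H, W) arrays (shallow/intermediate/deep,
              assumed already resized to a common resolution).
    score_weights: (num_scales, C) array acting as a hypothetical
              learned 1x1 conv that maps each map to a score map.
    """
    stacked = np.stack(features)                                # (S, C, H, W)
    # One score map per scale: channel-weighted sum of activations.
    scores = np.einsum("scij,sc->sij", stacked, score_weights)  # (S, H, W)
    attn = softmax(scores, axis=0)       # per-pixel weights over the S scales
    fused = (stacked * attn[:, None]).sum(axis=0)               # (C, H, W)
    return fused, attn

# Toy example with three 8-channel 4x4 feature maps.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 4, 4)) for _ in range(3)]
w = rng.standard_normal((3, 8))
fused, attn = adaptive_fusion(feats, w)
print(fused.shape)   # (8, 4, 4): same shape as each input map
```

In a trained network the scoring weights would be learned end-to-end; here they are random, which is enough to show that the fusion preserves feature-map shape and that the attention weights form a valid per-pixel distribution over scales.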