Abstract
Accurate identification of wheat varieties is essential for seed certification and precision agriculture, yet traditional visual inspection is subjective, labor-intensive, and often unreliable owing to the morphological similarity among cultivars. This study presents a comprehensive comparative framework for automated wheat varietal classification using both handcrafted and deep-learning-based feature extraction methods. A controlled imaging system was used to capture seed images from six Iranian wheat cultivars. Handcrafted morphological, color, and texture descriptors were extracted and reduced using principal component analysis (PCA) prior to classification with a multi-layer perceptron (MLP). In parallel, convolutional neural networks (CNNs) were trained to learn deep features directly from raw images, and two classifier-head strategies, global average pooling (GAP) and fully connected layers (FCL), were systematically compared. Hyperparameters were optimized through structured experimentation, and model stability was assessed using repeated training runs, one-way ANOVA, and 95% confidence intervals. Results show that the CNN-GAP model achieved the highest accuracy (92.19%) and demonstrated superior generalization stability compared with EfficientNet-B4 and Inception-ResNet-v2 models. PCA-based dimensionality reduction enhanced MLP performance, yielding 86.0% accuracy. Cross-domain testing on chickpea seeds highlighted sensitivity to domain shifts and emphasized the need for species-specific training data. From a practical standpoint, the lightweight CNN-GAP architecture, with an average inference time of 13.6 ms per image, is suitable for real-time deployment on low-cost agricultural hardware.