Abstract
Discrimination between normal (fresh/non-frozen) and frozen-thawed beef is crucial for ensuring food safety. This paper proposes a novel, non-destructive, real-time you-only-look-once model for normal and frozen-thawed beef discrimination (YOLO-NF) based on deep learning techniques. The simple, parameter-free attention module (SimAM) and the squeeze-and-excitation (SE) attention mechanism were introduced to enhance the model's performance. A total of 1200 beef samples were used, with their images captured by a charge-coupled device (CCD) camera. For model development, the training set comprised 3888 images after data augmentation, while the validation set and test set each contained 216 original images. Experimental results on the test set showed that the YOLO-NF model achieved a precision, recall, F1-score and mean average precision (mAP) of 95.5%, 95.2%, 95.3% and 98.6%, respectively, significantly outperforming the YOLOv7, YOLOv5 and YOLOv8 models. Additionally, gradient-weighted class activation mapping (Grad-CAM) was adopted to interpret the model's decision basis. Moreover, the model was deployed on a web interface for user convenience, and the discrimination time on the local server was 0.94 s per image, satisfying the requirements for real-time processing. This study provides a promising technique for high-performance, rapid meat quality assessment in food safety monitoring systems.