Abstract
The unloading port of a scraper conveyor is a critical component in fully mechanized mining operations and is prone to blockages caused by large coal fragments. Preventing such blockages is hampered by the limited accuracy and insufficient real-time performance of the visual perception methods currently used by crushing robots to identify large coal pieces in complex mining environments. To address this issue, this paper proposes a visual detection method for coal mine crushing robots based on transfer learning and an adaptive weighted attention mechanism, termed LCDet. First, a lightweight backbone network incorporating grouped convolution is designed to enhance feature representation while substantially reducing model complexity, thereby meeting on-device deployment requirements. Second, an adaptive weighted attention mechanism is introduced to suppress background interference and emphasize regions containing large coal fragments, particularly enhancing blurred edge textures. In addition, a transfer learning-based training strategy is adopted to improve generalization and reduce dependence on large-scale training data. Experimental results on the public DsLMF+ dataset show that LCDet achieves precision, recall, mAP50, and mAP50-95 values of 79.3%, 75.1%, 84.5%, and 56.2%, respectively, striking a favorable balance between detection accuracy and model complexity. On a self-constructed large coal dataset, LCDet attains precision, recall, mAP50, and mAP50-95 values of 90.4%, 91.3%, 96.5%, and 69.3%, respectively, outperforming the baseline YOLOv8n model. Compared with other detection methods, LCDet delivers superior performance while maintaining a relatively low parameter count. These results indicate that LCDet enables lightweight, accurate detection of large coal fragments, supporting real-time deployment on crushing robots in fully mechanized mining environments.