Abstract
The demand for high-quality beams from high-power lasers has created a need for high-precision inspection of adhesion points in collimating lens packages. In this paper, we propose a Multi-Level Scale Attention Fusion Network (MLSAFNet) that fuses a Multi-Level Attention Module (MLAM) with a Multi-Scale Channel-Guided Module (MSCGM) to achieve highly accurate and robust adhesive spot detection. In addition, we built a Laser Lens Adhesive Spots (LLAS) dataset using automated lens packaging equipment and annotated it pixel by pixel for the first time. Extensive experiments show that MLSAFNet reaches a mean intersection over union (mIoU) of 91.15%, with a maximum localization error of 21.83 μm and a maximum area measurement error of 0.003 mm², outperforming other detection methods.