Abstract
Three-dimensional (3D) reconstruction based on fringe projection profilometry (FPP) is a crucial technique for capturing surface topography in high-precision industrial manufacturing. However, overexposure frequently occurs in captured images due to variations in object reflectance and lighting conditions, reducing 3D reconstruction accuracy; this is among the most challenging issues in high dynamic range (HDR) environments. To address it, I propose a deep learning-based fringe image restoration method that uses U-Net-derived networks to restore saturated fringes, enabling subsequent 3D reconstruction. The method significantly improves reconstruction accuracy without requiring additional hardware or the capture of multiple extra image sets for prediction. I further systematically compare the performance of three network architectures (U-Net, Res-U-Net, and SE-U-Net) on the fringe repair task, revealing their respective capabilities through quantitative experimental analysis. Comparative experiments show that all three networks can effectively repair saturated fringes, with SE-U-Net performing best at restoring missing regions. This study not only validates the effectiveness of deep learning for repairing saturated fringe images in HDR scenes, but also provides guidance for selecting network models for grating fringe restoration.