Abstract
Pedestrian re-identification is crucial in fields such as intelligent security. In real surveillance scenarios, however, low-resolution images caused by factors such as long shooting distances lead to severe loss of detail and degraded recognition performance. To overcome the over-sharpening and artifacts produced by traditional super-resolution methods when reconstructing pedestrian images, a super-resolution pedestrian re-identification method based on a bidirectional generative adversarial network is proposed. The core innovation of this method lies in a bidirectional adversarial network architecture that integrates forward super-resolution reconstruction with backward downsampling simulation. By introducing Residual-in-Residual Dense Blocks and optimizing the loss function based on ESRGAN, the realism and naturalness of the reconstructed images are significantly improved. Experimental results show that the proposed method (BSRGAN-ReID) achieves leading performance on multiple public datasets: on Urban100, it reaches a PSNR of 34.23 dB and an SSIM of 0.78; on DukeMTMC-reID and CUHK03, its mean average precision (mAP) reaches 91.4% and 82.7%, respectively. In simulated surveillance tests, the method achieves a correct recognition rate of 90.2%, with both false positive and false negative rates below 7%, while consuming fewer computational resources and responding faster than competing methods. The main contribution of this work is an efficient and robust solution to low-resolution pedestrian re-identification, with strong theoretical value and practical application potential.