Abstract
Vision-based environmental perception is fundamental to autonomous driving, as it enables reliable detection and recognition of diverse objects in complex traffic environments. However, adverse conditions such as rain, fog, and low light significantly degrade image quality and thereby undermine the reliability of object detection algorithms. To address this challenge, we propose a two-stage framework for enhancing object detection under adverse conditions. In the first stage, a lightweight Pix2Pix-based generative adversarial network (LP-GAN) translates adverse-weather images into clear-weather counterparts, alleviating visual degradation. In the second stage, the translated images are processed by a state-of-the-art object detector (YOLOv8) to improve robustness and accuracy. Extensive experiments on the CARLA simulator demonstrate that the proposed framework substantially improves detection performance across diverse adverse conditions. Moreover, the generated clear-weather images provide faithful and interpretable visual representations that can support human understanding and decision-making in autonomous driving. Overall, the framework offers a practical and effective solution for weather-robust object detection, contributing to safer and more reliable autonomous driving.
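The two-stage inference flow described above (image translation followed by detection) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `LPGenerator` encoder-decoder below is a hypothetical stand-in for LP-GAN, whose actual architecture is not specified in the abstract, and `detect_objects` is a placeholder for the YOLOv8 stage.

```python
import torch
import torch.nn as nn

class LPGenerator(nn.Module):
    """Hypothetical lightweight Pix2Pix-style generator (encoder-decoder).
    The real LP-GAN architecture is not detailed in the abstract."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1),        # downsample /2
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),   # downsample /4
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1),  # upsample x2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1),       # back to input size
            nn.Tanh(),  # clear-weather estimate in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

def detect_objects(image):
    """Placeholder for stage two; in practice this would invoke a YOLOv8
    model, e.g. ultralytics' YOLO("yolov8n.pt") applied to the image."""
    return []

# Stage 1: translate an adverse-weather frame into a clear-weather estimate.
adverse = torch.rand(1, 3, 128, 128) * 2 - 1  # dummy normalized camera frame
with torch.no_grad():
    clear = LPGenerator()(adverse)

# Stage 2: run the object detector on the translated image.
detections = detect_objects(clear)
print(tuple(clear.shape))
```

The key design point is that the generator preserves the input resolution, so the translated image can be fed to the detector unchanged.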