Improving object detection in challenging weather for autonomous driving via adversarial image translation


Abstract

Vision-based environmental perception is fundamental to autonomous driving, as it enables reliable detection and recognition of diverse objects in complex traffic environments. However, adverse weather conditions (such as rain, fog, and low light) significantly degrade image quality, thereby undermining the reliability of object detection algorithms. To address this challenge, we propose a two-stage framework designed to enhance object detection under adverse conditions. In the first stage, we design a lightweight Pix2Pix-based generative adversarial network (LP-GAN) that translates adverse-weather images into clear-weather counterparts, thereby alleviating visual degradation. In the second stage, the translated images are processed by a state-of-the-art object detector (YOLOv8) to improve robustness and accuracy. Extensive experiments on the CARLA simulator demonstrate that the proposed framework substantially improves detection performance across diverse adverse conditions. Furthermore, the generated clear-weather images provide faithful and interpretable visual representations, which can facilitate human understanding and decision-making in autonomous driving. Overall, the proposed framework offers a practical and effective solution for weather-robust object detection, thereby contributing to safer and more reliable autonomous driving.
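The two-stage design described in the abstract can be sketched as a simple inference pipeline: an image-translation stage feeding a detection stage. The sketch below is purely illustrative; `translate_to_clear` and `detect_objects` are hypothetical placeholder stand-ins (neither LP-GAN nor YOLOv8 is implemented here), and the dummy detection output exists only to show how the stages chain.

```python
# Illustrative sketch of a two-stage weather-robust detection pipeline.
# NOTE: all function names and outputs here are hypothetical placeholders;
# the real system would use a trained LP-GAN generator (stage 1) and a
# YOLOv8 detector (stage 2).

def translate_to_clear(image):
    """Stage 1 (placeholder): the LP-GAN generator would map an
    adverse-weather image to a clear-weather counterpart.
    Here the input is returned unchanged."""
    return image

def detect_objects(image):
    """Stage 2 (placeholder): a detector such as YOLOv8 would return
    bounding boxes with class labels and confidence scores.
    Here a single dummy detection covering the frame is returned."""
    h = len(image)
    w = len(image[0]) if h else 0
    return [{"label": "vehicle", "box": (0, 0, w, h), "score": 0.9}]

def weather_robust_detect(image):
    """Chain the two stages: translate first, then detect on the
    translated (clear-weather) image."""
    clear = translate_to_clear(image)
    return detect_objects(clear)

# Tiny 2x3 grayscale "image" as nested lists, standing in for a frame.
frame = [[0, 1, 2], [3, 4, 5]]
detections = weather_robust_detect(frame)
print(detections[0]["label"])  # vehicle
```

The key design point the abstract emphasizes is that the detector never sees the degraded image directly: stage 1 acts as a restoration front end, so an off-the-shelf detector trained on clear-weather data can be reused without retraining.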
