Image prediction algorithm for foggy road scenes based on improved transformer



Abstract

In severe fog, the visibility of the driving environment is extremely low, which seriously impairs the driver's vision and safety. To address the challenges of manual driving in heavy fog, this paper proposes a Transformer-based image prediction algorithm for foggy road scenes, aiming to enhance the visual perception and prediction capabilities of autonomous driving systems under adverse weather conditions. Leveraging the Transformer's ability to model long-range dependencies, we adopt a Transformer improved with Taylor-expanded multi-head self-attention, in which a Taylor-series expansion of the softmax function significantly reduces the computational cost. In addition, a multi-branch architecture with multi-scale patch embedding is introduced into the Transformer, embedding features through overlapping deformable convolutions at different scales. These improvements allow the algorithm to achieve good image prediction results with relatively modest computational requirements. The proposed method was evaluated on three custom foggy road-scene image datasets, achieving a PSNR of 12.9836 and an SSIM of 0.6278. The results indicate that the method can effectively predict clear images in foggy weather, improving visibility in fog, addressing the serious driving-safety issues posed by heavy fog, and contributing to the development of autonomous driving technology.
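To illustrate why a Taylor expansion of the softmax reduces cost: replacing exp(q·k) with its first-order approximation 1 + q·k lets the key-value aggregation be precomputed once, turning the O(N²·d) attention into O(N·d²). The sketch below is a minimal single-head illustration of this general idea, assuming first-order expansion and a simple normalization; it is not the paper's exact formulation.

```python
import numpy as np

def taylor_linear_attention(Q, K, V):
    """Single-head attention with a first-order Taylor-expanded softmax.

    Standard attention computes softmax(QK^T / sqrt(d)) V, which costs
    O(N^2 d). Approximating exp(q.k) ~= 1 + q.k lets us factor the sums:
        numerator_i   = sum_j (1 + q_i.k_j) v_j = V_sum + q_i @ (K^T V)
        denominator_i = sum_j (1 + q_i.k_j)     = N + q_i @ K_sum
    so K^T V (d x d) and K_sum (d,) are computed once, giving O(N d^2).
    This is an illustrative sketch, not the paper's exact method.
    """
    N, d = Q.shape
    Qs = Q / np.sqrt(d)            # usual attention scaling
    kv = K.T @ V                   # (d, d) aggregated key-value product
    k_sum = K.sum(axis=0)          # (d,)
    num = V.sum(axis=0) + Qs @ kv  # (N, d)
    den = N + Qs @ k_sum           # (N,)
    return num / den[:, None]
```

When the logits q·k are small, this closely matches exact softmax attention while never materializing the N×N attention matrix, which is the source of the computational saving cited in the abstract.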
