An effective transformer based on dual attention fusion for underwater image enhancement


Abstract

Underwater images suffer from color shift, low contrast, and blurred details caused by the absorption and scattering of light in water. Such degraded images can significantly interfere with underwater vision tasks. Existing data-driven underwater image enhancement methods fail to sufficiently account for the spatially inconsistent attenuation and the degradation of color-channel information. In addition, the datasets used for model training are small in scale and monotonous in scene. Our approach therefore addresses the problem from two aspects: network architecture design and the training dataset. We propose a fusion attention block that integrates the non-local modeling ability of the Swin Transformer block with the local modeling ability of residual convolution layers; importantly, it can adaptively fuse non-local and local features weighted by channel attention. Moreover, we synthesize underwater images covering multiple water-body types and varied degradations by applying the underwater imaging model and adjusting its degradation parameters. A perceptual loss function is also introduced to improve visual quality. Experiments on synthetic and real-world underwater images show that our method outperforms existing approaches, making our network suitable for practical applications.
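The data-synthesis step described above can be sketched with the standard simplified underwater imaging model, I_c(x) = J_c(x)·t_c(x) + B_c·(1 − t_c(x)), where t_c(x) = exp(−β_c·d(x)) is the channel-wise transmission. The function name, the specific β values, and the background-light values below are illustrative assumptions, not the paper's actual training parameters; varying β and B per channel is what produces different water-body types.

```python
import numpy as np

def synthesize_underwater(clean, depth, beta, background):
    """Degrade a clean image with the simplified underwater imaging model.

    clean:      (H, W, 3) clean image J in [0, 1]
    depth:      (H, W) scene depth map d
    beta:       (3,) per-channel attenuation coefficients beta_c
    background: (3,) veiling (background) light B_c in [0, 1]
    """
    # t_c(x) = exp(-beta_c * d(x)), broadcast to (H, W, 3)
    t = np.exp(-depth[..., None] * beta[None, None, :])
    # I_c = J_c * t_c + B_c * (1 - t_c)
    return clean * t + background[None, None, :] * (1.0 - t)

rng = np.random.default_rng(0)
clean = rng.random((4, 4, 3))
depth = np.full((4, 4), 2.0)            # constant depth for illustration
beta = np.array([0.7, 0.3, 0.1])        # red attenuates fastest in water (assumed values)
background = np.array([0.1, 0.5, 0.6])  # bluish-green veiling light (assumed values)
degraded = synthesize_underwater(clean, depth, beta, background)
```

Because the output is a convex combination of the clean pixel and the background light, values stay in [0, 1]; sampling β and B from suitable ranges yields the multiple degradation styles the abstract mentions.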
