A vision transformer based CNN for underwater image enhancement: ViTClarityNet


Abstract

Underwater computer vision faces significant challenges from light scattering, absorption, and poor illumination, which severely impact underwater vision tasks. To address these issues, ViT-ClarityNet, an underwater image enhancement network that integrates vision transformers with a convolutional neural network, is introduced. For comparison, ClarityNet, a transformer-free variant of the architecture, is presented to isolate the transformer's contribution. Given the limited availability of paired underwater image datasets (clear and degraded), BlueStyleGAN, a generative model that creates synthetic underwater images from clear in-air images by simulating realistic attenuation effects, is proposed. BlueStyleGAN is evaluated against existing state-of-the-art synthetic dataset generators in terms of training stability and realism. ViT-ClarityNet is rigorously tested on five datasets representing diverse underwater conditions and compared with recent state-of-the-art methods as well as ClarityNet. Evaluations include qualitative and quantitative metrics such as UIQM, UCIQE, and the deep-learning-based URanker. Additionally, the impact of enhanced images on object detection and SIFT feature matching is assessed, demonstrating the practical benefits of image enhancement for downstream underwater computer vision tasks.
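The degradation that BlueStyleGAN learns to reproduce can be illustrated with a simple physics-based model: red light is absorbed much faster than blue, and ambient back-scatter adds a bluish veil. The sketch below is a minimal NumPy approximation of this effect; the attenuation coefficients and veiling-light values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Hypothetical per-metre attenuation coefficients (Beer-Lambert style).
# Red is absorbed fastest underwater, blue slowest; values are assumed.
BETA_RGB = np.array([0.60, 0.25, 0.08])

def degrade(image: np.ndarray, depth_m: float,
            veil=(0.05, 0.25, 0.45)) -> np.ndarray:
    """Apply wavelength-dependent attenuation plus a blue back-scatter veil.

    image:   float RGB array in [0, 1], shape (H, W, 3).
    depth_m: optical path length in metres.
    veil:    assumed ambient veiling-light colour (reddish low, bluish high).
    """
    transmission = np.exp(-BETA_RGB * depth_m)           # direct signal decay
    backscatter = np.array(veil) * (1.0 - transmission)  # ambient veiling light
    return image * transmission + backscatter

# A white in-air patch acquires the familiar cyan/blue cast after "5 m" of water.
clear = np.ones((4, 4, 3), dtype=np.float64)
murky = degrade(clear, depth_m=5.0)
```

A generative model such as BlueStyleGAN replaces these hand-picked coefficients with learned, spatially varying degradations, which is what makes the resulting paired dataset more realistic than a fixed analytic model.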
