Abstract
Underwater images frequently suffer from degradation, including color shifts, blurred details, and reduced contrast, caused primarily by light scattering and absorption in the challenging underwater environment. Conventional methods based on physical models have proven insufficient for handling diverse underwater conditions, while deep learning approaches are constrained by the quantity and diversity of available data, making it difficult for them to generalize to unseen environments. Furthermore, these methods typically fail to fully exploit the spectral differences between clear and degraded images and neglect critical information in the frequency domain, limiting further gains in enhancement performance. To address these challenges, we introduce PCAFA-Net, a physically guided network for underwater image enhancement based on adaptive adjustment in multiple color spaces and frequency-spatial attention. The proposed model comprises three essential modules: the Adaptive Gradient Simulation Module (AGSM), which models the degradation mechanism of underwater images; the Adaptive Color Range Adjustment Module (ACRAM), which adaptively modifies the histogram distributions in the RGB, Lab, and HSI color spaces; and the Frequency-Spatial Strip Attention Module (FSSAM), which fully exploits both frequency- and spatial-domain information. Extensive experiments on three datasets demonstrate that the proposed method outperforms existing approaches in both subjective and objective evaluations.