Abstract
Underwater Optical Camera Communication (UOCC) has emerged as a promising paradigm for short-range, high-bandwidth, and secure data exchange among autonomous underwater vehicles (AUVs). UOCC performance strongly depends on exposure time and International Organization for Standardization (ISO) sensitivity, two parameters that govern photon capture, contrast, and bit detection fidelity. However, optical propagation in aquatic environments is highly susceptible to turbidity, scattering, and illumination variability, which severely degrade image clarity and signal-to-noise ratio (SNR). Conventional systems with fixed imaging settings cannot adapt to time-varying conditions, limiting communication reliability. An earlier CNN-based baseline validated the feasibility of deep learning for exposure prediction, but it lacked environmental awareness and did not generalize to dynamic scenarios. To overcome these limitations, we introduce a Real-to-Sim-to-Deployment framework that couples a physically calibrated emulation platform with a Hybrid CNN-MLP Model (HCMM). By fusing optical images, environmental states, and camera configurations, the HCMM achieves substantially improved parameter prediction accuracy, reducing RMSE to 0.23–0.33. When deployed on embedded hardware, it enables real-time adaptive reconfiguration and delivers up to 8.5 dB SNR gain, surpassing both static-parameter systems and the prior CNN baseline. These results demonstrate that environment-aware multimodal learning, supported by reproducible optical channel emulation, provides a scalable and robust solution for practical UOCC deployment in positioning, inspection, and laser-based underwater communication.