Abstract
Ghost imaging (GI) offers a robust framework for remote sensing under degraded visibility conditions. However, atmospheric scattering in phenomena such as fog introduces significant noise and signal attenuation, limiting its efficacy. Inspired by the selective attention mechanisms of biological visual systems, this study introduces a novel deep learning (DL) architecture that embeds a self-attention mechanism to enhance GI reconstruction in foggy environments. The proposed approach mimics neural processes by modeling both local and global dependencies within one-dimensional bucket measurements, enabling superior recovery of image details and structural coherence even at reduced sampling rates. Extensive simulations on the Modified National Institute of Standards and Technology (MNIST) dataset and a custom Human-Horse dataset demonstrate that our bio-inspired model outperforms conventional GI and convolutional neural network-based methods. Specifically, it achieves Peak Signal-to-Noise Ratio (PSNR) values between 24.5 and 25.5 dB and Structural Similarity Index Measure (SSIM) values of approximately 0.8 under high scattering conditions (β ≥ 3.0 dB/m) and moderate sampling ratios (N ≥ 50%). A comparative analysis confirms the critical role of the self-attention module in achieving higher-quality image reconstruction than baseline techniques. The model also maintains computational efficiency, with inference times under 0.12 s, supporting real-time applications. This work establishes a new benchmark for bio-inspired computational imaging, with significant potential for environmental monitoring, autonomous navigation, and defense systems operating in adverse weather.
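The self-attention operation referenced above, applied to a sequence of one-dimensional bucket measurements, follows the standard scaled dot-product form. The sketch below is a minimal, hypothetical illustration of that operation; the projection matrices, sequence length, and embedding dimension are illustrative assumptions, not the paper's actual architecture or configuration.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a measurement sequence.

    x: (N, d) array, N embedded bucket measurements of dimension d.
    Wq, Wk, Wv: (d, d) projection matrices (hypothetical here).
    Returns the attended sequence and the (N, N) attention weights,
    which capture the local and global dependencies between samples.
    """
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

# Toy example: 8 measurements embedded in 4 dimensions.
rng = np.random.default_rng(0)
N, d = 8, 4
x = rng.standard_normal((N, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out, weights = self_attention(x, Wq, Wk, Wv)
```

Each row of `weights` sums to one, so every output element is a convex combination of all input measurements, which is how the mechanism mixes global context into the reconstruction.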