Abstract
Semantic segmentation in adverse weather conditions is challenging: insufficient image brightness, excessive noise, and blurred object boundaries degrade the performance of conventional visual recognition methods. Domain generalization (DG) for semantic segmentation aims to leverage data from normal-illumination domains so that models remain robust in unseen adverse-weather domains, a critical requirement for autonomous driving robots. Recent advances in parameter-efficient fine-tuning of frozen vision foundation models offer new avenues for DG. However, conventional domain-generalized semantic segmentation methods often struggle in severe weather, particularly at capturing object details and global structures. To overcome these limitations, we introduce RFGLNet, a domain-generalized semantic segmentation model designed for adverse weather scenarios. RFGLNet improves segmentation accuracy through three components: an SVD-Initialized Low-Rank Module, a Fourier-Enhanced Channel Attention Module, and a Grouped Modeling Spatial Attention Module. By exploiting frequency-domain information through Fourier transforms, RFGLNet strengthens global structural perception, enabling a holistic understanding of complex scenes. The grouped spatial attention mechanism reduces cross-channel interference and sharpens local detail extraction, while singular value decomposition ensures that the low-rank fine-tuning parameters align precisely and rapidly with the pretrained feature distribution. Experiments show that RFGLNet achieves a mean intersection over union (mIoU) of 78.3% on the ACDC adverse-weather test set with only 4.32 M trainable parameters.
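The abstract only names the SVD-Initialized Low-Rank Module without detailing it. A minimal sketch of one plausible reading, assuming the module initializes LoRA-style low-rank factors from the top singular components of a pretrained weight matrix (the function name `svd_lowrank_init` and the use of NumPy here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def svd_lowrank_init(W, rank):
    """Initialize low-rank factors A (out x rank) and B (rank x in)
    from the top-`rank` singular components of a pretrained weight W,
    so that A @ B is the best rank-`rank` approximation of W.
    Hypothetical sketch; the paper's actual module is not specified here."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the singular values symmetrically between the two factors.
    A = U[:, :rank] * np.sqrt(S[:rank])
    B = np.sqrt(S[:rank])[:, None] * Vt[:rank]
    return A, B

# Toy pretrained weight: 64 output channels, 32 input channels.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
A, B = svd_lowrank_init(W, rank=8)

# By Eckart-Young, the spectral-norm error of the rank-8 factorization
# equals the 9th singular value of W (the largest discarded one).
err = np.linalg.norm(W - A @ B, 2)
```

Starting the trainable factors at the dominant singular directions of the frozen weight, rather than at random, is what would let fine-tuning begin already aligned with the pretrained feature distribution, which matches the abstract's "precise and rapid alignment" claim.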