Unpaired image to image translation for source free domain adaptation in semantic segmentation


Abstract

Source-free domain adaptation (SFDA) assumes that source data are inaccessible during domain adaptation. Current SFDA methods commonly use source-trained models to generate pseudolabels for unlabelled target data. SFDA for semantic segmentation has become an active research topic, focusing on challenges such as pseudolabel noise, model overfitting, and class imbalance. To address these issues, this paper proposes an unpaired image-to-image (UITI) learning framework. Specifically, we select valid pseudolabels on the basis of image-style consistency via two source-trained discriminators, reducing the pseudolabel noise caused by domain discrepancies. To prevent the source model from overfitting on the target domain, we generate augmented data as supplementary samples for the target data. These synthetic samples retain feature-level knowledge of the source data while preserving domain-invariant structural characteristics of the target data, and they promote learning on rare-class patches and key-region patches. Additionally, we propose a class alignment loss to balance the appearance frequency of classes, and a region alignment loss to preserve both global semantics and local details. Extensive experiments on two widely used benchmarks, GTA5 → Cityscapes and SYNTHIA → Cityscapes, show that the proposed method achieves state-of-the-art mIoU scores of 58.3% and 61.3%, respectively.
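The abstract's class alignment loss balances how often each class appears during training. The paper's exact formulation is not reproduced here, but a common mechanism it may resemble is inverse-frequency weighting of a pixel-wise cross-entropy, so that rare classes (e.g., infrequent Cityscapes categories) contribute more per pixel. A minimal NumPy sketch of that idea, with all function names and constants being illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def class_balanced_weights(labels, num_classes, eps=1e-6):
    # Inverse-frequency weights: rare classes receive larger weight.
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    freq = counts / max(counts.sum(), 1.0)
    weights = 1.0 / (freq + eps)
    # Normalize so the average weight stays near 1.
    return weights / weights.sum() * num_classes

def weighted_cross_entropy(probs, labels, weights):
    # probs: (N, C) softmax outputs; labels: (N,) integer pseudolabels.
    n = labels.shape[0]
    picked = probs[np.arange(n), labels]
    return float(-(weights[labels] * np.log(picked + 1e-12)).mean())

# Toy example: class 0 dominates, class 2 is rare.
labels = np.array([0, 0, 0, 0, 1, 1, 2])
w = class_balanced_weights(labels, num_classes=3)
probs = np.full((7, 3), 1.0 / 3.0)  # uniform predictions
loss = weighted_cross_entropy(probs, labels, w)
```

With uniform predictions, the rare class 2 receives the largest weight, so improving its pixels reduces the loss fastest, which is the balancing effect a class alignment term is meant to provide.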
