Soft smooth contrastive learning with hybrid memory for unsupervised visible-infrared person re-identification


Abstract

Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to match person images across visible and infrared modalities without any labeled data, a task severely hindered by the large cross-modality discrepancy and the absence of ground-truth annotations. Recent advances predominantly adopt unsupervised contrastive learning frameworks that rely on clustering-generated pseudo-labels to guide representation learning. While existing methods emphasize establishing cross-modality correspondences for modality-invariant feature learning, they often overlook the adverse impact of unreliable pseudo-labels, which frequently arise from significant intra-class variations and inter-modality misalignment. Such noisy correspondences can severely degrade model robustness and generalization. To tackle this challenge, we propose Soft Smooth Contrastive Learning with Hybrid Memory (SCLHM), a novel framework that jointly addresses noisy pseudo-labels and cross-modality divergence. Specifically, we first design a Soft Smooth Contrastive Learning (SSCL) module that mitigates the influence of noisy pseudo-labels by smoothing similarity distributions based on intra-class consistency. In addition, we introduce a Hybrid Memory Learning (HML) module that unifies modality-specific and modality-invariant feature representations, enabling more comprehensive knowledge integration. Furthermore, an Adaptive-weight Memory Update (AMU) strategy is developed to dynamically adjust memory bank updates during batch training, promoting the learning of globally discriminative and stable features. Code will be released.
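To make the general mechanism concrete, the sketch below illustrates the two generic building blocks the abstract refers to: a memory-bank contrastive loss whose one-hot pseudo-label target is softened (as in standard label smoothing), and a momentum memory update whose weight adapts to how well a feature agrees with its assigned centroid. This is a minimal illustration of these generic techniques only, not the paper's SSCL/AMU formulation; all function names, the smoothing scheme, and the adaptive weighting rule are assumptions by the editor.

```python
import numpy as np

def soft_contrastive_loss(feat, memory, pseudo_label, tau=0.05, eps=0.1):
    """Cross-entropy between the feature's softmax similarity over K cluster
    centroids and a label-smoothed pseudo-label target. Smoothing the one-hot
    target is one generic way to dampen the effect of a wrong cluster
    assignment (illustrative only; not the paper's exact SSCL loss)."""
    logits = memory @ feat / tau                 # (K,) similarities to centroids
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()    # softmax over centroids
    target = np.full_like(p, eps / len(p))       # smoothed soft target
    target[pseudo_label] += 1.0 - eps
    return -(target * np.log(p + 1e-12)).sum()

def adaptive_memory_update(memory, feat, pseudo_label, base_m=0.9):
    """Momentum update of the assigned centroid, with the momentum raised
    toward 1 (i.e. a smaller update) when the feature disagrees with its
    centroid -- a hypothetical adaptive weighting, not the paper's AMU rule.
    Assumes rows of `memory` and `feat` are L2-normalized."""
    c = memory[pseudo_label]
    agree = float(c @ feat)                      # cosine similarity in [-1, 1]
    m = base_m + (1.0 - base_m) * (1.0 - agree) / 2.0   # low trust for outliers
    c_new = m * c + (1.0 - m) * feat
    memory[pseudo_label] = c_new / np.linalg.norm(c_new)
    return memory

# Toy usage: 4 cluster centroids of dimension 8, one query feature.
rng = np.random.default_rng(0)
memory = rng.normal(size=(4, 8))
memory /= np.linalg.norm(memory, axis=1, keepdims=True)
feat = rng.normal(size=8)
feat /= np.linalg.norm(feat)
loss = soft_contrastive_loss(feat, memory, pseudo_label=2)
memory = adaptive_memory_update(memory, feat, pseudo_label=2)
```

In practice such a memory bank would hold one centroid per pseudo-identity (and, for a hybrid design, possibly per modality), updated once per mini-batch; the sketch keeps everything in numpy for self-containment.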
