Cross-Domain Person Re-Identification Based on Multi-Branch Pose-Guided Occlusion Generation


Abstract

Aiming at the missed feature matches caused by occlusion and the fixed model parameters in cross-domain person re-identification, a method based on multi-branch pose-guided occlusion generation is proposed. The method improves the accuracy of person matching and enables identity matching even when pedestrian features are misaligned. First, a novel pose-guided occlusion generation module is designed to strengthen the model's ability to extract discriminative features from non-occluded regions: occlusion data are generated to simulate occluded pedestrian images, which improves the model's learning ability and reduces the misidentification of occluded samples. Second, a multi-branch feature fusion structure is constructed; fusing the different feature information of the global and occlusion branches enriches feature diversity and improves the model's generalization. Finally, a dynamic convolution kernel is constructed to compute the similarity between images, achieving effective point-to-point matching and resolving the problem of fixed model parameters. Experimental results show that, compared with current mainstream algorithms, the method has clear advantages in first-hit rate (Rank-1), mean average precision (mAP), and generalization. On MSMT17→DukeMTMC-reID and MSMT17→Market1501, after re-ranking (Rerank) and temporal lifting (TLift), the mAP and Rank-1 reached 80.5% and 84.3%, and 81.9% and 93.1%, respectively. On DukeMTMC-reID→Occluded-Duke the algorithm achieved 51.6% and 41.3%, demonstrating good recognition performance on an occluded dataset.
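The dynamic-convolution similarity described above can be illustrated with a minimal sketch: each spatial location of the query feature map is treated as a 1×1 convolution kernel applied to the gallery feature map, and the best local responses are aggregated into a point-to-point matching score. This is a simplified, QAConv-style illustration under assumed shapes and L2-normalised features, not the paper's exact formulation; all function and variable names here are hypothetical.

```python
import numpy as np

def dynamic_kernel_similarity(query_feat, gallery_feat):
    """Score two feature maps by using each query location's feature
    vector as a 1x1 dynamic convolution kernel on the gallery feature
    map, then averaging the best local responses in both directions.
    A simplified sketch of point-to-point matching, not the paper's
    exact method. Inputs: (C, H, W) L2-normalised feature maps."""
    C, H, W = query_feat.shape
    q = query_feat.reshape(C, -1)    # (C, H*W) query kernels
    g = gallery_feat.reshape(C, -1)  # (C, H*W) gallery locations
    # Response of every query kernel at every gallery location
    # (cosine similarities, since features are unit-normalised).
    responses = q.T @ g              # (H*W, H*W)
    # Each query point keeps its best gallery match, and symmetrically
    # each gallery point keeps its best query match.
    return 0.5 * (responses.max(axis=1).mean() + responses.max(axis=0).mean())

def l2norm(x):
    """Normalise each spatial feature vector to unit length."""
    return x / (np.linalg.norm(x, axis=0, keepdims=True) + 1e-12)

rng = np.random.default_rng(0)
f = l2norm(rng.standard_normal((64, 8, 4)))  # hypothetical 8x4 feature map
g = l2norm(rng.standard_normal((64, 8, 4)))
print(dynamic_kernel_similarity(f, f))  # identical maps -> 1.0
print(dynamic_kernel_similarity(f, g))  # unrelated maps -> lower score
```

Because the kernels are built from the query image at match time, the matching parameters adapt to each image pair rather than staying fixed after training, which is the property the abstract highlights.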
