Abstract
Person re-identification (re-ID) under domain shift remains brittle when viewpoint, illumination, and scene context vary. A domain-generalized (DG) feature learning framework is proposed that couples training-time domain-aware regularization with inference-time neighborhood calibration. During training, a refined neuron dropout (RND) extends domain-guided dropout (DGD) with per-domain neuron-impact scores and temperature-controlled retention, suppressing domain-irrelevant activations while preserving domain-salient ones. At inference, a recursive reciprocal-expansion re-ranking (RRE) enforces reciprocal-neighborhood consistency to stabilize similarity estimates on large galleries. Both components are architecture-neutral: RND operates on intermediate activations, and RRE operates on distance matrices. Results are reported with a single compact CNN encoder to maintain a fixed compute budget and a fair comparison with lightweight baselines. Comprehensive ablations show stepwise CMC Rank-1 gains from the baseline to +DGD, +RND, and +RRE. Comparisons against recent DG-ReID methods indicate competitive performance under pronounced variability in viewpoint, illumination, and background, with small gaps in certain normalization-centric settings, while improving cross-domain robustness on heterogeneous benchmarks.
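The training-time component can be illustrated with a minimal sketch. The abstract specifies only that RND derives temperature-controlled retention from per-domain neuron-impact scores; the concrete formulation below (sigmoid of standardized scores divided by a temperature, clipped to a retention floor, with inverted-dropout rescaling) is a hypothetical instantiation, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnd_mask(impact_scores, temperature=0.5, floor=0.1):
    """Map per-domain neuron-impact scores to retention probabilities.

    Hypothetical formulation: standardize the scores, squash with a
    temperature-scaled sigmoid (lower temperature -> sharper keep/drop
    split), and clip to a minimum retention floor so no neuron is
    permanently silenced.
    """
    z = (impact_scores - impact_scores.mean()) / (impact_scores.std() + 1e-8)
    keep_prob = 1.0 / (1.0 + np.exp(-z / temperature))
    keep_prob = np.clip(keep_prob, floor, 1.0)
    mask = rng.random(impact_scores.shape) < keep_prob
    return mask, keep_prob

def refined_neuron_dropout(activations, impact_scores, temperature=0.5):
    """Suppress low-impact (domain-irrelevant) neurons and retain
    high-impact (domain-salient) ones, rescaling kept activations as in
    standard inverted dropout."""
    mask, keep_prob = rnd_mask(impact_scores, temperature)
    return activations * mask / keep_prob
```

Lowering the temperature pushes retention probabilities toward a hard keep/drop decision, recovering DGD-style deterministic pruning as a limiting case.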
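The inference-time component can likewise be sketched. The code below is a simplified stand-in in the spirit of k-reciprocal re-ranking: it builds each probe's reciprocal-neighbor set from a distance matrix, performs one recursive expansion step, and blends a Jaccard distance over the expanded sets with the original distances. The helper names, the expansion criterion, and the blending weight `lam` are illustrative assumptions, not the paper's RRE definition.

```python
import numpy as np

def k_reciprocal_neighbors(dist, i, k):
    """Indices j that are in i's top-k neighborhood while i is also in
    j's top-k neighborhood (k+1 accounts for the self-match)."""
    fwd = np.argsort(dist[i])[:k + 1]
    return [j for j in fwd if i in np.argsort(dist[j])[:k + 1]]

def rre_distances(dist, k=4, lam=0.3):
    """Simplified reciprocal-expansion re-ranking over a distance matrix.

    For each sample, expand its reciprocal set by the reciprocal sets of
    its members (computed with a smaller k) when they overlap strongly,
    then refine distances as a convex blend of the original distance and
    a Jaccard distance between expanded sets.
    """
    n = dist.shape[0]
    sets = []
    for i in range(n):
        r = set(k_reciprocal_neighbors(dist, i, k))
        expanded = set(r)
        for j in r:  # one recursive expansion step
            rj = set(k_reciprocal_neighbors(dist, j, max(1, k // 2)))
            if len(rj & r) >= max(1, 2 * len(rj) // 3):
                expanded |= rj
        sets.append(expanded)
    jac = np.zeros_like(dist, dtype=float)
    for i in range(n):
        for j in range(n):
            union = len(sets[i] | sets[j])
            jac[i, j] = 1.0 - len(sets[i] & sets[j]) / union if union else 1.0
    return lam * dist + (1.0 - lam) * jac
```

Because the refinement depends only on a distance matrix, it plugs in behind any encoder, which is what makes the component architecture-neutral.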