Abstract
Unsupervised domain adaptation (UDA) plays a vital role in machine learning, tackling scenarios in which labeled source data and unlabeled target data follow different distributions. While most previous studies have concentrated on between-domain transferability, they often neglect the rich within-domain semantic structures, which empirically limits the discriminative power of the learned model. In this work, we introduce a novel approach, prototype-oriented Class-conditional clustering transport (CLUST), for UDA. This method combines clustering objectives with deep prototype learning to improve UDA performance. CLUST aims to generate more diverse and reliable probabilistic outputs by maximizing informational entropy and enforcing semantic consistency. Specifically, CLUST minimizes both class-conditional feature clustering transport costs and prototype clustering transport costs, fostering intra-domain feature aggregation and robust cross-domain class structure alignment. Furthermore, CLUST enforces consistent probability predictions for same-class samples, preserving semantic structural consistency within the target data. Theoretically, we analyze the CLUST architecture with respect to the generalization error bound on the target domain, confirming its theoretical soundness and robustness. Extensive experiments demonstrate CLUST's effectiveness across various challenging scenarios. The experimental results show that CLUST delivers state-of-the-art or comparable performance, highlighting its robustness and practicality across diverse UDA contexts.