Abstract
Contrastive graph clustering methods significantly enhance the clustering performance of graph data by leveraging multi-view augmentation and contrastive loss. In particular, Self-Supervised Graph Contrastive Learning (SS-GCL) has gained attention due to its lower dependence on labeled data. However, SS-GCL approaches often rely on pseudo-labeling to categorize samples into positive and negative pairs, which can produce false negatives and degrade clustering performance. To address this issue, a prototype-driven contrastive graph clustering network is proposed. This network uses prototypes as data-driven cluster centers to form high-confidence sample sets, discarding boundary samples and aggregating the remaining samples into augmented embeddings. Additionally, a cross-view decoupled contrastive learning mechanism is designed, utilizing a mean squared error contrastive loss function based solely on positive samples. This mechanism aligns the augmented positive-sample embeddings across views, effectively preventing the generation of false negative samples. Experimental results demonstrate that the proposed method outperforms current state-of-the-art baselines in both accuracy and clustering effectiveness across multiple datasets.
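The two mechanisms named above can be illustrated with a minimal sketch; this is not the paper's implementation, and the function names, the quantile-based boundary cutoff, and the use of NumPy arrays are illustrative assumptions:

```python
import numpy as np

def select_high_confidence(z, prototypes, keep_ratio=0.8):
    """Illustrative sketch: keep the keep_ratio fraction of samples that lie
    closest to some prototype (data-driven cluster center); samples far from
    every prototype are treated as boundary samples and discarded."""
    # distance of each sample to each prototype, shape (n, k)
    d = np.linalg.norm(z[:, None, :] - prototypes[None, :, :], axis=2)
    nearest = d.min(axis=1)
    # quantile cutoff is an assumed heuristic for "high confidence"
    cutoff = np.quantile(nearest, keep_ratio)
    return nearest <= cutoff  # boolean mask of retained samples

def positive_only_mse_loss(z1, z2):
    """Illustrative positive-only MSE alignment: row i of z1 is compared
    only with row i of z2 (its cross-view positive pair). Because no
    negative pairs are constructed, no false negatives can arise."""
    # L2-normalize so the loss depends on embedding direction only
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return float(np.mean(np.sum((z1 - z2) ** 2, axis=1)))
```

For normalized embeddings the per-pair loss ranges from 0 (views agree exactly) to 4 (views point in opposite directions), so minimizing it pulls the two views of each retained sample together without ever pushing any pair apart.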