Abstract
The goal of this study is to improve the quality and diversity of text paraphrase generation, a critical task in Natural Language Generation (NLG) that requires producing semantically equivalent sentences with varied structures and expressions. Existing approaches often fail to generate paraphrases that are both high-quality and diverse, limiting their applicability in tasks such as machine translation, dialogue systems, and automated content rewriting. To address this gap, we introduce two self-contrastive learning models for paraphrase generation: the Contrastive Generative Adversarial Network (ContraGAN) for supervised learning and the Contrastive Model with Metrics (ContraMetrics) for unsupervised learning. ContraGAN leverages a learnable discriminator within an adversarial framework to refine the quality of generated paraphrases, while ContraMetrics incorporates multi-metric filtering and keyword-guided prompts to improve the diversity of unsupervised generation. Experiments on benchmark datasets demonstrate that both models achieve significant improvements over state-of-the-art methods. ContraGAN enhances semantic fidelity, with a 0.46-point gain in BERTScore, and improves fluency, with a 1.57-point reduction in perplexity. In addition, ContraMetrics achieves gains of 0.37 and 3.34 points in iBLEU and P-BLEU, respectively, reflecting greater diversity and lexical richness. These results validate the effectiveness of our models in addressing key challenges in paraphrase generation and offer practical solutions for diverse NLG applications.