Abstract
Relation extraction is an important task for understanding relationships between entities, building knowledge graphs, and facilitating knowledge discovery. Pre-trained models can be fine-tuned for relation extraction when a substantial amount of labeled data is available; however, acquiring extensive labeled data is generally expensive. Semi-supervised techniques for low-resource relation extraction, such as self-training, offer a promising solution by leveraging both limited labeled data and abundant unlabeled data to mitigate this challenge. Traditional self-training methods follow a teacher-student framework in which a student is iteratively trained on pseudo-labels generated by the teacher; noisy pseudo-labels can accumulate across iterations and degrade performance. To address this limitation, we introduce a new model, RE-AUM-LLM, that generates high-quality pseudo-labels by combining self-training with the Area Under the Margin (AUM) statistic and Large Language Models (LLMs), such as Llama 3.1. Experimental results on two benchmark datasets show that the proposed approach achieves state-of-the-art results for low-resource relation extraction, outperforming several strong baselines. We will make the code publicly available to enable reproducibility and further research in this area.