Abstract
Traditional joint entity-relationship extraction models are typically designed for generic-domain data, which limits their effectiveness in domain-specific applications such as manufacturing. This study presents the DeBERTa-based Potential Relationship Multi-Headed Self-Attention Joint Extraction Model (DPRM), designed to improve the accuracy and efficiency of entity-relationship extraction for manufacturing knowledge graphs. The model comprises three core components: a semantic representation module, a relationship extraction and entity recognition module, and a global entity pairing module. In the semantic representation module, a DeBERTa encoder encodes the input sentences into word embeddings, and Bi-GRU and Multi-Headed Self-Attention mechanisms capture word dependencies to enrich the sentence representation. The relationship extraction and entity recognition module identifies potential relationships within each sentence and integrates a relational gating mechanism to reduce interference from irrelevant information during entity recognition. The global entity pairing module simplifies the model's architecture by filtering to the extracted potential relationships and constructing a global entity-pair matching matrix from fault-specific data. The proposed model is validated through experiments on fault datasets; the results show that DPRM outperforms existing models in F1 score, demonstrating its effectiveness in the fault domain.
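To make the semantic representation module concrete, the following is a minimal, illustrative sketch of the multi-headed self-attention step it describes. This is not the paper's implementation: the DeBERTa encoder and Bi-GRU are replaced here by a random stand-in matrix `X` of contextual token vectors, and the projection weights are random rather than learned. Only the attention mechanics are shown.

```python
import numpy as np

def multi_head_self_attention(X, num_heads, rng):
    # X: (seq_len, d_model) -- stand-in for Bi-GRU outputs computed over
    # DeBERTa word embeddings (the paper's semantic representation module).
    _, d_model = X.shape
    d_head = d_model // num_heads
    # Random query/key/value projections; a trained model would learn these.
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * d_head, (h + 1) * d_head)
        # Scaled dot-product attention within one head.
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d_head)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        heads.append(weights @ V[:, s])
    # Concatenating the heads restores the original model dimension.
    return np.concatenate(heads, axis=-1)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 16))  # 6 tokens, 16-dim stand-in embeddings
out = multi_head_self_attention(X, num_heads=4, rng=rng)
print(out.shape)  # -> (6, 16)
```

Each head attends over the full sentence, so long-range word dependencies that a recurrent layer alone may attenuate are re-weighted directly, which is the motivation for combining Bi-GRU with self-attention in the module.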