Back-translation effects on static and contextual word embeddings for topic classification



Abstract

This study investigates the impact of back-translation on topic classification, comparing its effects on static word vector representations (FastText) and contextual word embeddings (RoBERTa). Our objective was to determine whether back-translation improves classification performance across both types of embeddings. In experiments involving Logistic Regression, Support Vector Machine (SVM), Random Forest, and RNN-LSTM classifiers, we evaluated original datasets against those augmented with back-translated data in six languages. The results demonstrated that back-translation consistently enhanced the performance of classifiers using static word embeddings, with the F1-score increasing by up to 1.36% for Logistic Regression and 1.58% for SVM. Random Forest saw improvements of up to 2.80%, and RNN-LSTM by up to 1.46%; however, these gains were smaller in most languages and did not reach statistical significance. In contrast, the effect of back-translation on contextual embeddings from the RoBERTa model was negligible: no language showed a statistically significant F1-score improvement. Despite this, RoBERTa still delivered the highest absolute performance, suggesting that advanced contextual models are less reliant on external data augmentation techniques. These findings indicate that back-translation is especially beneficial for classification tasks in low-resource languages when using static word embeddings, but its utility is limited for modern context-aware models.
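The augmentation procedure described above can be sketched in a few lines. The sketch below is an assumption about the general technique, not the study's actual code: `to_pivot` and `from_pivot` stand in for hypothetical machine-translation callables (e.g. MarianMT or an MT API wrapper), and each labeled example is round-tripped through a pivot language to produce a label-preserving paraphrase.

```python
# Minimal sketch of back-translation augmentation for topic classification.
# `to_pivot` / `from_pivot` are hypothetical translation callables (any MT
# system could be plugged in); this wiring is illustrative, not the paper's code.

def back_translate(text, to_pivot, from_pivot):
    """Round-trip a text through a pivot language to obtain a paraphrase."""
    pivot = to_pivot(text)       # source language -> pivot language
    return from_pivot(pivot)     # pivot language -> source language

def augment_dataset(samples, to_pivot, from_pivot):
    """Append a back-translated copy of each (text, label) pair.

    Labels are preserved: back-translation paraphrases the surface form
    but is assumed not to change the topic.
    """
    augmented = list(samples)
    for text, label in samples:
        paraphrase = back_translate(text, to_pivot, from_pivot)
        if paraphrase != text:   # keep only genuinely new surface forms
            augmented.append((paraphrase, label))
    return augmented
```

In the study's setup this step would be repeated once per pivot language (six languages), with the augmented corpus then fed to the downstream classifiers.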
