Abstract
This study investigates the impact of back-translation on topic classification, comparing its effects on static word embeddings (FastText) and contextual word embeddings (RoBERTa). Our objective was to determine whether back-translation improves classification performance for both types of embeddings. In experiments with Logistic Regression, Support Vector Machine (SVM), Random Forest, and RNN-LSTM classifiers, we evaluated original datasets against versions augmented with back-translated data in six languages. The results showed that back-translation consistently enhanced the performance of classifiers using static word embeddings, with F1-score gains of up to 1.36% for Logistic Regression and 1.58% for SVM. Random Forest improved by up to 2.80% and RNN-LSTM by up to 1.46%, although these gains were smaller in most languages and did not reach statistical significance. In contrast, the effect of back-translation on contextual embeddings from the RoBERTa model was negligible: no language showed a statistically significant F1-score improvement. Nevertheless, RoBERTa still delivered the highest absolute performance, suggesting that advanced contextual models are less reliant on external data augmentation. These findings indicate that back-translation is especially beneficial for classification in low-resource languages when static word embeddings are used, but offers limited utility for modern context-aware models.
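For illustration only, the snippet below is a minimal sketch of the general back-translation idea described above (translate each training text into a pivot language and back to obtain paraphrased augmentations), not the authors' exact pipeline; the use of the Hugging Face transformers library, the Helsinki-NLP MarianMT checkpoints, and the English-German pivot pair are all assumptions.

```python
# Minimal back-translation sketch (illustrative; assumed tooling, not the study's pipeline).
from transformers import MarianMTModel, MarianTokenizer


def load(model_name):
    # Load a MarianMT translation model and its tokenizer (assumed checkpoints).
    return MarianTokenizer.from_pretrained(model_name), MarianMTModel.from_pretrained(model_name)


def translate(texts, tokenizer, model):
    # Translate a batch of texts with greedy decoding.
    batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(**batch)
    return tokenizer.batch_decode(generated, skip_special_tokens=True)


def back_translate(texts, src="en", pivot="de"):
    # Source -> pivot -> source round trip; the round-tripped texts serve as
    # paraphrased training examples that keep the original labels.
    fwd_tok, fwd_model = load(f"Helsinki-NLP/opus-mt-{src}-{pivot}")
    bwd_tok, bwd_model = load(f"Helsinki-NLP/opus-mt-{pivot}-{src}")
    pivot_texts = translate(texts, fwd_tok, fwd_model)
    return translate(pivot_texts, bwd_tok, bwd_model)


# Augmented training set = original texts plus their back-translated paraphrases,
# with the class labels duplicated for the new examples.
```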