Adaptive ensemble techniques leveraging BERT-based models for multilingual hate speech detection in Korean and English

Abstract

Online hate speech has become a major social problem owing to the rapid growth of Internet communities. Shielded by anonymity, people direct hateful or abusive language at groups different from their own. Because such terms vary by region and are reflected in local languages, it is important to build robust hate speech detection models for each local language. We propose an ensemble of several Bidirectional Encoder Representations from Transformers (BERT)-based models to enhance English and Korean hate speech detection. Parallel Model Fusion (PMF) combines the outputs of the BERT-based models with a final estimator called a meta-learner. During each cross-validation fold, the validation and test predictions of the base models are used as the training and test data for the PMF. The PMF test predictions are fused using Majority Voting Integration or Weighted Probabilistic Averaging. Popular machine learning algorithms such as Random Forest, Logistic Regression, Gaussian Naïve Bayes, and Support Vector Machine are employed as meta-learners for PMF. The proposed model outperformed previous studies and single-model approaches, achieving accuracies of 85% on the English dataset and 89% on the Korean dataset. This study demonstrates improved automatic hate speech detection and encourages further work not only on English hate speech detection but also on non-English hate speech detection.
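The two fusion rules named in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes binary labels (1 = hate, 0 = non-hate) and hypothetical per-model outputs; in the actual pipeline these would come from the BERT-based base models on each cross-validation fold.

```python
import numpy as np

def majority_vote(hard_preds):
    """Majority Voting Integration: each base model casts one hard vote
    per sample; ties (possible with an even model count) go to class 1."""
    # hard_preds: shape (n_models, n_samples), binary labels {0, 1}
    votes = hard_preds.sum(axis=0)
    return (votes * 2 >= hard_preds.shape[0]).astype(int)

def weighted_prob_average(probs, weights):
    """Weighted Probabilistic Averaging: blend each model's P(hate)
    with per-model weights, then threshold the blended score at 0.5."""
    # probs: shape (n_models, n_samples); weights should sum to 1
    avg = np.average(probs, axis=0, weights=weights)
    return (avg >= 0.5).astype(int)

# Hypothetical outputs from three base models on four samples.
hard = np.array([[1, 0, 1, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1]])
soft = np.array([[0.9, 0.2, 0.8, 0.4],
                 [0.6, 0.7, 0.3, 0.2],
                 [0.4, 0.1, 0.9, 0.6]])

print(majority_vote(hard))                              # [1 0 1 0]
print(weighted_prob_average(soft, [0.5, 0.3, 0.2]))     # [1 0 1 0]
```

In the stacking variant described in the abstract, the per-model probability columns would instead be fed as features to a meta-learner (e.g. Logistic Regression) trained on the held-out validation predictions.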
