Enhancing fake news detection with transformer-based deep learning: A multidisciplinary approach


Abstract

The widespread dissemination of fake news presents a critical challenge to the integrity of digital information and erodes public trust. This urgent problem necessitates the development of sophisticated and reliable automated detection mechanisms. This study addresses this need by proposing a robust fake news detection framework centred on a transformer-based architecture. Our primary contribution is the application of the Bidirectional Encoder Representations from Transformers (BERT) model, enhanced with a progressive training methodology that allows the model to incrementally learn and refine its understanding of the linguistic nuances that differentiate factual reporting from fabricated content. The framework was rigorously trained and evaluated on the large-scale WELFake dataset, comprising 72,134 articles. Our findings demonstrate the model's strong performance, achieving an accuracy of 95.3%, an F1-score of 0.953, precision of 0.952, and recall of 0.954. Comparative analysis confirms that our approach significantly outperforms traditional machine learning classifiers and other standard transformer-based implementations, highlighting its superior ability to capture complex contextual dependencies. These results underscore the efficacy of our enhanced BERT framework as a powerful and scalable solution in the ongoing fight against digital misinformation.
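As a quick sanity check on the reported metrics, the F1-score should equal the harmonic mean of precision and recall. The sketch below (the function name and structure are ours, not from the paper) confirms that the reported precision (0.952) and recall (0.954) are consistent with the reported F1-score of 0.953:

```python
# Consistency check for the metrics reported in the abstract.
# The F1-score is defined as the harmonic mean of precision and recall.

def f1_score(precision: float, recall: float) -> float:
    """Return the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 0.952  # value reported in the abstract
recall = 0.954     # value reported in the abstract

f1 = f1_score(precision, recall)
print(round(f1, 3))  # → 0.953, matching the reported F1-score
```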
