Abstract
This study addresses the pressing problem of fake news detection in low-resource languages by proposing a novel attention-based neural network architecture optimized for Turkish. The model integrates FastText word embeddings, a Long Short-Term Memory (LSTM) layer, and an attention mechanism to capture the nuanced linguistic patterns and morphological intricacies of Turkish. Trained and tested on a manually verified dataset of 10,000 Turkish news articles, our system achieved a state-of-the-art accuracy of 92%, significantly outperforming strong baselines such as a fine-tuned Turkish BERT model. A key advantage of our architecture is its computational efficiency: it reduces training time by 40% compared to BERT, making it well suited to real-world, resource-constrained applications. While the model generalizes well across domains, an in-depth error analysis reveals specific vulnerabilities to satirical content (62% accuracy) and sophisticated fabrications designed to mimic credible sources (68% accuracy); these limitations point to important directions for future work. This research provides a validated, efficient, and interpretable framework for combating disinformation in Turkish, with promising implications for other morphologically rich, low-resource languages.
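The embedding–LSTM–attention pipeline summarized above ultimately pools the LSTM's per-token hidden states into a single context vector using learned attention weights, which is also what makes the model interpretable (the weights indicate which tokens drove the prediction). The following is a minimal, dependency-free sketch of that attention-pooling step only; the dimensions, example values, and the scoring vector `w` are illustrative placeholders, not the trained model described in the paper:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(hidden_states, w):
    """Collapse a sequence of LSTM hidden states into one context vector.

    hidden_states: T vectors of dimension d (lists of floats),
                   one per token, as an LSTM would emit.
    w: a scoring vector of dimension d, standing in for the
       learned attention parameters (hypothetical values here).
    """
    # Score each timestep by a dot product with the scoring vector.
    scores = [sum(hi * wi for hi, wi in zip(h, w)) for h in hidden_states]
    alphas = softmax(scores)  # attention weights; they sum to 1
    d = len(hidden_states[0])
    # Weighted sum of hidden states -> context vector fed to the classifier.
    return [sum(a * h[j] for a, h in zip(alphas, hidden_states))
            for j in range(d)]

# Toy example: 3 timesteps, 2-dimensional hidden states.
H = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.4]]
context = attention_pool(H, w=[1.0, 0.0])
```

In a full implementation the scoring vector would be trained jointly with the embeddings and LSTM, and the attention weights `alphas` could be inspected per article to explain individual predictions.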