Abstract
INTRODUCTION: This paper investigates language impairments in schizophrenia (SZ) by analyzing how a transformer-based model discriminates between texts produced by individuals with and without SZ, thereby integrating insights from language-centered investigations with computational approaches. Using BERT-base-cased, we explore how linguistic markers of SZ can be identified through Natural Language Processing (NLP) techniques, with emphasis on improving performance reliability via dataset refinement and on interpreting deep learning outputs via statistical analyses of thematic content.

METHODS: We fine-tuned a BERT model for binary text classification on 31,278 Reddit posts (15,639 SZ, 15,639 controls) and evaluated its capacity to distinguish language produced by individuals with SZ from that of controls.

RESULTS: The model achieved moderate performance (Accuracy = 0.6969; AUC = 0.78) and remained stable across hyperparameter configurations, indicating that foundation models such as BERT readily fit the available data; further performance gains are therefore more likely to come from dataset refinement than from additional hyperparameter optimization. Three key factors affected the model's performance: text length, topic of discussion, and vocabulary choice. Correctly classified posts tended to be significantly longer (p < 0.001, M = 37.30), to focus on specific topics (e.g., r/Christianity), and to contain more words related to mental health conditions, particularly words semantically related to SZ.

DISCUSSION: These factors have also been reported in manual analyses of the impact of SZ on language. Our findings can inform the development of more accurate computational models for linguistic classification tasks, underscore the value of carefully curated datasets, and demonstrate the viability of NLP methods in profiling SZ language.
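The two reported metrics can be read as follows: Accuracy is the fraction of posts labeled correctly at a fixed decision threshold, while AUC is the probability that a randomly chosen SZ post receives a higher classifier score than a randomly chosen control post. A minimal, self-contained sketch with toy values (not the study's data; all numbers below are illustrative only):

```python
def accuracy(labels, preds):
    # Fraction of thresholded predictions that match the true labels.
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def auc(labels, scores):
    # AUC = probability that a random positive outscores a random
    # negative (ties count 0.5); equivalent to the ROC-curve area.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = SZ post, 0 = control; scores are model P(SZ).
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.1]
preds = [int(s >= 0.5) for s in scores]   # threshold at 0.5

print(accuracy(labels, preds))  # → 0.6666666666666666
print(auc(labels, scores))      # → 0.8888888888888888
```

Note that AUC is threshold-free, which is why a model can show only moderate accuracy at the 0.5 cutoff while still ranking SZ posts above control posts fairly reliably, as in the results above.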