Abstract
Accurate readability assessment is essential for improving text accessibility, developing educational materials, and evaluating the quality of digital content. However, in contrast to English and other high-resource languages, Persian has received considerably less attention in this area, and the few existing models rely only on shallow linguistic or graph-based features. In this paper, we propose a hierarchical transformer-based model for Persian readability classification. The proposed approach uses pretrained Persian language models to jointly encode sentence- and document-level contextual representations, eliminating the need for manual feature engineering. We also introduce a newly curated and extended Persian readability corpus to enable robust training and evaluation. Experimental results show that the proposed model outperforms existing feature-based and neural methods, yielding a 1.9% increase in overall classification accuracy and substantial improvements in both F1-score and cross-domain generalization. These results confirm the effectiveness of hierarchical contextual modeling for readability prediction and show that transformer architectures can help narrow the gap in automatic readability assessment between Persian and high-resource languages.
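The hierarchical design summarized above can be illustrated with a minimal sketch: each sentence is first encoded independently by a pretrained Persian language model, and a small document-level Transformer then contextualizes the resulting sentence vectors before a final readability classifier. The encoder name, number of classes, and layer sizes below are illustrative assumptions, not the exact configuration used in the paper.

```python
# Minimal sketch of hierarchical sentence/document encoding for readability
# classification. The pretrained encoder (ParsBERT) and all hyperparameters
# are illustrative assumptions, not the paper's reported configuration.
import torch.nn as nn
from transformers import AutoModel

class HierarchicalReadabilityClassifier(nn.Module):
    def __init__(self, encoder_name="HooshvareLab/bert-fa-base-uncased",
                 num_classes=3, doc_layers=2, doc_heads=8):
        super().__init__()
        # Sentence-level encoder: a pretrained Persian language model.
        self.sent_encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.sent_encoder.config.hidden_size
        # Document-level encoder: a small Transformer over sentence vectors.
        doc_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=doc_heads, batch_first=True)
        self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers=doc_layers)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # input_ids / attention_mask: (num_sentences, max_tokens) for one document.
        out = self.sent_encoder(input_ids=input_ids, attention_mask=attention_mask)
        sent_vecs = out.last_hidden_state[:, 0]             # [CLS] vector per sentence
        doc_ctx = self.doc_encoder(sent_vecs.unsqueeze(0))   # (1, num_sentences, hidden)
        doc_vec = doc_ctx.mean(dim=1)                        # pool over sentences
        return self.classifier(doc_vec)                      # readability class logits
```

In this sketch, mean pooling over the contextualized sentence vectors is one simple choice for producing a document representation; attention-based pooling or a learned document token would be equally compatible with the hierarchical structure.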