Abstract
This study examines whether Large Language Models (LLMs) produce Chinese-to-Uyghur translations whose syntactic patterns are consistent with expectations motivated by cognitive efficiency. We compare translations generated by six mainstream LLMs against a human-expert benchmark that serves as the reference for structural comparison. Syntactic complexity is quantified using Mean Dependency Distance (MDD), and we introduce a relative metric, Cognitive Divergence, as a structural proxy for sentence-level deviation from the human benchmark. Semantic comprehensibility is evaluated using COMET scores. The results indicate that LLM-generated texts show no statistically significant difference from the human benchmark in macroscopic syntactic complexity, suggesting a form of surface-level syntactic similarity. However, absolute syntactic complexity alone is not reliably associated with semantic comprehensibility. In contrast, Cognitive Divergence shows a strong negative association with comprehensibility at the model level (r = -0.908, p = 0.012) and, for most models, at the sentence level. These findings suggest that relative alignment with human syntactic patterns may offer a useful explanatory perspective on variation in the comprehensibility of LLM-generated translations, complementing existing evaluation approaches based on absolute complexity.
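For concreteness, a sketch of the two structural metrics under stated assumptions. MDD is presumably computed in the standard way: for a sentence of n words, each non-root word w_i has a dependency distance DD_i, the signed difference between its linear position and that of its syntactic head, giving

    MDD = \frac{1}{n-1} \sum_{i=1}^{n-1} \lvert DD_i \rvert .

Cognitive Divergence is characterized above only as a relative, sentence-level deviation from the human benchmark; one plausible instantiation, assumed here for illustration rather than taken from the paper, is CD = \lvert MDD_{LLM} - MDD_{human} \rvert computed over each aligned sentence pair.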