Abstract
This paper reexamines a recent claim that Large Language Models (LMs) lag behind humans in language comprehension on what were described as minimally complex statements. We argue that human performance was overestimated and LM performance underestimated. Moreover, both people and lower-performing LMs are disproportionately challenged by queries involving potentially appropriate inferences, suggesting shared pragmatic sensitivity rather than model-specific deficits. A more sensitive analysis of Llama-2-70B's log probabilities demonstrates ceiling-level accuracy and pragmatic sensitivity. A separate set of LM grammaticality judgments previously characterized as incorrect is shown to correlate with human judgments, while certain reasoning models approximate idealized judgments when prompted to respond as an expert generative syntactician. Overall, the findings suggest that apparent deficits in LM performance may reflect task design, evaluation choices, and assumptions about human performance rather than deficiencies in current models.