Abstract
Transformer-based large language models (LLMs) have recently demonstrated exceptional performance across a variety of linguistic tasks. LLMs combine information across the words of a sentence primarily through the attention mechanism, implemented by "attention heads": these components assign numerical weights linking different words in the input to one another, capturing different relationships between those words. Some attention heads automatically learn to assign weights that accurately encode meaningful linguistic features; importantly, some heads appear specialized for identifying particular syntactic dependencies. Are the syntactic computations in such heads "encapsulated", i.e., impenetrable to the influence of non-syntactic information? Such encapsulated computations would be strikingly different from those of the human mind, where non-syntactic information sources (e.g., semantics) influence parsing from the earliest moments of online processing, and where syntax and semantics are tightly linked in the mental lexicon. Here, we tested whether the activity of "syntax-specialized" attention heads in transformer-based LLMs is modulated by one type of semantic information: plausibility. In each of three LLMs (BERT, GPT-2, and Llama 2), we first identified attention heads specialized for various dependency types; in nearly all cases tested, we then found that implausible semantic information reduced attention between the words that constitute the dependency for which a head is specialized. These results demonstrate that, even in the attention heads that are the best a priori candidates for syntactic encapsulation, syntactic computation is penetrable by semantics. These data are broadly consistent with the integration of syntax and semantics in the human mind.
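The quantity at issue throughout is the per-head attention weight between two tokens. Below is a minimal sketch of how such weights can be read out from a pretrained model, assuming the Hugging Face transformers library; the model name, layer/head indices, token positions, and example sentence are illustrative placeholders, not the authors' materials or analysis pipeline.

```python
# Minimal sketch: extracting per-head attention weights from a pretrained
# transformer (assumes the Hugging Face "transformers" library).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

sentence = "The chef who ran the restaurant ate the soup."  # hypothetical example
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); entry [i, j] is the weight the
# token at position i assigns to the token at position j.
layer, head = 7, 9  # hypothetical indices for a "dependency-specialized" head
attn = outputs.attentions[layer][0, head]

# Attention from a dependent token to its syntactic head (e.g., object -> verb);
# in practice, token positions would be located via the tokenizer's offsets.
dependent_pos, head_pos = 9, 7  # hypothetical positions
print(f"attention weight: {attn[dependent_pos, head_pos]:.4f}")
```

Comparing this weight across plausible and implausible variants of a sentence, for a head specialized for the relevant dependency type, is the kind of contrast the abstract describes.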