Abstract
Research indicates that the anterior temporal lobes (ATLs) in the left and right hemispheres constitute a supramodal hub for verbal and nonverbal semantic information. However, it remains unclear whether (1) the left and right ATLs are specialized for verbal and nonverbal semantic processing, respectively (functional specialization models), or (2) the bilateral ATLs contribute equally to multimodal semantic processing (unitary "hub-and-spoke" models). The present study examined this question using repetitive transcranial magnetic stimulation (rTMS) with an implicit semantic priming paradigm that minimizes top-down influences from cognitive control mechanisms. After receiving rTMS over the left ATL, right ATL, or vertex, healthy human adults of either sex made real-versus-unreal judgments about visual words or objects, each preceded by a semantically related or unrelated prime. While a similar amount of semantic priming survived vertex stimulation for both words and objects, left ATL stimulation, but not right ATL stimulation, eliminated semantic priming for words, suggesting that verbal semantics is represented in the left ATL. In contrast, rTMS over the ATL eliminated semantic priming for objects irrespective of the side of stimulation. This latter finding suggests a twofold mechanism whereby semantic processing of visual objects depends on semantic computations in the right ATL as well as on automatic activation of semantic word representations in the left ATL, which generates "verbal label" feedback. The observed functional asymmetry of the semantic ATL hub overall supports functional specialization models enriched by verbal label feedback accounts, revealing that verbal semantic knowledge plays a guiding role in the semantic analysis of visual objects.