Abstract
The 'different-body/different-concepts' hypothesis, central to some embodiment theories, proposes that the sensory capacities of our bodies shape the cognitive and neural basis of our concepts. We tested this hypothesis by comparing behavioral semantic similarity judgments and neural signatures (fMRI) of 'visual' categories, living things (animals, e.g., tiger) and light events (e.g., sparkle), across congenitally blind (n = 21) and sighted (n = 22) adults. Words referring to 'visual' entities/nouns and events/verbs (animals and light events) were compared with less vision-dependent categories from the same grammatical class (animal vs. place nouns; light vs. sound, mouth, and hand verbs). Within-category semantic similarity judgments about animals (e.g., sparrow vs. finch) differed partially across groups, consistent with the idea that sighted people rely on visually learned information when making such judgments about animals. However, robust neural specialization for living things in temporoparietal semantic networks, including the precuneus, was observed in blind and sighted people alike. For light events, which are directly accessible only through vision, behavioral judgments were indistinguishable across groups. Neural responses to light events were likewise similar across groups: in both blind and sighted people, the left middle temporal gyrus (LMTG+) responded more to event concepts, including light events, than to entity concepts. Multivariate patterns of neural activity in LMTG+ distinguished among event types, including light events vs. other event types. In sum, we find that neural signatures of concepts previously attributed to visual experience do not require vision. Across a wide range of semantic types, conceptual representations develop independently of sensory experience.