Symbol ungrounding: what the successes (and failures) of large language models reveal about human cognition


Abstract

Large language models can handle sophisticated natural language processing tasks, which raises the question of how their grasp of semantic meaning compares to that of human beings. Proponents of embodied cognition often point out that because these models are trained solely on text, their representations of semantic content are not grounded in sensorimotor experience. This paper contends that human cognition exhibits capacities consistent with both the embodied and the artificial intelligence approaches. Evidence suggests that semantic memory is partially grounded in sensorimotor systems and partially dependent on language-specific learning. From this perspective, large language models demonstrate the richness of language as a source of semantic information: they show how experience with language might scaffold and extend our capacity to make sense of the world. In the context of an embodied mind, language provides access to a valuable form of ungrounded cognition. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.
