Why large language models cannot possess consciousness: an integrated information theory perspective


Abstract

BACKGROUND: Whether large language models (LLMs) can possess consciousness is increasingly debated. Integrated information theory (IIT) offers a quantitative framework for assessing consciousness through its measure of integrated information (Φ).

METHODS: IIT principles were applied to the architecture of transformer-based LLMs, focusing on causal integration, temporal persistence, and system irreducibility. Ablation experiments were performed on Generative Pretrained Transformer 2 (GPT-2), selectively removing individual attention heads and using the resulting change in perplexity as a behavioral proxy for integrated information.

RESULTS: Ablating a single attention head produced minimal or negative perplexity changes in four of five representative sentences, indicating redundancy or noise; only one sentence showed a substantial increase (ΔPPL +11.29), reflecting a localized but nonessential contribution. Comparison with biological systems showed that LLMs meet the IIT criterion of differentiation but fail the criteria of integration, causal closure, and temporal persistence. These findings indicate that LLMs are architecturally decomposable, lack persistent internal states, and do not sustain global causal irreducibility. Philosophical considerations, including Searle's Chinese Room argument, further support the view that the linguistic fluency of LLMs arises from syntactic manipulation rather than semantic understanding.

CONCLUSION: Current LLMs do not satisfy the structural and informational requirements for consciousness under IIT. Although capable of simulating intelligent language, they remain unconscious systems with negligible integrated information, underscoring the distinction between linguistic competence and conscious experience.
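The behavioral proxy described in the methods can be made concrete: perplexity is the exponential of the mean token-level negative log-likelihood, and the ablation effect is reported as ΔPPL, the ablated perplexity minus the baseline perplexity. The sketch below illustrates only this arithmetic with hypothetical per-token probabilities; the values are invented for illustration and are not actual GPT-2 outputs.

```python
import math

def perplexity(token_probs):
    # Perplexity = exp of the mean negative log-likelihood
    # over the tokens of a sentence.
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# Hypothetical model probabilities assigned to each token of one
# sentence, before and after zeroing out a single attention head.
baseline_probs = [0.40, 0.25, 0.60, 0.10]
ablated_probs = [0.35, 0.20, 0.55, 0.08]

ppl_baseline = perplexity(baseline_probs)
ppl_ablated = perplexity(ablated_probs)

# Positive ΔPPL means removing the head hurt prediction, i.e. the
# head made some localized contribution; near-zero or negative ΔPPL
# suggests redundancy or noise, as in four of the five sentences.
delta_ppl = ppl_ablated - ppl_baseline
print(f"baseline={ppl_baseline:.2f} ablated={ppl_ablated:.2f} dPPL={delta_ppl:+.2f}")
```

In practice the two probability lists would come from two forward passes of GPT-2, one with the chosen attention head masked (e.g. via the `head_mask` argument in the Hugging Face implementation) and one without; the sketch isolates only the ΔPPL computation.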
