Humans learn generalizable representations through efficient coding


Abstract

Reinforcement learning (RL) theory explains human behavior as driven by the goal of maximizing reward. Conventional approaches, however, offer limited insight into how people generalize from past experiences to new situations. Here, we propose refining the classical RL framework by incorporating an efficient coding principle, which emphasizes maximizing reward using the simplest necessary representations. This refined framework predicts that intelligent agents, constrained to simpler representations, will inevitably: (1) distill environmental stimuli into fewer, abstract internal states, and (2) detect and exploit rewarding environmental features. Consequently, complex stimuli are mapped to compact representations, forming the foundation for generalization. We tested this idea in two experiments that examined human generalization. Our findings reveal that while conventional models fall short in generalization, models incorporating efficient coding achieve human-level performance. We argue that the classical RL objective, augmented with efficient coding, represents a more comprehensive computational framework for understanding human behavior in both learning and generalization.
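The core idea (compressing stimuli into fewer, reward-relevant abstract states so that value estimates transfer to unseen stimuli) can be illustrated with a minimal sketch. This is not the authors' model; the toy task, the feature names, and the abstraction function are all assumptions made for illustration.

```python
import random

# Hypothetical toy task: stimuli are (color, shape) pairs, and only color
# predicts reward. An efficient-coding agent maps each stimulus onto an
# abstract state keyed by the reward-relevant feature alone, so whatever
# it learns transfers automatically to stimuli with unseen shapes.

def abstract_state(stimulus):
    color, shape = stimulus
    return color  # compression: discard the reward-irrelevant feature

def train(episodes=500, alpha=0.5, seed=0):
    rng = random.Random(seed)
    q = {}  # value table over abstract states, not raw stimuli
    train_stimuli = [("red", "circle"), ("blue", "circle"),
                     ("red", "square"), ("blue", "square")]
    for _ in range(episodes):
        s = rng.choice(train_stimuli)
        r = 1.0 if s[0] == "red" else 0.0  # reward depends on color only
        z = abstract_state(s)
        q[z] = q.get(z, 0.0) + alpha * (r - q.get(z, 0.0))
    return q

q = train()
# Generalization: ("red", "triangle") was never seen during training, but it
# shares the abstract state "red", so its value estimate is already in place.
print(q[abstract_state(("red", "triangle"))])
```

A conventional tabular agent keyed on raw (color, shape) pairs would have no entry for the novel stimulus; the compressed representation is what makes the transfer possible.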
