Machine learning research has achieved large performance gains on a wide range of tasks by expanding the learning target from mean rewards to entire probability distributions of rewards, an approach known as distributional reinforcement learning (RL) (1). The mesolimbic dopamine system is thought to underlie RL in the mammalian brain by updating a representation of mean value in the striatum (2), but little is known about whether, where and how neurons in this circuit encode information about higher-order moments of reward distributions (3). Here, to fill this gap, we used high-density probes (Neuropixels) to record striatal activity from mice performing a classical conditioning task in which reward mean, reward variance and stimulus identity were independently manipulated. In contrast to traditional RL accounts, we found robust evidence for abstract encoding of variance in the striatum. Chronic ablation of dopamine inputs disorganized these distributional representations in the striatum without interfering with mean value coding. Two-photon calcium imaging and optogenetics revealed that the two major classes of striatal medium spiny neurons, D1 and D2, contributed to this code by preferentially encoding the right and left tails of the reward distribution, respectively. We synthesize these findings into a new model of the striatum and mesolimbic dopamine that harnesses the opponency between D1 and D2 medium spiny neurons (4-9) to reap the computational benefits of distributional RL.
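The abstract's core computational idea, learning a whole reward distribution through opponent channels, can be illustrated with a short simulation. Below is a minimal Python sketch, our illustration rather than the paper's implementation: units that weight positive prediction errors more heavily converge to optimistic expectiles of the reward distribution (loosely analogous to the D1 right-tail code described above), while units weighting negative errors more heavily converge to pessimistic expectiles (loosely analogous to D2). All variable names and parameter values here are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch (an illustration, not the paper's model): distributional
# RL through asymmetric learning rates. Each unit scales positive and
# negative reward prediction errors (RPEs) differently, so its value
# estimate settles near an expectile of the reward distribution rather
# than the mean. Optimistic units (alpha_pos > alpha_neg) track the right
# tail; pessimistic units track the left tail.

rng = np.random.default_rng(0)

n_units = 7
alpha_pos = np.linspace(0.02, 0.18, n_units)  # step size when RPE > 0 (assumed values)
alpha_neg = alpha_pos[::-1]                   # step size when RPE <= 0
values = np.zeros(n_units)                    # per-unit value estimates

def sample_reward() -> float:
    """Bimodal rewards: same mean (5) as a sure reward, very different variance."""
    return float(rng.choice([1.0, 9.0]))

for _ in range(20_000):
    r = sample_reward()
    rpe = r - values                          # prediction error, one per unit
    values += np.where(rpe > 0, alpha_pos, alpha_neg) * rpe

# Each unit converges near the expectile at level
# tau_i = alpha_pos_i / (alpha_pos_i + alpha_neg_i); together the units
# tile the reward distribution instead of collapsing onto its mean.
for tau, v in zip(alpha_pos / (alpha_pos + alpha_neg), values):
    print(f"tau = {tau:.2f} -> learned value = {v:.2f}")
```

Run as-is, the printed values spread out between the two reward outcomes even though every unit sees the same reward stream, so the population carries variance information that a single mean-value estimate (about 5 here) discards.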
An opponent striatal circuit for distributional reinforcement learning
Authors: Lowet Adam S, Zheng Qiao, Meng Melissa, Matias Sara, Drugowitsch Jan, Uchida Naoshige
| Journal: | Nature | Impact factor: | 48.500 |
| Year: | 2025 | Issue/pages: | 2025 Mar;639(8055):717-726 |
| DOI: | 10.1038/s41586-024-08488-5 | | |
