One-Shot Averaging for Distributed TD(λ) Under Markov Sampling


Abstract

We consider a distributed setup for reinforcement learning, where each agent has a copy of the same Markov Decision Process but transitions are sampled from the corresponding Markov chain independently by each agent. We show that in this setting, we can achieve a linear speedup for TD(λ), a family of popular methods for policy evaluation, in the sense that N agents can evaluate a policy N times faster provided the target accuracy is small enough. Notably, this speedup is achieved by "one-shot averaging," a procedure where the agents run TD(λ) with Markov sampling independently and only average their results after the final step. This significantly reduces the amount of communication required to achieve a linear speedup relative to previous work.
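To make the procedure concrete, below is a minimal Python sketch of one-shot averaging for TD(λ) with linear function approximation. This is not the paper's code: the toy random-walk chain, the tabular features, and all step-size and trace parameters are illustrative assumptions. The per-agent update is the standard TD(λ) recursion with eligibility traces, z_t = γλ z_{t-1} + φ(s_t) and θ_{t+1} = θ_t + α δ_t z_t, where δ_t is the temporal-difference error.

```python
import numpy as np

def td_lambda(P, r, phi, gamma, lam, alpha, T, rng):
    """One agent runs TD(lambda) with linear function approximation
    along a single Markov trajectory sampled from transition matrix P.
    (Illustrative sketch; not the paper's implementation.)"""
    n_states, d = phi.shape
    theta = np.zeros(d)            # value-function weights
    z = np.zeros(d)                # eligibility trace
    s = rng.integers(n_states)     # arbitrary initial state
    for _ in range(T):
        s_next = rng.choice(n_states, p=P[s])
        # TD error for the transition s -> s_next with reward r[s]
        delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
        z = gamma * lam * z + phi[s]       # accumulate the trace
        theta = theta + alpha * delta * z  # TD(lambda) update
        s = s_next
    return theta

def one_shot_averaging(N, P, r, phi, gamma, lam, alpha, T, seed=0):
    """N agents run TD(lambda) independently on their own Markov
    trajectories; their iterates are averaged exactly once, after
    the final step -- no communication during training."""
    master = np.random.default_rng(seed)
    thetas = [
        td_lambda(P, r, phi, gamma, lam, alpha, T,
                  np.random.default_rng(master.integers(2**31)))
        for _ in range(N)
    ]
    return np.mean(thetas, axis=0)

# Toy policy-evaluation problem: 5-state random walk, tabular features.
n = 5
P = np.zeros((n, n))
for s in range(n):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n - 1)] += 0.5
r = np.linspace(0.0, 1.0, n)   # per-state reward (assumed for the toy chain)
phi = np.eye(n)                # one-hot features (tabular special case)
theta_avg = one_shot_averaging(N=10, P=P, r=r, phi=phi,
                               gamma=0.9, lam=0.7, alpha=0.05, T=20_000)
print(theta_avg)
```

In this sketch the N agents never exchange information during training; a single parameter average after the final step plays the role of the one-shot averaging, which is what reduces communication to a single round while still averaging out the agents' independent sampling noise.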
