Information-Theoretic Intrinsic Motivation for Reinforcement Learning in Combinatorial Routing


Abstract

Intrinsic motivation provides a principled mechanism for driving exploration in reinforcement learning when external rewards are sparse or delayed. A central challenge, however, lies in defining meaningful novelty signals in high-dimensional and combinatorial state spaces, where observation-level density estimation and prediction-error heuristics often become unreliable. In this work, we propose an information-theoretic framework for intrinsically motivated reinforcement learning grounded in the Information Bottleneck principle. Our approach learns compact latent state representations by explicitly balancing the compression of observations against the preservation of predictive information about future state transitions. Within this bottlenecked latent space, intrinsic rewards are defined through information-theoretic quantities that characterize the novelty of state-action transitions in terms of mutual information, rather than raw observation dissimilarity. To enable scalable estimation in continuous and high-dimensional settings, we employ neural mutual information estimators, which avoid explicit density modeling, together with contrastive objectives based on the construction of positive-negative pairs. We evaluate the proposed method on two representative combinatorial routing problems, the Travelling Salesman Problem and the Split Delivery Vehicle Routing Problem, formulated as Markov decision processes with sparse terminal rewards. These problems serve as controlled testbeds for studying exploration and representation learning under long-horizon decision making. Experimental results demonstrate that the proposed information bottleneck-driven intrinsic motivation improves exploration efficiency, training stability, and solution quality compared to standard reinforcement learning baselines.
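To illustrate the contrastive novelty signal described above, the following is a minimal NumPy sketch, not the paper's implementation: latent transitions are scored with an InfoNCE-style log-softmax over in-batch negatives, and the intrinsic reward is the negative log-probability of the observed next latent state (poorly predicted transitions receive high reward). The function name, temperature parameter, and use of cosine similarity are illustrative assumptions; the paper's neural estimators and encoder architecture are not specified here.

```python
import numpy as np

def infonce_intrinsic_rewards(z_t, z_next, temperature=0.1):
    """Hypothetical sketch of a contrastive intrinsic reward.

    z_t, z_next : (batch, dim) latent codes of current / next states
                  from a bottlenecked encoder (assumed given).
    Returns     : (batch,) intrinsic rewards, one per transition.
    """
    # L2-normalize so pairwise scores are cosine similarities.
    z_t = z_t / np.linalg.norm(z_t, axis=1, keepdims=True)
    z_next = z_next / np.linalg.norm(z_next, axis=1, keepdims=True)

    # Score every current latent against every candidate next latent;
    # off-diagonal entries serve as in-batch negatives.
    logits = z_t @ z_next.T / temperature            # (batch, batch)

    # Numerically stable log-softmax over candidate next states.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Diagonal entries are log p(true next latent | current latent).
    matched = np.diag(log_probs)

    # Novelty as surprise: low log-probability -> high intrinsic reward.
    return -matched
```

In a training loop this reward would be added (with a decaying weight) to the sparse terminal routing reward; the same contrastive scores can double as the predictive-information term of the bottleneck objective.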
