Higher-Order Markov Model-Based Analysis of Reinforcement Learning in 6G Mobile Retrial Queueing Systems


Abstract

This study examines the dynamic behavior of a retrial queueing system after incorporating Deep Q-Network (DQN) reinforcement learning into 6G mobile communication services. The proposed method analyzes the DQN-RL agent's learning convergence using first- and second-order Markov chain models: by modeling the temporal evolution of reward sequences as first- and second-order Markov chains, convergence characteristics can be quantified through mixing-time analysis. To cover a wide operational landscape, a comprehensive simulation framework with 120 independent parameter combinations is constructed. The results indicate that Markov chain analysis confirms 10 training episodes are more than sufficient for policy convergence; in some cases, as few as 5 episodes allow the agent to improve mobile network performance while maintaining low energy consumption. To assess learning stability and system responsiveness, the mixing time of the DQN-RL reward process is computed for every episode and configuration. Incorporating higher-order Markov models yields a deeper understanding of the temporal dependencies in the reward process. This paper concentrates on studying learning convergence using the spectral gap of the Markov model as an indicator. By highlighting the sensitivity of DQN convergence to system parameters and retrial dynamics, the results provide a rigorous foundation for optimizing 6G queueing strategies under uncertainty.
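The abstract's core quantities (an empirical transition matrix over discretized reward states, its spectral gap, and the resulting mixing-time estimate) can be illustrated with a short sketch. This is not the paper's implementation; the reward trace, the number of states, and the convergence tolerance `eps` are all hypothetical choices made for the example:

```python
import numpy as np

def empirical_transition_matrix(states, n_states):
    """Estimate a first-order Markov transition matrix from a state sequence."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # guard against unvisited states
    return P / row_sums

def spectral_gap_and_mixing_time(P, eps=0.25):
    """Spectral gap 1 - |lambda_2| and a simplified mixing-time bound.

    Uses t_mix ~ log(1/eps) / gap; tighter bounds also involve the
    stationary distribution, omitted here for brevity.
    """
    mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    gap = 1.0 - mags[1]  # 1 minus the second-largest eigenvalue modulus
    t_mix = np.inf if gap <= 0 else np.log(1.0 / eps) / gap
    return gap, t_mix

# Hypothetical reward trace, discretized into quantile bins (states)
rng = np.random.default_rng(0)
rewards = rng.normal(size=500).cumsum() * 0.01 + rng.normal(size=500)
n_states = 4
edges = np.quantile(rewards, np.linspace(0, 1, n_states + 1)[1:-1])
states = np.digitize(rewards, edges)

P = empirical_transition_matrix(states, n_states)
gap, t_mix = spectral_gap_and_mixing_time(P)
print(f"spectral gap = {gap:.3f}, mixing-time estimate = {t_mix:.1f} steps")
```

A second-order chain can be analyzed the same way by lifting consecutive state pairs (s_{t-1}, s_t) to a first-order chain over n² composite states and computing the spectral gap of that lifted transition matrix.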
