Abstract
This study examines the dynamic behavior of a retrial queueing system that incorporates Deep Q-Network reinforcement learning (DQN-RL) for 6G mobile communication services. The proposed method analyzes the DQN-RL agent's learning convergence using first- and second-order Markov chain models. By modeling the temporal evolution of reward sequences as first- and second-order Markov chains, we quantify convergence characteristics through mixing time analysis. To capture a wide operational landscape, a comprehensive simulation framework spanning 120 independent parameter combinations is constructed. The Markov chain analysis indicates that 10 training episodes are more than sufficient for policy convergence; in some cases, as few as 5 episodes allow the agent to improve mobile network performance while maintaining low energy consumption. To assess learning stability and system responsiveness, the mixing time of the DQN-RL reward process is computed for every episode and configuration. Incorporating higher-order Markov models provides deeper insight into the temporal dependencies of the reward process. The paper studies learning convergence through an analysis of the fitted Markov models' spectral gap properties, used as a convergence indicator. The results provide a rigorous foundation for optimizing 6G queueing strategies under uncertainty by highlighting the sensitivity of DQN convergence to system parameters and retrial dynamics.
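For reference, the link between the spectral gap and the mixing time invoked above is a standard textbook bound (not a result of this paper): for a reversible, ergodic Markov chain with transition matrix $P$, eigenvalues $1 = \lambda_1 > |\lambda_2| \ge \dots \ge |\lambda_n|$, and stationary distribution $\pi$, the $\varepsilon$-mixing time satisfies
\[
  t_{\mathrm{mix}}(\varepsilon) \;\le\; \frac{1}{\gamma_{*}}\,\log\!\left(\frac{1}{\varepsilon\,\pi_{\min}}\right),
  \qquad
  \gamma_{*} \;=\; 1 - \max_{i \ge 2} |\lambda_i|,
\]
where $\gamma_{*}$ is the absolute spectral gap and $\pi_{\min}$ the smallest stationary probability. A larger spectral gap of the transition matrix fitted to the reward sequence therefore implies faster mixing, which is the sense in which it serves as a convergence indicator here.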