Abstract
Deep reinforcement learning (DRL), a vital branch of artificial intelligence, has shown great promise for mobile robot navigation in dynamic environments. However, existing studies focus mainly on simplified dynamic scenarios or on modeling static environments, so the trained models lack the generalization and adaptability required in real-world dynamic settings, particularly for handling complex task variations, dynamic obstacle interference, and multimodal data fusion. Addressing these gaps is essential for improving the real-time performance and versatility of DRL-based navigation. Through a comparative analysis of classical DRL algorithms, this study highlights their advantages and limitations for real-time navigation under dynamic environmental conditions. In particular, the paper systematically examines value-based, policy-based, and hybrid DRL methods, discussing their applicability to different navigation challenges. By reviewing studies published from 2021 to 2024, it identifies key trends in DRL-based navigation, revealing a strong focus on indoor environments, while outdoor navigation and multi-robot collaboration remain underexplored. The analysis also highlights challenges in real-world deployment, particularly sim-to-real transfer and sensor fusion. Based on these findings, the paper outlines future directions for enhancing real-time adaptability, multimodal perception, and collaborative learning frameworks, providing theoretical and technical insights for advancing DRL in dynamic environments.