Abstract
Robot navigation in confined spaces has attracted increasing attention in recent years, but offline planning assumes static obstacles, which limits its applicability to online path planning. Several methods have been introduced to make robot navigation more efficient; however, most existing methods depend on pre-defined maps and struggle in dynamic environments. Reducing moving cost and detour percentage is also important for real-world robot navigation systems. This study therefore proposes a novel perceptron-Q-learning fusion (PQLF) model for robot navigation to address these difficulties. The proposed model combines perceptron learning with Q-learning to enhance the navigation process. During local path planning, the robot uses its sensors to dynamically measure the distances to nearby, intermediate, and distant obstacles. These readings are sent to the robot's PQLF-based navigation controller, which acts as the agent in a Markov Decision Process (MDP) and makes effective decisions; dynamic robot navigation in a confined indoor environment can thus be expressed as an MDP. Simulation results show that the proposed model outperforms existing methods, attaining a reduced moving cost of 1.1 and a detour percentage of 7.8%, demonstrating its suitability for robot navigation systems.
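To make the abstract's MDP framing concrete, the following is a minimal, hypothetical sketch of the described loop: sensor distances are discretized into near/intermediate/distant obstacle states, and a tabular Q-learning agent chooses a motion action. All names, thresholds, and parameters here are illustrative assumptions, not the authors' implementation; in particular, the perceptron component is simplified to a fixed threshold rule.

```python
import random

def perceive(distance, near_thr=0.5, far_thr=2.0):
    """Discretize a sensor distance (m) into an obstacle-zone state.

    Stands in for the perceptron stage of the PQLF model; thresholds
    are assumed values for illustration only.
    """
    if distance < near_thr:
        return 0  # nearby obstacle
    if distance < far_thr:
        return 1  # intermediate obstacle
    return 2      # distant obstacle / free space

ACTIONS = ["forward", "turn_left", "turn_right"]

class PQLFController:
    """Toy MDP agent: tabular Q-learning over the three obstacle states."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {(s, a): 0.0 for s in range(3) for a in range(len(ACTIONS))}
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the Q-table.
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, s_next):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q[(s_next, a2)] for a2 in range(len(ACTIONS)))
        self.q[(s, a)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, a)]
        )
```

Under a reward that favors moving forward in free space and penalizes moving forward near an obstacle, repeated updates drive the agent toward forward motion when clear and turning when blocked, matching the decision behavior the abstract attributes to the navigation controller.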