Abstract
There has been significant interest in solving robot path planning problems using fuzzy logic-based methods. Recently, the Genetic Algorithm-based Hierarchical Interval Type-2 Fuzzy (GA-HIT2F) system has been introduced as a novel planner in this domain. However, this method cannot adapt to changes in the target point, and its limited flexibility can cause planning failures at local-minimum traps, making it difficult to apply to complex scenarios. In this paper, we identify the limitations of the original GA-HIT2F approach and propose an enhanced Q-Learning-aided Adaptive Hierarchical Interval Type-2 Fuzzy (QL-HIT2F) algorithm for path planning. The proposed planner incorporates reinforcement learning to improve the robot's ability to avoid collisions with special obstacles. Additionally, the average obstacle orientation (AOO) is introduced to optimize the robot's angular adjustments. Two supplementary robot parameters are integrated into the reinforcement learning action space, alongside the fuzzy membership parameters. The training process also introduces the concepts of a meta-map and sub-training. Simulation results from a series of path planning experiments validate the feasibility and effectiveness of the proposed QL-HIT2F approach.