Abstract
Reinforcement learning (RL) offers efficient solutions for optimizing complex decision-making tasks through a continuous state-action-reward cycle with real-time adaptability. This work presents a twin delayed deep deterministic policy gradient (TD3) RL-based adaptive speed controller for a DC motor model, taking into account the impact of various uncertainties arising from a dynamic environment. Several benchmark control techniques are also applied to the same objective to enable a comparative analysis. The response of each controller is plotted for both constant and variable desired speeds to evaluate its efficacy, robustness, and adaptability to uncertainties. Values of several error indices, namely the integral of squared error (ISE), the integral of absolute error (IAE), and their time-weighted variants, the integral of time-weighted squared error (ITSE) and the integral of time-weighted absolute error (ITAE), are calculated and tabulated for each speed controller in both test cases. This error-index analysis is used to compare and evaluate each controller's tracking precision and error-minimization qualities under dynamic operating conditions for efficient speed regulation of the DC motor.
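As a minimal illustrative sketch (not part of the paper itself), the four error indices named above can be computed numerically from a sampled tracking-error signal e(t) by trapezoidal integration; the function name `error_indices` and the use of NumPy here are assumptions for illustration:

```python
import numpy as np

def error_indices(t, e):
    """Compute ISE, IAE, ITSE, and ITAE for a sampled error
    signal e(t) using trapezoidal numerical integration.

    ISE  = integral of e(t)^2       IAE  = integral of |e(t)|
    ITSE = integral of t * e(t)^2   ITAE = integral of t * |e(t)|
    """
    t = np.asarray(t, dtype=float)
    e = np.asarray(e, dtype=float)
    return {
        "ISE":  np.trapz(e**2, t),
        "IAE":  np.trapz(np.abs(e), t),
        "ITSE": np.trapz(t * e**2, t),
        "ITAE": np.trapz(t * np.abs(e), t),
    }
```

The time-weighted variants (ITSE, ITAE) penalize errors that persist late in the response, so they reward controllers that settle quickly onto the desired speed.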