Abstract
This paper proposes an optimal sliding mode fault-tolerant control scheme for multiple robotic manipulators subject to external disturbances and actuator faults. First, a quantitative prescribed performance control (QPPC) strategy is constructed that relaxes the constraints on initial conditions while strictly confining the tracking trajectory within a preset range. Second, building on QPPC, an adaptive gain integral terminal sliding mode control (AGITSMC) law is designed to strengthen the disturbance rejection capability of robotic manipulators in complex environments. Third, a critic-only neural network optimal dynamic programming (CNNODP) strategy is proposed to learn the optimal value function and control policy. This strategy approximates the nonlinearities using only the critic network and drives the network updates with reinforcement-learning residuals and historical samples, achieving optimal control at a lower computational cost. Finally, the boundedness and stability of the closed-loop system are proven via the Lyapunov stability theorem. Compared with existing sliding mode control methods, the proposed method reduces the maximum position error by up to 25% and the peak control torque by up to 16.5%, effectively improving the dynamic response accuracy and energy efficiency of the system.