Abstract
Accurate estimation of inverse kinematics and dynamics is essential for precise control of robotic manipulators. Traditional analytical models, while effective under nominal conditions, often fail to capture complex nonlinearities and unmodeled dynamics, necessitating compensatory controllers; such combined approaches, however, can become computationally intensive and redundant. Conventional machine learning methods have shown promise in error compensation but typically demand large datasets and incur high computational costs, particularly in high-dimensional state spaces. To overcome these limitations, this paper proposes a novel data-efficient framework based on Liquid Neural Networks (LNNs) for learning inverse kinematics and dynamics directly from simulation platform measurements. The inherent capability of LNNs to model temporal dependencies and adapt continuously to evolving system dynamics enables accurate approximation of nonlinear manipulator behavior without reliance on extensive offline training. The proposed method is validated on two distinct robotic platforms, a UR5e manipulator with 6 DoF and a WAM manipulator with 7 DoF, using both simulation and real-world trajectory data acquired through a targeted sampling strategy. Evaluations on previously unseen trajectories, in both simulation and hardware experiments, demonstrate that the LNN-based approach achieves high-fidelity approximation of both inverse kinematics and inverse dynamics, delivering superior generalization and robustness compared to conventional analytical and data-driven methods. These results highlight the potential of LNNs as an efficient and scalable solution for real-time robot control in dynamic and uncertain environments.